Paper Group ANR 486
The Hessian Estimation Evolution Strategy
Title | The Hessian Estimation Evolution Strategy |
Authors | Tobias Glasmachers, Oswin Krause |
Abstract | We present a novel black box optimization algorithm called Hessian Estimation Evolution Strategy. The algorithm updates the covariance matrix of its sampling distribution by directly estimating the curvature of the objective function. This algorithm design is targeted at twice continuously differentiable problems. For this, we extend the cumulative step-size adaptation algorithm of the CMA-ES to mirrored sampling. We demonstrate that our approach to covariance matrix adaptation is efficient by evaluating it on the BBOB/COCO testbed. We also show that the algorithm is surprisingly robust when its core assumption of a twice continuously differentiable objective function is violated. The approach yields a new evolution strategy with competitive performance, and at the same time it also offers an interesting alternative to the usual covariance matrix update mechanism. |
Tasks | |
Published | 2020-03-30 |
URL | https://arxiv.org/abs/2003.13256v1 |
https://arxiv.org/pdf/2003.13256v1.pdf | |
PWC | https://paperswithcode.com/paper/the-hessian-estimation-evolution-strategy |
Repo | |
Framework | |
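The curvature-estimation idea above can be illustrated with a minimal sketch (not the authors' implementation; function and parameter names are hypothetical): for a mirrored pair of samples x ± σd, the symmetric finite difference (f(x+σd) + f(x−σd) − 2f(x)) / σ² estimates the curvature dᵀHd of the objective along d, which is the quantity a covariance update can exploit.

```python
import numpy as np

def quad(x, w):
    """Toy separable quadratic objective; its Hessian is diag(2 * w)."""
    return float(np.sum(w * x**2))

def curvature_along(f, x, d, sigma):
    """Mirrored (symmetric) finite difference: estimates d^T H d at x."""
    return (f(x + sigma * d) + f(x - sigma * d) - 2.0 * f(x)) / sigma**2

rng = np.random.default_rng(0)
dim, sigma = 4, 1e-4
w = np.array([1.0, 3.0, 10.0, 30.0])   # hypothetical conditioning
x = rng.normal(size=dim)

for i in range(dim):
    d = np.zeros(dim)
    d[i] = 1.0                          # probe along coordinate axis i
    est = curvature_along(lambda z: quad(z, w), x, d, sigma)
    print(f"axis {i}: estimated curvature {est:.3f}, true {2 * w[i]:.3f}")
```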
Environmental Adaptation of Robot Morphology and Control through Real-world Evolution
Title | Environmental Adaptation of Robot Morphology and Control through Real-world Evolution |
Authors | Tønnes F. Nygaard, Charles P. Martin, David Howard, Jim Torresen, Kyrre Glette |
Abstract | Robots operating in the real world will experience a range of different environments and tasks. It is essential for the robot to have the ability to adapt to its surroundings to work efficiently in changing conditions. Evolutionary robotics aims to solve this by optimizing both the control and body (morphology) of a robot, allowing adaptation to internal, as well as external factors. Most work in this field has been done in physics simulators, which are relatively simple and not able to replicate the richness of interactions found in the real world. Solutions that rely on the complex interplay between control, body, and environment are therefore rarely found. In this paper, we rely solely on real-world evaluations and apply evolutionary search to yield combinations of morphology and control for our mechanically self-reconfiguring quadruped robot. We evolve solutions on two very different physical surfaces and analyze the results in terms of both control and morphology. We then transition to two previously unseen surfaces to demonstrate the generality of our method. We find that the evolutionary search adapts both control and body to the different physical environments, yielding significantly different morphology-controller configurations. Moreover, we observe that the solutions found by our method work well on previously unseen terrains. |
Tasks | |
Published | 2020-03-30 |
URL | https://arxiv.org/abs/2003.13254v1 |
https://arxiv.org/pdf/2003.13254v1.pdf | |
PWC | https://paperswithcode.com/paper/environmental-adaptation-of-robot-morphology |
Repo | |
Framework | |
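As a rough illustration of the search loop described above (not the authors' setup; the genome fields, bounds, and fitness stand-in are hypothetical), a real-world evolutionary run amounts to mutating and selecting joint morphology and control genomes, with each fitness evaluation performed on the physical robot rather than in simulation.

```python
import random

# Hypothetical genome: one morphology parameter (leg length, mm) and two
# control parameters (step frequency in Hz, step amplitude), with bounds.
BOUNDS = {"leg_length": (100.0, 200.0),
          "frequency": (0.2, 2.0),
          "amplitude": (0.1, 1.0)}

def random_genome(rng):
    return {k: rng.uniform(*b) for k, b in BOUNDS.items()}

def mutate(genome, rng, scale=0.1):
    child = dict(genome)
    for k, (lo, hi) in BOUNDS.items():
        child[k] += rng.gauss(0.0, scale * (hi - lo))
        child[k] = min(max(child[k], lo), hi)   # clip to bounds
    return child

def evaluate_on_robot(genome):
    """Placeholder for a real-world evaluation: reconfigure the robot's
    morphology, run the gait on the physical surface, and return a fitness
    such as walking speed. Here a synthetic stand-in is used instead."""
    return -(genome["leg_length"] - 160.0) ** 2 / 1e4 + genome["frequency"] * genome["amplitude"]

rng = random.Random(42)
population = [random_genome(rng) for _ in range(8)]
for generation in range(5):
    scored = sorted(population, key=evaluate_on_robot, reverse=True)
    parents = scored[:4]                            # truncation selection
    population = parents + [mutate(rng.choice(parents), rng) for _ in range(4)]
print(max(population, key=evaluate_on_robot))
```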
Learning From Strategic Agents: Accuracy, Improvement, and Causality
Title | Learning From Strategic Agents: Accuracy, Improvement, and Causality |
Authors | Yonadav Shavit, Benjamin Edelman, Brian Axelrod |
Abstract | In many predictive decision-making scenarios, such as credit scoring and academic testing, a decision-maker must construct a model (predicting some outcome) that accounts for agents’ incentives to “game” their features in order to receive better decisions. Whereas the strategic classification literature generally assumes that agents’ outcomes are not causally dependent on their features (and thus strategic behavior is a form of lying), we join concurrent work in modeling agents’ outcomes as a function of their changeable attributes. Our formulation is the first to incorporate a crucial phenomenon: when agents act to change observable features, they may as a side effect perturb hidden features that causally affect their true outcomes. We consider three distinct desiderata for a decision-maker’s model: accurately predicting agents’ post-gaming outcomes (accuracy), incentivizing agents to improve these outcomes (improvement), and, in the linear setting, estimating the visible coefficients of the true causal model (causal precision). As our main contribution, we provide the first algorithms for learning accuracy-optimizing, improvement-optimizing, and causal-precision-optimizing linear regression models directly from data, without prior knowledge of agents’ possible actions. These algorithms circumvent the hardness result of Miller et al. (2019) by allowing the decision maker to observe agents’ responses to a sequence of decision rules, in effect inducing agents to perform causal interventions for free. |
Tasks | Decision Making |
Published | 2020-02-24 |
URL | https://arxiv.org/abs/2002.10066v1 |
https://arxiv.org/pdf/2002.10066v1.pdf | |
PWC | https://paperswithcode.com/paper/learning-from-strategic-agents-accuracy |
Repo | |
Framework | |
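The "causal interventions for free" point can be illustrated with a small simulation (an instrumental-variables-style toy, not the paper's algorithm; all coefficients and distributions are made up): a hidden feature confounds the visible feature, so naive regression on observational data is biased, but the variation induced by a sequence of randomized decision rules identifies the visible causal coefficient.

```python
import numpy as np

rng = np.random.default_rng(1)
n, w_vis, w_hid, effort = 20000, 1.5, 2.0, 0.8

# Baseline visible feature x0 is confounded with a hidden feature h.
h = rng.normal(size=n)
x0 = 0.9 * h + 0.5 * rng.normal(size=n)

# The decision-maker publishes a different rule theta to each agent over
# time; agents game their visible feature in the direction of the rule.
theta = rng.normal(size=n)
x = x0 + effort * theta                  # post-gaming visible feature
y = w_vis * x + w_hid * h + 0.1 * rng.normal(size=n)

# Naive OLS on observational (x, y) pairs is biased by the confounder h.
C = np.cov(x, y)
ols = C[0, 1] / C[0, 0]

# Treating the randomized rules as instruments recovers the visible
# causal coefficient: the rules act as free interventions on x.
iv = np.cov(theta, y)[0, 1] / np.cov(theta, x)[0, 1]

print(f"true w_vis = {w_vis}, naive OLS = {ols:.2f}, IV via varying rules = {iv:.2f}")
```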
Active stereo vision three-dimensional reconstruction by RGB dot pattern projection and ray intersection
Title | Active stereo vision three-dimensional reconstruction by RGB dot pattern projection and ray intersection |
Authors | Yongcan Shuang, Zhenzhou Wang |
Abstract | Active stereo vision is important in reconstructing objects without obvious textures. However, it is still very challenging to extract and match the projected patterns from two camera views automatically and robustly. In this paper, we propose a new pattern extraction method and a new stereo vision matching method based on our novel structured light pattern. Instead of using the widely used 2D disparity to calculate the depths of the objects, we use ray intersection to compute the 3D shapes directly. Experimental results showed that the proposed approach could reconstruct the 3D shape of the object significantly more robustly than state-of-the-art methods, including the widely used disparity-based active stereo vision method, the time-of-flight method, and the structured light method. Experimental results also showed that the proposed approach could reconstruct the 3D motions of dynamic shapes robustly. |
Tasks | |
Published | 2020-03-30 |
URL | https://arxiv.org/abs/2003.13322v2 |
https://arxiv.org/pdf/2003.13322v2.pdf | |
PWC | https://paperswithcode.com/paper/active-stereo-vision-three-dimensional |
Repo | |
Framework | |
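The ray-intersection step mentioned above can be sketched with the standard midpoint-of-closest-points construction for two camera rays (a generic formulation, not necessarily the authors' exact implementation; the camera geometry below is made up):

```python
import numpy as np

def triangulate_by_ray_intersection(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two (possibly skew) camera
    rays, each given by an origin o and a direction d."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    b = o2 - o1
    a11, a12, a22 = d1 @ d1, d1 @ d2, d2 @ d2
    denom = a11 * a22 - a12 * a12            # ~0 when the rays are parallel
    t1 = (a22 * (b @ d1) - a12 * (b @ d2)) / denom
    t2 = (a12 * (b @ d1) - a11 * (b @ d2)) / denom
    p1, p2 = o1 + t1 * d1, o2 + t2 * d2      # closest points on each ray
    return 0.5 * (p1 + p2)                   # estimated 3D point

# Hypothetical rig: two cameras 10 cm apart observing a point at (0, 0, 1) m.
target = np.array([0.0, 0.0, 1.0])
o1, o2 = np.array([-0.05, 0.0, 0.0]), np.array([0.05, 0.0, 0.0])
print(triangulate_by_ray_intersection(o1, target - o1, o2, target - o2))
```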
A Bayesian Approach to Conversational Recommendation Systems
Title | A Bayesian Approach to Conversational Recommendation Systems |
Authors | Francesca Mangili, Denis Broggini, Alessandro Antonucci, Marco Alberti, Lorenzo Cimasoni |
Abstract | We present a conversational recommendation system based on a Bayesian approach. A probability mass function over the items is updated after any interaction with the user, with information-theoretic criteria optimally shaping the interaction and deciding when the conversation should be terminated and the most probable item consequently recommended. Dedicated elicitation techniques for the prior probabilities of the parameters modeling the interactions are derived from basic structural judgements. Such prior information can be combined with historical data to discriminate items with different recommendation histories. A case study based on the application of this approach to stagend.com, an online platform for booking entertainers, is finally discussed together with an empirical analysis showing the advantages in terms of recommendation quality and efficiency. |
Tasks | Recommendation Systems |
Published | 2020-02-12 |
URL | https://arxiv.org/abs/2002.05063v1 |
https://arxiv.org/pdf/2002.05063v1.pdf | |
PWC | https://paperswithcode.com/paper/a-bayesian-approach-to-conversational |
Repo | |
Framework | |
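A minimal sketch of the Bayesian interaction loop described above (not the authors' elicitation scheme; the item attributes, noise level, and termination threshold are hypothetical): maintain a probability mass function over items, ask the yes/no question with the largest expected information gain, update the pmf after each answer, and stop once one item is sufficiently probable.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def update(p, has_attr, answer_yes, eps=0.1):
    """Posterior over items after a yes/no question about one attribute;
    eps is a hypothetical answer-noise probability."""
    like = np.where(has_attr == answer_yes, 1.0 - eps, eps)
    post = p * like
    return post / post.sum()

def expected_info_gain(p, has_attr, eps=0.1):
    p_yes = float((p * np.where(has_attr, 1 - eps, eps)).sum())
    h_yes = entropy(update(p, has_attr, True, eps))
    h_no = entropy(update(p, has_attr, False, eps))
    return entropy(p) - (p_yes * h_yes + (1 - p_yes) * h_no)

rng = np.random.default_rng(3)
n_items, n_attrs = 50, 12
attrs = rng.integers(0, 2, size=(n_items, n_attrs)).astype(bool)
p = np.full(n_items, 1.0 / n_items)        # uniform prior over items
target = 7                                 # item the simulated user has in mind

for _ in range(n_attrs):                   # conversation loop
    if p.max() >= 0.8:                     # termination criterion
        break
    q = max(range(n_attrs), key=lambda a: expected_info_gain(p, attrs[:, a]))
    p = update(p, attrs[:, q], bool(attrs[target, q]))   # simulated answer
print("recommend item", int(p.argmax()), "with posterior", round(float(p.max()), 2))
```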
Towards Social Identity in Socio-Cognitive Agents
Title | Towards Social Identity in Socio-Cognitive Agents |
Authors | Diogo Rato, Samuel Mascarenhas, Rui Prada |
Abstract | Current architectures for social agents are designed around specific units of social behaviour that address particular challenges. Although their performance might be adequate for controlled environments, deploying these agents in the wild is difficult. Moreover, the increasing demand for autonomous agents capable of living alongside humans calls for the design of more robust social agents that can cope with diverse social situations. We believe that to design such agents, their sociality and cognition should be conceived as one. This includes creating mechanisms for constructing social reality as an interpretation of the physical world with social meanings, and for selective deployment of cognitive resources adequate to the situation. We identify several design principles that should be considered while designing agent architectures for socio-cognitive systems. Taking these remarks into account, we propose a socio-cognitive agent model based on the concept of Cognitive Social Frames, which allow the adaptation of an agent’s cognition based on its interpretation of its surroundings, its Social Context. Our approach supports an agent’s reasoning about other social actors and its relationship with them. Cognitive Social Frames can be built around social groups, and form the basis for social group dynamics mechanisms and for the construction of Social Identity. |
Tasks | |
Published | 2020-01-20 |
URL | https://arxiv.org/abs/2001.07142v1 |
https://arxiv.org/pdf/2001.07142v1.pdf | |
PWC | https://paperswithcode.com/paper/towards-social-identity-in-socio-cognitive |
Repo | |
Framework | |
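A purely illustrative data-structure sketch of the frame-selection idea (hypothetical names and salience functions; not the authors' architecture): each Cognitive Social Frame scores its salience for the currently perceived Social Context, and the most salient frame determines which norms the agent foregrounds.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Set

# Hypothetical sketch: a social context is the set of cues the agent
# currently perceives; a Cognitive Social Frame has a salience function
# over contexts and, once adopted, foregrounds certain norms.
SocialContext = Set[str]

@dataclass
class CognitiveSocialFrame:
    name: str
    salience: Callable[[SocialContext], float]
    active_norms: List[str] = field(default_factory=list)

def select_frame(frames: List[CognitiveSocialFrame],
                 context: SocialContext) -> CognitiveSocialFrame:
    """Adopt the frame whose salience is highest in the current context."""
    return max(frames, key=lambda f: f.salience(context))

frames = [
    CognitiveSocialFrame("lecturer",
                         lambda c: len({"classroom", "students"} & c) / 2,
                         ["speak formally", "answer questions"]),
    CognitiveSocialFrame("friend",
                         lambda c: len({"cafe", "close_friend"} & c) / 2,
                         ["speak casually", "share personal news"]),
]
context = {"classroom", "students", "close_friend"}
adopted = select_frame(frames, context)
print(adopted.name, "->", adopted.active_norms)
```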
Optimal No-regret Learning in Repeated First-price Auctions
Title | Optimal No-regret Learning in Repeated First-price Auctions |
Authors | Yanjun Han, Zhengyuan Zhou, Tsachy Weissman |
Abstract | We study online learning in repeated first-price auctions with censored feedback, where a bidder, only observing the winning bid at the end of each auction, learns to adaptively bid in order to maximize her cumulative payoff. To achieve this goal, the bidder faces a challenging dilemma: if she wins the bid–the only way to achieve positive payoffs–then she is not able to observe the highest bid of the other bidders, which we assume is iid drawn from an unknown distribution. This dilemma, despite being reminiscent of the exploration-exploitation trade-off in contextual bandits, cannot directly be addressed by the existing UCB or Thompson sampling algorithms in that literature, mainly because contrary to the standard bandits setting, when a positive reward is obtained here, nothing about the environment can be learned. In this paper, by exploiting the structural properties of first-price auctions, we develop the first learning algorithm that achieves $O(\sqrt{T}\log^2 T)$ regret bound when the bidder’s private values are stochastically generated. We do so by providing an algorithm on a general class of problems, which we call monotone group contextual bandits, where the same regret bound is established under stochastically generated contexts. Further, by a novel lower bound argument, we characterize an $\Omega(T^{2/3})$ lower bound for the case where the contexts are adversarially generated, thus highlighting the impact of the contexts generation mechanism on the fundamental learning limit. Despite this, we further exploit the structure of first-price auctions and develop a learning algorithm that operates sample-efficiently (and computationally efficiently) in the presence of adversarially generated private values. We establish an $O(\sqrt{T}\log^5 T)$ regret bound for this algorithm, hence providing a complete characterization of optimal learning guarantees for this problem. |
Tasks | Multi-Armed Bandits |
Published | 2020-03-22 |
URL | https://arxiv.org/abs/2003.09795v1 |
https://arxiv.org/pdf/2003.09795v1.pdf | |
PWC | https://paperswithcode.com/paper/optimal-no-regret-learning-in-repeated-first |
Repo | |
Framework | |
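The censored-feedback structure described above can be simulated with a toy discretized-bid bidder (a plain epsilon-greedy baseline for illustration, not the paper's algorithm; the value and competing-bid distributions are arbitrary): on a loss the winning bid is observed, so the counterfactual payoffs of all bid levels are known, while on a win only bid levels at or above the submitted bid have known counterfactual payoffs.

```python
import numpy as np

rng = np.random.default_rng(0)
T, eps = 20000, 0.05
grid = np.linspace(0.0, 1.0, 21)           # discretized bid levels
counts = np.zeros(len(grid))
mean_payoff = np.zeros(len(grid))
total = 0.0

for t in range(T):
    v = rng.uniform()                       # bidder's private value (stochastic)
    m = rng.beta(2, 2)                      # highest competing bid; law unknown to the bidder
    if rng.uniform() < eps or counts.min() == 0:
        arm = int(rng.integers(len(grid)))  # explore
    else:
        arm = int(np.argmax(mean_payoff))   # exploit
    b = grid[arm]
    win = b >= m
    total += (v - b) if win else 0.0

    # Censored feedback: on a loss the winning bid m is observed, so the
    # counterfactual payoff of every bid level is known; on a win we only
    # learn that m <= b, so only bid levels >= b have known payoffs.
    if win:
        revealed = np.flatnonzero(grid >= b)
        cf = v - grid[revealed]
    else:
        revealed = np.arange(len(grid))
        cf = np.where(grid >= m, v - grid, 0.0)
    counts[revealed] += 1
    mean_payoff[revealed] += (cf - mean_payoff[revealed]) / counts[revealed]

print(f"average payoff per round over {T} auctions: {total / T:.3f}")
```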
Ginger Cannot Cure Cancer: Battling Fake Health News with a Comprehensive Data Repository
Title | Ginger Cannot Cure Cancer: Battling Fake Health News with a Comprehensive Data Repository |
Authors | Enyan Dai, Yiwei Sun, Suhang Wang |
Abstract | Nowadays, the Internet is a primary source of health information. Massive amounts of fake health news spreading over the Internet have become a severe threat to public health. Numerous studies and research works have been done in the fake news detection domain; however, few of them are designed to cope with the challenges specific to health news. For instance, explainability is required for fake health news detection. To mitigate these problems, we construct a comprehensive repository, FakeHealth, which includes news contents with rich features, news reviews with detailed explanations, social engagements, and a user-user social network. Moreover, exploratory analyses are conducted to understand the characteristics of the datasets, analyze useful patterns, and validate the quality of the datasets for health fake news detection. We also discuss novel and potential future research directions for health fake news detection. |
Tasks | Fake News Detection |
Published | 2020-01-27 |
URL | https://arxiv.org/abs/2002.00837v2 |
https://arxiv.org/pdf/2002.00837v2.pdf | |
PWC | https://paperswithcode.com/paper/ginger-cannot-cure-cancer-battling-fake |
Repo | |
Framework | |
Kernel based analysis of massive data
Title | Kernel based analysis of massive data |
Authors | Hrushikesh N Mhaskar |
Abstract | Dealing with massive data is a challenging task for machine learning. An important aspect of machine learning is function approximation. In the context of massive data, some of the commonly used tools for this purpose are sparsity, divide-and-conquer, and distributed learning. In this paper, we develop a very general theory of approximation by networks, which we have called eignets, to achieve local, stratified approximation. The very massive nature of the data allows us to use these eignets to solve inverse problems such as finding a good approximation to the probability law that governs the data, and finding the local smoothness of the target function near different points in the domain. In fact, we develop a wavelet-like representation using our eignets. Our theory is applicable to approximation on a general locally compact metric measure space. Special examples include approximation by periodic basis functions on the torus, zonal function networks on a Euclidean sphere (including smooth ReLU networks), Gaussian networks, and approximation on manifolds. We construct pre-fabricated networks so that no data-based training is required for the approximation. |
Tasks | |
Published | 2020-03-30 |
URL | https://arxiv.org/abs/2003.13226v1 |
https://arxiv.org/pdf/2003.13226v1.pdf | |
PWC | https://paperswithcode.com/paper/kernel-based-analysis-of-massive-data |
Repo | |
Framework | |
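One of the special cases mentioned above, approximation by periodic basis functions on the torus, can be illustrated with a generic pre-fabricated construction that needs no data-based training: trigonometric interpolation from equispaced samples via the FFT. This is a textbook construction used here only to convey the flavor of the setting; it is not the eignet or wavelet-like representation developed in the paper.

```python
import numpy as np

def trig_approximation(f, n_samples=64):
    """Trigonometric approximation of a 2*pi-periodic function from
    equispaced samples, built via the FFT with no iterative training."""
    x = 2 * np.pi * np.arange(n_samples) / n_samples
    coeffs = np.fft.rfft(f(x)) / n_samples        # Fourier coefficients c_k
    def approx(t):
        k = np.arange(len(coeffs))
        # Real trig interpolant: double every term except DC and Nyquist.
        weights = np.where((k == 0) | (2 * k == n_samples), 1.0, 2.0)
        return np.real(np.sum(weights * coeffs * np.exp(1j * k * t)))
    return approx

def target(t):
    return np.exp(np.sin(t))                      # smooth, 2*pi-periodic

approx = trig_approximation(target)
for t in [0.3, 1.7, 4.2]:
    print(f"t={t}: target {target(t):.6f}, approximation {approx(t):.6f}")
```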
Bag of biterms modeling for short texts
Title | Bag of biterms modeling for short texts |
Authors | Anh Phan Tuan, Bach Tran, Thien Nguyen Huu, Linh Ngo Van, Khoat Than |
Abstract | Analyzing texts from social media encounters many challenges due to their unique characteristics of shortness, massiveness, and dynamic nature. Short texts do not provide enough context information, causing the failure of traditional statistical models. Furthermore, many applications often face massive and dynamic short texts, posing various computational challenges to current batch learning algorithms. This paper presents a novel framework, namely Bag of Biterms Modeling (BBM), for modeling massive, dynamic, and short text collections. BBM comprises two main ingredients: (1) the concept of Bag of Biterms (BoB) for representing documents, and (2) a simple way to help statistical models incorporate BoB. Our framework can be easily deployed for a large class of probabilistic models, and we demonstrate its usefulness with two well-known models: Latent Dirichlet Allocation (LDA) and Hierarchical Dirichlet Process (HDP). By exploiting both terms (words) and biterms (pairs of words), the major advantages of BBM are: (1) it enhances the length of the documents and makes the context more coherent by emphasizing word connotation and co-occurrence via Bag of Biterms, and (2) it inherits inference and learning algorithms from the primitive models, making it straightforward to design online and streaming algorithms for short texts. Extensive experiments suggest that BBM outperforms several state-of-the-art models. We also point out that the BoB representation performs better than traditional representations (e.g., Bag of Words, TF-IDF) even for normal texts. |
Tasks | |
Published | 2020-03-26 |
URL | https://arxiv.org/abs/2003.11948v1 |
https://arxiv.org/pdf/2003.11948v1.pdf | |
PWC | https://paperswithcode.com/paper/bag-of-biterms-modeling-for-short-texts |
Repo | |
Framework | |
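A minimal sketch of the BoB representation described above (hypothetical function name; details such as windowing and filtering are omitted): a short document is represented by its terms together with every unordered pair of co-occurring terms.

```python
from collections import Counter
from itertools import combinations

def bag_of_biterms(text):
    """Hypothetical sketch of a Bag-of-Biterms representation: the document's
    tokens (terms) plus every unordered pair of its tokens (biterms)."""
    tokens = text.lower().split()
    terms = Counter(tokens)
    biterms = Counter("_".join(sorted(pair)) for pair in combinations(tokens, 2))
    return terms + biterms

print(bag_of_biterms("apple unveils new phone"))
```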
Planning as Inference in Epidemiological Models
Title | Planning as Inference in Epidemiological Models |
Authors | Frank Wood, Andrew Warrington, Saeid Naderiparizi, Christian Weilbach, Vaden Masrani, William Harvey, Adam Scibior, Boyan Beronov, Ali Nasseri |
Abstract | In this work we demonstrate how existing software tools can be used to automate parts of infectious disease-control policy-making by performing inference in existing epidemiological dynamics models. The kinds of inference tasks undertaken include computing, for planning purposes, the posterior distribution over simulation model parameters that are putatively controllable via direct policy-making choices and that give rise to acceptable disease progression outcomes. Neither the full capabilities of such inference automation software tools nor their utility for planning is widely disseminated at the current time. Timely gains in understanding about these tools and how they can be used may lead to more fine-grained and less economically damaging policy prescriptions, particularly during the current COVID-19 pandemic. |
Tasks | |
Published | 2020-03-30 |
URL | https://arxiv.org/abs/2003.13221v1 |
https://arxiv.org/pdf/2003.13221v1.pdf | |
PWC | https://paperswithcode.com/paper/planning-as-inference-in-epidemiological |
Repo | |
Framework | |
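The planning-as-inference idea can be sketched with simple rejection sampling on a toy SIR model (this is not the probabilistic-programming tooling referred to in the paper; the prior, the controllable parameter, and the acceptability threshold are all made up): place a prior over a controllable parameter and condition on the simulated outcome being acceptable.

```python
import numpy as np

def sir_peak(beta, gamma=0.1, i0=0.001, days=300):
    """Deterministic discrete-time SIR simulation; returns the peak infected fraction."""
    s, i, peak = 1.0 - i0, i0, i0
    for _ in range(days):
        new_inf = beta * s * i
        new_rec = gamma * i
        s, i = s - new_inf, i + new_inf - new_rec
        peak = max(peak, i)
    return peak

# Rejection-sampling sketch of planning as inference: a hypothetical prior
# over the contact rate beta (which policy can reduce), conditioned on the
# acceptable outcome that the peak infected fraction stays below 10%.
rng = np.random.default_rng(0)
prior = rng.uniform(0.1, 0.5, size=5000)
accepted = np.array([b for b in prior if sir_peak(b) < 0.10])
print(f"acceptance rate: {len(accepted) / len(prior):.2f}")
print(f"posterior over beta: mean {accepted.mean():.3f}, "
      f"95% of mass below {np.quantile(accepted, 0.95):.3f}")
```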
Interpolation under latent factor regression models
Title | Interpolation under latent factor regression models |
Authors | Florentina Bunea, Seth Strimas-Mackey, Marten Wegkamp |
Abstract | This work studies finite-sample properties of the risk of the minimum-norm interpolating predictor in high-dimensional regression models. If the effective rank of the covariance matrix $\Sigma$ of the $p$ regression features is much larger than the sample size $n$, we show that the min-norm interpolating predictor is not desirable, as its risk approaches the risk of trivially predicting the response by $0$. However, our detailed finite sample analysis reveals, surprisingly, that this behavior is not present when the regression response and the features are jointly low-dimensional, and follow a widely used factor regression model. Within this popular model class, and when the effective rank of $\Sigma$ is smaller than $n$, while still allowing for $p \gg n$, both the bias and the variance terms of the excess risk can be controlled, and the risk of the minimum-norm interpolating predictor approaches optimal benchmarks. Moreover, through a detailed analysis of the bias term, we exhibit model classes under which our upper bound on the excess risk approaches zero, while the corresponding upper bound in the recent work arXiv:1906.11300v3 diverges. Furthermore, we show that minimum-norm interpolating predictors analyzed under factor regression models, despite being model-agnostic, can have similar risk to model-assisted predictors based on principal components regression, in the high-dimensional regime. |
Tasks | |
Published | 2020-02-06 |
URL | https://arxiv.org/abs/2002.02525v2 |
https://arxiv.org/pdf/2002.02525v2.pdf | |
PWC | https://paperswithcode.com/paper/interpolation-under-latent-factor-regression |
Repo | |
Framework | |
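A small simulation contrasting the two regimes discussed above (dimensions, noise levels, and coefficients are arbitrary choices for illustration): under a factor regression model the minimum-norm interpolator's test risk is compared with the trivial zero predictor, and the same comparison is repeated with isotropic features, whose effective rank far exceeds the sample size.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_test, p, k, sigma = 50, 2000, 1000, 2, 0.1

def min_norm_risk(X, y, X_test, y_test):
    """Test risk of the minimum-norm interpolator beta = X^+ y."""
    beta = np.linalg.pinv(X) @ y
    return float(np.mean((X_test @ beta - y_test) ** 2))

# (a) Factor regression model: features and response both driven by k factors,
# so the covariance of the features has small effective rank.
A = rng.normal(size=(k, p))
theta = np.array([1.0, -2.0])
Z, Z_test = rng.normal(size=(n, k)), rng.normal(size=(n_test, k))
X = Z @ A + 0.1 * rng.normal(size=(n, p))
X_test = Z_test @ A + 0.1 * rng.normal(size=(n_test, p))
y = Z @ theta + sigma * rng.normal(size=n)
y_test = Z_test @ theta + sigma * rng.normal(size=n_test)
print("factor model: interpolator risk", round(min_norm_risk(X, y, X_test, y_test), 3),
      "| zero-predictor risk", round(float(np.mean(y_test ** 2)), 3))

# (b) Isotropic features with effective rank p >> n, where the abstract notes
# the interpolator's risk approaches that of trivially predicting 0.
beta_star = rng.normal(size=p) / np.sqrt(p)
X = rng.normal(size=(n, p))
X_test = rng.normal(size=(n_test, p))
y = X @ beta_star + sigma * rng.normal(size=n)
y_test = X_test @ beta_star + sigma * rng.normal(size=n_test)
print("isotropic:    interpolator risk", round(min_norm_risk(X, y, X_test, y_test), 3),
      "| zero-predictor risk", round(float(np.mean(y_test ** 2)), 3))
```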
Deep Learning Assisted CSI Estimation for Joint URLLC and eMBB Resource Allocation
Title | Deep Learning Assisted CSI Estimation for Joint URLLC and eMBB Resource Allocation |
Authors | Hamza Khan, M. Majid Butt, Sumudu Samarakoon, Philippe Sehier, Mehdi Bennis |
Abstract | Multiple-input multiple-output (MIMO) is a key technology for fifth-generation (5G) and beyond wireless communication systems owing to its higher spectrum efficiency, spatial gains, and energy efficiency. The benefits of MIMO transmission can be fully harnessed only if channel state information (CSI) is available at the transmitter side. However, the acquisition of transmitter-side CSI entails many challenges. In this paper, we propose a deep learning assisted CSI estimation technique for highly mobile vehicular networks, based on the fact that the propagation environment (scatterers, reflectors) remains almost identical, thereby allowing a data-driven deep neural network (DNN) to learn the non-linear CSI relations with negligible overhead. Moreover, we formulate and solve a dynamic network slicing based resource allocation problem for vehicular user equipments (VUEs) requesting enhanced mobile broadband (eMBB) and ultra-reliable low-latency communication (URLLC) traffic slices. The formulation minimizes the threshold rate violation probability for the eMBB slice while satisfying a probabilistic threshold rate criterion for the URLLC slice. Simulation results show that an overhead reduction of 50% can be achieved with a 12% increase in threshold violations compared to an ideal case with perfect CSI knowledge. |
Tasks | |
Published | 2020-03-12 |
URL | https://arxiv.org/abs/2003.05685v1 |
https://arxiv.org/pdf/2003.05685v1.pdf | |
PWC | https://paperswithcode.com/paper/deep-learning-assisted-csi-estimation-for |
Repo | |
Framework | |
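The CSI-learning idea, that a recurring propagation environment lets a data-driven regressor map past channel observations to current CSI, can be sketched with a generic toy model (a hypothetical scalar channel and a small off-the-shelf MLP, not the authors' DNN or system model):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical toy model of a recurring propagation environment: the channel
# gain along a road is a fixed (but unknown) function of position, so past
# observations at nearby positions are informative about the current CSI.
rng = np.random.default_rng(0)
positions = np.linspace(0, 50, 4000)
gain = (np.sin(2 * np.pi * positions / 7.3)
        + 0.5 * np.sin(2 * np.pi * positions / 1.9 + 1.0)
        + 0.05 * rng.normal(size=positions.size))    # small measurement noise

# Features: the last 8 observed gains; target: the current gain.
lag = 8
X = np.stack([gain[i - lag:i] for i in range(lag, gain.size)])
y = gain[lag:]
split = int(0.8 * len(y))

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
print(f"test MSE: {np.mean((pred - y[split:]) ** 2):.4f} "
      f"(variance of target: {np.var(y[split:]):.4f})")
```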
Near-optimal Reinforcement Learning in Factored MDPs: Oracle-Efficient Algorithms for the Non-episodic Setting
Title | Near-optimal Reinforcement Learning in Factored MDPs: Oracle-Efficient Algorithms for the Non-episodic Setting |
Authors | Ziping Xu, Ambuj Tewari |
Abstract | We study reinforcement learning in factored Markov decision processes (FMDPs) in the non-episodic setting. We focus on regret analyses providing both upper and lower bounds. We propose two near-optimal and oracle-efficient algorithms for FMDPs. Assuming oracle access to an FMDP planner, they enjoy a Bayesian and a frequentist regret bound respectively, both of which reduce to the near-optimal bound $\widetilde{O}(DS\sqrt{AT})$ for standard non-factored MDPs. Our lower bound depends on the span of the bias vector rather than the diameter $D$ and we show via a simple Cartesian product construction that FMDPs with a bounded span can have an arbitrarily large diameter, which suggests that bounds with a dependence on diameter can be extremely loose. We, therefore, propose another algorithm that only depends on span but relies on a computationally stronger oracle. Our algorithms outperform the previous near-optimal algorithms on computer network administrator simulations. |
Tasks | |
Published | 2020-02-06 |
URL | https://arxiv.org/abs/2002.02302v1 |
https://arxiv.org/pdf/2002.02302v1.pdf | |
PWC | https://paperswithcode.com/paper/near-optimal-reinforcement-learning-in-3 |
Repo | |
Framework | |
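A hypothetical sketch of the factored MDP structure the algorithms above exploit (the topology, scopes, and probabilities below are made up): each state factor transitions according to a small conditional table over its scope, so the transition model needs far fewer parameters than a flat MDP over the joint state.

```python
import numpy as np

# Hypothetical factored MDP: each binary state factor (e.g., a machine in a
# computer network being up or down) transitions according to a small table
# over its "scope" (here: its left neighbour, itself, and the action).
n_factors, n_actions = 8, 2
rng = np.random.default_rng(0)

scopes = [((i - 1) % n_factors, i) for i in range(n_factors)]   # ring topology
# P(factor i becomes 1 | values of its scope, action): one small table each.
tables = [rng.uniform(0.05, 0.95, size=(2, 2, n_actions)) for _ in range(n_factors)]

def step(state, action):
    """Sample the next joint state factor by factor."""
    return tuple(int(rng.uniform() < tables[i][state[j], state[i], action])
                 for i, (j, _) in enumerate(scopes))

state = tuple(int(v) for v in rng.integers(0, 2, size=n_factors))
for t in range(3):
    state = step(state, action=t % n_actions)
    print(state)

flat_params = (2 ** n_factors) * n_actions * (2 ** n_factors)
factored_params = n_factors * 2 * 2 * n_actions
print(f"flat transition parameters: {flat_params}, factored: {factored_params}")
```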
Generative-based Airway and Vessel Morphology Quantification on Chest CT Images
Title | Generative-based Airway and Vessel Morphology Quantification on Chest CT Images |
Authors | Pietro Nardelli, James C. Ross, Raúl San José Estépar |
Abstract | Accurately and precisely characterizing the morphology of small pulmonary structures from Computed Tomography (CT) images, such as airways and vessels, is becoming of great importance for diagnosis of pulmonary diseases. The smaller conducting airways are the major site of increased airflow resistance in chronic obstructive pulmonary disease (COPD), while accurately sizing vessels can help identify arterial and venous changes in lung regions that may determine future disorders. However, traditional methods are often limited due to image resolution and artifacts. We propose a Convolutional Neural Regressor (CNR) that provides cross-sectional measurement of airway lumen, airway wall thickness, and vessel radius. CNR is trained with data created by a generative model of synthetic structures which is used in combination with Simulated and Unsupervised Generative Adversarial Network (SimGAN) to create simulated and refined airways and vessels with known ground-truth. For validation, we first use synthetically generated airways and vessels produced by the proposed generative model to compute the relative error and directly evaluate the accuracy of CNR in comparison with traditional methods. Then, in-vivo validation is performed by analyzing the association between the percentage of the predicted forced expiratory volume in one second (FEV1%) and the value of the Pi10 parameter, two well-known measures of lung function and airway disease, for airways. For vessels, we assess the correlation between our estimate of the small-vessel blood volume and the lungs’ diffusing capacity for carbon monoxide (DLCO). The results demonstrate that Convolutional Neural Networks (CNNs) provide a promising direction for accurately measuring vessels and airways on chest CT images with physiological correlates. |
Tasks | Computed Tomography (CT) |
Published | 2020-02-13 |
URL | https://arxiv.org/abs/2002.05702v2 |
https://arxiv.org/pdf/2002.05702v2.pdf | |
PWC | https://paperswithcode.com/paper/generative-based-airway-and-vessel-morphology |
Repo | |
Framework | |
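The measurement-by-regression idea can be sketched with a generic convolutional regressor trained on crude synthetic cross-sections (this stands in for, but is not, the paper's CNR and SimGAN-refined generative model; the architecture and image model are arbitrary):

```python
import numpy as np
import torch
import torch.nn as nn

def synthetic_airway(radius, wall, size=32):
    """Crude stand-in for a generative model of cross-sections: a bright ring
    (airway wall) with given inner radius and thickness on a dark background."""
    y, x = np.mgrid[:size, :size]
    r = np.hypot(x - size / 2, y - size / 2)
    img = ((r >= radius) & (r <= radius + wall)).astype(np.float32)
    return img + 0.05 * np.random.randn(size, size).astype(np.float32)

def make_batch(n):
    radii = np.random.uniform(3, 8, size=n).astype(np.float32)
    walls = np.random.uniform(1, 4, size=n).astype(np.float32)
    imgs = np.stack([synthetic_airway(r, w) for r, w in zip(radii, walls)])
    return (torch.from_numpy(imgs).unsqueeze(1),
            torch.from_numpy(np.stack([radii, walls], axis=1)))

# A small convolutional regressor predicting (lumen radius, wall thickness).
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 8 * 8, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    imgs, targets = make_batch(64)
    loss = nn.functional.mse_loss(model(imgs), targets)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 50 == 0:
        print(f"step {step}: loss {loss.item():.3f}")

test_imgs, test_targets = make_batch(3)
print("predicted:", model(test_imgs).detach())
print("true:     ", test_targets)
```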