Authors:
Buddhika Nettasinghe; Vikram Krishnamurthy
Pages: 1 - 14 Abstract: Depending on its initial adopters, an innovation can either be adopted by a large number of people or die away quickly without spreading. Therefore, an idea central to many application domains, such as viral marketing and message spreading, is influence maximization: selecting a set of initial adopters from a social network that can cause a massive spread of an innovation (or, more generally, an idea, a product, or a message). To this end, we consider the problem of randomized influence maximization over a Markovian graph process: given a fixed set of individuals whose connectivity graph is evolving as a Markov chain, estimate the probability distribution (over this fixed set of nodes) that samples an individual who can initiate the largest information cascade (in expectation). Further, it is assumed that the sampling process affects the evolution of the graph, i.e., the sampling distribution and the transition probability matrix are functionally dependent. In this setup, recursive stochastic optimization algorithms are presented to estimate the optimal sampling distribution for two cases: 1) the transition probabilities of the graph are unknown, but the graph can be observed perfectly; and 2) the transition probabilities of the graph are known, but the graph is observed in noise. These algorithms consist of a neighborhood size estimation algorithm combined with a variance reduction method, a Bayesian filter, and a stochastic gradient algorithm. Convergence of the algorithms is established theoretically, and numerical results are provided to illustrate how the algorithms work. PubDate:
March 2019
Issue No: Vol. 5, No. 1 (2019)
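As a rough illustration of the stochastic-gradient idea in this entry (not the authors' algorithm), the sketch below runs projected stochastic gradient ascent on a seed-sampling distribution, using simulated cascade sizes as noisy gradients. The graph, the independent-cascade model, the step sizes, and the one-sample-per-node gradient estimate are all illustrative assumptions; in particular, the graph here is fixed, whereas the paper's harder setting lets the sampling distribution also drive the Markovian evolution of the graph.

```python
# Illustrative sketch only: projected stochastic gradient ascent on a
# seed-sampling distribution, with cascade sizes from a hypothetical graph
# serving as noisy gradients.
import numpy as np

rng = np.random.default_rng(0)
n = 20
A = (rng.random((n, n)) < 0.15).astype(float)    # hypothetical directed graph
np.fill_diagonal(A, 0)

def cascade_size(seed, p=0.4):
    """One independent-cascade run started from `seed` (assumed spread model)."""
    active, frontier = {seed}, [seed]
    while frontier:
        nxt = []
        for u in frontier:
            for v in np.flatnonzero(A[u]):
                if int(v) not in active and rng.random() < p:
                    active.add(int(v))
                    nxt.append(int(v))
        frontier = nxt
    return len(active)

def project_simplex(v):
    """Euclidean projection onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.flatnonzero(u - css / np.arange(1, len(v) + 1) > 0)[-1]
    return np.maximum(v - css[rho] / (rho + 1), 0.0)

theta = np.full(n, 1.0 / n)                       # sampling distribution estimate
for k in range(1, 501):
    grad = np.array([cascade_size(i) for i in range(n)])   # noisy gradient
    theta = project_simplex(theta + (0.05 / np.sqrt(k)) * grad)

print("node with the largest sampling probability:", int(np.argmax(theta)))
print("estimated sampling distribution:", np.round(theta, 3))
```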
Authors:
Swatantra Kafle; Vipul Gupta; Bhavya Kailkhura; Thakshila Wimalajeewa; Pramod K. Varshney
Pages: 15 - 30 Abstract: In this paper, we study the problem of joint sparse support recovery with 1-b quantized compressive measurements in a distributed sensor network. Multiple nodes in the network are assumed to observe sparse signals having the same but unknown sparse support. Each node quantizes its measurement vector element-wise to 1 b. First, we consider that all the quantized measurements are available at a central fusion center. We derive performance bounds for sparsity pattern recovery using 1-bit quantized measurements from multiple sensors when the maximum likelihood decoder is employed. We further develop two computationally tractable algorithms for joint sparse support recovery in the centralized setting. One algorithm minimizes a cost function defined as the sum of the likelihood function and the $l_{1,\infty}$ quasi-norm, while the other algorithm extends the binary iterative hard thresholding algorithm to the multiple measurement vector case. Second, we consider a decentralized setting where each node transmits 1-b measurements to its one-hop neighbors. The basic idea behind the algorithms developed in the decentralized setting is to embed collaboration among nodes and fusion strategies. We show that even with noisy 1-b compressed measurements, joint support recovery can be carried out accurately in both centralized and decentralized settings. We further show that the performance of the proposed 1-bit compressive sensing-based algorithms is very close to that of their real-valued counterparts except when the signal-to-noise ratio is very small. PubDate:
March 2019
Issue No: Vol. 5, No. 1 (2019)
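A minimal sketch, under simplified assumptions, of the second centralized algorithm's main ingredient: a binary iterative hard thresholding (BIHT)-style update extended to the multiple measurement vector case, where all columns share a common row support and only sign (1-bit) measurements are available. The problem sizes, step size, iteration count, and the single common sensing matrix are assumptions, not the paper's settings.

```python
# Illustrative BIHT-style recovery with joint (row) sparsity across sensors.
import numpy as np

rng = np.random.default_rng(1)
n, m, L, k = 100, 60, 5, 4           # dimension, bits per sensor, sensors, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # common sensing matrix (assumption)

support = rng.choice(n, k, replace=False)
X_true = np.zeros((n, L))
X_true[support] = rng.standard_normal((k, L))
Y = np.sign(A @ X_true)              # 1-bit quantized measurements, one column per sensor

X = np.zeros((n, L))
mu = 1.0
for _ in range(200):
    X = X + (mu / m) * A.T @ (Y - np.sign(A @ X))   # sign-consistency gradient step
    row_energy = np.linalg.norm(X, axis=1)          # enforce a common row support
    keep = np.argsort(row_energy)[-k:]
    mask = np.zeros(n, dtype=bool)
    mask[keep] = True
    X[~mask] = 0.0                                  # row-wise hard thresholding

print("true support:     ", np.sort(support))
print("estimated support:", np.sort(keep))
```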
Authors:
Dušan Jakovetić
Pages: 31 - 46 Abstract: Recently, there has been significant progress in the development of distributed first-order methods. In particular, Shi et al. (2015) on the one hand and Qu and Li (2017) and Nedic et al. (2016) on the other hand propose two different types of methods that are designed from very different perspectives. They achieve both exact and linear convergence when a constant step size is used—a favorable feature that was not achievable by most prior methods. In this paper, we unify, generalize, and improve the convergence speed of the methods by Shi et al. (2015), Qu and Li (2017), and Nedic et al. (2016), when the underlying network is static and undirected. We first carry out a unifying primal-dual analysis that sheds light on how these methods compare. The analysis reveals that a major difference between the methods is in how the primal error affects the dual error along iterations. We then capitalize on the insights from the analysis to derive a novel method that can reduce the negative effect of the primal error on the dual error. We establish a global R-linear convergence rate for the proposed generalized method under strongly convex costs with Lipschitz continuous gradients. PubDate:
March 2019
Issue No: Vol. 5, No. 1 (2019)
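For concreteness, here is a minimal sketch of one of the method families this entry unifies: an EXTRA-style iteration (Shi et al., 2015) on an illustrative 5-node ring with quadratic local costs, whose exact minimizer is the average of the local data. The mixing matrix, step size, and cost functions are assumptions; the paper's generalized method and its R-linear rate analysis are not reproduced here.

```python
# Illustrative EXTRA-style distributed first-order iteration on a 5-node ring.
import numpy as np

n = 5
b = np.array([1.0, 3.0, -2.0, 5.0, 0.5])        # local data; optimum is mean(b)
W = np.zeros((n, n))                            # mixing weights on a ring
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0
    W[i, i] = 1.0 / 3.0
W_tilde = (np.eye(n) + W) / 2.0

def grad(x):
    # Stacked local gradients of f_i(x_i) = 0.5 * (x_i - b_i)^2.
    return x - b

alpha = 0.5                                     # assumed constant step size
x_prev = np.zeros(n)
x_curr = W @ x_prev - alpha * grad(x_prev)      # EXTRA initialization step
for _ in range(200):
    x_next = (np.eye(n) + W) @ x_curr - W_tilde @ x_prev \
             - alpha * (grad(x_curr) - grad(x_prev))
    x_prev, x_curr = x_curr, x_next

print("per-node estimates:", np.round(x_curr, 4), " target:", round(b.mean(), 4))
```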
Authors:
Jiani Liu; Elvin Isufi; Geert Leus
Pages: 47 - 60 Abstract: In the field of signal processing on graphs, graph filters play a crucial role in processing the spectrum of graph signals. This paper proposes two different strategies for designing autoregressive moving average (ARMA) graph filters on both directed and undirected graphs. The first approach is inspired by Prony's method, which considers a modified error between the modeled and the desired frequency response. The second technique is based on an iterative approach, which finds the filter coefficients by iteratively minimizing the true error (instead of the modified error) between the modeled and the desired frequency response. The performance of the proposed algorithms is evaluated and compared with that of finite impulse response (FIR) graph filters on both synthetic and real data. The obtained results show that ARMA filters outperform FIR filters in terms of approximation accuracy and are suitable for graph signal interpolation, compression, and prediction. PubDate:
March 2019
Issue No: Vol. 5, No. 1 (2019)
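A minimal sketch of the Prony-inspired strategy described in this entry: the ARMA graph filter coefficients are obtained by least squares on the modified error a(λ)h_des(λ) − b(λ) over the graph frequencies, with a_0 fixed to 1, rather than the true error b(λ)/a(λ) − h_des(λ). The path graph, the ideal low-pass desired response, and the filter orders are illustrative assumptions.

```python
# Illustrative Prony-like (modified-error) ARMA graph filter design.
import numpy as np

n = 30
L = np.diag([1.0] + [2.0] * (n - 2) + [1.0])   # path-graph Laplacian
for i in range(n - 1):
    L[i, i + 1] = L[i + 1, i] = -1.0
lam = np.linalg.eigvalsh(L)                     # graph frequencies

h_des = (lam < 1.0).astype(float)               # ideal low-pass desired response
P, Q = 3, 3                                     # MA and AR filter orders

Vb = np.vander(lam, P + 1, increasing=True)     # columns lam^0 .. lam^P
Va_full = np.vander(lam, Q + 1, increasing=True)
Va = Va_full[:, 1:]                             # columns lam^1 .. lam^Q

# Modified error a(lam)*h_des(lam) - b(lam) with a_0 = 1 is linear in (a_rest, b).
M = np.hstack([h_des[:, None] * Va, -Vb])
coeffs, *_ = np.linalg.lstsq(M, -h_des, rcond=None)
a = np.concatenate(([1.0], coeffs[:Q]))
b = coeffs[Q:]

h_fit = (Vb @ b) / (Va_full @ a)                # resulting ARMA frequency response
print("mean absolute (true) error of the fit:",
      round(float(np.mean(np.abs(h_fit - h_des))), 3))
```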
Authors:
Xianye Bu; Hongli Dong; Zidong Wang; Hongjian Liu
Pages: 61 - 69 Abstract: This paper is concerned with a new non-fragile distributed fault estimation problem for a class of time-varying systems through sensor networks over a finite horizon. Both randomly occurring nonlinearities and randomly occurring gain variations, whose occurrences are governed by random variables obeying certain probabilistic distributions on the interval $[0,1]$, are taken into simultaneous consideration. The fault estimation is based on information not only from the individual sensor, but also from its neighboring ones according to the given topology of the sensor network. The aim of this paper is to design a non-fragile distributed fault estimator such that, in the presence of certain drifts/variations/perturbations of the gain parameters during the implementation, a prescribed average $H_{\infty}$ performance constraint on the estimation error dynamics is satisfied. Based on stochastic analysis techniques, the gain parameters are calculated by solving a series of recursive linear matrix inequalities over a finite horizon. Moreover, a simulation example is provided to show the effectiveness of the proposed fault estimation scheme. PubDate:
March 2019
Issue No: Vol. 5, No. 1 (2019)
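The following is a structure-only sketch of distributed fault estimation over a sensor network, loosely in the spirit of this entry but not the paper's non-fragile design: each node estimates a scalar plant state and an additive fault by fusing its own output innovation with its neighbors' innovations, using fixed, hand-picked gains. The plant, the 3-node topology, and the gains are assumptions; the paper instead computes time-varying gains by solving recursive linear matrix inequalities to meet an average H-infinity constraint.

```python
# Structure-only illustration of distributed (neighbor-fusing) fault estimation.
import numpy as np

rng = np.random.default_rng(3)
a, bf = 0.9, 1.0                              # scalar plant: x+ = a*x + bf*f + w
neighbors = {0: [1], 1: [0, 2], 2: [1]}       # assumed path-graph sensor network
N, T = 3, 100
kx, kf, kappa = 0.3, 0.2, 0.1                 # fixed illustrative estimator gains

x, f = 1.0, 0.0
xh = np.zeros(N)                              # local state estimates
fh = np.zeros(N)                              # local fault estimates

for k in range(T):
    f = 1.0 if k >= 30 else 0.0               # fault steps in at k = 30
    y = x + 0.01 * rng.standard_normal(N)     # each node measures the state
    innov = y - xh
    for i in range(N):
        fused = innov[i] + kappa * sum(innov[j] for j in neighbors[i])
        xh[i] = a * xh[i] + bf * fh[i] + kx * fused
        fh[i] = fh[i] + kf * fused
    x = a * x + bf * f + 0.01 * rng.standard_normal()

print("true fault:", f, " node fault estimates:", np.round(fh, 3))
```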
Authors:
Benedito J. B. Fonseca
Pages: 70 - 85 Abstract: In a distributed sensor system to detect a target emitter in a large region of interest, only a small subset of sensors is close enough to the emitter to collect strong measurements, which motivates the use of fusion rules based on the scan statistic. However, it is difficult to design the scan statistic fusion rule because the analytical derivation of the probability of false alarm is intractable. Elaborate upper bounds for this probability are available; however, they are difficult to apply and are applicable only in certain scenarios and conditions. Simple upper bounds are also available; however, they are usually dismissed for being too loose and leading to overly conservative designs. This paper argues that, when designing sensor detection systems at typical configurations, one of these simple upper bounds (the product bound) is not as loose and the resulting design is not as conservative as generally considered. A simple lower bound is also proposed to allow a designer to evaluate how conservative the resulting design is. This paper further shows that, for a large class of distributions, the design based on the product bound becomes less and less conservative as the size of the region increases. Application of the results to scenarios with irregular sensor placement is also presented. PubDate:
March 2019
Issue No: Vol. 5, No. 1 (2019)
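A small numerical sketch of the bounds discussed in this entry, assuming standard Gaussian sensor noise on a one-dimensional line of sensors and a scan statistic equal to the maximum windowed sum: a Monte Carlo estimate of the false alarm probability is compared against the simple product bound (windows treated as independent) and the trivial single-window lower bound. The sensor model, window length, and threshold are assumptions.

```python
# Illustrative comparison: Monte Carlo P_FA vs. product bound and lower bound.
import math
import numpy as np

rng = np.random.default_rng(4)
M, w, t = 200, 10, 11.0                     # sensors, window length, threshold
J = M - w + 1                               # number of scan windows

p1 = 0.5 * math.erfc(t / math.sqrt(2 * w))  # single-window false alarm probability
product_bound = 1.0 - (1.0 - p1) ** J       # simple upper bound on P_FA
lower_bound = p1                            # simple lower bound on P_FA

trials, hits = 20000, 0
for _ in range(trials):
    z = rng.standard_normal(M)              # H0: noise only at every sensor
    window_sums = np.convolve(z, np.ones(w), mode="valid")
    hits += int(window_sums.max() >= t)
p_fa_mc = hits / trials

print(f"lower bound {lower_bound:.4f} <= MC estimate {p_fa_mc:.4f}"
      f" <= product bound {product_bound:.4f}")
```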
Authors:
Pirathayini Srikantha; Deepa Kundur
Pages: 86 - 99 Abstract: Rapid advancements in smart grid technologies have brought about the proliferation of intelligent and actuating power system components such as distributed generation, storage, and smart appliance units. Capitalizing fully on the potential benefits of these systems for sustainable and economical power generation, management, and delivery is currently a significant challenge due to issues of scalability, intermittency, and heterogeneity of the associated networks. In particular, vertically integrated and centralized power system management is no longer tractable for optimally coordinating these diverse devices at large scale while also accounting for the underlying complex physical grid constraints. To address these challenges, we propose a hierarchical signal processing framework for optimal power flow management whereby the cyber-physical network relationships of the modern grid are leveraged to enable intelligent decision-making by individual devices based on local constraints and external information. Decentralized and distributed techniques based on convex optimization and game theoretic constructs are employed for information exchanges and decision-making at each tier of the proposed framework. It is shown via theoretical and simulation studies that our technique allows for the seamless integration of power components into the grid with low computational and communication overhead while maintaining optimal, sustainable, and feasible grid operations. PubDate:
March 2019
Issue No: Vol. 5, No. 1 (2019)
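As a toy illustration of the decentralized convex-optimization coordination this entry builds on (not the paper's hierarchical framework), the sketch below uses textbook dual decomposition: each device picks its power from a local quadratic cost given a broadcast price, and a coordinator adjusts the price until the aggregate output meets demand. The cost coefficients, capacity limits, and demand are assumptions.

```python
# Illustrative price-based (dual decomposition) coordination of four devices.
import numpy as np

a = np.array([0.10, 0.08, 0.12, 0.09])       # local cost: a_i*p^2 + b_i*p (assumed)
b = np.array([2.0, 2.5, 1.8, 2.2])
p_max = np.array([40.0, 35.0, 50.0, 30.0])   # device capacity limits (assumed)
demand = 100.0

lam, step = 0.0, 0.004                       # dual variable (price) and step size
for _ in range(3000):
    # Each device solves min_p a_i*p^2 + b_i*p - lam*p  s.t.  0 <= p <= p_max.
    p = np.clip((lam - b) / (2 * a), 0.0, p_max)
    lam += step * (demand - p.sum())         # coordinator: dual ascent on the balance

print("price:", round(lam, 3), " dispatch:", np.round(p, 2),
      " total:", round(float(p.sum()), 2), "/", demand)
```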
Authors:
Ibrahim El Khalil Harrane; Rémi Flamary; Cédric Richard
Pages: 100 - 112 Abstract: The rise of digital and mobile communications has recently made the world more connected and networked, resulting in an unprecedented volume of data flowing between sources, data centers, or processes. While these data may be processed in a centralized manner, it is often more suitable to consider distributed strategies such as diffusion, as they are scalable and can handle large amounts of data by distributing tasks over networked agents. Although it is relatively simple to implement diffusion strategies over a cluster, it is challenging to deploy them in an ad hoc network with a limited energy budget for communication. In this paper, we introduce a diffusion LMS strategy that significantly reduces communication costs without compromising performance. Then, we analyze the proposed algorithm in the mean and mean-square sense. Next, we conduct numerical experiments to confirm the theoretical findings. Finally, we perform large-scale simulations to test the algorithm's efficiency in a scenario where energy is limited. PubDate:
March 2019
Issue No: Vol. 5, No. 1 (2019)
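A minimal sketch of adapt-then-combine diffusion LMS in which each node shares only a random subset of the entries of its intermediate estimate with its neighbors, as one simple way to cut communication cost. The partial-sharing rule, ring network, and signal model are assumptions and do not reproduce the paper's reduced-communication strategy or its mean and mean-square analysis.

```python
# Illustrative adapt-then-combine diffusion LMS with partial entry sharing.
import numpy as np

rng = np.random.default_rng(5)
N, M, T = 6, 8, 3000                          # nodes, filter length, iterations
w_true = rng.standard_normal(M)               # common unknown parameter vector
neighbors = {i: [(i - 1) % N, i, (i + 1) % N] for i in range(N)}  # ring + self
mu, shared = 0.01, 2                          # step size, entries sent per link

w = np.zeros((N, M))
for _ in range(T):
    psi = np.empty((N, M))
    for i in range(N):                        # adapt: local LMS update
        u = rng.standard_normal(M)
        d = u @ w_true + 0.05 * rng.standard_normal()
        psi[i] = w[i] + mu * (d - u @ w[i]) * u
    for i in range(N):                        # combine: fuse the received entries
        acc = psi[i].copy()
        for j in neighbors[i]:
            if j == i:
                continue
            recv = psi[i].copy()              # entries not received: keep own value
            idx = rng.choice(M, shared, replace=False)
            recv[idx] = psi[j, idx]           # the few entries node j transmits
            acc += recv
        w[i] = acc / len(neighbors[i])

print("network mean-square deviation:", float(np.mean((w - w_true) ** 2)))
```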
Authors:
César Asensio-Marco; Baltasar Beferull-Lozano
Pages: 113 - 126 Abstract: Signal and information processing tasks over wireless sensor networks can be successfully accomplished by means of a distributed implementation among the nodes. Existing distributed schemes are commonly based on iterative strategies that imply a huge demand for one-hop transmissions, which must be efficiently processed by the lower layers of the nodes. At the link layer, general-purpose medium access control (MAC) policies for wireless communications usually focus on avoiding collisions. These existing approaches result in a reduction of the number of simultaneous transmissions and, as a consequence, an underutilization of the channel. This leads to a decrease in the performance of the distributed tasks, since an efficient channel occupation is not generally accomplished. In this paper, we propose a new MAC protocol that, besides focusing on the reliability of the communications, provides an efficient channel occupation. While the former has a direct impact on the energy consumption of usually battery-powered devices, the latter affects the performance of the distributed task being executed. We include both aspects in a global utility function that the nodes, relying only on locally available information, aim to increase at every communication step. Furthermore, our proposal combines both unicast and broadcast scenarios in a unique framework. Through several simulation results, we show that our adaptive protocol outperforms related approaches from the literature. PubDate:
March 2019
Issue No: Vol. 5, No. 1 (2019)
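A toy slotted-ALOHA-style sketch of the general idea in this entry: each node adapts its transmission probability using only local feedback (success versus collision), trying to keep the channel occupied without sacrificing reliability. The single-collision-domain model and the multiplicative update constants are assumptions, not the paper's utility function or protocol.

```python
# Toy illustration: local adaptation of transmission probabilities in slotted access.
import numpy as np

rng = np.random.default_rng(6)
N, T = 8, 20000
q = np.full(N, 0.5)                          # per-node transmission probability
up, down = 1.05, 0.90                        # multiplicative adaptation factors
busy = success = 0

for _ in range(T):
    tx = rng.random(N) < q                   # who transmits in this slot
    k = int(tx.sum())
    busy += int(k > 0)
    success += int(k == 1)
    if k == 1:
        q[tx] = np.minimum(q[tx] * up, 0.99)     # reward the lone transmitter
    elif k > 1:
        q[tx] = np.maximum(q[tx] * down, 0.01)   # back off after a collision

print("channel occupation:", round(busy / T, 3),
      " successful-slot fraction:", round(success / T, 3),
      " mean q:", round(float(q.mean()), 3))
```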
Authors:
Jianping He; Lin Cai; Chengcheng Zhao; Peng Cheng; Xinping Guan
Pages: 127 - 138 Abstract: Privacy-preserving average consensus aims to guarantee the privacy of initial states while achieving asymptotic consensus on the exact average of the initial values. In this paper, this is achieved by adding variance-decaying and zero-sum random noises to the consensus process. However, there has been a lack of theoretical analysis to quantify the degree of data privacy protection. We introduce the maximum disclosure probability that other nodes can infer one node's initial state within a given small interval to quantify the data privacy. We utilize a novel privacy definition, named $(\alpha, \beta)$-data-privacy, to depict the relationship between the maximum disclosure probability and the estimation accuracy. Then, we prove that the general privacy-preserving average consensus provides $(\alpha, \beta)$-data-privacy, and obtain the closed-form expression of the relationship between $\alpha$ and $\beta$ given the noise distribution. We reveal that added noise with a uniform distribution is optimal in terms of achieving the highest $(\alpha, \beta)$-data-privacy. We also establish the conditions under which the data privacy will be compromised. Finally, an optimal privacy-preserving average consensus algorithm is proposed to achieve the highest $(\alpha, \beta)$-data-privacy. Simulations verify the analytical results. PubDate:
March 2019
Issue No: Vol. 5, No. 1 (2019)
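A minimal sketch of the kind of privacy-preserving average consensus this entry analyzes: each node perturbs its updates with variance-decaying noises whose running sum telescopes to zero, so the exact average is still reached asymptotically while the initial states are masked. The ring network, Metropolis weights, decay rate, and uniform noise (which the entry identifies as the optimal distribution) are illustrative choices.

```python
# Illustrative privacy-preserving average consensus with decaying, zero-sum noise.
import numpy as np

rng = np.random.default_rng(7)
n = 6
x0 = rng.uniform(0.0, 10.0, n)                # private initial states
W = np.zeros((n, n))                          # Metropolis-style weights on a ring
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0
    W[i, i] = 1.0 / 3.0

rho = 0.9                                     # noise-variance decay rate (assumed)
x = x0.copy()
nu_prev = np.zeros(n)
for k in range(300):
    nu = rho ** k * rng.uniform(-1.0, 1.0, n)      # decaying uniform noise
    theta = nu - nu_prev                           # zero-sum (telescoping) perturbation
    x = W @ x + theta                              # perturbed consensus update
    nu_prev = nu

print("true average:", round(float(x0.mean()), 4),
      " consensus values:", np.round(x, 4))
```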
Authors:
Pin-Yu Chen; Chun-Chen Tu; Paishun Ting; Ya-Yun Lo; Danai Koutra; Alfred O. Hero
Pages: 139 - 151 Abstract: Patterns of event propagation in online social networks provide novel insights on the modeling and analysis of information dissemination over networks and physical systems. This paper studies the importance of follower links for event propagation on Twitter. Three recent event propagation traces are collected, with the Twitter user language field being used to identify the Network of Networks (NoN) structure embedded in the Twitter follower networks. We first formulate event propagation on Twitter as an iterative state equation, and then propose an effective score function on follower links accounting for the containment of event propagation via link removals. Furthermore, we find that utilizing the NoN model can successfully identify influential follower links such that their removal leads to a remarkable reduction in event propagation on Twitter follower networks. Experimental results show that the between-network follower links, though they account for only a small portion of the total follower links, are crucial to event propagation on Twitter. PubDate:
March 2019
Issue No: Vol. 5, No. 1 (2019)
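A minimal sketch of the link-removal idea in this entry under a linear propagation model x(t+1) = A x(t), in which the spectral radius of A governs whether an event keeps spreading: each follower link is scored by the product of the dominant left and right eigenvector entries at its endpoints, and the top-scoring links are removed. This eigen-score and the random two-community graph (a crude stand-in for the NoN structure) are assumptions, not the paper's score function or data.

```python
# Illustrative eigen-score link removal on a synthetic two-community graph.
import numpy as np

rng = np.random.default_rng(8)
n = 60

def rand_block(p):
    return (rng.random((n // 2, n // 2)) < p).astype(float)

A = np.block([[rand_block(0.15), rand_block(0.02)],   # two dense "networks" joined
              [rand_block(0.02), rand_block(0.15)]])   # by sparse between-network links
np.fill_diagonal(A, 0)

def spectral_radius(M):
    return float(np.abs(np.linalg.eigvals(M)).max())

valsR, vecsR = np.linalg.eig(A)
right = np.abs(vecsR[:, np.argmax(valsR.real)].real)
valsL, vecsL = np.linalg.eig(A.T)
left = np.abs(vecsL[:, np.argmax(valsL.real)].real)

score = np.outer(left, right) * (A > 0)        # score each existing link (j -> i)
order = np.argsort(score, axis=None)[::-1]     # links sorted by decreasing score
rows, cols = np.unravel_index(order[:100], score.shape)

A_cut = A.copy()
A_cut[rows, cols] = 0.0                        # remove the 100 top-scoring links

# A smaller spectral radius means the linearized propagation dies out faster.
print("spectral radius before removals:", round(spectral_radius(A), 3),
      " after removals:", round(spectral_radius(A_cut), 3))
```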
Authors:
Matthew Nokleby; Waheed U. Bajwa
Pages: 152 - 167 Abstract: Motivated by machine learning applications in networks of sensors, Internet-of-Things devices, and autonomous agents, we propose techniques for distributed stochastic convex learning from high-rate data streams. The setup involves a network of nodes—each one of which has a stream of data arriving at a constant rate—that solve a stochastic convex optimization problem by collaborating with each other over rate-limited communication links. To this end, we present and analyze two algorithms—termed distributed stochastic approximation mirror descent (D-SAMD) and accelerated distributed stochastic approximation mirror descent (AD-SAMD)—that are based on two stochastic variants of mirror descent and in which nodes collaborate via approximate averaging of the local noisy subgradients using distributed consensus. Our main contributions are: 1) bounds on the convergence rates of D-SAMD and AD-SAMD in terms of the number of nodes, network topology, and ratio of the data streaming and communication rates; and 2) sufficient conditions for order-optimum convergence of these algorithms. In particular, we show that for sufficiently well-connected networks, distributed learning schemes can obtain order-optimum convergence even if the communication rate is small. Furthermore, we find that the use of accelerated methods significantly enlarges the regime in which order-optimum convergence is achieved; this is in contrast to the centralized setting, where accelerated methods usually offer only a modest improvement. Finally, we demonstrate the effectiveness of the proposed algorithms using numerical experiments. PubDate:
March 2019
Issue No: Vol. 5, No. 1 (2019)
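A minimal sketch of the collaboration pattern behind D-SAMD with a Euclidean mirror map (so the mirror-descent step reduces to plain stochastic gradient descent): each node draws a noisy gradient from its local stream, a few consensus rounds approximately average those gradients across the network, and every node then takes a descent step. The quadratic objective, ring topology, number of consensus rounds, and step sizes are assumptions; the paper's rate bounds and the accelerated variant are not reproduced.

```python
# Illustrative consensus-averaged stochastic gradient (Euclidean mirror map).
import numpy as np

rng = np.random.default_rng(9)
N, d, T = 8, 5, 2000                          # nodes, dimension, stream length
x_star = rng.standard_normal(d)               # common minimizer of the average cost
W = np.zeros((N, N))                          # mixing weights on a ring
for i in range(N):
    W[i, (i - 1) % N] = W[i, (i + 1) % N] = 1.0 / 3.0
    W[i, i] = 1.0 / 3.0

x = np.zeros((N, d))                          # one iterate per node
for t in range(1, T + 1):
    # Each node's streaming noisy gradient of f(x) = 0.5 * ||x - x_star||^2.
    g = (x - x_star) + 0.5 * rng.standard_normal((N, d))
    for _ in range(3):                        # a few consensus rounds on the gradients
        g = W @ g
    x = x - (1.0 / np.sqrt(t)) * g            # Euclidean mirror-descent step

print("average distance to the optimum:",
      round(float(np.mean(np.linalg.norm(x - x_star, axis=1))), 4))
```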
Authors:
Jing Yan; Ziqiang Xu; Xiaoyuan Luo; Cailian Chen; Xinping Guan
Pages: 168 - 180 Abstract: This paper investigates the problem of target localization in underwater sensor networks, subject to limited measurement range and accuracy. The localization process is divided into two phases, i.e., distance estimation and position solving. In the first phase, some sensor nodes cannot acquire direct distance measurements of the target, due to the highly dynamic and noisy characteristics of the underwater environment. Based on this, we formulate the distance measurement as a closed-loop control problem, and a proportional-integral estimator is designed for sensors to acquire the distance information through indirect measurements. With the estimated distance information, a consensus-based unscented Kalman filtering (UKF) algorithm is proposed in the second phase to localize the target, where direct and indirect measurements are fused to reduce the influence of malicious data. Moreover, stability conditions are provided to show that the distance estimator can stabilize the closed-loop system, and boundedness analyses are given to guarantee the localization accuracy. Finally, simulation results reveal that the proposed distance estimator can extend the measurement range of sensors compared with using direct measurements alone. Meanwhile, the consensus-based UKF algorithm can effectively improve the localization accuracy. PubDate:
March 2019
Issue No: Vol. 5, No. 1 (2019)
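A minimal sketch of just the first phase described in this entry: a proportional-integral (PI) estimator that treats distance estimation as a closed-loop tracking problem and smooths intermittent, noisy range measurements to a moving target. The target trajectory, measurement availability, noise level, and PI gains are assumptions; the second phase (consensus-based UKF fusion across sensors) is not reproduced here.

```python
# Illustrative PI (closed-loop) distance estimator with intermittent measurements.
import numpy as np

rng = np.random.default_rng(10)
T, dt = 400, 0.1
sensor = np.array([0.0, 0.0])
kp, ki = 0.4, 0.05                            # illustrative PI gains

d_hat, integ = 50.0, 0.0                      # initial distance guess, integral term
errors = []
for k in range(T):
    t = k * dt
    target = np.array([30.0 + 2.0 * t, 40.0 - 1.0 * t])   # assumed moving target
    d_true = float(np.linalg.norm(target - sensor))
    if rng.random() < 0.6:                    # a measurement arrives only sometimes
        d_meas = d_true + 2.0 * rng.standard_normal()
        e = d_meas - d_hat                    # innovation (feedback error)
        integ += e * dt
        d_hat += kp * e + ki * integ          # PI correction of the estimate
    errors.append(abs(d_hat - d_true))

print("mean abs distance error over the last 100 steps:",
      round(float(np.mean(errors[-100:])), 3))
```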
Authors:
Musa Furkan Keskin; Osman Erdem; Sinan Gezici
Pages: 181 - 197 Abstract: Light emitting diode (LED) based visible light positioning networks can provide accurate location information in environments where the global positioning system (GPS) suffers from severe signal degradation and/or cannot achieve high precision, such as indoor scenarios. In this paper, we propose to employ cooperative localization for hybrid infrared/visible light networks that involve multiple LED transmitters having known locations (e.g., on the ceiling) and visible light communication (VLC) units equipped with both LEDs and photodetectors (PDs) for the purpose of cooperation. In the considered scenario, downlink transmissions from LEDs on the ceiling to VLC units occur via visible light signals, while the infrared spectrum is utilized for device-to-device communications among VLC units. First, we derive the Cramér–Rao lower bound and the maximum likelihood estimator (MLE) for the localization of VLC units in the proposed cooperative scenario. To tackle the nonconvex structure of the MLE, we adopt a set-theoretic approach by formulating the problem of cooperative localization as a quasiconvex feasibility problem, where the aim is to find a point inside the intersection of convex constraint sets constructed as the sublevel sets of quasiconvex functions resulting from the Lambertian formula. Next, we devise two feasibility-seeking algorithms based on iterative gradient projections to solve the feasibility problem. Both algorithms are amenable to distributed implementation, thereby avoiding high-complexity centralized approaches. Capitalizing on the concept of quasi-Fejér convergent sequences, we carry out a formal convergence analysis to prove that the proposed algorithms converge to a solution of the feasibility problem in the consistent case. Numerical examples illustrate the improvements in localization performance achieved via cooperation among VLC units and evidence the convergence of the proposed algorithms to true VLC unit locations in both the consistent and inconsistent cases. PubDate:
March 2019
Issue No: Vol. 5, No. 1 (2019)
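A minimal sketch of the set-theoretic idea in this entry: each received-power measurement is turned into a ball constraint around the corresponding transmitter (a sublevel set of a simplified inverse-square path-loss model standing in for the full Lambertian formula), and two cooperating units alternately project onto their own anchor constraints and onto balls around each other's current estimates, in the style of projections onto convex sets. The 2-D geometry, the loosened radii, and the projection order are assumptions, not the paper's algorithms or channel model.

```python
# Illustrative feasibility-seeking (alternating ball projections) for two units.
import numpy as np

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # fixed LED anchors (2-D)
true_pos = np.array([[4.0, 3.0], [6.0, 7.0]])                # two cooperating VLC units

def range_bound(x, tx):
    # Pretend a received-power reading yields a slightly loose distance upper bound.
    return 1.05 * float(np.linalg.norm(x - tx))

def project_ball(x, center, radius):
    d = np.linalg.norm(x - center)
    return x if d <= radius else center + radius * (x - center) / d

r_anchor = [[range_bound(true_pos[u], a) for a in anchors] for u in range(2)]
r_coop = range_bound(true_pos[0], true_pos[1])               # inter-unit (infrared) link

est = np.array([[9.0, 9.0], [1.0, 1.0]])                     # poor initial guesses
for _ in range(200):
    for u in range(2):
        for a, r in zip(anchors, r_anchor[u]):               # anchor ball constraints
            est[u] = project_ball(est[u], a, r)
        est[u] = project_ball(est[u], est[1 - u], r_coop)    # cooperative constraint

print("true positions:\n", true_pos, "\nfeasible estimates:\n", np.round(est, 2))
```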