Entropy [SJR: 0.59] [H-I: 30] Open Access journal, ISSN 1099-4300, published by MDPI
- Entropy, Vol. 19, Pages 487: Real-Time Robust Voice Activity Detection
Using the Upper Envelope Weighted Entropy Measure and the Dual-Rate
Adaptive Nonlinear Filter
Authors: Wei Ong, Alan Tan, V. Vengadasalam, Cheah Tan, Thean Ooi
First page: 487
Abstract: Voice activity detection (VAD) is a vital process in voice communication systems to avoid unnecessary coding and transmission of noise. Most existing VAD algorithms suffer from high false alarm rates and low sensitivity when the signal-to-noise ratio (SNR) is low, at 0 dB and below. Others are designed to operate in offline mode or are impractical for implementation in actual devices due to high computational complexity. This paper proposes the upper envelope weighted entropy (UEWE) measure as a means to achieve high separation of speech and non-speech segments in voice communication. The asymmetric nonlinear filter (ANF) is employed in UEWE to extract the adaptive weight factor that is subsequently used to compensate for the noise effect. In addition, this paper introduces a dual-rate adaptive nonlinear filter (DANF) with high adaptivity to rapidly time-varying noise for computation of the decision threshold. Performance comparison with standard and recent VADs shows that the proposed algorithm is superior, especially in real-time practical applications.
Citation: Entropy
PubDate: 2017-10-28
DOI: 10.3390/e19110487
Issue No: Vol. 19, No. 11 (2017)
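The UEWE measure itself is not specified in the abstract, but the general idea behind entropy-based VAD is that speech frames have spectrally concentrated (low-entropy) spectra while broadband noise is spectrally flat (high-entropy). A minimal sketch of a plain per-frame spectral-entropy detector (the envelope weighting and the ANF/DANF stages of the paper are omitted; the frame length and threshold below are illustrative assumptions, not the paper's values):

```python
import numpy as np

def spectral_entropy(frame, n_fft=256, eps=1e-12):
    """Shannon entropy (bits) of the normalized magnitude spectrum of one frame."""
    spectrum = np.abs(np.fft.rfft(frame, n_fft))
    p = spectrum / (spectrum.sum() + eps)   # probability-like spectral mass
    return -np.sum(p * np.log2(p + eps))

def naive_entropy_vad(signal, frame_len=256, threshold=6.0):
    """Flag a frame as speech when its spectral entropy falls below threshold."""
    n_frames = len(signal) // frame_len
    flags = []
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        flags.append(spectral_entropy(frame, n_fft=frame_len) < threshold)
    return flags
```

A tonal frame yields a peaky spectrum and hence low entropy; a white-noise frame yields a nearly flat spectrum and entropy close to the maximum log2 of the number of bins.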
- Entropy, Vol. 19, Pages 537: Gravitational Contribution to the Heat Flux
in a Simple Dilute Fluid: An Approach Based on General Relativistic
Kinetic Theory to First Order in the Gradients
Authors: Dominique Brun-Battistini, Alfredo Sandoval-Villalbazo, Ana Garcia-Perciante
First page: 537
Abstract: Richard C. Tolman analyzed the relation between a temperature gradient and a gravitational field in an equilibrium situation. In 2012, Tolman’s law was generalized to a non-equilibrium situation for a simple dilute relativistic fluid. The result in that scenario, obtained by introducing the gravitational force through the molecular acceleration, couples the heat flux with the metric coefficients and the gradients of the state variables. In the present paper it is shown, by explicitly describing the single particle orbits as geodesics in Boltzmann’s equation, that a gravitational field drives a heat flux in this type of system. The calculation is devoted solely to the gravitational field contribution to this heat flux in which a Newtonian limit to the Schwarzschild metric is assumed. The corresponding transport coefficient, which is obtained within a relaxation approximation, corresponds to the dilute fluid in a weak gravitational field. The effect is negligible in the non-relativistic regime, as evidenced by the direct evaluation of the corresponding limit.
Citation: Entropy
PubDate: 2017-10-28
DOI: 10.3390/e19110537
Issue No: Vol. 19, No. 11 (2017)
- Entropy, Vol. 19, Pages 560: Partial and Entropic Information
Decompositions of a Neuronal Modulatory Interaction
Authors: Jim Kay, Robin Ince, Benjamin Dering, William Phillips
First page: 560
Abstract: Information processing within neural systems often depends upon selective amplification of relevant signals and suppression of irrelevant signals. This has been shown many times by studies of contextual effects but there is as yet no consensus on how to interpret such studies. Some researchers interpret the effects of context as contributing to the selective receptive field (RF) input about which neurons transmit information. Others interpret context effects as affecting transmission of information about RF input without becoming part of the RF information transmitted. Here we use partial information decomposition (PID) and entropic information decomposition (EID) to study the properties of a form of modulation previously used in neurobiologically plausible neural nets. PID shows that this form of modulation can affect transmission of information in the RF input without the binary output transmitting any information unique to the modulator. EID produces similar decompositions, except that information unique to the modulator and the mechanistic shared component can be negative when modulating and modulated signals are correlated. Synergistic and source shared components were never negative in the conditions studied. Thus, both PID and EID show that modulatory inputs to a local processor can affect the transmission of information from other inputs. Contrary to what was previously assumed, this transmission can occur without the modulatory inputs becoming part of the information transmitted, as shown by the use of PID with the model we consider. Decompositions of psychophysical data from a visual contrast detection task with surrounding context suggest that a similar form of modulation may also occur in real neural systems.
Citation: Entropy
PubDate: 2017-10-26
DOI: 10.3390/e19110560
Issue No: Vol. 19, No. 11 (2017)
- Entropy, Vol. 19, Pages 571: Transport Coefficients from Large Deviation
Functions
Authors: Chloe Gao, David Limmer
First page: 571
Abstract: We describe a method for computing transport coefficients from the direct evaluation of large deviation functions. This method is general, relying on only equilibrium fluctuations, and is statistically efficient, employing trajectory based importance sampling. Equilibrium fluctuations of molecular currents are characterized by their large deviation functions, which are scaled cumulant generating functions analogous to the free energies. A diffusion Monte Carlo algorithm is used to evaluate the large deviation functions, from which arbitrary transport coefficients are derivable. We find significant statistical improvement over traditional Green–Kubo based calculations. The systematic and statistical errors of this method are analyzed in the context of specific transport coefficient calculations, including the shear viscosity, interfacial friction coefficient, and thermal conductivity.
Citation: Entropy
PubDate: 2017-10-25
DOI: 10.3390/e19110571
Issue No: Vol. 19, No. 11 (2017)
- Entropy, Vol. 19, Pages 572: A Behavioural Analysis of Complexity in
Socio-Technical Systems under Tension Modelled by Petri Nets
Authors: Martin Ibl, Jan Čapek
First page: 572
Abstract: Complexity analysis of dynamic systems provides a better understanding of the internal behaviours that are associated with tension and efficiency, which in socio-technical systems may lead to innovation. One popular approach to the assessment of complexity is associated with self-similarity. The dynamic component of a dynamic system represents the relationships and interactions among its inner elements (and its surroundings) and fully describes its behaviour. The approach used in this work addresses complexity analysis in terms of system behaviour, i.e., the so-called behavioural analysis of complexity. The self-similarity of a system (structural or behavioural) can be determined, for example, using fractal geometry, whose toolbox provides a number of methods for the measurement of the so-called fractal dimension. Other instruments for measuring the self-similarity of a system include the Hurst exponent and the framework of complex system theory in general. The approach introduced in this work defines the complexity analysis of a socio-technical system under tension. The proposed procedure consists of modelling the key dynamic components of a discrete event dynamic system by any definition of Petri nets. From the stationary probabilities, one can then decide whether the system is self-similar using the abovementioned tools. In addition, the proposed approach allows for finding the critical values (phase transitions) of the analysed systems.
Citation: Entropy
PubDate: 2017-10-25
DOI: 10.3390/e19110572
Issue No: Vol. 19, No. 11 (2017)
- Entropy, Vol. 19, Pages 573: A New Definition of t-Entropy for Transfer
Operators
Authors: Victor Bakhtin, Andrei Lebedev
First page: 573
Abstract: This article presents a new definition of t-entropy that makes it more explicit and simplifies the process of its calculation.
Citation: Entropy
PubDate: 2017-10-25
DOI: 10.3390/e19110573
Issue No: Vol. 19, No. 11 (2017)
- Entropy, Vol. 19, Pages 574: Forewarning Model of Regional Water Resources
Carrying Capacity Based on Combination Weights and Entropy Principles
Authors: Rongxing Zhou, Zhengwei Pan, Juliang Jin, Chunhui Li, Shaowei Ning
First page: 574
Abstract: As a new development in evaluating the regional water resources carrying capacity, forewarning regional water resources of their carrying capacities is an important adjustment and control measure for regional water security management. Up to now, most research on this issue has been qualitative, with a lack of quantitative study. For this reason, this paper establishes an index system and grade standards for forewarning regional water resources of their carrying capacities, applied to Anhui Province, China. Subjective weights of the forewarning indices are calculated using a fuzzy analytic hierarchy process based on an accelerating genetic algorithm, while objective weights are calculated using a projection pursuit method based on an accelerating genetic algorithm. These two kinds of weights are combined into combination weights of the forewarning indices by using the minimum relative information entropy principle. Furthermore, a forewarning model of regional water resources carrying capacity based on entropy combination weights is put forward. The model fully integrates subjective and objective information in the forewarning process. The results show that the calculation results of the model are reasonable and that the method has high adaptability. Therefore, this model is worth studying and popularizing.
Citation: Entropy
PubDate: 2017-10-25
DOI: 10.3390/e19110574
Issue No: Vol. 19, No. 11 (2017)
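The minimum relative information entropy combination mentioned above has a well-known closed form: minimizing the summed Kullback-Leibler divergence of the combined weights w to the subjective weights s and to the objective weights o, subject to the weights summing to 1, gives w_i proportional to sqrt(s_i * o_i). A sketch (the example weight values in the test are illustrative, not the paper's):

```python
import numpy as np

def combine_weights(subjective, objective):
    """Combination weights minimizing the sum of relative entropies (KL
    divergences) to the subjective and objective weight vectors, subject to
    sum(w) = 1. The closed-form solution is w_i proportional to sqrt(s_i * o_i)."""
    s = np.asarray(subjective, dtype=float)
    o = np.asarray(objective, dtype=float)
    w = np.sqrt(s * o)          # geometric mean of the two weightings
    return w / w.sum()          # renormalize to a weight vector
```

When the subjective and objective weights already agree, the combination reproduces them unchanged; otherwise it interpolates geometrically between the two.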
- Entropy, Vol. 19, Pages 575: The Isolated Electron: De Broglie’s Hidden
Thermodynamics, SU(2) Quantum Yang-Mills Theory, and a Strongly Perturbed
BPS Monopole
Authors: Ralf Hofmann
First page: 575
Abstract: Based on a recent numerical simulation of the temporal evolution of a spherically perturbed BPS monopole, SU(2) Yang-Mills thermodynamics, Louis de Broglie’s deliberations on the disparate Lorentz transformations of the frequency of an internal “clock” on one hand and the associated quantum energy on the other hand, and postulating that the electron is represented by a figure-eight shaped, self-intersecting center vortex loop in SU(2) Quantum Yang-Mills theory, we estimate the spatial radius R_0 of this self-intersection region in terms of the electron’s Compton wavelength λ_C. This region, which is immersed into the confining phase, constitutes a blob of deconfining phase of temperature T_0 mildly above the critical temperature T_c, carrying a frequently perturbed BPS monopole (with a magnetic-electric dual interpretation of its charge w.r.t. U(1)⊂SU(2)). We also establish a quantitative relation between the rest mass m_0 of the electron and the SU(2) Yang-Mills scale Λ, which in turn is defined via T_c. Surprisingly, R_0 turns out to be comparable to the Bohr radius, while the core size of the monopole matches λ_C, and the correction to the mass of the electron due to Coulomb energy is about 2%.
Citation: Entropy
PubDate: 2017-10-26
DOI: 10.3390/e19110575
Issue No: Vol. 19, No. 11 (2017)
- Entropy, Vol. 19, Pages 576: Feynman’s Ratchet and Pawl with Ecological
Criterion: Optimal Performance versus Estimation with Prior Information
Authors: Varinder Singh, Ramandeep Johal
First page: 576
Abstract: We study the optimal performance of Feynman’s ratchet and pawl, a paradigmatic model in nonequilibrium physics, using the ecological criterion as the objective function. The analysis is performed by two different methods: (i) a two-parameter optimization over internal energy scales; and (ii) a one-parameter optimization of the estimate for the objective function, after averaging over the prior probability distribution (Jeffreys’ prior) for one of the uncertain internal energy scales. We study the model in both engine and refrigerator modes. We derive expressions for the efficiency/coefficient of performance (COP) at maximum ecological function. The expressions from the two methods are found to agree closely in near-equilibrium situations. Furthermore, the expressions obtained by the second method (with estimation) agree with those obtained in finite-time thermodynamic models.
Citation: Entropy
PubDate: 2017-10-26
DOI: 10.3390/e19110576
Issue No: Vol. 19, No. 11 (2017)
- Entropy, Vol. 19, Pages 577: Stability and Complexity Analysis of a
Dual-Channel Closed-Loop Supply Chain with Delayed Decision under
Government Intervention
Authors: Daoming Dai, Fengshan Si, Jing Wang
First page: 577
Abstract: This paper constructs a continuous dual-channel closed-loop supply chain (DCLSC) model with delayed decision under government intervention. The existence conditions for local stability of the equilibrium point are discussed. We analyze the influence of the delay parameters, the adjustment speed of the wholesale price, the recovery rate of waste products, the direct price, the carbon quota subsidy, and the carbon tax on the stability and complexity of the model using bifurcation diagrams, entropy diagrams, attractors, and time-series diagrams. In addition, the delay feedback control method is adopted to control the unstable or chaotic system effectively. The main conclusions of this paper show that the variables mentioned above must remain within a reasonable range; otherwise, the model loses stability or enters chaos. The government can effectively adjust manufacturers’ profit through the carbon tax and carbon quota subsidy, and encourage manufacturers to reduce carbon emissions and increase the remanufacturing of waste products.
Citation: Entropy
PubDate: 2017-10-26
DOI: 10.3390/e19110577
Issue No: Vol. 19, No. 11 (2017)
- Entropy, Vol. 19, Pages 578: A Kernel-Based Intuitionistic Fuzzy C-Means
Clustering Using a DNA Genetic Algorithm for Magnetic Resonance Image
Segmentation
Authors: Wenke Zang, Weining Zhang, Wenqian Zhang, Xiyu Liu
First page: 578
Abstract: MRI segmentation is critically important for clinical study and diagnosis. Existing methods based on soft clustering have several drawbacks, including low accuracy in the presence of image noise and artifacts, and high computational cost. In this paper, we introduce a new formulation of the MRI segmentation problem as a kernel-based intuitionistic fuzzy C-means (KIFCM) clustering problem and propose a new DNA-based genetic algorithm to obtain the optimal KIFCM clustering. While this algorithm searches the solution space for the optimal model parameters, it also obtains the optimal clustering, and therefore the optimal MRI segmentation. We perform an empirical study comparing our method with six state-of-the-art soft clustering methods using a set of UCI (University of California, Irvine) datasets and a set of synthetic and clinical MRI datasets. The preliminary results show that our method outperforms the other methods in both clustering metrics and computational efficiency.
Citation: Entropy
PubDate: 2017-10-27
DOI: 10.3390/e19110578
Issue No: Vol. 19, No. 11 (2017)
- Entropy, Vol. 19, Pages 579: Thermodynamic Modelling of Supersonic Gas
Ejector with Droplets
Authors: Sergio Croquer, Sébastien Poncet, Zine Aidoun
First page: 579
Abstract: This study presents a thermodynamic model for determining the entrainment ratio and double-choke limiting pressure of supersonic ejectors within the context of heat-driven refrigeration cycles, with and without droplet injection at the constant-area section of the device. Input data include the inlet operating conditions and key geometry parameters (primary throat, mixing section and diffuser outlet diameters), whereas output information includes the ejector entrainment ratio, maximum double-choke compression ratio, ejector efficiency, exergy efficiency and exergy destruction index. In single-phase operation, the ejector entrainment ratio and double-choke limiting pressure are determined with mean accuracies of 18% and 2.5%, respectively. In two-phase operation, the choked mass flow rate across convergent-divergent nozzles is estimated with a deviation of 10%. An analysis of the effect of droplet injection confirms the hypothesis that droplet injection reduces by 8% the pressure and Mach number jumps associated with the shock waves occurring at the end of the constant-area section. Nonetheless, other factors, such as the mixing of the droplets with the main flow, are introduced, resulting in an overall reduction of 11% in the ejector efficiency and 15% in the exergy efficiency.
Citation: Entropy
PubDate: 2017-10-30
DOI: 10.3390/e19110579
Issue No: Vol. 19, No. 11 (2017)
- Entropy, Vol. 19, Pages 580: Thermodynamic Analysis for Buoyancy-Induced
Couple Stress Nanofluid Flow with Constant Heat Flux
Authors: Samuel Adesanya, Hammed Ogunseye, J. Falade, R.S. Lebelo
First page: 580
Abstract: This paper addresses entropy generation in the flow of an electrically-conducting couple stress nanofluid through a vertical porous channel subjected to constant heat flux. By using the Buongiorno model, equations for momentum, energy, and nanofluid concentration are modelled, solved using homotopy analysis and furthermore, solved numerically. The variations of significant fluid parameters with respect to fluid velocity, temperature, nanofluid concentration, entropy generation, and irreversibility ratio are investigated, presented graphically, and discussed based on physical laws.
Citation: Entropy
PubDate: 2017-10-29
DOI: 10.3390/e19110580
Issue No: Vol. 19, No. 11 (2017)
- Entropy, Vol. 19, Pages 581: Entropy Production in Stochastics
Authors: Demetris Koutsoyiannis
First page: 581
Abstract: While the modern definition of entropy is genuinely probabilistic, in entropy production the classical thermodynamic definition, as in heat transfer, is typically used. Here we explore the concept of entropy production within stochastics and, particularly, two forms of entropy production in logarithmic time, unconditionally (EPLT) or conditionally on the past and present having been observed (CEPLT). We study the theoretical properties of both forms, in general and in application to a broad set of stochastic processes. A main question investigated, related to model identification and fitting from data, is how to estimate the entropy production from a time series. It turns out that there is a link of the EPLT with the climacogram, and of the CEPLT with two additional tools introduced here, namely the differenced climacogram and the climacospectrum. In particular, EPLT and CEPLT are related to slopes of log-log plots of these tools, with the asymptotic slopes at the tails being most important as they justify the emergence of scaling laws of second-order characteristics of stochastic processes. As a real-world application, we use an extraordinarily long time series of turbulent velocity and show how a parsimonious stochastic model can be identified and fitted using the tools developed.
Citation: Entropy
PubDate: 2017-10-30
DOI: 10.3390/e19110581
Issue No: Vol. 19, No. 11 (2017)
- Entropy, Vol. 19, Pages 582: Challenging Recently Published Parameter Sets
for Entropy Measures in Risk Prediction for End-Stage Renal Disease
Patients
Authors: Stefan Hagmair, Martin Bachler, Matthias Braunisch, Georg Lorenz, Christoph Schmaderer, Anna-Lena Hasenau, Lukas Stülpnagel, Axel Bauer, Kostantinos Rizas, Siegfried Wassertheurer, Christopher Mayer
First page: 582
Abstract: Heart rate variability (HRV) analysis is a non-invasive tool for assessing cardiac health. Entropy measures quantify the chaotic properties of HRV, but they are sensitive to the choice of their required parameters. Previous studies therefore have performed parameter optimization, targeting solely their particular patient cohort. In contrast, this work aimed to challenge entropy measures with recently published parameter sets, without time-consuming optimization, for risk prediction in end-stage renal disease patients. Approximate entropy, sample entropy, fuzzy entropy, fuzzy measure entropy, and corrected approximate entropy were examined. In total, 265 hemodialysis patients from the ISAR (rISk strAtification in end-stage Renal disease) study were analyzed. Throughout a median follow-up time of 43 months, 70 patients died. Fuzzy entropy and corrected approximate entropy (CApEn) provided significant hazard ratios, which remained significant after adjustment for clinical risk factors from literature if an entropy maximizing threshold parameter was chosen. Revealing results were seen in the subgroup of patients with heart disease (HD) when setting the radius to a multiple of the data’s standard deviation (r = 0.2·σ); all entropies, except CApEn, predicted mortality significantly and remained significant after adjustment. Therefore, these two parameter settings seem to reflect different cardiac properties. This work shows the potential of entropy measures for cardiovascular risk stratification in cohorts the parameters were not optimized for, and it provides additional insights into the parameter choice.
Citation: Entropy
PubDate: 2017-10-31
DOI: 10.3390/e19110582
Issue No: Vol. 19, No. 11 (2017)
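Sample entropy with the r = 0.2·σ tolerance convention mentioned in the abstract can be sketched as follows; this is the textbook definition (embedding dimension m = 2, Chebyshev distance, self-matches excluded), not the authors' optimized parameter sets:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy with tolerance r = r_factor * std(x).
    Returns -ln(A/B), where B counts template matches of length m and A
    counts matches of length m+1, under the Chebyshev distance."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    n = len(x)

    def count_matches(length):
        # Use the same number of templates (n - m) for both lengths so that
        # the ratio A/B is a conditional probability.
        templates = np.array([x[i:i + length] for i in range(n - m)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance to all later templates (self-matches excluded)
            d = np.abs(templates[i + 1:] - templates[i]).max(axis=1)
            count += np.sum(d < r)
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf
```

A regular signal (e.g., a slow sinusoid) yields low sample entropy; white noise yields high sample entropy, which is the separation these risk scores rely on.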
- Entropy, Vol. 19, Pages 583: Instance Selection for Classifier Performance
Estimation in Meta Learning
Authors: Marcin Blachnik
First page: 583
Abstract: Building an accurate prediction model is challenging and requires appropriate model selection. This process is very time consuming but can be accelerated with meta-learning: automatic model recommendation that estimates the performance of given prediction models without training them. Meta-learning utilizes metadata extracted from the dataset to effectively estimate the accuracy of the model in question. To achieve that goal, metadata descriptors must be gathered efficiently and must be informative enough to allow precise estimation of prediction accuracy. In this paper, a new type of metadata descriptor is analyzed. These descriptors are based on the compression level obtained from instance selection methods at the data-preprocessing stage. To verify their suitability, two types of experiments on real-world datasets have been conducted. In the first one, 11 instance selection methods were examined in order to validate the compression–accuracy relation for three classifiers: k-nearest neighbors (kNN), support vector machine (SVM), and random forest. From this analysis, two methods are recommended (instance-based learning type 2 (IB2) and edited nearest neighbor (ENN)), which are then compared with the state-of-the-art metaset descriptors. The obtained results confirm that the two suggested compression-based meta-features help to predict the accuracy of the base model much more accurately than the state-of-the-art solution.
Citation: Entropy
PubDate: 2017-11-01
DOI: 10.3390/e19110583
Issue No: Vol. 19, No. 11 (2017)
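Edited nearest neighbor (ENN), one of the two recommended instance selection methods, removes every instance whose k nearest neighbours mostly disagree with its label; the fraction removed is the compression-style meta-feature the paper builds on. A minimal sketch (k = 3 and Euclidean distance are the usual defaults, not necessarily the paper's settings):

```python
import numpy as np

def enn_select(X, y, k=3):
    """Edited Nearest Neighbour: keep an instance only if the majority of its
    k nearest neighbours (excluding itself) shares its label. Returns the
    indices of the retained instances."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    keep = []
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                       # exclude the instance itself
        nn = np.argsort(d)[:k]
        labels, counts = np.unique(y[nn], return_counts=True)
        if labels[np.argmax(counts)] == y[i]:
            keep.append(i)
    return np.array(keep)

def compression(X, y, k=3):
    """Fraction of the training set removed by ENN -- a metadata descriptor."""
    return 1.0 - len(enn_select(X, y, k)) / len(X)
```

On clean, well-separated data ENN removes almost nothing (compression near 0); noisy or overlapping class regions drive the compression up, which is what makes it informative about expected classifier accuracy.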
- Entropy, Vol. 19, Pages 584: Single-Cell Reprogramming in Mouse Embryo
Development through a Critical Transition State
Authors: Masa Tsuchiya, Alessandro Giuliani, Kenichi Yoshikawa
First page: 584
Abstract: Our previous work on the temporal development of the genome-expression profile in single-cell early mouse embryo indicated that reprogramming occurs via a critical transition state, where the critical-regulation pattern of the zygote state disappears. In this report, we unveil the detailed mechanism of how the dynamic interaction of thermodynamic states (critical states) enables the genome system to pass through the critical transition state to achieve genome reprogramming right after the late 2-cell state. Self-organized criticality (SOC) control of overall expression provides a snapshot of self-organization and explains the coexistence of critical states at a certain experimental time point. The time-development of self-organization is dynamically modulated by changes in expression flux between critical states through the cell nucleus milieu, where sequential global perturbations involving activation-inhibition of multiple critical states occur from the middle 2-cell to the 4-cell state. Two cyclic fluxes act as feedback flow and generate critical-state coherent oscillatory dynamics. Dynamic perturbation of these cyclic flows due to vivid activation of the ensemble of low-variance expression (sub-critical state) genes allows the genome system to overcome a transition state during reprogramming. Our findings imply that a universal mechanism of long-term global RNA oscillation underlies autonomous SOC control, and the critical gene ensemble at a critical point (CP) drives genome reprogramming. Identification of the corresponding molecular players will be essential for understanding single-cell reprogramming.
Citation: Entropy
PubDate: 2017-11-02
DOI: 10.3390/e19110584
Issue No: Vol. 19, No. 11 (2017)
- Entropy, Vol. 19, Pages 585: A Refined Composite Multivariate Multiscale
Fuzzy Entropy and Laplacian Score-Based Fault Diagnosis Method for Rolling
Bearings
Authors: Jinde Zheng, Deyu Tu, Haiyang Pan, Xiaolei Hu, Tao Liu, Qingyun Liu
First page: 585
Abstract: The vibration signals of rolling bearings are often nonlinear and non-stationary. Multiscale entropy (MSE) has been widely applied to measure the complexity of nonlinear mechanical vibration signals; however, at present many scholars use only single-channel vibration signals for fault diagnosis. In this paper, multiscale entropy in a multivariate framework, i.e., multivariate multiscale entropy (MMSE), is introduced to machinery fault diagnosis to improve the efficiency of fault identification as much as possible by using multi-channel vibration information. MMSE evaluates the multivariate complexity of synchronous multi-channel data and is an effective method for measuring complexity and mutual nonlinear dynamic relationships, but its statistical stability is poor. Refined composite multivariate multiscale fuzzy entropy (RCMMFE) was developed to overcome the problems existing in MMSE and was compared with MSE, multiscale fuzzy entropy, MMSE and multivariate multiscale fuzzy entropy by analyzing simulation data. Finally, a new fault diagnosis method for rolling bearings was proposed, based on RCMMFE for fault feature extraction, the Laplacian score, and a particle swarm optimization support vector machine (PSO-SVM) for automatic fault mode identification. The proposed method was compared with existing methods by analyzing experimental data, and the results indicate its effectiveness and superiority.
Citation: Entropy
PubDate: 2017-11-02
DOI: 10.3390/e19110585
Issue No: Vol. 19, No. 11 (2017)
- Entropy, Vol. 19, Pages 586: Discovering Potential Correlations via
Hypercontractivity
Authors: Hyeji Kim, Weihao Gao, Sreeram Kannan, Sewoong Oh, Pramod Viswanath
First page: 586
Abstract: Discovering a correlation from one variable to another variable is of fundamental scientific and practical interest. While existing correlation measures are suitable for discovering average correlation, they fail to discover hidden or potential correlations. To bridge this gap, (i) we postulate a set of natural axioms that we expect a measure of potential correlation to satisfy; (ii) we show that the rate of information bottleneck, i.e., the hypercontractivity coefficient, satisfies all the proposed axioms; (iii) we provide a novel estimator to estimate the hypercontractivity coefficient from samples; and (iv) we provide numerical experiments demonstrating that this proposed estimator discovers potential correlations among various indicators of WHO datasets, is robust in discovering gene interactions from gene expression time series data, and is statistically more powerful than the estimators for other correlation measures in binary hypothesis testing of canonical examples of potential correlations.
Citation: Entropy
PubDate: 2017-11-02
DOI: 10.3390/e19110586
Issue No: Vol. 19, No. 11 (2017)
- Entropy, Vol. 19, Pages 587: The Application of Dual-Tree Complex Wavelet
Transform (DTCWT) Energy Entropy in Misalignment Fault Diagnosis of
Doubly-Fed Wind Turbine (DFWT)
Authors: Yancai Xiao, Yi Hong, Xiuhai Chen, Weijia Chen
First page: 587
Abstract: Misalignment is one of the common faults for the doubly-fed wind turbine (DFWT), and the normal operation of the unit will be greatly affected under this state. Because it is difficult to obtain a large number of misaligned fault samples of wind turbines in practice, ADAMS and MATLAB are used to simulate the various misalignment conditions of the wind turbine transmission system to obtain the corresponding stator current in this paper. Then, the dual-tree complex wavelet transform is used to decompose and reconstruct the characteristic signal, and the dual-tree complex wavelet energy entropy is obtained from the reconstructed coefficients to form the feature vector of the fault diagnosis. Support vector machine is used as classifier and particle swarm optimization is used to optimize the relevant parameters of support vector machine (SVM) to improve its classification performance. The results show that the method proposed in this paper can effectively and accurately classify the misalignment of the transmission system of the wind turbine and improve the reliability of the fault diagnosis.
Citation: Entropy
PubDate: 2017-11-02
DOI: 10.3390/e19110587
Issue No: Vol. 19, No. 11 (2017)
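The energy-entropy feature used above is the Shannon entropy of the relative sub-band energies. A sketch of that final step (the dual-tree complex wavelet decomposition itself requires a dedicated library and is not reproduced here; any list of per-band coefficient arrays can be plugged in):

```python
import numpy as np

def energy_entropy(subbands, eps=1e-12):
    """Shannon entropy of the relative sub-band energies E_i / sum(E_j).
    `subbands` is a list of coefficient arrays, one per decomposition band;
    the paper obtains them from a dual-tree complex wavelet transform."""
    energies = np.array([np.sum(np.abs(c) ** 2) for c in subbands])
    p = energies / (energies.sum() + eps)   # relative energy distribution
    return -np.sum(p * np.log(p + eps))
```

Energy concentrated in one band gives entropy near 0; energy spread evenly over n bands gives the maximum ln(n), so the feature summarizes how a fault redistributes vibration energy across scales.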
- Entropy, Vol. 19, Pages 588: Correction: Kolchinsky, A. and Tracey, B.D.
Authors: Artemy Kolchinsky, Brendan Tracey
First page: 588
Abstract: Following the publication of our paper [1], we uncovered a mistake in the derivation of two formulas in the manuscript.[...]
Citation: Entropy
PubDate: 2017-11-03
DOI: 10.3390/e19110588
Issue No: Vol. 19, No. 11 (2017)
- Entropy, Vol. 19, Pages 589: The Mean Field Theories of Magnetism and
Turbulence
Authors: Peter W. Egolf, Kolumban Hutter
First page: 589
Abstract: In the last few decades, a series of experiments has revealed that turbulence is a cooperative and critical phenomenon showing a continuous phase change with the critical Reynolds number at its onset. However, applications of phase transition models, such as the Mean Field Theory (MFT), the Heisenberg model, the XY model, etc., to turbulence have not been realized so far. Now, in this article, a successful analogy to magnetism is reported, and it is shown that a Mean Field Theory of Turbulence (MFTT) can be built that reveals new results. In analogy to compressibility in fluids and susceptibility in magnetic materials, the vorticibility (the authors of this article propose this new name in analogy to response functions derived and named in other fields) of a turbulent flowing fluid is revealed, which is identical to the relative turbulence intensity. By analogy to magnetism, in a natural manner, the Curie Law of Turbulence was discovered. It is clear that the MFTT is a theory describing equilibrium flow systems, whereas it has long been known that turbulence is a highly non-equilibrium phenomenon. Nonetheless, as a starting point for the development of thermodynamic models of turbulence, the presented MFTT is very useful for gaining physical insight, just as Kraichnan’s turbulent energy spectra of 2-D and 3-D turbulence are, which were developed with equilibrium Boltzmann-Gibbs thermodynamics and only recently have been generalized and adapted to non-equilibrium and intermittent turbulent flow fields.
Citation: Entropy
PubDate: 2017-11-03
DOI: 10.3390/e19110589
Issue No: Vol. 19, No. 11 (2017)
- Entropy, Vol. 19, Pages 590: Multiscale Sample Entropy of Cardiovascular
Signals: Does the Choice between Fixed- or Varying-Tolerance among Scales
Influence Its Evaluation and Interpretation?
Authors: Paolo Castiglioni, Paolo Coruzzi, Matteo Bini, Gianfranco Parati, Andrea Faini
First page: 590
Abstract: Multiscale entropy (MSE) quantifies cardiovascular complexity by evaluating Sample Entropy (SampEn) on coarse-grained series at increasing scales τ. Two approaches exist, one using a fixed tolerance r at all scales (MSEFT), the other a varying tolerance r(τ) adjusted to follow the standard-deviation changes after coarse graining (MSEVT). The aim of this study is to clarify how the choice between MSEFT and MSEVT influences the quantification and interpretation of cardiovascular MSE, and whether it affects some signals more than others. To achieve this aim, we considered 2-h long beat-by-beat recordings of inter-beat intervals and of systolic and diastolic blood pressures in male (N = 42) and female (N = 42) healthy volunteers. We compared MSE estimated with fixed and varying tolerances, and evaluated whether the choice between MSEFT and MSEVT estimators influences the quantification and interpretation of sex-related differences. We found substantial discrepancies between MSEFT and MSEVT results, related to the degree of correlation among samples and more pronounced for heart rate than for blood pressure; moreover, the choice between MSEFT and MSEVT may influence the interpretation of sex differences in the MSE of heart rate. We conclude that studies on cardiovascular complexity should choose carefully between fixed- and varying-tolerance estimators, particularly when evaluating the MSE of heart rate.
Citation: Entropy
PubDate: 2017-11-04
DOI: 10.3390/e19110590
Issue No: Vol. 19, No. 11 (2017)
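The fixed- versus varying-tolerance distinction above is concrete enough to sketch. The following is a minimal illustration (not the authors' implementation) of coarse-graining plus Sample Entropy under the two tolerance conventions; parameter values are the conventional m = 2, r = 0.2·SD:

```python
import numpy as np

def coarse_grain(x, tau):
    """Average consecutive, non-overlapping windows of length tau."""
    x = np.asarray(x, dtype=float)
    n = len(x) // tau
    return x[:n * tau].reshape(n, tau).mean(axis=1)

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): negative log of the conditional probability that
    template vectors matching within r at length m (Chebyshev distance)
    also match at length m + 1."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def matches(k):
        t = np.array([x[i:i + k] for i in range(n - k)])
        return sum(
            int(np.sum(np.max(np.abs(t[i + 1:] - t[i]), axis=1) <= r))
            for i in range(len(t) - 1)
        )

    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def mse(x, scales, m=2, r_factor=0.2, varying_tolerance=False):
    """MSE curve. varying_tolerance=False fixes r from the original
    series (the MSE_FT convention); True re-derives r from the standard
    deviation of each coarse-grained series (the MSE_VT convention)."""
    x = np.asarray(x, dtype=float)
    r_fixed = r_factor * np.std(x)
    out = []
    for tau in scales:
        y = coarse_grain(x, tau)
        r = r_factor * np.std(y) if varying_tolerance else r_fixed
        out.append(sample_entropy(y, m=m, r=r))
    return out
```

At scale τ = 1 the two conventions coincide; they diverge at larger scales because coarse-graining shrinks the standard deviation, which is precisely the discrepancy the paper investigates.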
- Entropy, Vol. 19, Pages 591: A Connection Entropy Approach to Water
Resources Vulnerability Analysis in a Changing Environment
Authors: Zhengwei Pan, Juliang Jin, Chunhui Li, Shaowei Ning, Rongxing Zhou
First page: 591
Abstract: This paper establishes a water resources vulnerability framework based on sensitivity, natural resilience and artificial adaptation, through analyses of the four states of the water system and its accompanying transformation processes. Furthermore, it proposes an analysis method for water resources vulnerability based on connection entropy, which extends the concept of contact entropy. The method is applied to the water resources vulnerability of Anhui Province, China; the analysis illustrates that, overall, vulnerability levels fluctuated but showed clear improvement trends from 2001 to 2015. Some suggestions are also provided, from the viewpoint of the vulnerability index, for improving the water resources vulnerability level in Anhui Province.
Citation: Entropy
PubDate: 2017-11-06
DOI: 10.3390/e19110591
Issue No: Vol. 19, No. 11 (2017)
- Entropy, Vol. 19, Pages 592: Spatial Optimization of Agricultural Land Use
Based on Cross-Entropy Method
Authors: Lina Hao, Xiaoling Su, Vijay Singh, Olusola Ayantobo
First page: 592
Abstract: An integrated optimization model was developed for the spatial distribution of agricultural crops in order to utilize agricultural water and land resources efficiently and simultaneously. The model is based on the spatial distribution of crop suitability, the spatial distribution of population density, and agricultural land use data. Multi-source remote sensing data are combined with constraints on optimal crop areas, which are obtained from an agricultural cropping-pattern optimization model. Using the middle reaches of the Heihe River basin as an example, the spatial distributions of maize and wheat were optimized by minimizing the cross-entropy between the crop distribution probabilities and the desired but unknown distribution probabilities. Results showed that the area of maize should increase and the area of wheat should decrease in the study area compared with the situation in 2013. The comprehensive suitable-area distribution of maize approximately matches the present distribution, whereas that of wheat does not. Through optimization, the high-proportion maize and wheat areas became more concentrated than before: the maize area with more than 80% allocation concentrates in the south of the study area, and the wheat area with more than 30% allocation concentrates in the central part. The outcome of this study provides a scientific basis for farmers to select crops that are suitable for a particular area.
Citation: Entropy
PubDate: 2017-11-07
DOI: 10.3390/e19110592
Issue No: Vol. 19, No. 11 (2017)
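The core step, distributing a fixed crop area over cells so the allocation stays close to a prior suitability distribution in the cross-entropy (KL) sense, can be sketched as follows. This is a hypothetical illustration, not the paper's model: without active constraints the KL minimiser is proportional allocation, and capped excess is re-spread over free cells in proportion to their priors:

```python
import numpy as np

def allocate_min_cross_entropy(prior, total_area, capacity, iters=100):
    """Allocate total_area over cells, staying close (in KL sense) to the
    prior suitability weights, subject to per-cell capacity limits.
    Assumes the problem is feasible: total_area <= capacity.sum()."""
    prior = np.asarray(prior, dtype=float)
    capacity = np.asarray(capacity, dtype=float)
    alloc = total_area * prior / prior.sum()   # unconstrained KL minimiser
    for _ in range(iters):
        excess = np.clip(alloc - capacity, 0.0, None).sum()
        if excess < 1e-12:
            break
        alloc = np.minimum(alloc, capacity)    # cap violating cells
        w = prior * (alloc < capacity)         # priors of still-free cells
        alloc = alloc + excess * w / w.sum()   # re-spread the excess
    return alloc
```

For example, with priors [4, 1, 1], a total area of 6 and capacities [3, 4, 4], the proportional allocation [4, 1, 1] violates the first cap and the excess is redistributed, giving [3, 1.5, 1.5].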
- Entropy, Vol. 19, Pages 593: Correction: Abdollahzadeh Jamalabadi, M.Y.;
Hooshmand, P.; Bagheri, N.; KhakRah, H.; Dousti, M. Numerical Simulation
of Williamson Combined Natural and Forced Convective Fluid Flow between
Parallel Vertical Walls with Slip Effects and Radiative Heat Transfer in a
Porous Medium. Entropy 2016, 18, 147
Authors: Mohammad Abdollahzadeh Jamalabadi, Payam Hooshmand, Navid Bagheri, HamidReza KhakRah, Majid Dousti
First page: 593
Abstract: The authors wish to make the following correction to this paper [...]
Citation: Entropy
PubDate: 2017-11-07
DOI: 10.3390/e19110593
Issue No: Vol. 19, No. 11 (2017)
- Entropy, Vol. 19, Pages 594: A Novel Derivation of the Time Evolution of
the Entropy for Macroscopic Systems in Thermal Non-Equilibrium
Authors: Enrico Sciubba, Federico Zullo
First page: 594
Abstract: The paper discusses how the two thermodynamic properties, energy (U) and exergy (E), can be used to solve the problem of quantifying the entropy of non-equilibrium systems. Both energy and exergy are a priori concepts, and their formal dependence on thermodynamic state variables at equilibrium is known. Exploiting the results of a previous study, we first show that the non-equilibrium exergy En-eq can be calculated for an arbitrary temperature distribution across a macroscopic body, with an accuracy that depends only on the available information about the initial distribution: the analytical results confirm that En-eq exponentially relaxes to its equilibrium value. Using the Gyftopoulos-Beretta formalism, a non-equilibrium entropy Sn-eq(x,t) is then derived from En-eq(x,t) and U(x,t). It is finally shown that the non-equilibrium entropy generation between two states is always larger than its equilibrium (herein referred to as “classical”) counterpart. We conclude that every iso-energetic non-equilibrium state corresponds to an infinite set of non-equivalent states that can be ranked in terms of increasing entropy. Therefore, each point of the Gibbs plane corresponds to a set of possible initial distributions: the non-equilibrium entropy is a multi-valued function that depends on the initial mass and energy distribution within the body. Though the concept cannot be directly extended to microscopic systems, it is argued that the present formulation is compatible with a possible reinterpretation of the existing non-equilibrium formulations, namely those of Tsallis and Grmela, and answers at least in part one of the objections set forth by Lieb and Yngvason. A systematic application of this paradigm is very convenient from a theoretical point of view and may be beneficial for meaningful future applications in the fields of nano-engineering and biological sciences.
Citation: Entropy
PubDate: 2017-11-07
DOI: 10.3390/e19110594
Issue No: Vol. 19, No. 11 (2017)
- Entropy, Vol. 19, Pages 595: On Work and Heat in Time-Dependent Strong
Coupling
Authors: Erik Aurell
First page: 595
Abstract: This paper revisits the classical problem of representing a thermal bath interacting with a system as a large collection of harmonic oscillators initially in thermal equilibrium. As is well known, the system then obeys an equation, which in the bulk and in the suitable limit tends to the Kramers–Langevin equation of physical kinetics. I consider time-dependent system-bath coupling and show that this leads to an additional harmonic force acting on the system. When the coupling is switched on and switched off rapidly, the force has delta-function support at the initial and final time. I further show that the work and heat functionals as recently defined in stochastic thermodynamics at strong coupling contain additional terms depending on the time derivative of the system-bath coupling. I discuss these terms and show that while they can be very large if the system-bath coupling changes quickly, they only give a finite contribution to the work that enters in Jarzynski’s equality. I also discuss that these corrections to standard work and heat functionals provide an explanation for non-standard terms in the change of the von Neumann entropy of a quantum bath interacting with a quantum system found in an earlier contribution (Aurell and Eichhorn, 2015).
Citation: Entropy
PubDate: 2017-11-07
DOI: 10.3390/e19110595
Issue No: Vol. 19, No. 11 (2017)
- Entropy, Vol. 19, Pages 596: An Entropy-Based Adaptive Hybrid Particle
Swarm Optimization for Disassembly Line Balancing Problems
Authors: Shanli Xiao, Yujia Wang, Hui Yu, Shankun Nie
First page: 596
Abstract: In order to improve product disassembly efficiency, the disassembly line balancing problem (DLBP) is transformed into a problem of searching for the optimum path in a directed and weighted graph by constructing the disassembly hierarchy information graph (DHIG). Then, combining the characteristics of the disassembly sequence, an entropy-based adaptive hybrid particle swarm optimization algorithm (AHPSO) is presented. In this algorithm, entropy is introduced to measure the changing tendency of population diversity, and dimension learning, crossover and mutation operators are used to increase the probability of producing feasible disassembly solutions (FDS). The performance of the proposed methodology is tested on the primary problem instances available in the literature, and the results are compared with other evolutionary algorithms. The results show that the proposed algorithm efficiently solves the complex DLBP.
Citation: Entropy
PubDate: 2017-11-07
DOI: 10.3390/e19110596
Issue No: Vol. 19, No. 11 (2017)
- Entropy, Vol. 19, Pages 597: Comparison of Two Entropy Spectral Analysis
Methods for Streamflow Forecasting in Northwest China
Authors: Zhenghong Zhou, Juanli Ju, Xiaoling Su, Vijay Singh, Gengxi Zhang
First page: 597
Abstract: Monthly streamflow has elements of stochasticity, seasonality, and periodicity. Spectral analysis and time series analysis can, respectively, be employed to characterize the periodical pattern and the stochastic pattern. Both Burg entropy spectral analysis (BESA) and configurational entropy spectral analysis (CESA) combine spectral analysis and time series analysis. This study compared the predictive performances of BESA and CESA for monthly streamflow forecasting in six basins in Northwest China. Four criteria were selected to evaluate the performances of these two entropy spectral analyses: relative error (RE), root mean square error (RMSE), coefficient of determination (R2), and Nash–Sutcliffe efficiency coefficient (NSE). It was found that in Northwest China, both BESA and CESA forecast monthly streamflow well for series with strong correlation, with BESA achieving higher accuracy than CESA; for streamflow with weak correlation, the conclusion is the opposite.
Citation: Entropy
PubDate: 2017-11-07
DOI: 10.3390/e19110597
Issue No: Vol. 19, No. 11 (2017)
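Burg's method, the engine behind BESA, fits an autoregressive model whose spectrum maximizes the Burg entropy subject to the data, and that model then yields one-step forecasts. A compact sketch follows (CESA differs in the entropy functional used to estimate the spectrum and is not shown); the lattice recursion and sign convention are standard:

```python
import numpy as np

def burg_ar(x, order):
    """Burg's method: AR coefficients of the maximum-(Burg-)entropy
    spectrum, via the lattice recursion on forward (f) and backward (b)
    prediction errors.  Sign convention:
    x[t] + a[0]*x[t-1] + ... + a[p-1]*x[t-p] = e[t]."""
    x = np.asarray(x, dtype=float)
    f, b = x[1:].copy(), x[:-1].copy()
    a = np.zeros(0)
    for _ in range(order):
        k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))
        a = np.concatenate([a + k * a[::-1], [k]])  # Levinson update
        f, b = (f + k * b)[1:], (b + k * f)[:-1]    # old f, b on both sides
    return a

def ar_forecast(history, a):
    """One-step-ahead prediction: x_hat[t] = -sum_k a[k] * x[t-1-k]."""
    h = np.asarray(history, dtype=float)
    return -np.dot(a, h[::-1][:len(a)])
```

On a simulated AR(1) series with coefficient 0.9, `burg_ar(x, 1)` returns approximately [-0.9], so the forecast is about 0.9 times the last observation.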
- Entropy, Vol. 19, Pages 598: Accelerating the Computation of Entropy
Measures by Exploiting Vectors with Dissimilarity
Authors: Yun Lu, Mingjiang Wang, Rongchao Peng, Qiquan Zhang
First page: 598
Abstract: In the diagnosis of neurological diseases and assessment of brain function, entropy measures for quantifying electroencephalogram (EEG) signals are attracting ever-increasing attention worldwide. However, some entropy measures, such as approximate entropy (ApEn), sample entropy (SpEn), multiscale entropy and so on, incur high computational costs because their computations are based on hundreds of data points. In this paper, we propose an effective and practical method to accelerate the computation of these entropy measures by exploiting vectors with dissimilarity (VDS). By means of the VDS decision, distance calculations for most dissimilar vectors can be avoided during computation. The experimental results show that, compared with the conventional method, the proposed VDS method reduces the average computation time of SpEn in random signals and EEG signals by 78.5% and 78.9%, respectively. The computation times are consistently reduced by about 80.1–82.8% for five kinds of EEG signals of different lengths. The experiments further demonstrate the use of the VDS method not only to accelerate the computation of SpEn in electromyography and electrocardiogram signals but also to accelerate the computations of time-shift multiscale entropy and ApEn in EEG signals. All results indicate that the VDS method is a powerful strategy for accelerating the computation of entropy measures and has promising application potential in the field of biomedical informatics.
Citation: Entropy
PubDate: 2017-11-08
DOI: 10.3390/e19110598
Issue No: Vol. 19, No. 11 (2017)
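The paper's VDS screening is not reproduced here, but the general idea of skipping distance work for dissimilar vectors can be illustrated with early abandoning: because the Chebyshev distance is a maximum over components, a template pair can be discarded at the first component that differs by more than r, avoiding the remaining comparisons. A plain SampEn loop with this short-circuit:

```python
import numpy as np

def sampen_early_abandon(x, m=2, r=0.2):
    """Sample entropy where each pairwise Chebyshev-distance check
    bails out at the first component exceeding the tolerance r
    (illustrative stand-in for the paper's VDS screening)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    a = b = 0
    for i in range(n - m):
        for j in range(i + 1, n - m):
            # any(...) short-circuits at the first dissimilar component
            if any(abs(x[i + k] - x[j + k]) > r for k in range(m)):
                continue
            b += 1                                # match at length m
            if abs(x[i + m] - x[j + m]) <= r:
                a += 1                            # match at length m + 1
    return -np.log(a / b) if a and b else float("inf")
```

The asymptotic complexity is unchanged; the saving, as in VDS, comes from how many comparisons the screen lets the loop skip in practice.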
- Entropy, Vol. 19, Pages 599: Sparse Coding Algorithm with Negentropy and
Weighted ℓ1-Norm for Signal Reconstruction
Authors: Yingxin Zhao, Zhiyang Liu, Yuanyuan Wang, Hong Wu, Shuxue Ding
First page: 599
Abstract: Compressive sensing theory has attracted widespread attention in recent years and sparse signal reconstruction has been widely used in signal processing and communication. This paper addresses the problem of sparse signal recovery, especially with non-Gaussian noise. The main contribution of this paper is the proposal of an algorithm in which the negentropy and reweighted schemes represent the core of the approach. The signal reconstruction problem is formalized as a constrained minimization problem, where the objective function is the sum of a term measuring the statistical characteristics of the error (the negentropy) and a sparse regularization term, the ℓp-norm for 0 < p < 1. The ℓp-norm, however, leads to a non-convex optimization problem which is difficult to solve efficiently. Herein we treat the ℓp-norm as a series of weighted ℓ1-norms so that the sub-problems become convex. We propose an optimized algorithm based on forward-backward splitting. The algorithm is fast and succeeds in exactly recovering sparse signals with Gaussian and non-Gaussian noise. Several numerical experiments and comparisons demonstrate the superiority of the proposed algorithm.
Citation: Entropy
PubDate: 2017-11-08
DOI: 10.3390/e19110599
Issue No: Vol. 19, No. 11 (2017)
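The reweighting idea, approximating the non-convex ℓp penalty by a sequence of convex weighted-ℓ1 subproblems solved with forward-backward splitting (ISTA), can be sketched as follows. The least-squares data term stands in for the paper's negentropy-based term, and all parameter values are illustrative:

```python
import numpy as np

def soft(u, t):
    """Elementwise soft-thresholding: the prox of the weighted l1 norm."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def reweighted_l1(A, y, lam=0.005, p=0.5, outer=4, inner=500, eps=1e-3):
    """lp-regularised least squares (0 < p < 1) via a sequence of
    weighted-l1 convex subproblems, each solved by forward-backward
    splitting (gradient step + soft-threshold step)."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of grad
    for _ in range(outer):
        # weights from the current iterate: derivative of |x|^p
        w = p / (np.abs(x) + eps) ** (1.0 - p)
        for _ in range(inner):
            g = A.T @ (A @ x - y)              # forward (gradient) step
            x = soft(x - g / L, lam * w / L)   # backward (prox) step
    return x
```

Each reweighting pass shrinks the penalty on coefficients that are already large, which debiases the estimate relative to plain ℓ1 while keeping every subproblem convex.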
- Entropy, Vol. 19, Pages 600: Understanding the Fractal Dimensions of Urban
Forms through Spatial Entropy
Authors: Yanguang Chen, Jiejing Wang, Jian Feng
First page: 600
Abstract: The spatial patterns and processes of cities can be described with various entropy functions. However, spatial entropy always depends on the scale of measurement, and it is difficult to find a characteristic value for it. In contrast, fractal parameters can be employed to characterize scale-free phenomena and reflect the local features of random multi-scaling structure. This paper is devoted to exploring the similarities and differences between spatial entropy and fractal dimension in urban description. Drawing an analogy between cities and growing fractals, we illustrate the definitions of fractal dimension based on different entropy concepts. Three representative fractal dimensions in the multifractal dimension set, the capacity dimension, information dimension, and correlation dimension, are utilized in empirical analyses of the urban form of two Chinese cities, Beijing and Hangzhou. The results show that the entropy values vary with the measurement scale, whereas the fractal dimension value is stable if the method and study area are fixed; if the linear size of the boxes is small enough (e.g., <1/25), the linear correlation between entropy and fractal dimension is significant (at the 99% confidence level). Further empirical analysis indicates that fractal dimension is close to the characteristic values of spatial entropy. This suggests that the physical meaning of fractal dimension can be interpreted through ideas from entropy and scaling, a conclusion that is revealing for future spatial analyses of cities.
Citation: Entropy
PubDate: 2017-11-09
DOI: 10.3390/e19110600
Issue No: Vol. 19, No. 11 (2017)
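Two of the three dimensions named above can be estimated for any point pattern by box counting: the capacity dimension D0 from the slope of log N(ε) against log(1/ε), and the information dimension D1 from the slope of the Shannon entropy of box occupancy. A sketch (the correlation dimension D2 is omitted):

```python
import numpy as np

def box_dimensions(points, sizes):
    """Capacity dimension D0 and information dimension D1 of a 2-D (or
    d-dimensional) point set by box counting over relative box sizes
    eps in `sizes` (fractions of the pattern's extent)."""
    pts = np.asarray(points, dtype=float)
    lo = pts.min(axis=0)
    span = np.ptp(pts, axis=0).max()
    log_inv_eps, log_n, h = [], [], []
    for eps in sizes:
        nb = int(round(1.0 / eps))
        # integer box index per point, clipped so the max lands inside
        idx = np.clip((pts - lo) // (eps * span), 0, nb - 1).astype(int)
        _, counts = np.unique(idx, axis=0, return_counts=True)
        p = counts / counts.sum()
        log_inv_eps.append(np.log(nb))
        log_n.append(np.log(len(counts)))          # occupied boxes
        h.append(float(-np.sum(p * np.log(p))))    # occupancy entropy
    d0 = np.polyfit(log_inv_eps, log_n, 1)[0]
    d1 = np.polyfit(log_inv_eps, h, 1)[0]
    return d0, d1
```

For points filling the unit square uniformly, both slopes come out close to 2, the Euclidean dimension; for a fractal point set they fall below it.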
- Entropy, Vol. 19, Pages 601: Secret Sharing and Shared Information
Authors: Johannes Rauh
First page: 601
Abstract: Secret sharing is a cryptographic discipline in which the goal is to distribute information about a secret over a set of participants in such a way that only specific authorized combinations of participants together can reconstruct the secret. Thus, secret sharing schemes are systems of variables in which it is very clearly specified which subsets have information about the secret. As such, they provide perfect model systems for information decompositions. However, following this intuition too far leads to an information decomposition with negative partial information terms, which are difficult to interpret. One possible explanation is that the partial information lattice proposed by Williams and Beer is incomplete and has to be extended to incorporate terms corresponding to higher-order redundancy. These results put bounds on information decompositions that follow the partial information framework, and they hint at where the partial information lattice needs to be improved.
Citation: Entropy
PubDate: 2017-11-09
DOI: 10.3390/e19110601
Issue No: Vol. 19, No. 11 (2017)
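The access structure described above can be illustrated with the simplest scheme, n-out-of-n XOR sharing: the full set of participants determines the secret exactly, while any proper subset is uniformly random and carries zero information about it, which is what makes such schemes clean model systems for information decompositions:

```python
import os

def share_secret(secret, n):
    """n-out-of-n XOR secret sharing: n - 1 shares are uniform random
    bytes; the last share is the XOR of the secret with all of them."""
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last = bytes(x ^ y for x, y in zip(last, s))
    return shares + [last]

def reconstruct(shares):
    """XOR all shares together; only the complete set yields the secret."""
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(x ^ y for x, y in zip(out, s))
    return out
```

Threshold (k-out-of-n) schemes such as Shamir's replace XOR with polynomial interpolation over a finite field, but the all-or-nothing information structure illustrated here is the same phenomenon the paper analyzes.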
- Entropy, Vol. 19, Pages 602: Revealing Tripartite Quantum Discord with
Tripartite Information Diagram
Authors: Wei-Ting Lee, Che-Ming Li
First page: 602
Abstract: A new measure based on the tripartite information diagram is proposed for identifying quantum discord in tripartite systems. The proposed measure generalizes the mutual information underlying discord from bipartite to tripartite systems, and utilizes both one-particle and two-particle projective measurements to reveal the characteristics of the tripartite quantum discord. The feasibility of the proposed measure is demonstrated by evaluating the tripartite quantum discord for systems with states close to Greenberger–Horne–Zeilinger, W, and biseparable states. In addition, the connections between tripartite quantum discord and two other quantum correlations—namely genuine tripartite entanglement and genuine tripartite Einstein–Podolsky–Rosen steering—are briefly discussed. The present study considers the case of quantum discord in tripartite systems. However, the proposed framework can be readily extended to general N-partite systems.
Citation: Entropy
PubDate: 2017-11-10
DOI: 10.3390/e19110602
Issue No: Vol. 19, No. 11 (2017)
- Entropy, Vol. 19, Pages 603: Thermodynamics, Statistical Mechanics and
Entropy
Authors: Robert Swendsen
First page: 603
Abstract: The proper definition of thermodynamics and the thermodynamic entropy is discussed in the light of recent developments. The postulates for thermodynamics are examined critically, and some modifications are suggested to allow for the inclusion of long-range forces (within a system), inhomogeneous systems with non-extensive entropy, and systems that can have negative temperatures. Only the thermodynamics of finite systems is considered, with the condition that the system is large enough for the fluctuations to be smaller than the experimental resolution. The statistical basis for thermodynamics is discussed, along with four different forms of the (classical and quantum) entropy. The strengths and weaknesses of each are evaluated in relation to the requirements of thermodynamics. Effects of order 1/N, where N is the number of particles, are included in the discussion because they have played a significant role in the literature, even if they are too small to have a measurable effect in an experiment. The discussion includes the role of discreteness, the non-zero width of the energy and particle number distributions, the extensivity of models with non-interacting particles, and the concavity of the entropy with respect to energy. The results demonstrate the validity of negative temperatures.
Citation: Entropy
PubDate: 2017-11-10
DOI: 10.3390/e19110603
Issue No: Vol. 19, No. 11 (2017)
- Entropy, Vol. 19, Pages 604: Rate Distortion Functions and Rate Distortion
Function Lower Bounds for Real-World Sources
Authors: Jerry Gibson
First page: 604
Abstract: Although Shannon introduced the concept of a rate distortion function in 1948, only in the last decade has the methodology for developing rate distortion function lower bounds for real-world sources been established. However, these recent results have not been fully exploited due to some confusion about how these new rate distortion bounds, once they are obtained, should be interpreted and should be used in source codec performance analysis and design. We present the relevant rate distortion theory and show how this theory can be used for practical codec design and performance prediction and evaluation. Examples for speech and video indicate exactly how the new rate distortion functions can be calculated, interpreted, and extended. These examples illustrate the interplay between source models for rate distortion theoretic studies and the source models underlying video and speech codec design. Key concepts include the development of composite source models per source realization and the application of conditional rate distortion theory.
Citation: Entropy
PubDate: 2017-11-11
DOI: 10.3390/e19110604
Issue No: Vol. 19, No. 11 (2017)
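For the Gaussian sources that typically anchor such analyses, the rate distortion function has a closed form, and a composite (switched) source model of the kind the abstract mentions can be handled by reverse water-filling across modes. A sketch under squared-error distortion, with illustrative parameter values:

```python
import numpy as np

def gaussian_rd(var, d):
    """R(D) of a memoryless Gaussian source under squared error, in bits
    per sample: (1/2) * log2(var / D) for 0 < D <= var, else 0."""
    return 0.5 * np.log2(var / d) if 0 < d <= var else 0.0

def composite_rd(p, var, d_target, iters=60):
    """Conditional rate-distortion sketch for a composite (switched)
    Gaussian source: mode i occurs with probability p[i] and has
    variance var[i]; a common water level theta splits the distortion
    budget across modes (reverse water-filling).  Assumes the budget is
    feasible, d_target <= sum(p[i] * var[i])."""
    lo, hi = 0.0, max(var)
    for _ in range(iters):                      # bisect on theta
        theta = 0.5 * (lo + hi)
        d = sum(pi * min(theta, vi) for pi, vi in zip(p, var))
        lo, hi = (theta, hi) if d < d_target else (lo, theta)
    theta = 0.5 * (lo + hi)
    return sum(pi * gaussian_rd(vi, theta) for pi, vi in zip(p, var))
```

With a single unit-variance mode and D = 0.25 this reproduces the textbook value R = 1 bit/sample; adding low-variance "silence" modes, as in speech, lowers the composite rate at the same average distortion.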
- Entropy, Vol. 19, Pages 605: On the Uniqueness Theorem for Pseudo-Additive
Entropies
Authors: Petr Jizba, Jan Korbel
First page: 605
Abstract: The aim of this paper is to show that the Tsallis-type (q-additive) entropic chain rule allows for a wider class of entropic functionals than previously thought. In particular, we point out that the ensuing entropy solutions (e.g., Tsallis entropy) can be determined uniquely only when one fixes the prescription for handling conditional entropies. By using the concept of Kolmogorov–Nagumo quasi-linear means, we prove this with the help of Daróczy’s mapping theorem. Our point is further illustrated with a number of explicit examples. Other salient issues, such as connections of conditional entropies with the de Finetti–Kolmogorov theorem for escort distributions and with Landsberg’s classification of non-extensive thermodynamic systems are also briefly discussed.
Citation: Entropy
PubDate: 2017-11-12
DOI: 10.3390/e19110605
Issue No: Vol. 19, No. 11 (2017)
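For context, the q-additive chain rule at issue is, with the standard Tsallis entropy:

```latex
S_q(A) = \frac{1 - \sum_i p_i^{\,q}}{q - 1}, \qquad
S_q(A,B) = S_q(A) + S_q(B|A) + (1 - q)\, S_q(A)\, S_q(B|A),
```

which for independent systems reduces to the familiar pseudo-additivity $S_q(A,B) = S_q(A) + S_q(B) + (1-q)\,S_q(A)\,S_q(B)$. The paper's point is that this chain rule alone does not single out $S_q$: uniqueness requires additionally fixing how the conditional term $S_q(B|A)$ is prescribed.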
- Entropy, Vol. 19, Pages 608: Robust and Sparse Regression via
γ-Divergence
Authors: Takayuki Kawashima, Hironori Fujisawa
First page: 608
Abstract: In high-dimensional data settings, many sparse regression methods have been proposed. However, they may not be robust against outliers. Recently, the use of the density power weight has been studied for robust parameter estimation, and the corresponding divergences have been discussed. One such divergence is the γ-divergence, and the robust estimator based on it is known to be strongly robust. In this paper, we extend the γ-divergence to the regression problem, consider robust and sparse regression based on the γ-divergence and show that it has strong robustness under heavy contamination even when outliers are heterogeneous. The loss function is constructed from an empirical estimate of the γ-divergence with sparse regularization, and the parameter estimate is defined as the minimizer of the loss function. To obtain the robust and sparse estimate, we propose an efficient update algorithm with a monotone decreasing property of the loss function. In particular, we discuss a linear regression problem with L1 regularization in detail. In numerical experiments and real data analyses, we see that the proposed method outperforms past robust and sparse methods.
Citation: Entropy
PubDate: 2017-11-13
DOI: 10.3390/e19110608
Issue No: Vol. 19, No. 11 (2017)
- Entropy, Vol. 19, Pages 609: Maximum Entropy-Copula Method for
Hydrological Risk Analysis under Uncertainty: A Case Study on the Loess
Plateau, China
Authors: Aijun Guo, Jianxia Chang, Yimin Wang, Qiang Huang, Zhihui Guo
First page: 609
Abstract: Copula functions have been extensively used to describe the joint behaviors of extreme hydrological events and to analyze hydrological risk. Advanced marginal distribution inference, for example, the maximum entropy theory, is particularly beneficial for improving the performance of the copulas. The goal of this paper, therefore, is twofold: first, to develop a coupled maximum entropy-copula method for hydrological risk analysis through deriving the bivariate return periods, risk, reliability and bivariate design events; and second, to reveal the impact of marginal distribution selection uncertainty and sampling uncertainty on bivariate design event identification. The uncertainties involved in the second goal, in particular, have not yet received significant consideration. The designed framework for hydrological risk analysis related to flood and extreme precipitation events is applied, by way of example, in two catchments of the Loess Plateau, China. Results show that (1) the distribution derived by the maximum entropy principle outperforms the conventional distributions for the probabilistic modeling of flood and extreme precipitation events; (2) the bivariate return periods, risk, reliability and bivariate design events can be derived using the coupled entropy-copula method; (3) uncertainty analysis highlights the fact that the appropriate performance of the marginal distribution is closely related to bivariate design event identification. Most importantly, sampling uncertainty causes the confidence regions of bivariate design events with return periods of 30 years to be very large, overlapping with the values of flood and extreme precipitation events that have return periods of 10 and 50 years, respectively. The large confidence regions of bivariate design events greatly challenge their application in practical engineering design.
Citation: Entropy
PubDate: 2017-11-15
DOI: 10.3390/e19110609
Issue No: Vol. 19, No. 11 (2017)
- Entropy, Vol. 19, Pages 610: Capacity Bounds on the Downlink of Symmetric,
Multi-Relay, Single-Receiver C-RAN Networks
Authors: Shirin Saeedi Bidokhti, Gerhard Kramer, Shlomo Shamai
First page: 610
Abstract: The downlink of symmetric Cloud Radio Access Networks (C-RANs) with multiple relays and a single receiver is studied. Lower and upper bounds are derived on the capacity. The lower bound is achieved by Marton’s coding, which facilitates dependence among the multiple-access channel inputs. The upper bound uses Ozarow’s technique to augment the system with an auxiliary random variable. The bounds are studied over scalar Gaussian C-RANs and are shown to meet and characterize the capacity for interesting regimes of operation.
Citation: Entropy
PubDate: 2017-11-14
DOI: 10.3390/e19110610
Issue No: Vol. 19, No. 11 (2017)
- Entropy, Vol. 19, Pages 611: Multilevel Coding for the Full-Duplex
Decode-Compress-Forward Relay Channel
Authors: Ahmed Abotabl, Aria Nosratinia
First page: 611
Abstract: The Decode-Compress-Forward (DCF) is a generalization of Decode-Forward (DF) and Compress-Forward (CF). This paper investigates conditions under which DCF offers gains over DF and CF, addresses the problem of coded modulation for DCF, and evaluates the performance of DCF coded modulation implemented via low-density parity-check (LDPC) codes and polar codes. We begin by revisiting the achievable rate of DCF in discrete memoryless channels under backward decoding. We then study coded modulation for the decode-compress-forward via multi-level coding. We show that the proposed multilevel coding approaches the known achievable rates of DCF. The proposed multilevel coding is implemented (and its performance verified) via a combination of standard DVB-S2 LDPC codes, and polar codes whose design follows the method of Blasco-Serrano.
Citation: Entropy
PubDate: 2017-11-14
DOI: 10.3390/e19110611
Issue No: Vol. 19, No. 11 (2017)
- Entropy, Vol. 19, Pages 613: Entropy Applications to Water Monitoring
Network Design: A Review
Authors: Jongho Keum, Kurt Kornelsen, James Leach, Paulin Coulibaly
First page: 613
Abstract: Having reliable water monitoring networks is an essential component of water resources and environmental management. A standardized process for the design of water monitoring networks does not exist, with the exception of the World Meteorological Organization (WMO) general guidelines on minimum network density. While one of the major challenges in the design of optimal hydrometric networks has been establishing design objectives, information theory has been successfully applied to network design problems by providing measures of the information content that can be delivered by a station or a network. This review first summarizes the common entropy terms that have been used in water monitoring network designs. Then, this paper deals with the recent applications of the entropy concept to water monitoring network designs, which are categorized into (1) precipitation; (2) streamflow and water level; (3) water quality; and (4) soil moisture and groundwater networks. The integrated design method for multivariate monitoring networks is also covered. Despite several open issues, entropy theory has proven well suited to water monitoring network design. However, further work is still required to provide design standards and guidelines for operational use.
Citation: Entropy
PubDate: 2017-11-15
DOI: 10.3390/e19110613
Issue No: Vol. 19, No. 11 (2017)
- Entropy, Vol. 19, Pages 614: How to Identify the Most Powerful Node in
Complex Networks? A Novel Entropy Centrality Approach
Authors: Tong Qiao, Wei Shan, Chang Zhou
First page: 614
Abstract: Centrality is one of the most studied concepts in network analysis. Although an abundance of methods for measuring centrality in social networks has been proposed, each approach characterizes only a limited part of what it implies for an actor to be “vital” to the network. In this paper, a novel mechanism is proposed to quantitatively measure centrality using the re-defined entropy centrality model, which is based on decompositions of a graph into subgraphs and analysis of the entropy of neighbor nodes. By design, the re-defined entropy centrality, which describes associations among node pairs and captures the process of influence propagation, can be interpreted as a measure of actor potential for communication activity. We evaluate the efficiency of the proposed model using four real-world datasets with varied sizes and densities and three artificial networks constructed by the Barabasi-Albert, Erdos-Renyi and Watts-Strogatz models. The four datasets are Zachary’s karate club, USAir97, the Collaboration network and the Email network URV. Extensive experimental results prove the effectiveness of the proposed method.
Citation: Entropy
PubDate: 2017-11-15
DOI: 10.3390/e19110614
Issue No: Vol. 19, No. 11 (2017)
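As a toy illustration of entropy-based centrality (deliberately simpler than the paper's re-defined, subgraph-decomposition measure), one can score each node by the Shannon entropy of its random-walk transition distribution:

```python
import numpy as np

def entropy_centrality(adj, steps=1):
    """Toy entropy centrality: the Shannon entropy (bits) of each node's
    `steps`-step random-walk transition distribution.  With steps=1 on
    an unweighted graph this reduces to log2(degree); larger `steps`
    mix in neighborhood structure."""
    a = np.asarray(adj, dtype=float)
    p = a / a.sum(axis=1, keepdims=True)    # row-stochastic transitions
    pk = np.linalg.matrix_power(p, steps)   # steps-step distribution
    return np.array([
        -np.sum(row[row > 0] * np.log2(row[row > 0])) for row in pk
    ])
```

On a star graph the hub, which can reach four neighbors, scores log2(4) = 2 bits while every leaf scores 0, matching the intuition that the hub is the "powerful" node.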
- Entropy, Vol. 19, Pages 615: Random Walk Null Models for Time Series Data
Authors: Daryl DeFord, Katherine Moore
First page: 615
Abstract: Permutation entropy has become a standard tool for time series analysis that exploits the temporal and ordinal relationships within data. Motivated by a Kullback–Leibler divergence interpretation of permutation entropy as divergence from white noise, we extend pattern-based methods to the setting of random walk data. We analyze random walk null models for correlated time series and describe a method for determining the corresponding ordinal pattern distributions. These null models more accurately reflect the observed pattern distributions in some economic data. This leads us to define a measure of complexity as the deviation of a time series from its associated random walk null model. We demonstrate the applicability of our methods using empirical data drawn from a variety of fields, including stock market closing prices.
Citation: Entropy
PubDate: 2017-11-15
DOI: 10.3390/e19110615
Issue No: Vol. 19, No. 11 (2017)
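The ordinal-pattern machinery that permutation entropy rests on is compact enough to sketch. The following is a minimal illustration of the standard normalized permutation entropy, not the authors' random-walk null-model extension; the function name is ours.

```python
from math import factorial, log

def permutation_entropy(series, order=3):
    # Count ordinal patterns: the rank ordering of each length-`order` window.
    counts = {}
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum((c / total) * log(c / total) for c in counts.values())
    return h / log(factorial(order))  # normalize: 1 = white noise, 0 = monotone

# An alternating series splits evenly between the two order-2 patterns,
# so its normalized permutation entropy is maximal.
print(permutation_entropy([0, 1, 0, 1, 0, 1, 0], 2))  # 1.0
```

A monotone series realizes a single pattern and scores 0, which is the sense in which the measure quantifies divergence from white noise.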
- Entropy, Vol. 19, Pages 616: Effects of Endwall Fillet and Bulb on the
Temperature Uniformity of Pin-Fined Microchannel
Authors: Zhiliang Pan, Ping Li, Jinxing Li, Yanping Li
First page: 616
Abstract: Endwall fillet and bulb structures are proposed in this research to improve the temperature uniformity of pin-fined microchannels. The periodic laminar flow and heat transfer performances are investigated under different Reynolds numbers and radii of the fillet and bulb. The results show that at a low Reynolds number, both the fillet and the bulb structures strengthen the span-wise and the normal secondary flow in the channel, eliminate the high temperature area in the pin-fin, improve the heat transfer performance at the rear of the cylinder, and enhance the thermal uniformity of the pin-fin surface and the outside wall. Compared to traditional pin-fined microchannels, the flow resistance coefficient f of the pin-fined microchannels with a fillet, as well as with a bulb of 2 μm or 5 μm radius, does not increase significantly, while f of the pin-fined microchannels with a 10 μm or 15 μm bulb increases notably. Moreover, Nu has a maximum increase of 16.93% for those with a fillet and 20.65% for those with a bulb, and the synthetic thermal performance coefficient TP increases by 16.22% at most for those with a fillet and 15.67% at most for those with a bulb. Finally, as the Reynolds number increases, the heat transfer improvement of the fillet and bulb decreases.
Citation: Entropy
PubDate: 2017-11-15
DOI: 10.3390/e19110616
Issue No: Vol. 19, No. 11 (2017)
- Entropy, Vol. 19, Pages 617: On Lower Bounds for Statistical Learning
Theory
Authors: Po-Ling Loh
First page: 617
Abstract: In recent years, tools from information theory have played an increasingly prevalent role in statistical machine learning. In addition to developing efficient, computationally feasible algorithms for analyzing complex datasets, it is of theoretical importance to determine whether such algorithms are “optimal” in the sense that no other algorithm can lead to smaller statistical error. This paper provides a survey of various techniques used to derive information-theoretic lower bounds for estimation and learning. We focus on the settings of parameter and function estimation, community recovery, and online learning for multi-armed bandits. A common theme is that lower bounds are established by relating the statistical learning problem to a channel decoding problem, for which lower bounds may be derived involving information-theoretic quantities such as the mutual information, total variation distance, and Kullback–Leibler divergence. We close by discussing the use of information-theoretic quantities to measure independence in machine learning applications ranging from causality to medical imaging, and mention techniques for estimating these quantities efficiently in a data-driven manner.
Citation: Entropy
PubDate: 2017-11-15
DOI: 10.3390/e19110617
Issue No: Vol. 19, No. 11 (2017)
- Entropy, Vol. 19, Pages 490: Analysis of Entropy Generation in Flow of
Methanol-Based Nanofluid in a Sinusoidal Wavy Channel
Authors: Muhammad Qasim, Zafar Hayat Khan, Ilyas Khan, Qasem Al-Mdallal
First page: 490
Abstract: The entropy generation due to heat transfer and fluid friction in mixed convective peristaltic flow of a methanol-Al2O3 nanofluid is examined. Maxwell’s thermal conductivity model is used in the analysis. Velocity and temperature profiles are utilized in the computation of the entropy generation number. The effects of the involved physical parameters on velocity, temperature, entropy generation number, and Bejan number are discussed and explained graphically.
Citation: Entropy
PubDate: 2017-10-08
DOI: 10.3390/e19100490
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 506: Entropy Generation in Thermal Radiative
Loading of Structures with Distinct Heaters
Authors: Mohammad Abdollahzadeh Jamalabadi, Mohammad Safaei, Abdullah Alrashed, Truong Nguyen, Enio Bandarra Filho
First page: 506
Abstract: Thermal loading by radiant heaters is used in building heating and hot structure design applications. In this research, characteristics of the thermal radiative heating of an enclosure by a distinct heater are investigated from the second law of thermodynamics point of view. The governing equations of conservation of mass, momentum, and energy (fluid and solid) are solved by the finite volume method and the semi-implicit method for pressure linked equations (SIMPLE) algorithm. Radiant heaters are modeled by constant heat flux elements, and the lower wall is held at a constant temperature while the other boundaries are adiabatic. The thermal conductivity and viscosity of the fluid are temperature-dependent, which leads to complex partial differential equations with nonlinear coefficients. The parameter study is done based on the amount of thermal load (presented by the heating number) as well as geometrical configuration parameters, such as the aspect ratio of the enclosure and the radiant heater number. The results present the effect of thermal and geometrical parameters on the entropy generation and its distribution field. Furthermore, the effect of thermal radiative heating on both components of entropy generation (viscous dissipation and heat dissipation) is investigated.
Citation: Entropy
PubDate: 2017-09-29
DOI: 10.3390/e19100506
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 516: Complexity Analysis of Neonatal EEG Using
Multiscale Entropy: Applications in Brain Maturation and Sleep Stage
Classification
Authors: Ofelie De Wel, Mario Lavanga, Alexander Dorado, Katrien Jansen, Anneleen Dereymaeker, Gunnar Naulaers, Sabine Van Huffel
First page: 516
Abstract: Automated analysis of electroencephalographic (EEG) data for the brain monitoring of preterm infants has gained attention in the last decades. In this study, we analyze the complexity of neonatal EEG, quantified using multiscale entropy. The aim of the current work is to investigate how EEG complexity evolves during electrocortical maturation and whether complexity features can be used to classify sleep stages. First, we develop a regression model that estimates the postmenstrual age (PMA) using a combination of complexity features. Then, these features are used to build a sleep stage classifier. The analysis is performed on a database consisting of 97 EEG recordings from 26 prematurely born infants, recorded between 27 and 42 weeks PMA. The results of the regression analysis revealed a significant positive correlation between the EEG complexity and the infant’s age. Moreover, the PMA of the neonate could be estimated with a root mean squared error of 1.88 weeks. The sleep stage classifier was able to discriminate quiet sleep from non-quiet sleep with an area under the curve (AUC) of 90%. These results suggest that the complexity of the brain dynamics is a highly useful index for brain maturation quantification and neonatal sleep stage classification.
Citation: Entropy
PubDate: 2017-09-26
DOI: 10.3390/e19100516
Issue No: Vol. 19, No. 10 (2017)
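The coarse-graining plus sample-entropy pipeline that "multiscale entropy" denotes can be sketched in a few lines. This is a minimal illustration with the conventional defaults (m = 2, tolerance r = 0.2 times the standard deviation of the original series, Costa's convention), not necessarily the exact settings of the study; function names are ours.

```python
import numpy as np

def sample_entropy(x, m=2, tol=None):
    """SampEn = -ln(A/B), where A and B count template matches of length m+1 and m."""
    x = np.asarray(x, dtype=float)
    if tol is None:
        tol = 0.2 * x.std()
    def matches(length):
        t = np.array([x[i:i + length] for i in range(len(x) - length + 1)])
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)  # Chebyshev distance
        return (np.sum(d <= tol) - len(t)) / 2                      # exclude self-matches
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=(1, 2, 4), m=2, r=0.2):
    """Coarse-grain by non-overlapping averaging, then compute SampEn at each scale,
    keeping the tolerance fixed from the original series."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    out = []
    for s in scales:
        n = len(x) // s
        out.append(sample_entropy(x[:n * s].reshape(n, s).mean(axis=1), m, tol))
    return out
```

For white noise the resulting curve decreases with scale, while correlated physiological signals stay flat or rise; it is this profile across scales, rather than any single value, that serves as a maturation marker.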
- Entropy, Vol. 19, Pages 517: Participation Ratio for Constraint-Driven
Condensation with Superextensive Mass
Authors: Giacomo Gradenigo, Eric Bertin
First page: 517
Abstract: Broadly distributed random variables with a power-law distribution $f(m) \sim m^{-(1+\alpha)}$ are known to generate condensation effects. This means that, when the exponent $\alpha$ lies in a certain interval, the largest variable in a sum of $N$ (independent and identically distributed) terms is, for large $N$, of the same order as the sum itself. In particular, when the distribution has infinite mean ($0 < \alpha < 1$) one finds unconstrained condensation, whereas for $\alpha > 1$ constrained condensation takes place when the total mass is fixed to a large enough value $M = \sum_{i=1}^{N} m_i > M_c$. In both cases, a standard indicator of the condensation phenomenon is the participation ratio $Y_k = \langle \sum_i m_i^k / (\sum_i m_i)^k \rangle$ ($k > 1$), which takes a finite value for $N \to \infty$ when condensation occurs. To better understand the connection between constrained and unconstrained condensation, we study here the situation when the total mass is fixed to a superextensive value $M \sim N^{1+\delta}$ ($\delta > 0$), hence interpolating between the unconstrained condensation case (where the typical value of the total mass scales as $M \sim N^{1/\alpha}$ for $\alpha < 1$) and the extensive constrained mass. In particular, we show that for exponents $\alpha < 1$ a condensate phase for values $\delta > \delta_c = 1/\alpha - 1$ is separated from a homogeneous phase at $\delta < \delta_c$ by a transition line, $\delta = \delta_c$, where a weak condensation phenomenon takes place. We focus on the evaluation of the participation ratio as a generic indicator of condensation, also recalling or presenting results in the standard cases of unconstrained mass and of fixed extensive mass.
Citation: Entropy
PubDate: 2017-09-26
DOI: 10.3390/e19100517
Issue No: Vol. 19, No. 10 (2017)
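The participation ratio is straightforward to estimate by Monte Carlo, which also makes the condensation signature easy to see. The sketch below uses i.i.d. Pareto and exponential masses in the plain unconstrained setting, not the superextensive constrained ensemble the paper analyzes; names and parameter choices are illustrative.

```python
import numpy as np

def participation_ratio(samples, k=2):
    """Y_k = < sum_i m_i^k / (sum_i m_i)^k >, averaged over realizations (rows)."""
    s = np.asarray(samples, dtype=float)
    return float(np.mean(np.sum(s ** k, axis=1) / np.sum(s, axis=1) ** k))

rng = np.random.default_rng(1)
# alpha = 0.5 < 1: infinite mean, so Y_2 stays finite as N grows
# (unconstrained condensation; the known i.i.d. result is <Y_2> -> 1 - alpha)
heavy = rng.pareto(0.5, size=(500, 2000))
# exponential masses: no condensation, Y_2 ~ 2/N vanishes with system size
light = rng.exponential(size=(500, 2000))
print(participation_ratio(heavy), participation_ratio(light))
```

The first value stays of order one while the second is of order 1/N, which is exactly the finite-versus-vanishing dichotomy the abstract uses to diagnose condensation.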
- Entropy, Vol. 19, Pages 518: Connecting Information Geometry and Geometric
Mechanics
Authors: Melvin Leok, Jun Zhang
First page: 518
Abstract: The divergence function in information geometry, and the discrete Lagrangian in discrete geometric mechanics each induce a differential geometric structure on the product manifold $Q \times Q$. We aim to investigate the relationship between these two objects, and the fundamental role that duality, in the form of Legendre transforms, plays in both fields. By establishing an analogy between these two approaches, we will show how a fruitful cross-fertilization of techniques may arise from switching formulations based on the cotangent bundle $T^{*}Q$ (as in geometric mechanics) and the tangent bundle $TQ$ (as in information geometry). In particular, we establish, through variational error analysis, that the divergence function agrees with the exact discrete Lagrangian up to third order if and only if $Q$ is a Hessian manifold.
Citation: Entropy
PubDate: 2017-09-27
DOI: 10.3390/e19100518
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 519: Randomness Representation of Turbulence in
Canopy Flows Using Kolmogorov Complexity Measures
Authors: Dragutin Mihailović, Gordan Mimić, Paola Gualtieri, Ilija Arsenić, Carlo Gualtieri
First page: 519
Abstract: Turbulence is often expressed in terms of either irregular or random fluid flows, without quantification. In this paper, a methodology to evaluate the randomness of turbulence using measures based on Kolmogorov complexity (KC) is proposed. This methodology is applied to experimental data from a turbulent flow developing in a laboratory channel with canopies of three different densities. The methodology is also compared with the traditional approach based on classical turbulence statistics.
Citation: Entropy
PubDate: 2017-09-27
DOI: 10.3390/e19100519
Issue No: Vol. 19, No. 10 (2017)
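Since algorithmic complexity itself is uncomputable, KC-based measures of turbulence in practice use a Lempel–Ziv parsing of a coarse-grained (e.g. binarized) signal. Below is a minimal sketch of the classic LZ76 phrase count together with a median-threshold binarization; the exact encoding and normalization used in the paper may differ, and the function names are ours.

```python
def lz76_complexity(s):
    """Number of distinct phrases in the Lempel-Ziv (1976) left-to-right parsing."""
    i, k, l = 0, 1, 1
    k_max, c, n = 1, 1, len(s)
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:           # no earlier copy extends further: close the phrase
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

def binarize(x):
    """Coarse-grain a real-valued series around its median."""
    med = sorted(x)[len(x) // 2]
    return ''.join('1' if v > med else '0' for v in x)

print(lz76_complexity('0' * 100))   # 2: a constant string is maximally compressible
print(lz76_complexity('01' * 50))   # 3: one extra phrase captures the period
```

Random strings yield many more phrases, so the (suitably normalized) phrase count serves as the randomness measure the abstract refers to.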
- Entropy, Vol. 19, Pages 520: Entropy Ensemble Filter: A Modified Bootstrap
Aggregating (Bagging) Procedure to Improve Efficiency in Ensemble Model
Simulation
Authors: Hossein Foroozand, Steven Weijs
First page: 520
Abstract: Over the past two decades, the Bootstrap AGGregatING (bagging) method has been widely used for improving simulation. The computational cost of this method scales with the size of the ensemble, but excessively reducing the ensemble size comes at the cost of reduced predictive performance. The novel procedure proposed in this study is the Entropy Ensemble Filter (EEF), which uses the most informative training data sets in the ensemble rather than all ensemble members created by the bagging method. The results of this study indicate the efficiency of the proposed method in application to synthetic data simulation of a sinusoidal signal, a sawtooth signal, and a composite signal. The EEF method can reduce the computational time of simulation by around 50% on average while maintaining predictive performance at the same level as the conventional method, in which all of the ensemble models are used for simulation. The analysis of the error gradient (root mean square error of ensemble averages) shows that using the 40% most informative ensemble members of the set initially defined by the user appears to be most effective.
Citation: Entropy
PubDate: 2017-09-28
DOI: 10.3390/e19100520
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 521: Evaluating the Irregularity of Natural
Languages
Authors: Candelario Hernández-Gómez, Rogelio Basurto-Flores, Bibiana Obregón-Quintana, Lev Guzmán-Vargas
First page: 521
Abstract: In the present work, we quantify the irregularity of different European languages belonging to four linguistic families (Romance, Germanic, Uralic and Slavic) and an artificial language (Esperanto). We modified a well-known method to calculate the approximate and sample entropy of written texts. We find differences in the degree of irregularity between the families, and our method, which is based on the search for regularities in a sequence of symbols, consistently distinguishes between natural and synthetic randomized texts. Moreover, we extended our study to the case where multiple scales are accounted for, as in multiscale entropy analysis. Our results reveal that real texts have a non-trivial structure compared to those obtained from randomization procedures.
Citation: Entropy
PubDate: 2017-09-29
DOI: 10.3390/e19100521
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 522: Secure Communication for Two-Way Relay
Networks with Imperfect CSI
Authors: Cong Sun, Ke Liu, Dahu Zheng, Wenbao Ai
First page: 522
Abstract: This paper considers a two-way relay network, where two legitimate users exchange messages through several cooperative relays in the presence of an eavesdropper, and the Channel State Information (CSI) of the eavesdropper is imperfectly known. The Amplify-and-Forward (AF) relay protocol is used. We design the relay beamforming weights to minimize the total relay transmit power, while requiring the Signal-to-Noise Ratios (SNRs) of the legitimate users to be higher than the given thresholds and the achievable rate of the eavesdropper to be upper-bounded. Due to the imperfect CSI, a robust optimization problem is formulated. A novel iterative algorithm is proposed, in which the line search technique is applied and feasibility is preserved during iterations. In each iteration, two Quadratically-Constrained Quadratic Programming (QCQP) subproblems and a one-dimensional subproblem are solved optimally. The optimality property of the robust optimization problem is analyzed. Simulation results show that the proposed algorithm performs very close to the non-robust model with perfect CSI in terms of the obtained relay transmit power, and it achieves a higher secrecy rate compared to the existing work. Numerically, the proposed algorithm converges very quickly, and more than 85% of the problems are solved optimally.
Citation: Entropy
PubDate: 2017-09-29
DOI: 10.3390/e19100522
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 523: Informative Nature and Nonlinearity of Lagged
Poincaré Plots Indices in Analysis of Heart Rate Variability
Authors: Berik Koichubekov, Viktor Riklefs, Marina Sorokina, Ilya Korshukov, Lyudmila Turgunova, Yelena Laryushina, Riszhan Bakirova, Gulmira Muldaeva, Ernur Bekov, Makhabbat Kultenova
First page: 523
Abstract: Lagged Poincaré plots have been successful in characterizing abnormal cardiac function. However, current research practices do not favour any specific lag of Poincaré plots, thus complicating the comparison of results of different researchers in their analysis of the heart rate of healthy subjects and patients. We researched the informative nature of lagged Poincaré plots in different states of the autonomic nervous system. It was tested in three models: different age groups, groups with different balance of autonomous regulation, and hypertensive patients. First, correlation analysis shows that for lag l = 6, SD1/SD2 has a weak (r = 0.33) correlation with linear parameters of heart rate variability (HRV). For l greater than 6 it displays even less correlation with linear parameters, but the changes in SD1/SD2 become statistically insignificant. Secondly, surrogate data tests show that the real SD1/SD2 is statistically different from its surrogate value, and the conclusion can be made that the heart rhythm has nonlinear properties. Thirdly, the three models showed that for different functional states of the autonomic nervous system (ANS), the SD1/SD2 ratio varied only for lags l = 5 and 6. All of this allows us to give a cautious recommendation to use SD1/SD2 with lags 5 and 6 as a nonlinear characteristic of HRV. These data could be used as a basis for continuing research into the standardisation of nonlinear analytic methods.
Citation: Entropy
PubDate: 2017-10-10
DOI: 10.3390/e19100523
Issue No: Vol. 19, No. 10 (2017)
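The SD1/SD2 descriptors of a lagged Poincaré plot reduce to standard deviations of 45-degree-rotated coordinates, so the computation is short. A minimal sketch (function name ours), where lag l pairs each interval with the one l beats later:

```python
import numpy as np

def poincare_sd(rr, lag=1):
    """SD1, SD2 of the lagged Poincare plot (rr[i], rr[i+lag])."""
    rr = np.asarray(rr, dtype=float)
    x, y = rr[:-lag], rr[lag:]
    sd1 = np.std((y - x) / np.sqrt(2))  # short-term spread, perpendicular to identity line
    sd2 = np.std((y + x) / np.sqrt(2))  # long-term spread, along the identity line
    return sd1, sd2

# A slowly varying series is stretched along the identity line, so SD1/SD2 << 1.
rr = 0.8 + 0.05 * np.sin(np.linspace(0, 8 * np.pi, 400))
sd1, sd2 = poincare_sd(rr, lag=5)
print(sd1 / sd2)
```

Sweeping `lag` over 1..6 on real RR series is then a one-line loop, which is how lag-dependence studies like this one are typically set up.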
- Entropy, Vol. 19, Pages 524: On the Limiting Behaviour of the Fundamental
Geodesics of Information Geometry
Authors: Frank Critchley, Paul Marriott
First page: 524
Abstract: The Information Geometry of extended exponential families has received much recent attention in a variety of important applications, notably categorical data analysis, graphical modelling and, more specifically, log-linear modelling. The essential geometry here comes from the closure of an exponential family in a high-dimensional simplex. In parallel, there has been a great deal of interest in the purely Fisher–Riemannian structure of (extended) exponential families, most especially in the Markov chain Monte Carlo literature. These parallel developments raise challenges, addressed here, at a variety of levels: both theoretical and practical—relatedly, conceptual and methodological. Central to this endeavour, this paper makes explicit the underlying geometry of these two areas via an analysis of the limiting behaviour of the fundamental geodesics of Information Geometry, these being Amari’s (+1)- and (0)-geodesics, respectively. Overall, a substantially more complete account of the Information Geometry of extended exponential families is provided than has hitherto been the case. We illustrate the importance and benefits of this novel formulation through applications.
Citation: Entropy
PubDate: 2017-09-30
DOI: 10.3390/e19100524
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 525: On Entropy Dynamics for Active
“Living” Particles
Authors: Ahmed Elaiw, Mohammed Alghamdi, Nicola Bellomo
First page: 525
Abstract: This paper presents a modeling approach, followed by entropy calculations of the dynamics of large systems of interacting active particles viewed as living—hence, complex—systems. Active particles are partitioned into functional subsystems; their state is modeled by a discrete scalar variable, while the state of the overall system is defined by a probability distribution function over the state of the particles. The aim of this paper is to contribute to the further development of the mathematical kinetic theory of active particles.
Citation: Entropy
PubDate: 2017-10-02
DOI: 10.3390/e19100525
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 526: A Formula of Packing Pressure of a Factor Map
Authors: Cao Zhao, Ercai Chen, Xiucheng Hong, Xiaoyao Zhou
First page: 526
Abstract: In this paper, using the notion of packing pressure, we show a formula of packing pressure of a factor map. We also give an application in conformal repellers.
Citation: Entropy
PubDate: 2017-10-04
DOI: 10.3390/e19100526
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 527: Coarse-Graining and the Blackwell Order
Authors: Johannes Rauh, Pradeep Banerjee, Eckehard Olbrich, Jürgen Jost, Nils Bertschinger, David Wolpert
First page: 527
Abstract: Suppose we have a pair of information channels, $\kappa_1$ and $\kappa_2$, with a common input. The Blackwell order is a partial order over channels that compares $\kappa_1$ and $\kappa_2$ by the maximal expected utility an agent can obtain when decisions are based on the channel outputs. Equivalently, $\kappa_1$ is said to be Blackwell-inferior to $\kappa_2$ if and only if $\kappa_1$ can be constructed by garbling the output of $\kappa_2$. A related partial order stipulates that $\kappa_2$ is more capable than $\kappa_1$ if the mutual information between the input and output is larger for $\kappa_2$ than for $\kappa_1$ for any distribution over inputs. A Blackwell-inferior channel is necessarily less capable. However, examples are known where $\kappa_1$ is less capable than $\kappa_2$ but not Blackwell-inferior. We show that this may even happen when $\kappa_1$ is constructed by coarse-graining the inputs of $\kappa_2$. Such a coarse-graining is a special kind of “pre-garbling” of the channel inputs. This example directly establishes that the expected value of the shared utility function for the coarse-grained channel is larger than it is for the non-coarse-grained channel. This contradicts the intuition that coarse-graining can only destroy information and lead to inferior channels. We also discuss our results in the context of information decompositions.
Citation: Entropy
PubDate: 2017-10-06
DOI: 10.3390/e19100527
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 528: Generalized Skew-Normal Negentropy and Its
Application to Fish Condition Factor Time Series
Authors: Reinaldo Arellano-Valle, Javier Contreras-Reyes, Milan Stehlík
First page: 528
Abstract: The problem of measuring the disparity of a particular probability density function from a normal one has been addressed in several recent studies. The most used technique to deal with the problem has been exact expressions using information measures over particular distributions. In this paper, we consider a class of asymmetric distributions with a normal kernel, called Generalized Skew-Normal (GSN) distributions. We measure the degrees of disparity of these distributions from the normal distribution by using exact expressions for the GSN negentropy in terms of cumulants. Specifically, we focus on skew-normal and modified skew-normal distributions. Then, we establish the Kullback–Leibler divergences between each GSN distribution and the normal one in terms of their negentropies to develop hypothesis testing for normality. Finally, we apply this result to condition factor time series of anchovies off northern Chile.
Citation: Entropy
PubDate: 2017-10-06
DOI: 10.3390/e19100528
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 529: How Can We Fully Use Noiseless Feedback to
Enhance the Security of the Broadcast Channel with Confidential Messages
Authors: Xin Li, Bin Dai, Zheng Ma
First page: 529
Abstract: The model for a broadcast channel with confidential messages (BC-CM) plays an important role in the physical layer security of modern communication systems. In recent years, it has been shown that a noiseless feedback channel from the legitimate receiver to the transmitter increases the secrecy capacity region of the BC-CM. However, at present, the feedback coding scheme for the BC-CM only focuses on producing secret keys via noiseless feedback, and other usages of the feedback need to be further explored. In this paper, we propose a new feedback coding scheme for the BC-CM. The noiseless feedback in this new scheme is not only used to produce secret keys for the legitimate receiver and the transmitter but is also used to generate update information that allows both receivers (the legitimate receiver and the wiretapper) to improve their channel outputs. From a binary example, we show that this full utilization of noiseless feedback helps to increase the secrecy level of the previous feedback scheme for the BC-CM.
Citation: Entropy
PubDate: 2017-10-06
DOI: 10.3390/e19100529
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 530: Bivariate Partial Information Decomposition:
The Optimization Perspective
Authors: Abdullah Makkeh, Dirk Theis, Raul Vicente
First page: 530
Abstract: Bertschinger, Rauh, Olbrich, Jost, and Ay (Entropy, 2014) have proposed a definition of a decomposition of the mutual information $MI(X : Y, Z)$ into shared, synergistic, and unique information by way of solving a convex optimization problem. In this paper, we discuss the solution of their convex program from theoretical and practical points of view.
Citation: Entropy
PubDate: 2017-10-07
DOI: 10.3390/e19100530
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 531: Multivariate Dependence beyond Shannon
Information
Authors: Ryan James, James Crutchfield
First page: 531
Abstract: Accurately determining dependency structure is critical to understanding a complex system’s organization. We recently showed that the transfer entropy fails in a key aspect of this—measuring information flow—due to its conflation of dyadic and polyadic relationships. We extend this observation to demonstrate that Shannon information measures (entropy and mutual information, in their conditional and multivariate forms) can fail to accurately ascertain multivariate dependencies due to their conflation of qualitatively different relations among variables. This has broad implications, particularly when employing information to express the organization and mechanisms embedded in complex systems, including the burgeoning efforts to combine complex network theory with information theory. Here, we do not suggest that any aspect of information theory is wrong. Rather, the vast majority of its informational measures are simply inadequate for determining the meaningful relationships among variables within joint probability distributions. We close by demonstrating that such distributions exist across an arbitrary set of variables.
Citation: Entropy
PubDate: 2017-10-07
DOI: 10.3390/e19100531
Issue No: Vol. 19, No. 10 (2017)
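The conflation the authors describe is visible already in the two-bit XOR distribution: every pairwise mutual information vanishes, yet the three variables are jointly deterministic. A small self-contained check (illustrative, not the authors' code):

```python
from itertools import product
from math import log2

def entropy(dist):
    """Shannon entropy (bits) of a dict mapping outcomes to probabilities."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def marginal(joint, idx):
    """Marginalize a joint distribution onto the coordinates in idx."""
    out = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

# Z = X xor Y with X, Y uniform and independent.
joint = {(x, y, x ^ y): 0.25 for x, y in product((0, 1), repeat=2)}

# Pairwise: X alone says nothing about Z ...
mi_xz = (entropy(marginal(joint, (0,))) + entropy(marginal(joint, (2,)))
         - entropy(marginal(joint, (0, 2))))
# ... yet jointly, (X, Y) determine Z exactly.
h_z_given_xy = entropy(joint) - entropy(marginal(joint, (0, 1)))
print(mi_xz, h_z_given_xy)  # 0.0 0.0
```

Any measure built only from pairwise Shannon quantities is blind to this purely triadic dependency, which is the paper's point.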
- Entropy, Vol. 19, Pages 532: Bowen Lemma in the Countable Symbolic Space
Authors: Mingtian Li, Jihua Ma
First page: 532
Abstract: We consider the sets of quasi-regular points in the countable symbolic space. We measure the sizes of the sets by Billingsley-Hausdorff dimension defined by Gibbs measures. It is shown that the dimensions of those sets, always bounded from below by the convergence exponent of the Gibbs measure, are given by a variational principle, which generalizes Li and Ma’s result and Bowen’s result.
Citation: Entropy
PubDate: 2017-10-11
DOI: 10.3390/e19100532
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 533: Cross Entropy Method Based Hybridization of
Dynamic Group Optimization Algorithm
Authors: Rui Tang, Simon Fong, Nilanjan Dey, Raymond Wong, Sabah Mohammed
First page: 533
Abstract: Recently, a new algorithm named dynamic group optimization (DGO) has been proposed, which lends itself strongly to exploration and exploitation. Although DGO has demonstrated its efficacy in comparison to other classical optimization algorithms, it has two computational drawbacks. The first is related to the two mutation operators of DGO, which may decrease the diversity of the population, limiting the search ability. The second is the homogeneity of the updated population information, which is selected only from companions in the same group; this may result in premature convergence and deteriorate the mutation operators. In order to deal with these two problems, a new hybridized algorithm is proposed in this paper, which combines the dynamic group optimization algorithm with the cross entropy method. The cross entropy method samples the problem space by generating candidate solutions from a distribution, then updates the distribution based on the better candidate solutions discovered. The cross entropy operator not only enlarges the promising search area, but also guarantees that the new solution takes all the surrounding useful information into consideration. The proposed algorithm is tested on 23 up-to-date benchmark functions; the experimental results verify that it is more effective and efficient than other contemporary population-based swarm algorithms.
Citation: Entropy
PubDate: 2017-10-09
DOI: 10.3390/e19100533
Issue No: Vol. 19, No. 10 (2017)
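The cross-entropy ingredient of the hybrid is the standard sample-then-refit loop. A minimal Gaussian cross-entropy minimizer, assuming a simple sphere objective for illustration (the coupling to DGO's mutation operators is not reproduced; names and defaults are ours):

```python
import numpy as np

def cross_entropy_minimize(f, dim, n_samples=100, n_elite=10, iters=40, seed=0):
    """Sample candidates from a Gaussian, refit mean/std on the elite fraction, repeat."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.full(dim, 5.0)
    for _ in range(iters):
        x = rng.normal(mu, sigma, size=(n_samples, dim))       # candidate solutions
        elite = x[np.argsort([f(xi) for xi in x])[:n_elite]]   # best n_elite by objective
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-12
    return mu

best = cross_entropy_minimize(lambda v: float(np.sum(v ** 2)), dim=3)
print(best)  # close to the optimum at the origin
```

Refitting on the elite set is what "updates the distribution based on the better candidate solutions discovered" amounts to; in the hybrid, these samples additionally feed back into the DGO population.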
- Entropy, Vol. 19, Pages 534: An Approximated Box Height for
Differential-Box-Counting Method to Estimate Fractal Dimensions of
Gray-Scale Images
Authors: Chinmaya Panigrahy, Angel Garcia-Pedrero, Ayan Seal, Dionisio Rodríguez-Esparragón, Nihar Mahato, Consuelo Gonzalo-Martín
First page: 534
Abstract: The Fractal Dimension (FD) of an image defines the roughness using a real number which is highly associated with the human perception of surface roughness. It has been applied successfully in many computer vision applications such as texture analysis, segmentation and classification. Several techniques to estimate FD can be found in the literature. One such technique is Differential Box Counting (DBC), whose performance is influenced by many parameters. In particular, the box height is directly related to the gray-level variations over the image grid, which badly affects the performance of DBC. In this work, a new method for estimating the box height is proposed without changing the other parameters of DBC. The proposed box height has been determined empirically and depends only on the image size. All the experiments have been performed on a simulated Fractal Brownian Motion (FBM) database and the Brodatz database. It has been shown experimentally that the proposed box height improves the performance of DBC, Shifting DBC, Improved DBC and Improved Triangle DBC, whose estimates become closer to the actual FD values of the simulated FBM images.
Citation: Entropy
PubDate: 2017-10-10
DOI: 10.3390/e19100534
Issue No: Vol. 19, No. 10 (2017)
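The DBC pipeline itself is easy to state in code. Below is a minimal sketch of classical DBC using the conventional box height h = s·G/M, which is precisely the quantity the paper replaces with an empirically tuned value; boundary handling is simplified and the function name is ours.

```python
import numpy as np

def dbc_fractal_dimension(img, sizes=(2, 4, 8, 16)):
    """Differential box counting: FD = slope of log N_r versus log(M/s)."""
    img = np.asarray(img, dtype=float)
    M = img.shape[0]
    G = img.max() + 1                      # number of gray levels
    log_nr, log_scale = [], []
    for s in sizes:
        h = s * G / M                      # classical box height (the tuned parameter)
        nr = 0
        for i in range(0, M - M % s, s):
            for j in range(0, M - M % s, s):
                block = img[i:i + s, j:j + s]
                # boxes spanned between the min and max gray level of the block
                nr += int(np.ceil((block.max() + 1) / h)
                          - np.ceil((block.min() + 1) / h)) + 1
        log_nr.append(np.log(nr))
        log_scale.append(np.log(M / s))
    slope, _ = np.polyfit(log_scale, log_nr, 1)
    return float(slope)

flat = np.zeros((64, 64))
print(dbc_fractal_dimension(flat))  # ~2.0: a flat surface has dimension 2
```

A rough (e.g. noisy) image pushes the estimate toward 3, the upper bound for a gray-level surface, so the slope behaves as a roughness index.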
- Entropy, Vol. 19, Pages 535: Contact Hamiltonian Dynamics: The Concept and
Its Use
Authors: Alessandro Bravetti
First page: 535
Abstract: We give a short survey on the concept of contact Hamiltonian dynamics and its use in several areas of physics, namely reversible and irreversible thermodynamics, statistical physics and classical mechanics. Some relevant examples are provided along the way. We conclude by giving insights into possible future directions.
Citation: Entropy
PubDate: 2017-10-11
DOI: 10.3390/e19100535
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 536: Hydrodynamics of a Granular Gas in a
Heterogeneous Environment
Authors: Francisco Vega Reyes, Antonio Lasanta
First page: 536
Abstract: We analyze the transport properties of a low density ensemble of identical macroscopic particles immersed in an active fluid. The particles are modeled as inelastic hard spheres (granular gas). The non-homogeneous active fluid is modeled by means of a non-uniform stochastic thermostat. The theoretical results are validated with a numerical solution of the corresponding kinetic equation (direct simulation Monte Carlo method). We show a steady flow in the system that is accurately described by Navier-Stokes (NS) hydrodynamics, even for high inelasticity. Surprisingly, we find that the deviations from NS hydrodynamics for this flow become stronger as the inelasticity decreases. The active fluid action is modeled here with a non-uniform fluctuating volume force. This is a relevant result given that the hydrodynamics of particles in complex environments, such as crowded biological environments, is still a question under intense debate.
Citation: Entropy
PubDate: 2017-10-11
DOI: 10.3390/e19100536
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 538: Investigating the Thermodynamic Performances
of TO-Based Metamaterial Tunable Cells with an Entropy Generation Approach
Authors: Guoqiang Xu, Haochun Zhang, Xiu Zhang, Yan Jin
First page: 538
Abstract: Active control of heat flux can be realized with transformation optics (TO) thermal metamaterials. Recently, a new class of metamaterial tunable cells has been proposed, aiming to significantly reduce the difficulty of fabrication and to flexibly switch functions by employing several cells assembled on related positions following the TO design. However, owing to the integration and rotation of materials in tunable cells, they might lead to extra thermal losses as compared with the previous continuum design. This paper focuses on investigating the thermodynamic properties of tunable cells under related design parameters. The universal expression for the local entropy generation rate in such metamaterial systems is obtained considering the influence of rotation. A series of contrast schemes are established to describe the thermodynamic process and thermal energy distributions from the viewpoint of entropy analysis. Moreover, effects of design parameters on thermal dissipations and system irreversibility are investigated. In conclusion, more thermal dissipations and stronger thermodynamic processes occur in a system with larger conductivity ratios and rotation angles. This paper presents a detailed description of the thermodynamic properties of metamaterial tunable cells and provides reference for selecting appropriate design parameters on related positions to fabricate more efficient and energy-economical switchable TO devices.
Citation: Entropy
PubDate: 2017-10-13
DOI: 10.3390/e19100538
Issue No: Vol. 19, No. 10 (2017)
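As a reference point for the entropy-generation analysis sketched in the abstract above, the textbook local balance for pure heat conduction in an anisotropic medium is given below; this is the standard form, not the paper's rotation-corrected expression, and the rotation matrix R is introduced here only to indicate how a cell's rotation angle enters:

```latex
% Volumetric entropy generation rate for heat conduction in an
% anisotropic medium with conductivity tensor \kappa (textbook form).
\dot{S}'''_{\mathrm{gen}} \;=\; \frac{\nabla T \cdot \kappa\, \nabla T}{T^{2}},
\qquad
\kappa' \;=\; R\,\kappa\,R^{\mathsf{T}}
```

A cell rotated by R carries the transformed conductivity κ' = R κ Rᵀ, which is the route by which the rotation angle and conductivity ratio feed into the local dissipation.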
- Entropy, Vol. 19, Pages 539: Kovacs-Like Memory Effect in Athermal
Systems: Linear Response Analysis
Authors: Carlos Plata, Antonio Prados
First page: 539
Abstract: We analyze the emergence of Kovacs-like memory effects in athermal systems within the linear response regime. This is done by starting from both the master equation for the probability distribution and the equations for the physically-relevant moments. The general results are applied to a general class of models with conserved momentum and non-conserved energy. Our theoretical predictions, obtained within the first Sonine approximation, show an excellent agreement with the numerical results. Furthermore, we prove that the observed non-monotonic relaxation is consistent with the monotonic decay of the non-equilibrium entropy.
Citation: Entropy
PubDate: 2017-10-13
DOI: 10.3390/e19100539
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 540: Complexity-Entropy Maps as a Tool for the
Characterization of the Clinical Electrophysiological Evolution of
Patients under Pharmacological Treatment with Psychotropic Drugs
Authors: Juan Diaz, Diego Mateos, Carina Boyallian
First page: 540
Abstract: In the clinical electrophysiological practice, reading and comparing electroencephalographic (EEG) recordings are sometimes insufficient and take too much time. Tools coming from information theory or nonlinear systems theory, such as entropy and complexity, have been presented as an alternative to address this problem. In this work, we introduce a novel method—the permutation Lempel–Ziv Complexity vs. Permutation Entropy map. We apply this method to the EEGs of two patients with specific diagnosed pathologies during respective follow-up processes of pharmacological changes in order to detect alterations that are not evident with the usual inspection method. The method allows for comparison between different states of the patients’ treatment and with a healthy control group, giving global information about the signal and supplementing the traditional method of visual inspection of EEG.
Citation: Entropy
PubDate: 2017-10-13
DOI: 10.3390/e19100540
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 541: Stationary Wavelet Singular Entropy and
Kernel Extreme Learning for Bearing Multi-Fault Diagnosis
Authors: Nibaldo Rodriguez, Guillermo Cabrera, Carolina Lagos, Enrique Cabrera
First page: 541
Abstract: The behavioural diagnostics of bearings play an essential role in the management of several rotation machine systems. However, current diagnostic methods do not deliver satisfactory results with respect to failures in variable speed rotational phenomena. In this paper, we consider the Shannon entropy as an important fault signature pattern. To compute the entropy, we propose combining the stationary wavelet transform and singular value decomposition. The resulting feature extraction method, which we call stationary wavelet singular entropy (SWSE), aims to improve the accuracy of the diagnostics of bearing failure by finding a small number of high-quality fault signature patterns. The features extracted by the SWSE are then passed on to a kernel extreme learning machine (KELM) classifier. The proposed SWSE-KELM algorithm is evaluated using two bearing vibration signal databases obtained from Case Western Reserve University. We compare our SWSE feature extraction method to other well-known methods in the literature such as stationary wavelet packet singular entropy (SWPSE) and decimated wavelet packet singular entropy (DWPSE). The experimental results show that the SWSE-KELM consistently outperforms both the SWPSE-KELM and DWPSE-KELM methods. Further, our SWSE method requires fewer features than the other two evaluated methods, which makes our SWSE-KELM algorithm simpler and faster.
Citation: Entropy
PubDate: 2017-10-13
DOI: 10.3390/e19100541
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 542: Backtracking and Mixing Rate of Diffusion on
Uncorrelated Temporal Networks
Authors: Martin Gueuning, Renaud Lambiotte, Jean-Charles Delvenne
First page: 542
Abstract: We consider the problem of diffusion on temporal networks, where the dynamics of each edge is modelled by an independent renewal process. Despite the apparent simplicity of the model, the trajectories of a random walker exhibit non-trivial properties. Here, we quantify the walker’s tendency to backtrack at each step (return where he/she comes from), as well as the resulting effect on the mixing rate of the process. As we show through empirical data, non-Poisson dynamics may significantly slow down diffusion due to backtracking, by a mechanism intrinsically different from the standard bus paradox and related temporal mechanisms. We conclude by discussing the implications of our work for the interpretation of results generated by null models of temporal networks.
Citation: Entropy
PubDate: 2017-10-13
DOI: 10.3390/e19100542
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 543: Is Cetacean Intelligence Special? New
Perspectives on the Debate
Authors: Alejandro Chinea
First page: 543
Abstract: In recent years, the interpretation of our observations of animal behaviour, in particular that of cetaceans, has captured a substantial amount of attention in the scientific community. The traditional view that supports a special intellectual status for this mammalian order has fallen under significant scrutiny, in large part due to problems of how to define and test the cognitive performance of animals. This paper presents evidence supporting complex cognition in cetaceans obtained using the recently developed intelligence and embodiment hypothesis. This hypothesis is based on evolutionary neuroscience and postulates the existence of a common information-processing principle associated with nervous systems that evolved naturally and serves as the foundation from which intelligence can emerge. This theoretical framework explaining animal intelligence in neural computational terms is supported using a new mathematical model. Two pathways leading to higher levels of intelligence in animals are identified, each reflecting a trade-off either in energetic requirements or the number of neurons used. A description of the evolutionary pathway that led to increased cognitive capacities in cetacean brains is detailed and evidence supporting complex cognition in cetaceans is presented. This paper also provides an interpretation of the adaptive function of cetacean neuronal traits.
Citation: Entropy
PubDate: 2017-10-13
DOI: 10.3390/e19100543
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 544: Equilibration in the Nosé–Hoover
Isokinetic Ensemble: Effect of Inter-Particle Interactions
Authors: Shamik Gupta, Stefano Ruffo
First page: 544
Abstract: We investigate the stationary and dynamic properties of the celebrated Nosé–Hoover dynamics of many-body interacting Hamiltonian systems, with an emphasis on the effect of inter-particle interactions. To this end, we consider a model system with both short- and long-range interactions. The Nosé–Hoover dynamics aim to generate the canonical equilibrium distribution of a system at a desired temperature by employing a set of time-reversible, deterministic equations of motion. A signature of canonical equilibrium is a single-particle momentum distribution that is Gaussian. We find that the equilibrium properties of the system within the Nosé–Hoover dynamics coincide with those within the canonical ensemble. Moreover, starting from out-of-equilibrium initial conditions, the average kinetic energy of the system relaxes to its target value over a size-independent timescale. However, quite surprisingly, our results indicate that under the same conditions and with only long-range interactions present in the system, the momentum distribution relaxes to its Gaussian form in equilibrium over a scale that diverges with the system size. On adding short-range interactions, the relaxation is found to occur over a timescale that has a much weaker dependence on system size. This system-size dependence of the timescale vanishes when only short-range interactions are present in the system. An implication of such an ultra-slow relaxation when only long-range interactions are present in the system is that macroscopic observables other than the average kinetic energy, when estimated in the Nosé–Hoover dynamics, may take an unusually long time to relax to their canonical equilibrium values. Our work underlines the crucial role that interactions play in deciding the equivalence between Nosé–Hoover and canonical equilibrium.
Citation: Entropy
PubDate: 2017-10-14
DOI: 10.3390/e19100544
Issue No: Vol. 19, No. 10 (2017)
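For reference, the time-reversible, deterministic equations of motion mentioned in the abstract above take, in their standard Nosé–Hoover form (thermostat variable ζ, coupling parameter Q, target temperature T, N particles in d dimensions):

```latex
% Standard Nose-Hoover thermostatted equations of motion.
\dot{q}_i = \frac{p_i}{m}, \qquad
\dot{p}_i = F_i - \zeta\, p_i, \qquad
\dot{\zeta} = \frac{1}{Q}\left(\sum_i \frac{p_i^2}{m} - d N k_B T\right)
```

A Gaussian single-particle momentum distribution is then the signature that the target canonical state has actually been reached, which is the diagnostic the paper tracks.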
- Entropy, Vol. 19, Pages 545: Compressed Secret Key Agreement: Maximizing
Multivariate Mutual Information per Bit
Authors: Chung Chan
First page: 545
Abstract: The multiterminal secret key agreement problem by public discussion is formulated with an additional source compression step where, prior to the public discussion phase, users independently compress their private sources to filter out strongly correlated components in order to generate a common secret key. The objective is to maximize the achievable key rate as a function of the joint entropy of the compressed sources. Since the maximum achievable key rate captures the total amount of information mutual to the compressed sources, an optimal compression scheme essentially maximizes the multivariate mutual information per bit of randomness of the private sources, and can therefore be viewed more generally as a dimension reduction technique. Single-letter lower and upper bounds on the maximum achievable key rate are derived for the general source model, and an explicit polynomial-time computable formula is obtained for the pairwise independent network model. In particular, the converse results and the upper bounds are obtained from those of the related secret key agreement problem with rate-limited discussion. A precise duality is shown for the two-user case with one-way discussion, and such duality is extended to obtain the desired converse results in the multi-user case. In addition to posing new challenges in information processing and dimension reduction, the compressed secret key agreement problem helps shed new light on resolving the difficult problem of secret key agreement with rate-limited discussion by offering a more structured achieving scheme and some simpler conjectures to prove.
Citation: Entropy
PubDate: 2017-10-14
DOI: 10.3390/e19100545
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 546: Performance Improvement of Plug-and-Play
Dual-Phase-Modulated Quantum Key Distribution by Using a Noiseless
Amplifier
Authors: Dongyun Bai, Peng Huang, Hongxin Ma, Tao Wang, Guihua Zeng
First page: 546
Abstract: We show that the successful use of a noiseless linear amplifier (NLA) can help increase the maximum transmission distance and tolerate more excess noise of the plug-and-play dual-phase-modulated continuous-variable quantum key distribution. In particular, an equivalent entanglement-based scheme model is proposed to analyze the security, and the secure bound is derived with the presence of a Gaussian noisy and lossy channel. The analysis shows that the performance of the NLA-based protocol can be further improved by adjusting the effective parameters.
Citation: Entropy
PubDate: 2017-10-20
DOI: 10.3390/e19100546
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 548: A Permutation Disalignment Index-Based
Complex Network Approach to Evaluate Longitudinal Changes in
Brain-Electrical Connectivity
Authors: Nadia Mammone, Simona De Salvo, Cosimo Ieracitano, Silvia Marino, Angela Marra, Francesco Corallo, Francesco Morabito
First page: 548
Abstract: In the study of neurological disorders, Electroencephalographic (EEG) signal processing can provide valuable information because abnormalities in the interaction between neuron circuits may reflect on macroscopic abnormalities in the electrical potentials that can be detected on the scalp. A Mild Cognitive Impairment (MCI) condition, when caused by a disorder degenerating into dementia, affects the brain connectivity. Motivated by the promising results achieved through the recently developed descriptor of coupling strength between EEG signals, the Permutation Disalignment Index (PDI), the present paper introduces a novel PDI-based complex network model to evaluate the longitudinal variations in brain-electrical connectivity. A group of 33 amnestic MCI subjects was enrolled and followed up over four months. The results were compared to MoCA (Montreal Cognitive Assessment) tests, which score the cognitive abilities of the patient. A significant negative correlation could be observed between MoCA variation and the characteristic path length (λ) variation (r = −0.56, p = 0.0006), whereas a significant positive correlation could be observed between MoCA variation and the variation of the clustering coefficient (CC, r = 0.58, p = 0.0004), global efficiency (GE, r = 0.57, p = 0.0005) and small-worldness (SW, r = 0.57, p = 0.0005). Cognitive decline thus seems to reflect an underlying cortical “disconnection” phenomenon: worsened subjects indeed showed an increased λ and decreased CC, GE and SW. The PDI-based connectivity model, proposed in the present work, could be a novel tool for the objective quantification of longitudinal brain-electrical connectivity changes in MCI subjects.
Citation: Entropy
PubDate: 2017-10-17
DOI: 10.3390/e19100548
Issue No: Vol. 19, No. 10 (2017)
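The network indices tracked in the abstract above (characteristic path length λ and clustering coefficient CC) are standard graph measures; a minimal pure-Python sketch over an adjacency-list graph is given below. The PDI coupling measure itself, and the thresholding that turns PDI values into edges, are not reproduced here.

```python
from collections import deque

def bfs_lengths(adj, src):
    """Shortest-path lengths from src in an unweighted graph (BFS)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def characteristic_path_length(adj):
    """Mean shortest-path length over all reachable ordered node pairs."""
    nodes = list(adj)
    total, pairs = 0, 0
    for u in nodes:
        d = bfs_lengths(adj, u)
        for v in nodes:
            if v != u and v in d:
                total += d[v]
                pairs += 1
    return total / pairs

def clustering_coefficient(adj):
    """Mean local clustering: fraction of each node's neighbour pairs that are linked."""
    cc = []
    for u, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            cc.append(0.0)
            continue
        links = sum(1 for i, a in enumerate(nbrs) for b in nbrs[i + 1:] if b in adj[a])
        cc.append(2.0 * links / (k * (k - 1)))
    return sum(cc) / len(cc)
```

On a connectivity graph built per subject and session, an increase in `characteristic_path_length` together with a drop in `clustering_coefficient` is exactly the "disconnection" pattern the abstract reports.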
- Entropy, Vol. 19, Pages 549: Risk Assessment and Decision-Making under
Uncertainty in Tunnel and Underground Engineering
Authors: Yuanpu Xia, Ziming Xiong, Xin Dong, Hao Lu
First page: 549
Abstract: The impact of uncertainty on risk assessment and decision-making is increasingly being prioritized, especially for large geotechnical projects such as tunnels, where uncertainty is often the main source of risk. Epistemic uncertainty, which can be reduced, is the focus of attention. In this study, the existing entropy-risk decision model is first discussed and analyzed, and its deficiencies are improved upon and overcome. Then, this study addresses the fact that existing studies only consider parameter uncertainty and ignore the influence of the model uncertainty. Here, focus is on the issue of model uncertainty and differences in risk consciousness with different decision-makers. The utility theory is introduced in the model. Finally, a risk decision model is proposed based on the sensitivity analysis and the tolerance cost, which can improve decision-making efficiency. This research can provide guidance or reference for the evaluation and decision-making of complex systems engineering problems, and indicate a direction for further research of risk assessment and decision-making issues.
Citation: Entropy
PubDate: 2017-10-18
DOI: 10.3390/e19100549
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 550: Entropy of Entropy: Measurement of Dynamical
Complexity for Biological Systems
Authors: Chang Hsu, Sung-Yang Wei, Han-Ping Huang, Long Hsu, Sien Chi, Chung-Kang Peng
First page: 550
Abstract: Healthy systems exhibit complex dynamics in the changes of information embedded in physiologic signals on multiple time scales, which can be quantified by employing multiscale entropy (MSE) analysis. Here, we propose a measure of complexity, called entropy of entropy (EoE) analysis. The analysis combines the features of MSE and an alternate measure of information, called superinformation, useful for DNA sequences. In this work, we apply the hybrid analysis to the cardiac interbeat interval time series. We find that the EoE value is significantly higher for the healthy than the pathologic groups. In particular, a short time series of 70 heartbeats is sufficient for EoE analysis with an accuracy of 81%, while a longer series of 500 beats results in an accuracy of 90%. In addition, the EoE versus Shannon entropy plot of heart rate time series exhibits an inverted U relationship, with the maximal EoE value appearing between extreme order and disorder.
Citation: Entropy
PubDate: 2017-10-18
DOI: 10.3390/e19100550
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 551: Cosmographic Thermodynamics of Dark Energy
Authors: Orlando Luongo
First page: 551
Abstract: Dark energy’s thermodynamics is here revised giving particular attention to the role played by specific heats and entropy in a flat Friedmann-Robertson-Walker universe. Under the hypothesis of adiabatic heat exchanges, we rewrite the specific heats through cosmographic, model-independent quantities and we trace their evolutions in terms of z. We demonstrate that dark energy may be modeled as a perfect gas only if the Mayer relation is preserved. In particular, we find that the Mayer relation holds if j − q > 1/2. The former result turns out to be general so that, even at the transition time, the jerk parameter j cannot violate the condition j_tr > 1/2. This outcome rules out those models which predict opposite cases, whereas it turns out to be compatible with the concordance paradigm. We thus compare our bounds with the ΛCDM model, highlighting that a constant dark energy term seems to be compatible with the so-obtained specific heat thermodynamics, after a precise redshift domain. In our treatment, we show the degeneracy between unified dark energy models with zero sound speed and the concordance paradigm. Under this scheme, we suggest that the cosmological constant may be viewed as an effective approach to dark energy either at small or high redshift domains. Last but not least, we discuss how to reconstruct dark energy’s entropy from specific heats and we finally compute both entropy and specific heats into the luminosity distance d_L, in order to fix constraints over them through cosmic data.
Citation: Entropy
PubDate: 2017-10-19
DOI: 10.3390/e19100551
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 552: Prediction Model of the Power System
Frequency Using a Cross-Entropy Ensemble Algorithm
Authors: Yi Tang, Han Cui, Qi Wang
First page: 552
Abstract: Frequency prediction after a disturbance has received increasing research attention given its substantial value in providing a decision-making foundation in power system emergency control. With the advancing development of machine learning, the analysis of power systems with machine-learning methods has become completely different from traditional approaches. In this paper, an ensemble algorithm using cross-entropy as a combination strategy is presented to address the trade-off between prediction accuracy and calculation speed. The prediction difficulty caused by inadequate numbers of severe disturbance samples is also overcome by the ensemble model. In the proposed ensemble algorithm, base learners are selected following the principle of diversity, which guarantees the ensemble algorithm’s accuracy. Cross-entropy is applied to evaluate the fitting performance of the base learners and to set the weight coefficients in the ensemble algorithm. Subsequently, an online prediction model based on the algorithm is established that integrates training, prediction and updating. In the Western System Coordinating Council 9-bus (WSCC 9) system and the Institute of Electrical and Electronics Engineers 39-bus (IEEE 39) system, the algorithm is shown to significantly improve the prediction accuracy in both sample-rich and sample-poor situations, verifying the effectiveness and superiority of the proposed ensemble algorithm.
Citation: Entropy
PubDate: 2017-10-19
DOI: 10.3390/e19100552
Issue No: Vol. 19, No. 10 (2017)
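One plausible reading of "cross-entropy as a combination strategy" is sketched below for a binary target: score each base learner by its cross-entropy on validation data, then weight it by exp(−CE). This toy is an assumption for illustration only; the paper's base learners, target variable and exact weighting rule are not specified here.

```python
import math

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean binary cross-entropy between targets and predicted probabilities."""
    return -sum(t * math.log(max(p, eps)) + (1 - t) * math.log(max(1 - p, eps))
                for t, p in zip(y_true, y_pred)) / len(y_true)

def ensemble_weights(y_true, predictions):
    """Weight each base learner by exp(-CE), normalized to sum to one."""
    scores = [math.exp(-cross_entropy(y_true, p)) for p in predictions]
    s = sum(scores)
    return [v / s for v in scores]

def ensemble_predict(weights, predictions):
    """Weighted average of the base learners' predictions."""
    return [sum(w * p[i] for w, p in zip(weights, predictions))
            for i in range(len(predictions[0]))]
```

The point of the construction is that a learner whose probabilities track the targets closely receives a larger weight, so the combined prediction leans on the better-fitting members of the ensemble.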
- Entropy, Vol. 19, Pages 553: Rainfall Network Optimization Using Radar and
Entropy
Authors: Hui-Chung Yeh, Yen-Chang Chen, Che-Hao Chang, Cheng-Hsuan Ho, Chiang Wei
First page: 553
Abstract: In this study, a method combining radar and entropy was proposed to design a rainfall network. Owing to the shortage of rain gauges in mountain areas, weather radars are used to measure rainfall over catchments. The major advantage of radar is that it is possible to observe rainfall widely in a short time. However, the rainfall data obtained by radar do not necessarily correspond to that observed by ground-based rain gauges. The in-situ rainfall data from telemetering rain gauges were used to calibrate a radar system. Therefore, the rainfall intensity, as well as its distribution over the catchment, can be obtained using radar. Once the rainfall data of past years at the desired locations over the catchment were generated, the entropy based on probability was applied to optimize the rainfall network. This method is applicable in remote and mountain areas. Its most important utility is to construct an optimal rainfall network in an ungauged catchment. The design of a rainfall network in the catchment of the Feitsui Reservoir was used to illustrate the various steps as well as the reliability of the method.
Citation: Entropy
PubDate: 2017-10-19
DOI: 10.3390/e19100553
Issue No: Vol. 19, No. 10 (2017)
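Entropy-based gauge network design of the kind described above is often posed as picking the station subset with maximal joint entropy of the (discretized) rainfall records. The greedy selection below is a minimal sketch under that assumption; the paper's specific objective and the radar-derived station series are not reproduced here.

```python
import math
from collections import Counter

def joint_entropy(columns):
    """Shannon entropy (nats) of the joint symbol tuples across selected stations."""
    rows = list(zip(*columns))
    n = len(rows)
    return -sum((c / n) * math.log(c / n) for c in Counter(rows).values())

def greedy_network(stations, k):
    """Greedily add, one at a time, the station that most increases joint entropy.

    `stations` maps a station name to its discretized rainfall record.
    """
    chosen = []
    remaining = dict(stations)
    for _ in range(k):
        best = max(remaining,
                   key=lambda s: joint_entropy([stations[c] for c in chosen]
                                               + [remaining[s]]))
        chosen.append(best)
        del remaining[best]
    return chosen
```

A station whose record is redundant with the already-chosen set adds little joint entropy and is passed over, which is how the method avoids clustering gauges at informationally duplicate locations.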
- Entropy, Vol. 19, Pages 554: Upper Bounds for the Rate Distortion Function
of Finite-Length Data Blocks of Gaussian WSS Sources
Authors: Jesús Gutiérrez-Gutiérrez, Marta Zárraga-Rodríguez, Xabier Insausti
First page: 554
Abstract: In this paper, we present upper bounds for the rate distortion function (RDF) of finite-length data blocks of Gaussian wide sense stationary (WSS) sources and we propose coding strategies to achieve such bounds. In order to obtain those bounds, we previously derive new results on the discrete Fourier transform (DFT) of WSS processes.
Citation: Entropy
PubDate: 2017-10-19
DOI: 10.3390/e19100554
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 555: The Prior Can Often Only Be Understood in the
Context of the Likelihood
Authors: Andrew Gelman, Daniel Simpson, Michael Betancourt
First page: 555
Abstract: A key sticking point of Bayesian analysis is the choice of prior distribution, and there is a vast literature on potential defaults including uniform priors, Jeffreys’ priors, reference priors, maximum entropy priors, and weakly informative priors. These methods, however, often manifest a key conceptual tension in prior modeling: a model encoding true prior information should be chosen without reference to the model of the measurement process, but almost all common prior modeling techniques are implicitly motivated by a reference likelihood. In this paper we resolve this apparent paradox by placing the choice of prior into the context of the entire Bayesian analysis, from inference to prediction to model evaluation.
Citation: Entropy
PubDate: 2017-10-19
DOI: 10.3390/e19100555
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 556: A Combinatorial Grassmannian Representation
of the Magic Three-Qubit Veldkamp Line
Authors: Metod Saniga
First page: 556
Abstract: It is demonstrated that the magic three-qubit Veldkamp line occurs naturally within the Veldkamp space of a combinatorial Grassmannian of type G_2(7), V(G_2(7)). The lines of the ambient symplectic polar space are those lines of V(G_2(7)) whose cores feature an odd number of points of G_2(7). After introducing the basic properties of three different types of points and seven distinct types of lines of V(G_2(7)), we explicitly show the combinatorial Grassmannian composition of the magic Veldkamp line; we first give representatives of points and lines of its core generalized quadrangle GQ(2,2), and then additional points and lines of a specific elliptic quadric Q⁻(5,2), a hyperbolic quadric Q⁺(5,2), and a quadratic cone Q̂(4,2) that are centered on the GQ(2,2). In particular, each point of Q⁺(5,2) is represented by a Pasch configuration and its complementary line, the (Schläfli) double-six of points in Q⁻(5,2) comprises six Cayley–Salmon configurations and six Desargues configurations with their complementary points, and the remaining Cayley–Salmon configuration stands for the vertex of Q̂(4,2).
Citation: Entropy
PubDate: 2017-10-19
DOI: 10.3390/e19100556
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 557: Multivariate Multiscale Symbolic Entropy
Analysis of Human Gait Signals
Authors: Jian Yu, Junyi Cao, Wei-Hsin Liao, Yangquan Chen, Jing Lin, Rong Liu
First page: 557
Abstract: The complexity quantification of human gait time series has received considerable interest for wearable healthcare. Symbolic entropy is one of the most prevalent algorithms used to measure the complexity of a time series, but it fails to account for the multiple time scales and multi-channel statistical dependence inherent in such time series. To overcome this problem, multivariate multiscale symbolic entropy is proposed in this paper to distinguish the complexity of human gait signals in health and disease. The embedding dimension, time delay and quantization levels are appropriately designed to construct similarity of signals for calculating complexity of human gait. The proposed method can accurately detect healthy and pathologic group from realistic multivariate human gait time series on multiple scales. It strongly supports wearable healthcare with simplicity, robustness, and fast computation.
Citation: Entropy
PubDate: 2017-10-19
DOI: 10.3390/e19100557
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 558: Comparative Statistical Mechanics of Muscle
and Non-Muscle Contractile Systems: Stationary States of Near-Equilibrium
Systems in A Linear Regime
Authors: Yves Lecarpentier, Victor Claes, Xénophon Krokidis, Jean-Louis Hébert, Oumar Timbely, François-Xavier Blanc, Francine Michel, Alexandre Vallée
First page: 558
Abstract: A. Huxley’s equations were used to determine the mechanical properties of muscle myosin II (MII) at the molecular level, as well as the probability of the occurrence of the different stages in the actin–myosin cycle. It was then possible to use the formalism of statistical mechanics with the grand canonical ensemble to calculate numerous thermodynamic parameters such as entropy, internal energy, affinity, thermodynamic flow, thermodynamic force, and entropy production rate. This allows us to compare the thermodynamic parameters of a non-muscle contractile system, such as the normal human placenta, with those of different striated skeletal muscles (soleus and extensor digitorum longus) as well as the heart muscle and smooth muscles (trachea and uterus) in the rat. In the human placental tissues, it was observed that the kinetics of the actin–myosin crossbridges were considerably slow compared with those of smooth and striated muscular systems. The entropy production rate was also particularly low in the human placental tissues, as compared with that observed in smooth and striated muscular systems. This is partly due to the low thermodynamic flow found in the human placental tissues. However, the unitary force of non-muscle myosin (NMII) generated by each crossbridge cycle in the myofibroblasts of the human placental tissues was similar in magnitude to that of MII in the myocytes of both smooth and striated muscle cells. Statistical mechanics represents a powerful tool for studying the thermodynamics of all contractile muscle and non-muscle systems.
Citation: Entropy
PubDate: 2017-10-20
DOI: 10.3390/e19100558
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 559: EXONEST: The Bayesian Exoplanetary Explorer
Authors: Kevin Knuth, Ben Placek, Daniel Angerhausen, Jennifer Carter, Bryan D’Angelo, Anthony Gai, Bertrand Carado
First page: 559
Abstract: The fields of astronomy and astrophysics are currently engaged in an unprecedented era of discovery as recent missions have revealed thousands of exoplanets orbiting other stars. While the Kepler Space Telescope mission has enabled most of these exoplanets to be detected by identifying transiting events, exoplanets often exhibit additional photometric effects that can be used to improve the characterization of exoplanets. The EXONEST Exoplanetary Explorer is a Bayesian exoplanet inference engine based on nested sampling and originally designed to analyze archived Kepler Space Telescope and CoRoT (Convection Rotation et Transits planétaires) exoplanet mission data. We discuss the EXONEST software package and describe how it accommodates plug-and-play models of exoplanet-associated photometric effects for the purpose of exoplanet detection, characterization and scientific hypothesis testing. The current suite of models allows for both circular and eccentric orbits in conjunction with photometric effects, such as the primary transit and secondary eclipse, reflected light, thermal emissions, ellipsoidal variations, Doppler beaming and superrotation. We discuss our new efforts to expand the capabilities of the software to include more subtle photometric effects involving reflected and refracted light. We discuss the EXONEST inference engine design and introduce our plans to port the current MATLAB-based EXONEST software package over to the next generation Exoplanetary Explorer, which will be a Python-based open source project with the capability to employ third-party plug-and-play models of exoplanet-related photometric effects.
Citation: Entropy
PubDate: 2017-10-20
DOI: 10.3390/e19100559
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 561: Comparing Markov Chain Samplers for Molecular
Simulation
Authors: Robert Skeel, Youhan Fang
First page: 561
Abstract: Markov chain Monte Carlo sampling propagators, including numerical integrators for stochastic dynamics, are central to the calculation of thermodynamic quantities and determination of structure for molecular systems. Efficiency is paramount, and to a great extent, this is determined by the integrated autocorrelation time (IAcT). This quantity varies depending on the observable that is being estimated. It is suggested that it is the maximum of the IAcT over all observables that is the relevant metric. Reviewed here is a method for estimating this quantity. For reversible propagators (which are those that satisfy detailed balance), the maximum IAcT is determined by the spectral gap in the forward transfer operator, but for irreversible propagators, the maximum IAcT can be far less than or greater than what might be inferred from the spectral gap. This is consistent with recent theoretical results (not to mention past practical experience) suggesting that irreversible propagators generally perform better, if not much better, than reversible ones. Typical irreversible propagators have a parameter controlling the mix of ballistic and diffusive movement. To gain insight into the effect of the damping parameter for Langevin dynamics, its optimal value is obtained here for a multidimensional quadratic potential energy function.
Citation: Entropy
PubDate: 2017-10-21
DOI: 10.3390/e19100561
Issue No: Vol. 19, No. 10 (2017)
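A basic estimator of the integrated autocorrelation time discussed above is sketched below for a single observable: sum the empirical autocorrelations, truncating at the first non-positive lag. The truncation rule is a common simplification, not the windowing method of the paper, and estimating the maximum over all observables is a harder problem the sketch does not address.

```python
import random

def autocovariance(x, lag):
    """Empirical autocovariance of a scalar time series at the given lag."""
    n = len(x)
    mean = sum(x) / n
    return sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag)) / n

def integrated_act(x, max_lag=200):
    """IAcT estimate: 1 + 2 * sum of autocorrelations up to the first non-positive one."""
    c0 = autocovariance(x, 0)
    tau = 1.0
    for lag in range(1, max_lag + 1):
        rho = autocovariance(x, lag) / c0
        if rho <= 0.0:
            break
        tau += 2.0 * rho
    return tau
```

As a sanity check, an AR(1) chain with coefficient φ has exact IAcT (1 + φ)/(1 − φ), so slowly mixing chains (φ near 1) should return estimates well above the value near 1 obtained for independent samples.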
- Entropy, Vol. 19, Pages 562: Symmetries and Geometrical Properties of
Dynamical Fluctuations in Molecular Dynamics
Authors: Robert Jack, Marcus Zimmer
First page: 562
Abstract: We describe some general results that constrain the dynamical fluctuations that can occur in non-equilibrium steady states, with a focus on molecular dynamics. That is, we consider Hamiltonian systems, coupled to external heat baths, and driven out of equilibrium by non-conservative forces. We focus on the probabilities of rare events (large deviations). First, we discuss a PT (parity-time) symmetry that appears in ensembles of trajectories where a current is constrained to have a large (non-typical) value. We analyse the heat flow in such ensembles, and compare it with non-equilibrium steady states. Second, we consider pathwise large deviations that are defined by considering many copies of a system. We show how the probability currents in such systems can be decomposed into orthogonal contributions that are related to convergence to equilibrium and to dissipation. We discuss the implications of these results for modelling non-equilibrium steady states.
Citation: Entropy
PubDate: 2017-10-22
DOI: 10.3390/e19100562
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 563: Construction of New Fractional Repetition
Codes from Relative Difference Sets with λ=1
Authors: Young-Sik Kim, Hosung Park, Jong-Seon No
First page: 563
Abstract: Fractional repetition (FR) codes are a class of distributed storage codes that replicate and distribute information data over several nodes for easy repair as well as efficient reconstruction. In this paper, we propose three new constructions of FR codes based on relative difference sets (RDSs) with λ = 1. Specifically, we propose new (q^2−1, q, q) FR codes using cyclic RDSs with parameters (q+1, q−1, q, 1) constructed from q-ary m-sequences of period q^2−1 for a prime power q; (p^2, p, p) FR codes using non-cyclic RDSs with parameters (p, p, p, 1) for an odd prime p or p = 4; and (4^l, 2^l, 2^l) FR codes using non-cyclic RDSs with parameters (2^l, 2^l, 2^l, 1) constructed from Galois rings for a positive integer l. They differ from existing FR codes in the code parameters that can be constructed. It turns out that the proposed FR codes are (near-)optimal for some parameters in terms of the FR capacity bound. In particular, the (8, 3, 3) and (9, 3, 3) FR codes are optimal; that is, they meet the FR capacity bound for all k. To support various code parameters, we modify the proposed (q^2−1, q, q) FR codes using decimation by a factor of the code length q^2−1, which also yields new good FR codes.
Citation: Entropy
PubDate: 2017-10-22
DOI: 10.3390/e19100563
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 564: Prior Elicitation, Assessment and Inference
with a Dirichlet Prior
Authors: Michael Evans, Irwin Guttman, Peiying Li
First page: 564
Abstract: Methods are developed for eliciting a Dirichlet prior by stating bounds on the individual probabilities that hold with high prior probability. This approach to selecting a prior is applied to a contingency table problem, where it is demonstrated how to assess the prior with respect to the bias it induces, as well as how to check for prior-data conflict. It is shown that the assessment of a hypothesis via relative belief can easily take into account what it means for the falsity of the hypothesis to correspond to a difference of practical importance, and can provide evidence in favor of the hypothesis.
Citation: Entropy
PubDate: 2017-10-22
DOI: 10.3390/e19100564
Issue No: Vol. 19, No. 10 (2017)
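The elicitation idea above, choosing a Dirichlet prior so that stated bounds on each cell probability hold with high prior probability, can be sketched by Monte Carlo. The bounds and the scaling rule below are illustrative assumptions, not the authors' procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical elicited bounds [l_i, u_i] for three cell probabilities.
lower = np.array([0.1, 0.2, 0.1])
upper = np.array([0.5, 0.6, 0.5])

def coverage(alpha, n=20_000):
    """Monte Carlo prior probability that ALL cell probabilities fall
    inside their elicited bounds under Dirichlet(alpha)."""
    p = rng.dirichlet(alpha, size=n)
    inside = np.all((p >= lower) & (p <= upper), axis=1)
    return inside.mean()

# Scale a base shape (prior means at the bound midpoints) until the
# bounds hold with high prior probability.
base = (lower + upper) / 2
for c in (1, 5, 20, 80, 320):
    print(c, round(coverage(c * base / base.sum()), 3))
```

Increasing the concentration c tightens the prior around the midpoints, so the coverage probability rises toward 1; the elicited prior is the smallest c meeting the stated high-probability requirement.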
- Entropy, Vol. 19, Pages 565: Fluctuation of Information Entropy Measures
in Cell Image
Authors: Ishay Wohl, Naomi Zurgil, Yaron Hakuk, Maria Sobolev, Mordechai Deutsch
First page: 565
Abstract: A simple, label-free cytometry technique is introduced. It is based on the analysis of the fluctuation of image Gray Level Information Entropy (GLIE), which is shown to reflect intracellular biophysical properties such as generalized entropy. In this study, the analytical relations between cellular thermodynamic generalized entropy and diffusivity and GLIE fluctuation measures are explored for the first time. The standard deviation (SD) of GLIE is shown by experiment, simulation and theoretical analysis to be insensitive to microscope system “noise”. The ability of GLIE fluctuation measures to reflect basic cellular entropy conditions of early death and malignancy is then demonstrated in a cell model of healthy-donor human lymphocytes, malignant Jurkat cells, as well as dead lymphocytes and Jurkat cells. GLIE-based fluctuation measures appear to have the advantage of displaying biophysical characteristics of the tested cells, such as diffusivity and entropy, in a simple and illustrative way.
Citation: Entropy
PubDate: 2017-10-23
DOI: 10.3390/e19100565
Issue No: Vol. 19, No. 10 (2017)
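The GLIE measure described above is the Shannon entropy of an image's gray-level histogram, and the fluctuation measure is its SD over a time series of frames. A minimal sketch, with synthetic random frames standing in for real cell images (the authors' imaging pipeline is not reproduced here):

```python
import numpy as np

def glie(img, bins=256):
    """Shannon entropy (bits) of the gray-level histogram of one frame."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                     # drop empty bins before taking logs
    return -np.sum(p * np.log2(p))

# Illustrative stand-in for a time-lapse image stack: 50 frames of
# 64x64 random gray levels; real data would come from the microscope.
rng = np.random.default_rng(2)
frames = rng.integers(0, 256, size=(50, 64, 64))
values = np.array([glie(f) for f in frames])
print(values.mean(), values.std())   # SD of GLIE is the fluctuation measure
```

For near-uniform gray levels the per-frame entropy approaches the 8-bit maximum, while the frame-to-frame SD captures the fluctuation that the paper links to intracellular diffusivity.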
- Entropy, Vol. 19, Pages 566: Biological Aging and Life Span Based on
Entropy Stress via Organ and Mitochondrial Metabolic Loading
Authors: Kalyan Annamalai, Arnab Nanda
First page: 566
Abstract: The energy for sustaining life is released through the oxidation of glucose, fats, and proteins. Part of the energy released within each cell is stored as the chemical energy of adenosine triphosphate molecules, which is essential for performing life-sustaining functions, while the remainder is released as heat to maintain the isothermal state of the body. Earlier literature introduced availability concepts from thermodynamics; related the specific irreversibility and entropy generation rates to the metabolic efficiency and energy release rate of each organ k; computed the whole-body specific entropy generation rate at any given age as the sum of entropy generation within four vital organs, brain, heart, kidney and liver (BHKL), with a fifth pseudo-organ comprising the rest of the organs (R5); and estimated the life span using an upper limit on the lifetime entropy generated per unit mass of body, σM,life. The organ entropy stress, expressed as the lifetime specific entropy generated per unit mass of organ (kJ/(K kg of organ k)), was used to rank the organs: the heart ranked highest and the liver lowest. The present work adds the effects of (1) two additional organs, adipose tissue (AT) and skeletal muscle (SM), which are of importance to athletes; (2) the proportions of nutrients oxidized, which affect blood temperature and metabolic efficiencies; (3) conversion of the entropy stress from the organ/cellular level to the mitochondrial level; and (4) the use of these parameters as metabolism-based biomarkers for quantifying the biological aging process in reaching the limit σM,life.
Based on the 7-organ model and Elia's constants for organ metabolic rates for a male of 84 kg steady mass, and using basic and derived allometric constants of the organs, the lifetime energy expenditure is estimated to be 2725 MJ/kg of body mass, while the lifetime entropy generated is 6050 kJ/(K kg of body mass), with contributions of 190, 1835, 610, 290, 700, 1470 and 95 kJ/K from AT, B, H, K, L, SM and R7, respectively, per kg of body mass over a lifetime. The corresponding lifetime entropy stresses of these organs are 1.2, 60.5, 110.5, 110.5, 50.5, 3.5 and 3.0 MJ/K per kg of organ mass. Thus, among the vital organs, the highest stress is on the heart and kidney and the lowest on the liver. The 5-organ model (BHKL and R5) shows a similar ranking. Based on mitochondrial volume and the 5-organ model, the entropy stresses of the BHKL organs, expressed per unit mitochondrial volume, are 12,670, 5465, 2855 and 4730 kJ/(K cm3 of mitochondria), indicating that the brain is the most stressed and the liver the least. Thus, the organ entropy stress ranking based on unit volume of mitochondria within an organ (kJ/(K cm3 of mitochondria of organ k)) differs from that based on unit mass of organ. Based on metabolic loading, the brains of athletes, already under extreme mitochondrial stress and facing reduced metabolic efficiency under concussion, are subjected to even greater stress. In the absence of non-intrusive measurements for estimating the organ-based metabolic rates that could serve as metabolism-based biomarkers for the biological aging (BA) of the whole body, alternative methods are suggested for estimating the biological aging rate.
Citation: Entropy
PubDate: 2017-10-23
DOI: 10.3390/e19100566
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 567: Radially Excited AdS5 Black Holes in
Einstein–Maxwell–Chern–Simons Theory
Authors: Jose Blázquez-Salcedo
First page: 567
Abstract: In the large coupling regime of the 5-dimensional Einstein–Maxwell–Chern–Simons theory, charged and rotating cohomogeneity-1 black holes form sequences of extremal and non-extremal radially excited configurations. These asymptotically global Anti-de Sitter (AdS 5 ) black holes form a discrete set of solutions, characterised by the vanishing of the total angular momenta, or the horizon angular velocity. However, the solutions are not static. In this paper, we study the branch structure that contains these excited states, and its relation with the static Reissner–Nordström-AdS black hole. Thermodynamic properties of these solutions are considered, revealing that the branches with lower excitation number can become thermodynamically unstable beyond certain critical solutions that depend on the free parameters of the configuration.
Citation: Entropy
PubDate: 2017-10-24
DOI: 10.3390/e19100567
Issue No: Vol. 19, No. 10 (2017)
- Entropy, Vol. 19, Pages 568: Entropy Analysis of Short-Term Heartbeat
Interval Time Series during Regular Walking
Authors: Bo Shi, Yudong Zhang, Chaochao Yuan, Shuihua Wang, Peng Li
First page: 568
Abstract: Entropy measures have been extensively used to assess heart rate variability (HRV), a noninvasive marker of cardiovascular autonomic regulation. It has yet to be elucidated whether those entropy measures can sensitively respond to changes in autonomic balance and whether the responses, if any, are consistent across different entropy measures. Sixteen healthy subjects were enrolled in this study. Each subject undertook two 5-min ECG measurements, one in a resting seated position and another while walking on a treadmill at a regular speed of 5 km/h. For each subject, the two measurements were conducted in a randomized order with a 30-min rest between them. HRV time series were derived and analyzed by eight entropy measures, i.e., approximate entropy (ApEn), corrected ApEn (cApEn), sample entropy (SampEn), fuzzy entropy without removing local trend (FuzzyEn-g), fuzzy entropy with local trend removal (FuzzyEn-l), permutation entropy (PermEn), conditional entropy (CE), and distribution entropy (DistEn). Compared to the resting seated position, regular walking led to significantly reduced CE and DistEn (both p ≤ 0.006; Cohen’s d = 0.9 for CE, d = 1.7 for DistEn) and increased PermEn (p < 0.0001; d = 1.9), while all these changes disappeared after performing a linear detrend or a wavelet detrend (<~0.03 Hz) on HRV. In addition, cApEn, SampEn, FuzzyEn-g, and FuzzyEn-l showed significant decreases during regular walking after linear detrending (all p < 0.006; 0.8 < d < 1), while a significantly increased ApEn (p < 0.0001; d = 1.9) and a significantly reduced cApEn (p = 0.0006; d = 0.8) were observed after wavelet detrending. To conclude, multiple entropy analyses should be performed when assessing HRV in order to obtain objective results, and caution should be exercised when drawing conclusions from a single measure.
Moreover, results from different studies will not be comparable unless it is clearly stated whether the data have been detrended and the detrending methods have been specified.
Citation: Entropy
PubDate: 2017-10-24
DOI: 10.3390/e19100568
Issue No: Vol. 19, No. 10 (2017)
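Of the eight measures compared above, sample entropy (SampEn) is representative: it is −ln(A/B), where B counts pairs of length-m templates matching within a tolerance r·SD and A counts matches of length m+1. A compact sketch (a simplified approximation, not the authors' exact implementation or parameters):

```python
import numpy as np

def sampen(x, m=2, r=0.2):
    """Sample entropy of series x: -ln(A/B), with template length m,
    tolerance r * SD(x), Chebyshev distance, self-matches excluded."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def matches(mm):
        # All overlapping templates of length mm, pairwise Chebyshev distances.
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        n = len(templ)
        return ((d <= tol).sum() - n) / 2    # drop diagonal self-matches

    B, A = matches(m), matches(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(3)
white = rng.standard_normal(300)           # irregular series -> higher SampEn
regular = np.sin(np.arange(300) * 0.2)     # regular series -> lower SampEn
print(sampen(white), sampen(regular))
```

The contrast between the irregular and regular test series illustrates why such measures are sensitive to the regularity changes in HRV that walking induces, and why detrending can alter the verdict.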
- Entropy, Vol. 19, Pages 569: Detection of Causal Relations in Time Series
Affected by Noise in Tokamaks Using Geodesic Distance on Gaussian
Manifolds
Authors: Andrea Murari, Teddy Craciunescu, Emmanuele Peluso, Michela Gelfusa, JET Contributors
First page: 569
Abstract: Modern experiments in magnetic confinement nuclear fusion can produce gigabytes of data, mainly in the form of time series. The acquired signals, composing massive databases, are typically affected by significant levels of noise. The interpretation of the time series can therefore become quite involved, particularly when tenuous causal relations have to be investigated. In recent years, synchronization experiments to control potentially dangerous instabilities have become a subject of intensive research, and their interpretation requires quite delicate causality analysis. In this paper, the approach of Information Geometry is applied to the problem of assessing the effectiveness of synchronization experiments on JET (Joint European Torus). In particular, the use of the geodesic distance on Gaussian manifolds is shown to improve the results of advanced techniques such as recurrence plots and complex networks when the noise level is not negligible. In cases affected by particularly high levels of noise, which compromise the traditional treatments, the geodesic distance on Gaussian manifolds still yields quite encouraging results. In addition to consolidating conclusions that were previously quite uncertain, the proposed approach permits the successful analysis of signals from discharges that were otherwise unusable, thereby salvaging the interpretation of those experiments.
Citation: Entropy
PubDate: 2017-10-24
DOI: 10.3390/e19100569
Issue No: Vol. 19, No. 10 (2017)
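For univariate Gaussians, the geodesic (Fisher–Rao) distance invoked above has a well-known closed form, d = 2√2·artanh(δ) with δ² = [(μ1−μ2)²/2 + (σ1−σ2)²] / [(μ1−μ2)²/2 + (σ1+σ2)²]. The sketch below implements only this distance; how the JET time series are mapped to Gaussian distributions is not detailed in the abstract and is not attempted here:

```python
import math

def gd_gauss(mu1, s1, mu2, s2):
    """Fisher-Rao geodesic distance between the univariate Gaussians
    N(mu1, s1^2) and N(mu2, s2^2) on the Gaussian manifold."""
    num = (mu1 - mu2) ** 2 / 2 + (s1 - s2) ** 2
    den = (mu1 - mu2) ** 2 / 2 + (s1 + s2) ** 2
    delta = math.sqrt(num / den)
    return 2 * math.sqrt(2) * math.atanh(delta)

# Same mean, doubled spread: the formula reduces to sqrt(2)*ln(2).
print(gd_gauss(0.0, 1.0, 0.0, 2.0))  # ~0.9803
```

Because the metric weights mean shifts by the local standard deviation, two noisy signals with similar spread are judged close even when raw Euclidean distance is large, which is what makes the measure robust at the high noise levels described in the abstract.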