Abstract: Incorporating prior knowledge into a segmentation process, be it geometrical constraints such as volume penalisation or (partial) convexity enforcement, or topological prescriptions to preserve the contextual relations between objects, proves to improve accuracy in medical image segmentation, in particular when addressing the issue of weak boundary definition. Motivated by this observation, the proposed contribution aims to provide a unified variational framework including geometrical constraints in the training of convolutional neural networks in the form of a penalty in the loss function. These geometrical constraints take several forms and encompass level curve alignment through the integration of the weighted total variation, an area penalisation phrased as a hard constraint in the modelling, and an intensity homogeneity criterion based on a combination of the standard Dice loss with the piecewise constant Mumford–Shah model. The mathematical formulation yields a non-smooth, non-convex optimisation problem, which rules out conventional smooth optimisation techniques and leads us to adopt a Lagrangian setting. The application falls within the scope of organ-at-risk segmentation in CT (computed tomography) images, in the context of radiotherapy planning. Experiments demonstrate that our method provides significant improvements (i) over existing non-constrained approaches, both in terms of quantitative criteria, such as the measure of overlap, and qualitative assessment (spatial regularisation/coherency, fewer outliers), and (ii) over in-layer constrained deep convolutional networks, and shows a certain degree of versatility. PubDate: 2022-05-23
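To make the penalised-loss idea concrete, here is a minimal PyTorch sketch of a Dice data term combined with a weighted total variation penalty, assuming a soft segmentation map `probs` of shape (N, 1, H, W) and an edge-weight map `g` of the same shape derived from the image; the function names, shapes and the weight `lam` are illustrative, not the authors' exact formulation (in particular, the hard area constraint and the Mumford–Shah term are omitted).

```python
import torch

def dice_loss(probs, target, eps=1e-6):
    # Soft Dice loss between a predicted probability map and a binary target.
    inter = (probs * target).sum()
    return 1.0 - (2.0 * inter + eps) / (probs.sum() + target.sum() + eps)

def weighted_tv(probs, g):
    # Weighted total variation: g is an edge-stopping weight (small on image
    # contours), so minimising this term aligns the level curves of `probs`
    # with the contours of the underlying image.
    dx = probs[:, :, 1:, :] - probs[:, :, :-1, :]
    dy = probs[:, :, :, 1:] - probs[:, :, :, :-1]
    return (g[:, :, 1:, :] * dx.abs()).sum() + (g[:, :, :, 1:] * dy.abs()).sum()

def constrained_loss(probs, target, g, lam=0.1):
    # Dice data term plus a geometric penalty in the loss, in the spirit of
    # the unified framework described above.
    return dice_loss(probs, target) + lam * weighted_tv(probs, g)
```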
Abstract: In many applications of geometric processing, the border of a continuous shape and that of its digitization (i.e., its pixelated representation) should be matched. Assuming that the continuous-shape boundary is locally turn bounded, we prove that there exists a mapping between the boundary of the digitization and that of the continuous shape such that these boundaries are traversed together in cyclic order. Then, we use this mapping to prove the multigrid convergence of perimeter estimators that are based on polygons inscribed in the digitization. Furthermore, the convergence speed is given for this class of estimators. If, moreover, the continuous curves also have a Lipschitz turn, an explicit error bound is calculated. PubDate: 2022-05-20
Abstract: Image encryption has become an indispensable tool for achieving highly secure image-based communications. Numerous encryption approaches have appeared and demonstrated varying degrees of robustness to adversarial attacks. In this paper, an efficient and robust image encryption algorithm is established based on randomized difference equations, random permutations and randomized logic circuits. Specifically, hyperchaotic and chaotic systems are used to generate pseudo-random sequences. These sequences are then used to define random first-order difference equations, chaotic permutations and logic circuits. Image encryption based on these three randomized modules shows high computational efficiency as well as strong robustness against statistical, differential, and chosen-plaintext attacks. The proposed scheme leads to almost zero correlation in the encrypted images, entropy values of more than 7.99 for the test images, and a key space size of \(2^{572}\). Furthermore, differential analysis shows that the number of pixels change rate (NPCR) and the unified average change intensity (UACI) for the proposed technique are on average 99.61% and 33.35%, respectively. PubDate: 2022-05-19
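NPCR and UACI are standard differential-attack metrics rather than something specific to this scheme; a minimal numpy sketch of how they are typically computed from two cipher images `c1` and `c2` (8-bit, same shape):

```python
import numpy as np

def npcr_uaci(c1, c2):
    # NPCR: percentage of pixel positions at which the two cipher images differ.
    # UACI: mean absolute intensity difference, normalised by the 255 range.
    c1 = c1.astype(np.float64)
    c2 = c2.astype(np.float64)
    npcr = 100.0 * np.mean(c1 != c2)
    uaci = 100.0 * np.mean(np.abs(c1 - c2) / 255.0)
    return npcr, uaci
```

For a good cipher on 8-bit images, the theoretically expected values are roughly 99.61% (NPCR) and 33.46% (UACI), which is why the reported averages are read as favourable.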
Abstract: The integration of mathematical morphology operations within convolutional neural network architectures has received increasing attention lately. However, replacing standard convolution layers by morphological layers performing erosions or dilations is particularly challenging because the \(\min \) and \(\max \) operations are not differentiable. P-convolution layers were proposed as a possible solution to this issue since they can act as smooth, differentiable approximations of the \(\min \) and \(\max \) operations, yielding pseudo-dilation or pseudo-erosion layers. In a recent work, we proposed two novel morphological layers based on the same principle as the p-convolution, while circumventing its principal drawbacks, and showcased their capacity to efficiently learn grayscale morphological operators while raising several edge cases. In this work, we complete those previous results by thoroughly analyzing the behavior of the proposed layers and by investigating and settling the reported edge cases. We also demonstrate the compatibility of one of the proposed morphological layers with binary morphological frameworks. PubDate: 2022-05-14
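For orientation, the p-convolution referred to here is commonly written in counter-harmonic-mean form; a small numpy/scipy sketch under the assumption of a strictly positive input image `f` and a nonnegative weight window `w` (this is only the baseline construction the work builds on, not the authors' novel layers):

```python
import numpy as np
from scipy.ndimage import convolve

def p_convolution(f, w, p, eps=1e-12):
    # Counter-harmonic-mean form of the p-convolution on a positive image f:
    # large positive p approximates a dilation (max), large negative p an
    # erosion (min), and the operation remains differentiable throughout.
    f = f.astype(np.float64) + eps          # keep f strictly positive
    num = convolve(f ** (p + 1), w)
    den = convolve(f ** p, w)
    return num / den
```

On an image scaled to [0, 1], `p_convolution(f, w, 5.0)` behaves as a pseudo-dilation and `p_convolution(f, w, -5.0)` as a pseudo-erosion.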
Abstract: Reconstructing a discrete object by means of X-rays along a finite set U of (discrete) directions represents one of the main tasks in discrete tomography. Indeed, it is an ill-posed inverse problem, since different structures exist having the same projections along all lines whose directions range in U. Characteristic of ambiguous reconstructions are special configurations, called switching components, whose understanding represents a main issue in discrete tomography, and an independently interesting geometric problem as well. The investigation of switching components is usually based on some kind of prior knowledge incorporated in the tomographic problem. In this paper, we focus on switching components under the constraint of convexity along the horizontal and vertical directions imposed on the unknown object. Starting from their geometric characterization in terms of windows and curls, we provide a numerical description by encoding them as special sequences of integers. A detailed study of these sequences leads to a complete understanding of their combinatorial structure, and to a polynomial-time algorithm that explicitly reconstructs any of them from an arbitrarily given pair of integers. PubDate: 2022-05-11
Abstract: We propose a state estimation approach to time-varying magnetic resonance imaging utilizing a priori information. In state estimation, the time-dependent image reconstruction problem is modeled by separate state evolution and observation models. In our method, we compute the state estimates by using the Kalman filter and a steady-state Kalman smoother, utilizing a data-driven estimate for the process noise covariance matrix constructed from conventional sliding window estimates. The proposed approach is evaluated using golden-angle radially sampled simulated and experimental small-animal data from a rat brain. In our method, the state estimates are updated after each new spoke of radial data becomes available, leading to a faster frame rate compared with conventional approaches. The results are compared with estimates from the sliding window method. The results show that the state estimation approach with the data-driven process noise covariance can improve both spatial and temporal resolution. PubDate: 2022-05-06
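As a reminder of the machinery involved, here is a generic single predict/update Kalman step in numpy under a random-walk state model; in the setting above, the state would be the vectorised image, `H` the sampling operator of one radial spoke and `Q` the data-driven process noise covariance. This is only a real-valued toy (actual MRI data is complex-valued), and the names are illustrative.

```python
import numpy as np

def kalman_step(x, P, y, H, Q, R):
    # One predict/update cycle under a random-walk state model
    # x_k = x_{k-1} + w, w ~ N(0, Q), with observation y = H x + v, v ~ N(0, R).
    P_pred = P + Q                                # predict (state transition = I)
    S = H @ P_pred @ H.T + R                      # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x + K @ (y - H @ x)                   # measurement update
    P_new = (np.eye(len(x)) - K @ H) @ P_pred     # covariance update
    return x_new, P_new
```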
Abstract: In this paper, we analyze the single image dehazing problem and propose a new variational method to solve it based on the dark channel prior. In the analysis section, we determine the influence that errors in estimating the parameters of the haze degradation model have on the reconstructed image, and give conclusions that can be used in designing a dehazing method. After that, we use those conclusions to bias our variational method as well as to create a smooth variant of the dark channel prior, so it can be directly used in variational methods as well as, potentially, deep learning methods. We compare the proposed method quantitatively on a synthetic hazy image dataset as well as qualitatively on real-life hazy images. PubDate: 2022-05-06
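For reference, the standard (non-smooth) dark channel prior of He et al. is a per-pixel minimum over colour channels followed by a local minimum filter; a sketch in numpy/scipy is below. A smooth variant, as proposed above, would replace the hard minima with differentiable soft-min approximations.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    # Classic dark channel: per-pixel minimum over colour channels, followed
    # by a minimum filter over a local patch; `img` is an H x W x 3 array.
    per_pixel_min = img.min(axis=2)
    return minimum_filter(per_pixel_min, size=patch)
```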
Abstract: A large number of modern video background modeling algorithms deal with computationally costly minimization problems that often need parameter adjustments. While in most cases spatial and temporal constraints are added artificially to the minimization process, our approach is to exploit Dynamic Mode Decomposition (DMD), a spectral decomposition technique that naturally extracts spatio-temporal patterns from data. Applied to video data, DMD can compute background models. However, the original DMD algorithm for background modeling is neither efficient nor robust. In this paper, we present an equivalent reformulation with constraints leading to a more suitable decomposition into foreground and background. Due to the reformulation, which uses sparse and low-dimensional structures, an efficient and robust algorithm is derived that computes accurate background models. Moreover, we show how our approach can be extended to RGB data, data with periodic parts, and streaming data, enabling versatile use. PubDate: 2022-05-01
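To fix ideas, here is a numpy sketch of the plain (exact) DMD background model that this work improves upon: the video is arranged as a pixels-by-frames matrix, and the modes whose eigenvalues lie near 1 on the unit circle are taken as the near-stationary background. The rank and threshold are illustrative choices, not the paper's.

```python
import numpy as np

def dmd_background(X, rank=10, tol=1e-2):
    # Exact DMD on a video matrix X (pixels x frames).  Modes whose
    # eigenvalues lie near 1 on the unit circle are (near-)stationary and
    # together model the background; the remaining modes capture motion.
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    Atilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    lam, W = np.linalg.eig(Atilde)
    Phi = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W      # exact DMD modes
    is_bg = np.abs(np.log(lam.astype(complex))) < tol  # eigenvalues near 1
    return Phi[:, is_bg], lam[is_bg]
```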
Abstract: We present a closed form solution to the problem of registration of fully overlapping 3D point clouds undergoing unknown rigid transformations, as well as for detection and registration of sub-parts undergoing unknown rigid transformations. The solution is obtained by adapting the general framework of the universal manifold embedding (UME) to the case where the transformations the object may undergo are rigid. The UME nonlinearly maps functions related by certain types of geometric transformations of coordinates to the same linear subspace of some Euclidean space while retaining the information required to recover the transformation. Therefore, registration, matching and classification can be solved as linear problems in a low-dimensional linear space. In this paper, we extend the UME framework to the special case where it is a priori known that the geometric transformations are rigid. While a variety of methods exist for point cloud registration, the method proposed in this paper is notably different, as registration is achieved by a closed form solution that employs the UME low-dimensional representation of the shapes to be registered. PubDate: 2022-05-01
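For contrast with correspondence-based approaches: when point correspondences are known, rigid registration already admits a classical closed-form solution (Kabsch/Procrustes), sketched below in numpy. The UME approach above is different in that it does not require correspondences, but the sketch shows the sense in which "closed form" is meant.

```python
import numpy as np

def kabsch(P, Q):
    # Closed-form least-squares rigid alignment of corresponding point sets
    # P, Q (both n x 3): find R, t minimising || (P @ R.T + t) - Q ||.
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t
```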
Abstract: The Arrow–Hurwicz method is an inexact version of the Uzawa method; it has been widely applied to solve various saddle point problems in different areas, including many fundamental image processing problems. It is also the basis of a number of important algorithms such as the extragradient method and the primal–dual hybrid gradient method. Convergence of the classic Arrow–Hurwicz method, however, is known only when some more restrictive conditions are additionally assumed, such as strong convexity of the functions or some demanding requirements on the step sizes. In this short note, we show by very simple counterexamples that the classic Arrow–Hurwicz method with any constant step size is not necessarily convergent for solving generic convex saddle point problems, including some fundamental cases such as the canonical linear programming model and the bilinear saddle point problem. This result sharpens the understanding of the convergence of the Arrow–Hurwicz method and retrospectively validates the rationale for studying its convergence under various additional conditions in the image processing literature. PubDate: 2022-04-29
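A quick numerical illustration of the phenomenon (a toy instance, not the authors' counterexamples): on the bilinear saddle point problem \(\min _x \max _y \, xy\), whose unique saddle point is \((0,0)\), the Arrow–Hurwicz iteration with constant step sizes fails to converge; the iterates orbit the saddle point on an invariant ellipse.

```python
import numpy as np

# Arrow-Hurwicz with constant step sizes on min_x max_y  x*y,
# whose unique saddle point is (0, 0).
tau = sigma = 0.1
x, y = 1.0, 1.0
radii = []
for _ in range(10000):
    x = x - tau * y        # primal (descent) step
    y = y + sigma * x      # dual (ascent) step, using the updated x
    radii.append(np.hypot(x, y))
print(min(radii), max(radii))  # distance to the saddle point never decays to 0
```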
Abstract: The increasingly common use of neural network classifiers in industrial and social applications of image analysis has enabled impressive progress in recent years. Such methods are, however, sensitive to algorithmic bias, i.e., to an under- or over-representation of positive predictions or to higher prediction errors in specific subgroups of images. In this paper, we introduce a new method to temper the algorithmic bias in Neural-Network-based classifiers. Our method is agnostic to the Neural-Network architecture and scales well to massive training sets of images. It only overloads the loss function with a Wasserstein-2-based regularization term, for which we back-propagate the impact of specific output predictions using a new model based on the Gâteaux derivatives of the distribution of predictions. This model is algorithmically reasonable and makes it possible to use our regularized loss with standard stochastic gradient-descent strategies. Its good behavior is assessed on the reference Adult census, MNIST, and CelebA datasets. PubDate: 2022-04-27
Abstract: Mathematical morphology is a valuable theory of nonlinear operators widely used for image processing and analysis. Although initially conceived for binary images, mathematical morphology has been successfully extended to vector-valued images using several approaches. Vector-valued morphological operators based on total orders are particularly promising because they circumvent the problem of false colors. On the downside, they often introduce irregularities in the output image. This paper proposes measuring the irregularity of a vector-valued morphological operator by the relative gap between the generalized sum of pixel-wise distances and the Wasserstein metric. Apart from introducing a measure of the irregularity, referred to as the irregularity index, this paper also addresses its computational implementation. More precisely, we distinguish between the ideal global and the practical local irregularity indexes. The local irregularity index, which can be computed more quickly by aggregating values of local windows, yields a lower bound for the global irregularity index. Computational experiments with natural images illustrate the effectiveness of the proposed irregularity indexes. PubDate: 2022-04-27
Abstract: In this paper, we propose an approach to the problem of automated image restoration and segmentation. To solve these tasks simultaneously, we consider a Mumford–Shah-like regularization. Using results of topological asymptotic analysis, we prove the existence of minimizers of the proposed functional and construct a method for their computation. Moreover, within the same theoretical framework, we introduce novel criteria for selecting optimal values for the two regularization parameters present in the model. This allows us to restore and segment images automatically, without any user intervention. Finally, we explain some implementation issues and present results of numerical experiments on synthetic and real test images. PubDate: 2022-04-23
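For context, the classical Mumford–Shah functional on which such models are built reads (with \(f\) the observed image, \(u\) its restoration, \(K\) the discontinuity set, and \(\alpha, \beta\) the two regularization parameters whose automatic selection is at stake; the paper uses a Mumford–Shah-like variant of this):

\[
E(u, K) = \int_{\Omega \setminus K} (u - f)^2 \,\mathrm{d}x
+ \alpha \int_{\Omega \setminus K} |\nabla u|^2 \,\mathrm{d}x
+ \beta \, \mathcal{H}^1(K).
\]

Minimising \(E\) simultaneously denoises \(u\) away from the edge set \(K\) and keeps \(K\) short, which is what couples restoration and segmentation in a single functional.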
Abstract: In this paper, we focus on the inverse problem of reconstructing distributional brain activity with cortical and weakly detectable deep components in non-invasive electroencephalography. We consider a recently introduced hybrid reconstruction strategy combining a hierarchical Bayesian model to incorporate a priori information and the advanced randomized multiresolution scanning (RAMUS) source space decomposition approach to reduce modelling errors, respectively. In particular, we aim to generalize the widely used conditionally Gaussian prior (CGP) formalism to achieve distributional reconstructions with higher focality. For this purpose, we introduce as a hierarchical prior a general exponential distribution, which we refer to as the conditionally exponential prior (CEP). The first-degree CEP corresponds to a focality-enforcing Laplace prior, but it also suffers from a strong depth bias when applied in numerical modelling, making deep activity unrecoverable. We sample over multiple resolution levels via RAMUS to reduce this bias, as it is known to depend on the resolution of the source space. Moreover, we introduce a procedure based on physiological a priori knowledge of the brain activity to obtain the shape and scale parameters of the gamma hyperprior that steer the CEP. The posterior estimates are calculated using iterative statistical methods, expectation maximization and the iterative alternating sequential algorithm, which we show to be algorithmically similar and to have a close resemblance to the iterative \(\ell _1\) and \(\ell _2\) reweighting methods. The performance of CEP is compared with the recent sampling-based dipole localization method Sequential semi-analytic Monte Carlo estimation (SESAME) in numerical experiments of simulated somatosensory evoked potentials related to human median nerve stimulation. Our results obtained using synthetic sources suggest that a hybrid of the first-degree CEP and RAMUS can achieve an accuracy comparable to the second-degree case (CGP) while being more focal. Further, the proposed hybrid is shown to be robust to noise effects and compares well with the dipole reconstructions obtained with SESAME. PubDate: 2022-04-15
Abstract: Autoencoders and generative models produce some of the most spectacular deep learning results to date. However, understanding and controlling the latent space of these models presents a considerable challenge. Drawing inspiration from principal component analysis and autoencoders, we propose the principal component analysis autoencoder (PCA-AE). This is a novel autoencoder whose latent space satisfies two properties. Firstly, the dimensions are organised in decreasing importance with respect to the data at hand. Secondly, the components of the latent space are statistically independent. We achieve this by progressively increasing the latent space during training, and with a covariance loss applied to the latent codes. The resulting autoencoder produces a latent space which separates the intrinsic attributes of the data into different components, in a completely unsupervised manner. We also describe an extension of our approach to the case of powerful, pre-trained GANs. We show results on both synthetic examples of shapes and on a state-of-the-art GAN. For example, we are able to separate the colour shade scale of hair, the pose of faces and gender, without accessing any labels. We compare the PCA-AE with other state-of-the-art approaches, in particular with respect to the ability to disentangle attributes in the latent space. We hope that this approach will contribute to a better understanding of the intrinsic latent spaces of powerful deep generative models. PubDate: 2022-04-13
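One plausible form of the covariance loss mentioned above is to penalise the off-diagonal entries of the empirical covariance of a batch of latent codes, pushing the components to decorrelate (a necessary condition for statistical independence). A minimal PyTorch sketch; the loss actually used by the authors may differ.

```python
import torch

def covariance_loss(z):
    # Penalise off-diagonal entries of the empirical covariance of a batch of
    # latent codes z (batch x dim), pushing the components to decorrelate.
    z = z - z.mean(dim=0, keepdim=True)
    cov = (z.T @ z) / (z.shape[0] - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    return (off_diag ** 2).sum()
```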
Abstract: Deep Convolutional Neural Networks (DCNNs) can effectively extract features from natural images. However, the classification functions in existing CNN architectures are simple and lack the capability to handle important spatial information in the way that many well-known traditional variational image segmentation models do. Priors such as spatial regularization, volume priors and shape priors cannot be handled by existing DCNNs. We propose a novel Soft Threshold Dynamics (STD) framework which can integrate many spatial priors of the classic variational models into DCNNs for image segmentation. The novelty of our method is to interpret the softmax activation function as a dual variable in a variational problem, so that many spatial priors can be imposed in the dual space. From this viewpoint, we can build an STD-based framework which enables the outputs of DCNNs to satisfy many spatial priors such as spatial regularization, volume preservation and star-shape priors. The proposed method is a general mathematical framework and can be applied to any image segmentation DCNN with a softmax classification layer. To show the efficiency of our method, we applied it to the popular DeepLabV3+ image segmentation network, and the experimental results show that our method works efficiently on data-driven image segmentation DCNNs. PubDate: 2022-04-13
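One way to read "softmax as a dual variable" operationally: instead of a single softmax, run a few fixed-point iterations that mix the network logits with a spatially smoothed version of the current soft segmentation, which drives the output towards spatially regular label maps. The following PyTorch sketch is a toy threshold-dynamics-style layer in this spirit, not the authors' exact scheme; `lam`, `eps` and the box kernel are illustrative.

```python
import torch
import torch.nn.functional as F

def std_like_softmax(logits, lam=1.0, eps=0.1, iters=5):
    # Toy threshold-dynamics-style classification layer (logits: N x C x H x W):
    # a fixed-point iteration mixing the logits with a spatially smoothed
    # version of the current soft segmentation, favouring regular label maps.
    u = F.softmax(logits, dim=1)
    C = logits.shape[1]
    kernel = torch.ones(C, 1, 3, 3, device=logits.device) / 9.0
    for _ in range(iters):
        smoothed = F.conv2d(u, kernel, padding=1, groups=C)
        u = F.softmax((logits + lam * smoothed) / eps, dim=1)
    return u
```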
Abstract: We consider image denoising problems formulated as variational problems. It is known that Hamilton–Jacobi PDEs govern the solution of such optimization problems when the noise model is additive. In this work, we address certain nonadditive noise models and show that they are also related to Hamilton–Jacobi PDEs. These findings allow us to establish new connections between additive and nonadditive noise imaging models. Specifically, we study how the solutions to these optimization problems depend on the parameters and the observed images. We show that the optimal values are governed by some Hamilton–Jacobi PDEs, while the optimizers are characterized by the spatial gradient of the solution to the Hamilton–Jacobi PDEs. Moreover, we use these relations to investigate the asymptotic behavior of the variational model as the parameter goes to infinity, that is, when the influence of the noise vanishes. With these connections, some non-convex models for nonadditive noise can be solved by applying convex optimization algorithms to the equivalent convex models for additive noise. Several numerical results are provided for denoising problems with Poisson noise or multiplicative noise. PubDate: 2022-03-24
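The prototypical additive-noise instance of this connection is the quadratic-fidelity model, whose value function is a Moreau envelope and solves a Hamilton–Jacobi equation (a standard fact, stated here for orientation):

\[
S(x,t) = \min_{u} \left\{ f(u) + \frac{1}{2t}\,\|x - u\|^2 \right\}
\quad\Longrightarrow\quad
\partial_t S + \frac{1}{2}\,\|\nabla_x S\|^2 = 0, \qquad S(x,0) = f(x),
\]

with the minimizer recovered from the spatial gradient as \(u^*(x,t) = x - t\,\nabla_x S(x,t)\); the contribution above is to establish analogues of this picture for nonadditive (e.g., Poisson or multiplicative) noise models.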
Abstract: In many applications, geodesic hierarchical models are adequate for the study of temporal observations. We apply such a model, derived for manifold-valued data, to Kendall's shape space. In particular, instead of the Sasaki metric, we adapt a functional-based metric, which increases the computational efficiency and does not require the implementation of the curvature tensor. We propose the corresponding variational time discretization of geodesics and employ the approach for longitudinal analysis of 2D rat skull shapes as well as 3D shapes derived from an imaging study on osteoarthritis. In particular, we perform hypothesis tests and estimate the mean trends. PubDate: 2022-03-22
Abstract: Cardinality and rank functions are ideal ways of regularizing under-determined linear systems, but optimization of the resulting formulations is made difficult since both these penalties are non-convex and discontinuous. The most common remedy is to instead use the \(\ell ^1\) and nuclear norms. While these are convex and can therefore be reliably optimized, they suffer from a shrinking bias that degrades the solution quality in the presence of noise. This well-known drawback has given rise to a fauna of non-convex alternatives, which usually feature better global minima at the price of possibly getting stuck in undesired local minima. We focus in particular on penalties based on the quadratic envelope, which have been shown to have global minima that even coincide with the “oracle solution,” i.e., there is no bias at all. So, which one do we choose: convex with a definite bias, or non-convex with no bias but less predictability? In this article, we develop a framework which allows us to interpolate between these alternatives; that is, we construct sparsity-inducing penalties where the degree of non-convexity/bias can be chosen according to the specifics of the particular problem. PubDate: 2022-03-08 DOI: 10.1007/s10851-022-01071-5
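The bias discussion has a one-line illustration in the proximal operators of the two extremes: the prox of the \(\ell ^1\) norm (soft thresholding) shrinks every surviving coefficient, while the prox of a scaled cardinality penalty (hard thresholding) leaves them untouched. A numpy sketch:

```python
import numpy as np

def soft_threshold(x, t):
    # Prox of t*||x||_1: every surviving coefficient is shrunk by t, which is
    # the "shrinking bias" of the l1 norm discussed above.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def hard_threshold(x, t):
    # Prox of the cardinality penalty (t*t/2)*||x||_0: surviving coefficients
    # are kept untouched (no bias), but the map is discontinuous.
    return np.where(np.abs(x) > t, x, 0.0)
```

For instance, with threshold 1.0, a coefficient of 3.0 becomes 2.0 under soft thresholding (a bias exactly equal to the threshold) but stays 3.0 under hard thresholding; the interpolating penalties constructed in the paper sit between these two behaviours.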