Abstract: This research studies the partial hierarchical Poisson regression model (with a random intercept), one of the most important models widely applied to data whose observations take a hierarchical form. The full maximum likelihood (FML) method is used to estimate the model parameters. The model was applied to COVID-19 deaths in Mosul city recorded during the period (1/1/202 - 1/9/2021), where four major hospitals in the city were selected to represent the second-level groups of the data (Ibn Sina Hospital, Al Salam Hospital, Shifa Hospital, General Mosul Hospital). *Higher Diploma student, Department of Statistics and Informatics, College of Computer Science and Mathematics, University of Mosul. **Assistant Professor, Department of Statistics and Informatics, College of Computer Science and Mathematics, University of Mosul. Received: 4/3/2022. Accepted: 9/4/2022. Published: 1/12/2022. The research found the model adequate for this type of data: some factors contribute to an increase in the number of deaths during the epidemic, such as the patient's advanced age, the length of stay in hospital, and the percentage of oxygen in the patient's blood, in addition to some chronic diseases such as asthma. The study recommends a more in-depth study of other types of these models and the use of other estimation methods, in addition to greater attention to data-recording practices by the city health department. PubDate: Wed, 30 Nov 2022 20:30:00 +010
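The random-intercept Poisson structure described in this abstract can be sketched with simulated data. Everything below (the effect sizes, the variance of the hospital intercept, the number of patients) is a hypothetical illustration, not the study's data or estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-level structure: patients (level 1) nested within hospitals (level 2).
n_hospitals = 4            # e.g. four hospitals, as in the study design
patients_per_hosp = 200    # hypothetical

beta0, beta_age = 0.5, 0.03   # hypothetical fixed effects
sigma_u = 0.4                 # SD of the hospital random intercept

u = rng.normal(0.0, sigma_u, size=n_hospitals)   # random intercepts u_j
hospital = np.repeat(np.arange(n_hospitals), patients_per_hosp)
age = rng.uniform(20, 90, size=hospital.size)

# Random-intercept Poisson model: log(lambda_ij) = beta0 + u_j + beta_age * age_ij
lam = np.exp(beta0 + u[hospital] + beta_age * age)
deaths = rng.poisson(lam)
```

FML estimation would then maximize the likelihood of `deaths` over the fixed effects and the intercept variance, integrating out the `u_j`.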

Abstract: Locally weighted regression (LOESS) is a modern non-parametric regression method designed for cases where classical procedures are inefficient or cannot be applied. Sunspots are the darker areas of the solar sphere's surface relative to other regions and are an important indicator of solar activity. The aim of this paper is to model and predict the number of sunspots because of their great importance to understanding the terrestrial consequences of solar activity and its direct impact on weather and communication systems on Earth, which may include damage to satellites. In this paper, the number of sunspots, represented by annual data for the period from 1900 to 2021 (122 years) as well as monthly data for the period from January 1900 to January 2022 (1465 months), was obtained from the global data center Sunspot Index and Long-term Solar Observations (SILSO). LOESS regression was used to estimate and predict the number of monthly and annual sunspots, with the smoothing parameter and the degree of the local polynomial chosen to minimize the corrected Akaike information criterion (AICc). The analysis showed the ability of LOESS to represent the sunspot data, passing the diagnostic tests and showing high predictive ability. From the predictive values for the monthly data, the maximum average number of sunspots will be 123.7 in July 2022, and the lowest average will be 61.3 sunspots in February. Regarding the annual data, the predictive values indicate that the maximum average number of sunspots will be 161.7 in 2023, and the lowest average will be 16.1 in 2029. Keywords: locally weighted regression; sunspot; solar cycle; prediction.
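As an illustration of the LOESS idea, a minimal degree-1 local weighted regression with tricube weights can be written in a few lines. This is a simplified sketch, not the implementation used in the paper (which also tunes the smoothing parameter and polynomial degree by AICc):

```python
import numpy as np

def loess(x, y, frac=0.3):
    """Locally weighted linear regression (degree 1) with tricube weights.
    `frac` is the smoothing parameter: the fraction of points per window."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    k = max(3, int(np.ceil(frac * n)))   # points in each local window
    fitted = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:k]          # k nearest neighbours of x[i]
        h = d[idx].max() or 1.0          # window half-width (guard h = 0)
        w = (1 - (d[idx] / h) ** 3) ** 3 # tricube weights
        # weighted least-squares line through the neighbourhood
        X = np.column_stack([np.ones(k), x[idx]])
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y[idx])
        fitted[i] = beta[0] + beta[1] * x[i]
    return fitted
```

On exactly linear data the local fits reproduce the line, which is a convenient sanity check for the implementation.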

Abstract: Structural equation modeling is a statistical methodology commonly used in the social and administrative sciences, among others. In this research, the researcher compared two estimation methods: unweighted least squares with mean- and variance-adjusted corrections (ULSMV) and weighted least squares with mean- and variance-adjusted corrections (WLSMV). With a five-point Likert scale, the data are treated as ordinal, using the polychoric correlation matrix as input for the weighted methods with robust corrections and robust standard errors. No previous study has compared these methods and the impact of outliers on them. A robust algorithm is therefore proposed to clean the data of outliers. The proposed algorithm calculates the robust Reweighted Fast Consistent and High Breakdown (RFCH) correlation matrix, which consists of several steps; it has been modified to take the clean data before calculating the RFCH correlation matrix, so that the cleaned ordinal data can be used in each method to calculate the polychoric matrix, which is robust to violation of the normality assumption. A simulation experiment was conducted with different sample sizes and degrees of distribution to assess the accuracy of the proposed method for obtaining clean data. ULSMV and WLSMV were evaluated before and after the cleaning process by calculating the absolute bias of the standard errors and the estimated parameters, in addition to studying their effect on the goodness-of-fit indicators: the chi-square index, the comparative fit index (CFI), the Tucker-Lewis index (TLI), the root mean square error of approximation (RMSEA), and the standardized root mean square residual (SRMR), with robust corrections to the chi-square index for both WLSMV and ULSMV.
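The RFCH algorithm itself involves several specialized reweighting steps. As a much simplified stand-in for the general idea of cleaning outliers before computing a correlation matrix, one can flag rows by classical Mahalanobis distance (the real RFCH uses robust, reweighted location and scatter estimates instead of the classical mean and covariance used here):

```python
import numpy as np

def trimmed_corr(X, cutoff=3.0):
    """Drop rows whose Mahalanobis distance from the sample mean exceeds
    `cutoff`, then return the correlation matrix of the remaining rows.
    A simplified, non-robust stand-in for the RFCH cleaning idea."""
    X = np.asarray(X, float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    inv = np.linalg.inv(cov)
    diff = X - mu
    d2 = np.einsum('ij,jk,ik->i', diff, inv, diff)   # squared distances
    clean = X[np.sqrt(d2) <= cutoff]
    return np.corrcoef(clean, rowvar=False), clean.shape[0]
```

In the proposed procedure, the polychoric matrix would then be computed from the cleaned ordinal data rather than from a Pearson correlation as here.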

Abstract: In this research paper, n jobs have to be scheduled on one machine to minimize the sum of maximum earliness and maximum tardiness. We solved a series of bi-criteria scheduling problems related to minimizing this sum. Three new algorithms were presented: two for the hierarchical objective and one for the simultaneous objective. Using the results of these algorithms, we minimize the sum of maximum earliness and maximum tardiness. This objective is considered an NP-hard problem; it is also irregular, so it lacks some of the helpful properties of regular performance measures. The proposed algorithms have simple structures and are simple to implement. Lastly, they were tested for different values of n.
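For a fixed job order, the objective E_max + T_max is straightforward to evaluate. A small sketch follows, together with a brute-force search over orders that is feasible only for tiny n (the paper's algorithms are designed precisely to avoid this enumeration):

```python
from itertools import permutations
from typing import Sequence

def max_earliness_tardiness(proc: Sequence[int], due: Sequence[int]) -> int:
    """For a given job order on a single machine, return E_max + T_max.
    Jobs are processed back to back starting at time 0."""
    t = 0
    e_max = t_max = 0
    for p, d in zip(proc, due):
        t += p                       # completion time C_j
        e_max = max(e_max, d - t)    # earliness E_j = max(0, d_j - C_j)
        t_max = max(t_max, t - d)    # tardiness T_j = max(0, C_j - d_j)
    return e_max + t_max

def best_order(proc, due):
    """Exhaustive search for the order minimizing E_max + T_max (small n only)."""
    jobs = range(len(proc))
    return min(permutations(jobs),
               key=lambda s: max_earliness_tardiness([proc[j] for j in s],
                                                     [due[j] for j in s]))
```

Because the objective is irregular, delaying a job can sometimes *reduce* the objective via smaller earliness, which is why regular-measure dominance rules do not carry over.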

Abstract: Climatic changes play an important role and may lead to serious problems for the health of humans and other organisms; it is therefore necessary to study and forecast this type of dataset in order to reduce the damage through planning for and controlling these changes in the future. The main problem can be summarized in the nonlinearity of climatic data and its chaotic changes. The common approach is the autoregressive integrated moving average (ARIMA) model, a traditional univariate time-series method; however, ARIMA cannot deal with nonlinear data correctly, which may lead to inaccurate forecasting results. Therefore, a more appropriate model for studying the climatic data, the random forest (RF) model, has been proposed to obtain more accurate forecasts. In this thesis, climatic datasets represented by minimum air temperature and relative humidity from the agricultural meteorological station in Nineveh are studied. The thesis aims to satisfy data homogeneity across the different seasons and to find a suitable model that deals with nonlinear data correctly, with minimal forecasting error compared to the traditional ARIMA model.
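To apply a supervised learner such as a random forest to a univariate climate series, the series is usually reframed as lagged feature vectors with the next observation as the target. A minimal sketch of that framing (the lag count `n_lags` is a tuning choice, not a value from the thesis):

```python
import numpy as np

def lag_matrix(series, n_lags):
    """Frame a univariate series for a supervised learner such as a
    random forest: row t holds [y_{t-n_lags}, ..., y_{t-1}], target y_t."""
    y = np.asarray(series, float)
    X = np.column_stack([y[i:len(y) - n_lags + i] for i in range(n_lags)])
    target = y[n_lags:]
    return X, target
```

The resulting `(X, target)` pair can be fed to any regression forest implementation; multi-step forecasts are then produced recursively by appending each prediction to the lag window.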

Abstract: Response variables in biological phenomena vary between three types: numerical response variables, ordinal categorical response variables, and nominal categorical response variables. In statistical studies, the handling of ordinal variables varies according to the perspective of the statistical approach to the response variable. Ordinal variables can be treated as nominal categorical variables, which neglects the ordinal property of the categories. They can also be treated as ordinal categorical (discrete) variables, in which case the ranking information can be utilized in establishing the predictive models. In this study, the most important statistical methods that can be used to analyze data with an ordinal response variable have been investigated, among them the multiple regression method and the ordinal logistic regression method. The mechanism of building the models and estimating the parameters is exhibited theoretically, as well as the interpretation of the statistical significance of the regression coefficients in all the models in the study. The application was carried out on a real sample of patients with osteoporosis, where multiple models were built to determine the most important factors affecting the likelihood of developing the disease. The best model was selected according to the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). The results of the statistical analysis demonstrated the superiority of the ordinal logistic regression model over the multiple linear regression model in explaining the relationship between the response variable and the covariates.
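Under the proportional-odds (cumulative logit) form of ordinal logistic regression, category probabilities come from differences of cumulative logits. A small sketch with hypothetical cut-points and linear predictors (illustrative values, not estimates from the study):

```python
import numpy as np

def ordinal_probs(eta, cuts):
    """Category probabilities under a proportional-odds (cumulative logit)
    model: P(Y <= k) = sigmoid(cut_k - eta), with `cuts` strictly increasing.
    Returns an (n observations) x (K categories) matrix."""
    eta = np.asarray(eta, float)
    cuts = np.asarray(cuts, float)
    cdf = 1.0 / (1.0 + np.exp(-(cuts[:, None] - eta[None, :])))  # (K-1) x n
    cdf = np.vstack([np.zeros_like(eta)[None, :], cdf,
                     np.ones_like(eta)[None, :]])                # pad 0 and 1
    return np.diff(cdf, axis=0).T   # successive differences give P(Y = k)
```

This is what makes the ordinal model preferable to multiple linear regression here: the ranking of the categories is built into the cumulative structure rather than being treated as a numeric scale.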

Abstract: Survival distributions are important and commonly used in different fields, and the Weibull distribution, which takes different forms, is one of them. The two-parameter Weibull distribution, with a scale parameter and a shape parameter, was chosen and its properties studied. The two parameters were estimated in two ways: the maximum likelihood method and the Bayes method, both when prior information about the parameter is not available (non-informative) and when it is available (informative). The theoretical results were applied to real data representing the strength of cement material at seven days.
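For the maximum likelihood side, the two-parameter Weibull MLE reduces to a one-dimensional equation in the shape parameter k, after which the scale follows in closed form. A sketch solving that equation by bisection (an illustrative implementation, not the authors' code):

```python
import numpy as np

def weibull_mle(x, lo=1e-3, hi=100.0, tol=1e-10):
    """Maximum-likelihood estimates of the Weibull shape k and scale lam.
    The profile equation g(k) = 0 is increasing in k, so bisection works:
        g(k) = sum(x^k ln x)/sum(x^k) - 1/k - mean(ln x)
    and then lam = (mean(x^k))^(1/k)."""
    x = np.asarray(x, float)
    logx = np.log(x)

    def g(k):
        xk = x ** k
        return (xk * logx).sum() / xk.sum() - 1.0 / k - logx.mean()

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid          # root lies above mid
        else:
            hi = mid
    k = 0.5 * (lo + hi)
    lam = ((x ** k).mean()) ** (1.0 / k)
    return k, lam
```

A quick check is to simulate Weibull data by the inverse-CDF method, x = lam * (-ln U)^(1/k), and confirm the estimates recover the generating parameters.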

Abstract: The coronavirus disease, also called COVID-19, is caused by the SARS-CoV-2 virus. Most people infected with the virus will experience mild to moderate symptoms of respiratory disease. The aim of this paper is to construct a multilevel model for patients suffering from coronavirus. Data on a total of 636 patients were obtained from seven private and public hospitals, with 27% from Erbil, 26% from Sulaimani, 23% from Duhok, and 24% from Halabja, for the period September 1st, 2019 to February 1st, 2022. In this multilevel modeling, restricted maximum likelihood estimation (RMLE) and full maximum likelihood (FML) were used to estimate the fixed and random parameters of the multilevel models. The application was carried out on the HRCT lung scans of patients; the seven hospitals were selected randomly across the Kurdistan region of Iraq. The results show that all three variables are significant at the hospital level, but in the two final models the added level-2 predictor (doctor experience) and its interaction with the level-1 predictor (smoker) are far from significant. There is a significant relationship between being diabetic and the CT-scan result, but the relationship between smoking and the CT-scan result is not significant.
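A standard summary in random-intercept models of this kind is the intraclass correlation (ICC): the share of total variance sitting at the hospital level. A simulated sketch with hypothetical variance components (not the paper's data or estimates), using the classical one-way ANOVA estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

# Random-intercept model: y_ij = mu + u_j + e_ij, j indexing hospitals.
n_hosp, n_per = 7, 90            # seven hospitals, as in the study design
sigma_u, sigma_e = 0.8, 1.0      # hypothetical variance components

u = rng.normal(0, sigma_u, n_hosp)
y = 5.0 + np.repeat(u, n_per) + rng.normal(0, sigma_e, n_hosp * n_per)

# Intraclass correlation: share of variance at the hospital level.
icc_true = sigma_u**2 / (sigma_u**2 + sigma_e**2)

groups = y.reshape(n_hosp, n_per)
msb = n_per * groups.mean(axis=1).var(ddof=1)       # between-hospital MS
msw = groups.var(axis=1, ddof=1).mean()             # within-hospital MS
icc_hat = (msb - msw) / (msb + (n_per - 1) * msw)   # ANOVA estimator
```

RMLE versus FML matters for exactly these variance components: REML corrects the downward bias that FML variance estimates show when the number of level-2 units (here, hospitals) is small.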

Abstract: In this research, a simple agricultural field experiment was studied in terms of the effect of out-of-control noise, arising from several causes including environmental conditions, on the observations of agricultural experiments. The discrete wavelet transform was used, specifically the Coiflet transforms of order 1 to 2 and the Daubechies transforms of order 2 to 3, at the two decomposition levels (J-4) and (J-5), applying the hard, soft, and non-negative threshold rules. The wavelet transformation methods were compared using real data from an experiment of 26 observations, with the application implemented in a MATLAB program. The researcher concluded that using the wavelet transform with the suggested threshold reduced the noise in the observations according to the comparison criteria.
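As a simplified stand-in for the Coiflet/Daubechies transforms used in the study, a one-level Haar transform with hard and soft thresholding illustrates the mechanics of wavelet denoising (the non-negative garrote rule and the multi-level decomposition are omitted here):

```python
import numpy as np

def haar_denoise(signal, threshold, rule="soft"):
    """One-level Haar wavelet denoising of an even-length signal.
    Detail coefficients below `threshold` are zeroed (hard) or shrunk (soft)."""
    x = np.asarray(signal, float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass coefficients
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass coefficients
    if rule == "hard":
        detail = np.where(np.abs(detail) > threshold, detail, 0.0)
    else:  # soft threshold: shrink all coefficients toward zero
        detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
    # inverse Haar transform
    out = np.empty_like(x)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out
```

With threshold 0 the transform is perfectly invertible; with a very large threshold all detail is removed and each pair of samples collapses to its local average, which is the smoothing effect the thresholding rules trade off against fidelity.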