
Open Science Index

Commenced in January 2007 · Frequency: Monthly · Edition: International · Abstract Count: 66741

200
32177
Closed-Form Sharma-Mittal Entropy Rate for Gaussian Processes
Abstract:
The entropy rate of a stochastic process is a fundamental concept in information theory. It provides a limit on the amount of information that can be transmitted reliably over a communication channel, as stated by Shannon's coding theorems. Recently, researchers have focused on developing new measures of information that generalize Shannon's classical theory, with the aim of designing more efficient information encoding and transmission schemes. This paper continues the study of generalized entropy rates by deriving a closed-form solution for the Sharma-Mittal entropy rate of Gaussian processes. Using the squeeze theorem, we solve the limit in the definition of the entropy rate for different values of alpha and beta, the parameters of the Sharma-Mittal entropy. Finally, we compare it with the Shannon and Rényi entropy rates for Gaussian processes.
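For orientation, the Sharma-Mittal entropy of a single Gaussian distribution (not the full process rate derived in the paper) admits a closed form, and its beta-to-1 and (alpha, beta)-to-(1, 1) limits recover the Rényi and Shannon entropies. The sketch below illustrates these limits numerically; the variance value is illustrative.

```python
import numpy as np

def sharma_mittal_gaussian(sigma2, alpha, beta):
    """Sharma-Mittal entropy of N(0, sigma2).

    Uses the closed-form Gaussian integral
        I(alpha) = alpha**-0.5 * (2*pi*sigma2)**((1 - alpha) / 2),
    and assumes alpha > 0, alpha != 1, beta != 1.
    """
    I = alpha ** -0.5 * (2 * np.pi * sigma2) ** ((1 - alpha) / 2)
    return (I ** ((1 - beta) / (1 - alpha)) - 1) / (1 - beta)

def renyi_gaussian(sigma2, alpha):
    # beta -> 1 limit of the Sharma-Mittal entropy
    I = alpha ** -0.5 * (2 * np.pi * sigma2) ** ((1 - alpha) / 2)
    return np.log(I) / (1 - alpha)

def shannon_gaussian(sigma2):
    # alpha -> 1, beta -> 1 limit
    return 0.5 * np.log(2 * np.pi * np.e * sigma2)

sigma2 = 2.0
print(sharma_mittal_gaussian(sigma2, alpha=0.9, beta=0.999))    # ~ Renyi
print(renyi_gaussian(sigma2, alpha=0.9))
print(sharma_mittal_gaussian(sigma2, alpha=0.999, beta=0.999))  # ~ Shannon
print(shannon_gaussian(sigma2))
```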
199
41876
Using Gaussian Process in Wind Power Forecasting
Abstract:
Wind is a random variable that is difficult to master; for this reason, we developed mathematical and statistical methods for modeling and forecasting wind power. Gaussian processes (GPs) are one of the most widely used families of stochastic processes for modeling dependent data observed over time, space, or time and space. A GP can be viewed as an underlying process, shaped by unobserved factors, that is used to solve a problem. The purpose of this paper is to present how to forecast wind power using GPs. The Gaussian process method for forecasting is presented. To validate the presented approach, a simulation in the MATLAB environment is given.
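For readers without MATLAB, a minimal GP-regression forecast can be sketched in Python with scikit-learn; the kernel choice, the toy series, and the forecast horizon below are illustrative stand-ins, not the paper's setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy hourly wind-power series (stand-in for real measurements).
rng = np.random.default_rng(0)
t = np.arange(200.0).reshape(-1, 1)
power = np.sin(2 * np.pi * t.ravel() / 24) + 0.1 * rng.standard_normal(200)

# GP with a smooth RBF kernel plus a white-noise term.
kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(t[:150], power[:150])

# Forecast the next 50 steps with predictive uncertainty.
mean, std = gp.predict(t[150:], return_std=True)
print(mean[:5], std[:5])
```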
198
33883
Propagation of Cos-Gaussian Beam in Photorefractive Crystal
Abstract:
A physical model for guiding waves in photorefractive media is studied. Propagation of a cos-Gaussian beam, a special case of sinusoidal-Gaussian beams, in a photorefractive crystal is simulated numerically by the Crank-Nicolson method in one dimension. Results show that the beam profile deforms as energy transfers from the center to the tails during propagation. This simulation approach is of significant interest for applications in optical telecommunication. The results are presented graphically and discussed.
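As background on the numerical scheme, a minimal Crank-Nicolson step for the linear 1-D paraxial equation i dA/dz = -(1/2k) d2A/dx2 is sketched below. It omits the photorefractive nonlinearity and uses toy units, so it only illustrates the tridiagonal solve, not the paper's full model.

```python
import numpy as np
from scipy.linalg import solve_banded

# Grid and a cos-Gaussian input beam A(x, 0) = cos(Omega*x) * exp(-x^2/w0^2).
N, L = 512, 40.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
w0, Omega, k = 2.0, 1.5, 10.0   # beam width, cosine modulation, wavenumber (toy units)
A = np.cos(Omega * x) * np.exp(-x**2 / w0**2) + 0j

dz, steps = 0.01, 500
r = 1j * dz / (4 * k * dx**2)   # CN coefficient for i dA/dz = -(1/2k) d2A/dx2

# Banded (tridiagonal) LHS of (I - r*D2) A_new = (I + r*D2) A_old,
# where D2 is the second-difference operator.
ab = np.zeros((3, N), dtype=complex)
ab[0, 1:] = -r          # upper diagonal
ab[1, :] = 1 + 2 * r    # main diagonal
ab[2, :-1] = -r         # lower diagonal

for _ in range(steps):
    rhs = A.copy()
    rhs[1:-1] += r * (A[2:] - 2 * A[1:-1] + A[:-2])
    A = solve_banded((1, 1), ab, rhs)

print("peak intensity after propagation:", np.abs(A).max() ** 2)
```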
197
9430
Frequency Offset Estimation Schemes Based on ML for OFDM Systems in Non-Gaussian Noise Environments
Abstract:
In this paper, frequency offset (FO) estimation schemes robust to non-Gaussian noise environments are proposed for orthogonal frequency division multiplexing (OFDM) systems. First, a maximum-likelihood (ML) estimation scheme for non-Gaussian noise environments is proposed; then, the complexity of the ML estimation scheme is reduced by employing a reduced set of candidate values. Numerical results demonstrate that the proposed schemes provide a significant performance improvement over the conventional estimation scheme in non-Gaussian noise environments, while maintaining performance similar to that achieved in Gaussian noise environments.
196
18445
Gaussian Operations with a Single Trapped Ion
Abstract:
In this letter, we review the major concepts that govern Gaussian quantum information. As we work with quantum information and computation with continuous variables, Gaussian states are needed to better describe these systems. Analyzing a single ion confined in a Paul trap, we use the interaction picture to obtain a toolbox of Gaussian operations from the ion-laser interaction Hamiltonian. This is achieved by exciting the ion through the combination of two lasers of distinct frequencies corresponding to different sidebands of the external degrees of freedom. First, we study the case of a trap with one mode and then the case with two modes. In this way, we achieve different continuous-variable gates just by changing the external degrees of freedom of the trap and combining the Hamiltonians of the blue and red sidebands.
195
52660
Simulation of Propagation of Cos-Gaussian Beam in Strongly Nonlocal Nonlinear Media Using Paraxial Group Transformation
Abstract:
In this paper, propagation of a cos-Gaussian beam in strongly nonlocal nonlinear media is simulated using the paraxial group transformation. First, the cos-Gaussian beam, nonlocal nonlinear media, critical power, transfer matrix, and paraxial group transformation are introduced. Then, the propagation of the cos-Gaussian beam in strongly nonlocal nonlinear media is simulated. Results show that the beam propagation has a periodic structure during the self-focusing effect in this case. Moreover, this simple method can be used to investigate the propagation of various kinds of beams in ABCD optical media.
194
5135
Use of Gaussian-Euclidean Hybrid Function Based Artificial Immune System for Breast Cancer Diagnosis
Abstract:
Because only a small number of systems in the artificial immune system (AIS) literature address nonlinear problems, nonlinear AIS approaches need to be developed among the well-known solution techniques. The Gaussian function is commonly used for similarity estimation in classification problems and pattern recognition. In this study, diagnosis of breast cancer, the second most widespread cancer in women, was performed with different distance calculation functions, namely Euclidean, Gaussian, and a Gaussian-Euclidean hybrid function, in the clonal selection model of classical AIS on the Wisconsin Breast Cancer Dataset (WBCD), taken from the University of California, Irvine Machine Learning Repository. We used 3-fold cross-validation to train and test on the dataset. According to the results, the maximum test classification accuracy was 97.35%, obtained with the Gaussian-Euclidean hybrid function on fold 3. The mean test classification accuracies were 94.78%, 94.45%, and 95.31% using the Euclidean, Gaussian, and Gaussian-Euclidean functions, respectively. With these results, the Gaussian-Euclidean hybrid function appears to be a promising distance calculation method and may be considered as an alternative for hard nonlinear classification problems.
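The abstract does not give the exact form of the hybrid function, so the sketch below shows the two standard ingredients (Euclidean distance and a Gaussian kernel) together with one plausible, hypothetical way of combining them; the hybrid formula and sigma are assumptions for illustration only.

```python
import numpy as np

def euclidean(x, y):
    return np.linalg.norm(x - y)

def gaussian_similarity(x, y, sigma=1.0):
    # Gaussian kernel: 1 at identical points, decaying with distance.
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def gaussian_euclidean_hybrid(x, y, sigma=1.0):
    # Hypothetical hybrid: Euclidean distance attenuated by the Gaussian
    # kernel. The paper does not spell out its exact formula; this is one
    # plausible combination for illustration.
    return euclidean(x, y) * (1.0 - gaussian_similarity(x, y, sigma))

a, b = np.array([1.0, 2.0]), np.array([1.5, 2.5])
print(euclidean(a, b), gaussian_similarity(a, b), gaussian_euclidean_hybrid(a, b))
```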
193
33569
System of Linear Equations, Gaussian Elimination
Abstract:
In this paper, linear equations are discussed in detail along with elimination methods. Gaussian elimination and Gauss-Jordan schemes are carried out to solve linear systems of equations. The paper comprises an introduction to matrices and the direct methods for linear equations. The goal of this research was to analyze different elimination techniques for linear equations and to measure the performance of Gaussian elimination and the Gauss-Jordan method, in order to find their relative importance and advantages in the field of symbolic and numeric computation. The purpose of this research is to revisit the introductory concepts of linear equations, matrix theory, and the forms of Gaussian elimination through which the performance of the Gauss-Jordan and Gaussian elimination methods can be measured.
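As a concrete reference for the method being benchmarked, a textbook Gaussian elimination with partial pivoting can be written as follows; the 3x3 system is a standard worked example, not taken from the paper.

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve Ax = b by forward elimination with partial pivoting
    and back substitution."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        # Partial pivoting: bring the largest remaining pivot to row k.
        p = k + np.argmax(np.abs(A[k:, k]))
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])
print(gaussian_elimination(A, b))   # expected: [ 2.  3. -1.]
```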
192
124754
Self-Action Effects of a Non-Gaussian Laser Beam Through Plasma
Abstract:
The propagation of a non-Gaussian laser beam results in strong self-focusing compared to a Gaussian laser beam, which helps to achieve the prerequisites for plasma-based electron acceleration, terahertz generation, and higher harmonic generation. A theoretical investigation of the evolution of a non-Gaussian laser beam through collisional plasma with a ramped density profile is presented. The non-uniform irradiance over the cross-section of the laser beam redistributes the carriers, modifying the optical response of the plasma in such a way that the plasma behaves like a converging lens for the laser beam. The formulation is based on finding a semi-analytical solution of the nonlinear Schrodinger wave equation (NLSE) with the help of variational theory. It has been observed that the decentred parameter 'q' of the laser and the wavenumber of the ripples of the medium contribute to providing the required conditions for the improvement of self-focusing.
191
24879
Adaptive Target Detection of High-Range-Resolution Radar in Non-Gaussian Clutter
Abstract:
In non-Gaussian clutter modeled as a spherically invariant random vector, and in cases where a certain estimated covariance matrix can become singular, the adaptive target detection of high-range-resolution radar is addressed. First, the restricted maximum likelihood (RML) estimates of the unknown covariance matrix and scatterer amplitudes are derived for non-Gaussian clutter. Then the RML estimate of the texture is obtained. Finally, a novel detector is devised. It is shown that, without secondary data, the proposed detector outperforms the existing Kelly binary integrator.
190
113018
Least Squares Solution for Linear Quadratic Gaussian Problem with Stochastic Approximation Approach
Abstract:
The linear quadratic Gaussian model is a standard mathematical model for the stochastic optimal control problem. The combination of the linear quadratic estimator and the linear quadratic regulator allows the state estimation and the optimal control policy to be designed separately; this is known as the separation principle. In this paper, an efficient computational method is proposed to solve the linear quadratic Gaussian problem. In our approach, the Hamiltonian function is defined, and the necessary conditions are derived. In addition, the output error is defined and a least-squares optimization problem is introduced. By determining the first-order necessary condition, the gradient of the sum of squares of the output error is established. From this point of view, the stochastic approximation approach is employed to update the optimal control policy. Once a given tolerance is reached, the iteration procedure is stopped, and the optimal solution of the linear quadratic Gaussian problem is obtained. For illustration, an example of the linear quadratic Gaussian problem is studied. The results show the efficiency of the proposed approach. In conclusion, the applicability of the proposed approach for solving the linear quadratic Gaussian problem is clearly demonstrated.
189
2233
The Extended Skew Gaussian Process for Regression
Abstract:
In this paper, we propose a generalization of the Gaussian process regression (GPR) model called the extended skew Gaussian process regression (ESGPR) model. The ESGPR model works better than the GPR model when the errors are skewed. We derive the predictive distribution of the ESGPR model at a new input. We also apply the ESGPR model to FOREX data and find that it fits the data better than the GPR model.
188
85030
Marker-Controlled Level-Set for Segmenting Breast Tumor from Thermal Images
Abstract:
Contactless, painless, and radiation-free thermal imaging is one of the preferred screening modalities for the detection of breast cancer. However, a poor signal-to-noise ratio and the inexorable need to preserve the edges separating cancerous cells from normal cells make the segmentation process difficult and hence unsuitable for computer-aided diagnosis of breast cancer. This paper presents key findings from research conducted on the appraisal of two promising techniques for the detection of breast cancer: (i) marker-controlled level-set segmentation of an anisotropic-diffusion-filtered preprocessed image versus (ii) marker-controlled level-set segmentation of a Gaussian-filtered image. Gaussian filtering processes the image uniformly, whereas anisotropic filtering processes only specific areas of a thermographic image. The preprocessed (Gaussian-filtered and anisotropic-filtered) images of breast samples were then used for segmentation. The segmentation of the breast starts with an initial level-set function. In this study, a marker refers to the position in the image at which the initial level-set function is applied. The markers are generally placed on the left and right sides of the breast and may vary with breast size. The proposed method was carried out on images from an online database with samples collected from women of varying breast characteristics. It was observed that the breast could be segmented from the background by adjusting the markers. From the results, it was observed that, as a preprocessing technique, anisotropic filtering with level-set segmentation preserved the edges more effectively than Gaussian filtering. The segmented image obtained with anisotropic filtering was found to be more suitable for feature extraction, enabling automated computer-aided diagnosis of breast cancer.
187
45613
Learning the Dynamics of Articulated Tracked Vehicles
Abstract:
In this work, we present a Bayesian non-parametric approach to model the motion control of ATVs. The motion control model is based on a Dirichlet Process-Gaussian Process (DP-GP) mixture model. The DP-GP mixture model provides a flexible representation of patterns of control manoeuvres along trajectories of different lengths and discretizations. The model also estimates the number of patterns, sufficient for modeling the dynamics of the ATV.
186
10266
ML-Based Blind Frequency Offset Estimation Schemes for OFDM Systems in Non-Gaussian Noise Environments
Abstract:
This paper proposes frequency offset (FO) estimation schemes robust to non-Gaussian noise for orthogonal frequency division multiplexing (OFDM) systems. A maximum-likelihood (ML) scheme and a low-complexity estimation scheme are proposed by applying the probability density function of the cyclic prefix of OFDM symbols to the ML criterion. Simulation results confirm that the proposed schemes offer a significant FO estimation performance improvement over the conventional estimation scheme in non-Gaussian noise environments.
185
46532
Unsupervised Reciter Recognition Using Gaussian Mixture Models
Abstract:
This work proposes an unsupervised, text-independent probabilistic approach to recognize a Quran reciter's voice. It is an accurate approach that works in real-time applications and does not require prior information about reciter models. It has two phases: in the training phase, the reciters' acoustic features are modeled using Gaussian Mixture Models (GMMs), while in the testing phase, an unlabeled reciter's acoustic features are scored against the GMM models. Using this approach, highly accurate results are achieved with efficient computation time.
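A minimal sketch of this train/score pattern with scikit-learn GMMs is shown below; the feature arrays stand in for real acoustic features (e.g., MFCC frames), and the component count and covariance type are illustrative choices, not the paper's configuration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Stand-ins for acoustic feature frames of two reciters (n_frames x n_features).
train = {
    "reciter_A": rng.normal(0.0, 1.0, size=(500, 13)),
    "reciter_B": rng.normal(0.8, 1.2, size=(500, 13)),
}

# Training phase: one GMM per reciter on that reciter's features.
models = {name: GaussianMixture(n_components=8, covariance_type="diag",
                                random_state=0).fit(feats)
          for name, feats in train.items()}

# Testing phase: score unlabeled frames under every model and pick
# the reciter whose GMM gives the highest average log-likelihood.
test_frames = rng.normal(0.8, 1.2, size=(200, 13))
scores = {name: gmm.score(test_frames) for name, gmm in models.items()}
print(max(scores, key=scores.get), scores)
```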
184
124846
Second Harmonic Generation of Higher-Order Gaussian Laser Beam in Density Rippled Plasma
Abstract:
This work presents a theoretical investigation of enhanced second-harmonic generation of a higher-order Gaussian laser beam in plasma having a density ramp. The mechanism responsible for the self-focusing of a laser beam in plasma is considered to be the relativistic mass variation of plasma electrons under the effect of a highly intense laser beam. Using the moment theory approach and the Wentzel-Kramers-Brillouin approximation for the nonlinear Schrodinger wave equation, the differential equation governing the spot size of the higher-order Gaussian laser beam in plasma is derived. The nonlinearity induced by the laser beam creates a density gradient in the background plasma electrons, which is responsible for the excitation of the electron plasma wave. The large-amplitude electron plasma wave interacts with the fundamental beam, which further produces coherent radiation at double the frequency of the incident beam. The analysis shows the important role of the different modes of the higher-order Gaussian laser beam and the density ramp in the efficiency of the generated harmonics.
183
15768
Stimulated Raman Scattering of Ultra Intense Hollow Gaussian Beam
Abstract:
The effect of relativistic nonlinearity on stimulated Raman scattering of a propagating laser beam carrying null intensity at its center (a hollow Gaussian beam) by an excited plasma wave is studied in collisionless plasma. The equations are constructed employing fluid theory, developed with partial differential equations and Maxwell's equations, and the analysis is carried out using the eikonal method. The phenomenon of stimulated Raman scattering is shown along with the excitation of the seed plasma wave. The power of the plasma wave and the back reflectivity are observed for higher orders of the hollow Gaussian beam. Back reflectivity is studied numerically for various orders of the hollow Gaussian laser beam (HGLB) with different values of plasma density, laser power, and beam radius. Numerical analysis shows that these parameters play a vital role in the reflectivity characteristics.
182
6433
Solving Single Machine Total Weighted Tardiness Problem Using Gaussian Process Regression
Abstract:
This paper proposes the application of a probabilistic technique, namely Gaussian process regression, for estimating an optimal sequence for the single machine total weighted tardiness (SMTWT) scheduling problem. In this work, the Gaussian process regression (GPR) model is utilized to predict an optimal sequence for the SMTWT problem, and its solution is improved by using an iterated local search based on a simulated annealing scheme, called the GPRISA algorithm. The results show that the proposed GPRISA method achieves very good performance and a reasonable trade-off between solution quality and time consumption. Moreover, in terms of deviation from the best-known solution, the proposed mechanism noticeably outperforms recent existing approaches.
181
26245
Facial Expression Recognition Using Sparse Gaussian Conditional Random Field
Abstract:
The analysis of expressions and the detection of facial Action Units (AUs) are important tasks in computer vision and Human Computer Interaction (HCI) due to their wide range of applications in human life. Many works have been carried out during the past few years, each with its own advantages and disadvantages. In this work, we present a new model based on the Gaussian Conditional Random Field. We solve our objective problem using ADMM and show how well the proposed model works. We train and test our model on two facial expression datasets, CK+ and RU-FACS. Experimental evaluation shows that our proposed approach outperforms state-of-the-art expression recognition methods.
180
34103
Novel Inference Algorithm for Gaussian Process Classification Model with Multiclass and Its Application to Human Action Classification
Abstract:
In this paper, we propose a novel inference algorithm for the multi-class Gaussian process classification model that can be used in the field of human behavior recognition. This algorithm can simultaneously derive both the posterior distribution of a latent function and estimators of the hyper-parameters in a multi-class Gaussian process classification model. Our algorithm is based on the Laplace approximation (LA) technique and the variational EM framework, and is performed in two steps, called the expectation and maximization steps. First, in the expectation step, using Bayes' formula and the LA technique, we derive an approximate posterior distribution of the latent function indicating the possibility that each observation belongs to a certain class. Second, in the maximization step, using the derived posterior distribution of the latent function, we compute the maximum likelihood estimators of the hyper-parameters of the covariance matrix needed to define the prior distribution of the latent function. These two steps are repeated iteratively until a convergence condition is satisfied. Moreover, we apply the proposed algorithm to a human action classification problem using a public database, namely the KTH human action data set. Experimental results reveal that the proposed algorithm shows good performance on this data set.
179
33816
Multinomial Dirichlet Gaussian Process Model for Classification of Multidimensional Data
Abstract:
We present a probabilistic multinomial Dirichlet classification model for multidimensional data with Gaussian process priors. We consider an efficient computational method for obtaining the approximate posteriors of the latent variables and parameters needed to define the multiclass Gaussian process classification model. We first investigate the process of inducing a posterior distribution over the parameters and latent function using variational Bayesian approximations and importance sampling, and then derive the predictive distribution of the latent function needed to classify new samples. The proposed model is applied to classify a synthetic multivariate dataset in order to verify its performance. Experimental results show that our model is more accurate than other approximation methods.
178
39750
Lee-Carter Mortality Forecasting Method with Dynamic Normal Inverse Gaussian Mortality Index
Abstract:
Pension scheme providers have to price mortality risk using accurate mortality forecasting methods. Many mortality forecasting methods have been constructed and used in the literature. The Lee-Carter model was the first to consider stochastic improvement trends in life expectancy and is still widely used. In the Lee-Carter model, mortality forecasting is carried out through the mortality index, which is assumed to follow an ARIMA time series model. In this paper, we propose the dynamic normal inverse Gaussian distribution for modeling the mortality index in the Lee-Carter model. Using population mortality data for Italy, France, and Turkey, the model's forecasting capability is investigated, and a comparative analysis with other models is carried out using several well-known benchmark criteria.
177
46957
Facility Anomaly Detection with Gaussian Mixture Model
Abstract:
The Internet of Things allows one to collect data from facilities, which are then used to monitor them and even predict malfunctions in advance. Conventional quality control methods focus on setting a normal range for a sensor value, defined between a lower control limit and an upper control limit, and declaring anything falling outside it an anomaly. However, interactions among sensor values are ignored, leading to suboptimal performance. We propose a multivariate approach that takes many sensor values into account at the same time. In particular, a Gaussian Mixture Model is used, trained to maximize the likelihood using the Expectation-Maximization algorithm. The number of Gaussian component distributions is determined by the Bayesian Information Criterion. The negative log-likelihood is used as an anomaly score. The actual usage scenario goes as follows: for each instance of sensor values from a facility, an anomaly score is computed; if it is larger than a threshold, an alarm goes off and a human expert intervenes and checks the system. Real-world data from a building energy system was used to test the model.
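The pipeline the abstract describes (EM fit, BIC model selection, negative log-likelihood score, threshold alarm) can be sketched with scikit-learn as below; the synthetic sensor data and the 99th-percentile threshold are illustrative assumptions, not the paper's dataset or tuning.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in for multivariate sensor readings under normal operation.
normal = np.vstack([rng.normal([0, 0, 0], 0.5, size=(500, 3)),
                    rng.normal([3, 3, 1], 0.5, size=(500, 3))])

# Pick the number of components by BIC, then keep the EM-fitted model.
models = [GaussianMixture(n_components=k, random_state=0).fit(normal)
          for k in range(1, 6)]
best = min(models, key=lambda m: m.bic(normal))

# Anomaly score = negative log-likelihood; alarm above a threshold,
# chosen here as the 99th percentile of training scores (an assumption).
scores = -best.score_samples(normal)
threshold = np.percentile(scores, 99)

new_reading = np.array([[8.0, -2.0, 5.0]])
print("anomaly" if -best.score_samples(new_reading)[0] > threshold else "normal")
```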
176
118320
Gaussian Probability Density for Forest Fire Detection Using Satellite Imagery
Abstract:
We present a method for early detection of forest fires from a thermal infrared satellite image, using a matrix of pixel membership probabilities. The principle of the method is to compare a theoretical mathematical model to an experimental one. We consider each line of the image matrix as a realization of a non-stationary random process. Since the distribution of pixels in the satellite image is statistically dependent, we divide these lines into small stationary and ergodic intervals in order to characterize the image by an adequate mathematical model. A standard deviation is chosen to generate random variables, so that each interval behaves naturally like white Gaussian noise. The latter is selected as the mathematical model representing the large majority of pixels, which can be considered the image background. Before modeling the image, a few preprocessing steps are applied; then the parameters of the theoretical Gaussian model are extracted from the modeled image. These parameters are used to compute the probability that each interval of the modeled image belongs to the theoretical Gaussian model. High-intensity pixels are regarded as foreign to this model, so they have a low probability, while pixels belonging to the image background have a high probability. Finally, we present the inverse of the matrix of interval probabilities for better fire detection.
175
9623
Additive White Gaussian Noise Filtering from ECG by Wiener Filter and Median Filter: A Comparative Study
Abstract:
The electrocardiogram (ECG) is the recording of the heart's electrical potential versus time. ECG signals are often contaminated with noise such as baseline wander and muscle noise. As these signals have been widely used in clinical studies to detect heart diseases, it is essential to filter out this noise. In this paper, we compare the performance of Wiener filtering and median filtering in removing additive white Gaussian (AWG) noise, at signal-to-noise ratios (SNR) ranging from 3 to 5 dB, applied to long-term ECG recording samples. The root mean square error (RMSE) and the coefficient of determination (R²) between the filtered ECG and the original ECG were used as filter performance indicators. Experimental results show that the Wiener filter has better noise filtering performance than the median filter.
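A minimal version of this comparison can be reproduced with SciPy as below; the synthetic ECG-like trace, the noise-scaling helper, and the kernel sizes are illustrative stand-ins for the long-term recordings used in the paper.

```python
import numpy as np
from scipy.signal import wiener, medfilt

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 5000)
# Crude stand-in for an ECG trace: narrow periodic spikes on a slow baseline.
ecg = np.sin(2 * np.pi * 1.2 * t) ** 63 + 0.05 * np.sin(2 * np.pi * 0.3 * t)

def add_awgn(signal, snr_db):
    # Scale white Gaussian noise to the requested SNR in dB.
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / 10 ** (snr_db / 10)
    return signal + rng.normal(0, np.sqrt(p_noise), signal.shape)

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

for snr in (3, 4, 5):
    noisy = add_awgn(ecg, snr)
    print(f"SNR {snr} dB: Wiener RMSE={rmse(wiener(noisy, 15), ecg):.4f}, "
          f"median RMSE={rmse(medfilt(noisy, 15), ecg):.4f}")
```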
174
10240
An Automatic Speech Recognition Tool for the Filipino Language Using the HTK System
Abstract:
This paper presents the development of a Filipino speech recognition tool using the HTK system. The system was trained on a subset of the Filipino Speech Corpus developed by the DSP Laboratory of the University of the Philippines-Diliman. The speech corpus was used both in training and in testing the system by estimating the parameters of phonetic Hidden Markov Model (HMM)-based acoustic models. Experiments on different mixture weights were incorporated in the study. The phoneme-level, word-based recognition of a 5-state HMM resulted in an average accuracy rate of 80.13% for a single-Gaussian mixture model, 81.13% after implementing phoneme alignment, and 87.19% for the increased Gaussian-mixture-weight model. The highest accuracy rate of 88.70% was obtained from a 5-state model with 6 Gaussian mixtures.
173
137593
Unsupervised Learning and Similarity Comparison of Water Mass Characteristics with Gaussian Mixture Model for Visualizing Ocean Data
Abstract:
The temperature-salinity relationship is one of the most important characteristics used for identifying water masses in marine research. Temperature-salinity characteristics, however, may change dynamically with respect to the geographical location and are quite sensitive to the depth at the same location. When depth is taken into consideration, it is not easy to compare the characteristics of different water masses efficiently over a wide range of ocean areas. In this paper, a Gaussian mixture model is proposed to analyze the temperature-salinity-depth characteristics of water masses, based on which comparisons between water masses may be conducted. A Gaussian mixture model describes the distribution of a random vector and is formulated as a weighted sum of a set of multivariate normal distributions. The temperature-salinity-depth data for different locations are first used to train a set of Gaussian mixture models individually. The distance between two Gaussian mixture models can then be defined as the weighted sum of pairwise Bhattacharyya distances among the Gaussian distributions, as sketched below. Consequently, the distance between two water masses can be measured quickly, which allows the automatic and efficient comparison of water masses over a wide area. The proposed approach not only approximates the distribution of temperature, salinity, and depth directly, without prior knowledge of the regression family, but also restricts model complexity by controlling the number of mixtures when the samples are unevenly distributed. In addition, it is critical for knowledge discovery in marine research to represent, manage, and share temperature-salinity-depth characteristics flexibly and responsively. The proposed approach has been applied to a real-time visualization system of ocean data, which facilitates the comparison of water masses by aggregating the data without degrading the discriminating capability. This system provides an interface for interactively querying geographical locations with similar temperature-salinity-depth characteristics and for tracking specific patterns of water masses, such as the Kuroshio near Taiwan or those in the South China Sea.
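The distance defined in the abstract, a weighted sum of pairwise Bhattacharyya distances between mixture components, can be sketched as follows; the two synthetic (temperature, salinity, depth) samples and the component count are illustrative, not the paper's data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def bhattacharyya(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two multivariate Gaussians."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

def gmm_distance(gmm_a, gmm_b):
    # Weighted sum of pairwise Bhattacharyya distances, as in the abstract.
    d = 0.0
    for wa, ma, ca in zip(gmm_a.weights_, gmm_a.means_, gmm_a.covariances_):
        for wb, mb, cb in zip(gmm_b.weights_, gmm_b.means_, gmm_b.covariances_):
            d += wa * wb * bhattacharyya(ma, ca, mb, cb)
    return d

rng = np.random.default_rng(0)
# Stand-ins for (temperature, salinity, depth) samples at two locations.
site1 = rng.normal([15, 34, 200], [2, 0.5, 80], size=(400, 3))
site2 = rng.normal([18, 34.5, 150], [2, 0.5, 60], size=(400, 3))
g1 = GaussianMixture(n_components=3, random_state=0).fit(site1)
g2 = GaussianMixture(n_components=3, random_state=0).fit(site2)
print(gmm_distance(g1, g2))
```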
172
15612
Confidence Intervals for Process Capability Indices for Autocorrelated Data
Abstract:
Persistent pressure passed on to manufacturers from escalating consumer expectations and ever-growing global competitiveness has produced a rapidly increasing interest in the development of various manufacturing strategy models. Academic and industrial circles are taking a keen interest in the field of manufacturing strategy. Many manufacturing strategies are currently centered on the traditional concepts of focused manufacturing capabilities such as quality, cost, dependability, and innovation. Analysis of process capability indices (PCIs) has traditionally been conducted assuming that the process under study is in statistical control and that independent observations are generated over time. In practice, however, it is very common to come across processes which, due to their inherent nature, generate autocorrelated observations. The degree of autocorrelation affects the behavior of patterns on control charts; even small levels of autocorrelation between successive observations can have considerable effects on the statistical properties of conventional control charts, which then exhibit nonrandom patterns and lack of control. Many authors have considered the effect of autocorrelation on the performance of statistical process control charts. In this paper, the effect of autocorrelation on confidence intervals for different PCIs is examined. Stationary Gaussian processes are explained, and the effect of autocorrelation on PCIs is described in detail. Confidence intervals for Cp and Cpk are constructed and computed for both independent and autocorrelated data. Approximate lower confidence limits for various Cpk values are computed assuming an AR(1) model for the data. Simulation studies and industrial examples are presented to demonstrate the results.
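For reference, the point estimates underlying these confidence intervals are simple. The sketch below computes Cp and Cpk on a simulated AR(1) process, the model the paper assumes for autocorrelated data; the specification limits, target, and AR coefficient are illustrative.

```python
import numpy as np

def cp_cpk(x, lsl, usl):
    """Point estimates of the process capability indices Cp and Cpk."""
    mu, sigma = np.mean(x), np.std(x, ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

rng = np.random.default_rng(0)
# AR(1) process: x_t = phi * x_{t-1} + e_t, as assumed for the data.
phi, n = 0.5, 2000
e = rng.normal(0, 1, n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]
x = 10 + x  # center the process at a target of 10

cp, cpk = cp_cpk(x, lsl=6, usl=14)
print(f"Cp = {cp:.3f}, Cpk = {cpk:.3f}")
```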
171
131675
A Time-Varying and Non-Stationary Convolution Spectral Mixture Kernel for Gaussian Process
Abstract:
A Gaussian process (GP) with a spectral mixture (SM) kernel demonstrates flexible non-parametric Bayesian learning ability in modeling unknown functions. In this work, a novel time-varying and non-stationary convolution spectral mixture (TN-CSM) kernel, with interpretability significantly enhanced through process convolution, is introduced. A way of decomposing the SM component into an auto-convolution of a base SM component and parameterizing it to be input-dependent is outlined. Performing a convolution between two base SM components yields a novel structure of non-stationary SM component with much better generalized expression and interpretation. The TN-CSM kernel remains fully compatible with the stationary SM kernel in terms of kernel form and spectral basis, aspects ignored or confused by previous non-stationary kernels. On synthetic and real-world datasets, experiments show the time-varying characteristics of the hyper-parameters in TN-CSM and compare the learning performance of TN-CSM with popular and representative non-stationary GPs.
170
49713
Human Action Recognition Using Variational Bayesian HMM with Dirichlet Process Mixture of Gaussian Wishart Emission Model
Abstract:
In this paper, we present a human action recognition method using a variational Bayesian HMM with a Dirichlet process mixture (DPM) of Gaussian-Wishart emission models (GWEM). First, we define the Bayesian HMM based on the Dirichlet process, which allows an infinite number of Gaussian-Wishart components to support continuous emission observations. Second, we consider an efficient variational Bayesian inference method that can be applied to derive the posterior distribution of the hidden variables and model parameters for the proposed model from training data. We then derive the predictive distribution that may be used to classify new actions. Third, the paper proposes a process for extracting appropriate spatio-temporal feature vectors that can be used to recognize a wide range of human behaviors from input video images. Finally, we conduct experiments to evaluate the performance of the proposed method. The experimental results show that the presented method is more effective for human action recognition than existing methods.
169
22974
Gaussian Mixture Model Based Identification of Arterial Wall Movement for Computation of Distension Waveform
Abstract:
This work proposes a novel Gaussian Mixture Model (GMM)-based approach for accurate tracking of the arterial wall and subsequent computation of the distension waveform using the radio frequency (RF) ultrasound signal. The approach was evaluated on ultrasound RF data acquired using a prototype ultrasound system from an artery-mimicking flow phantom. The effectiveness of the proposed algorithm is demonstrated by comparison with existing wall tracking algorithms. The experimental results show that the proposed method provides a 20% reduction in the error margin compared to existing approaches in tracking arterial wall movement. This approach, coupled with an ultrasound system, can be used to estimate the arterial compliance parameters required for screening of cardiovascular disorders.
168
79553
A Segmentation Method for Grayscale Images Based on the Firefly Algorithm and the Gaussian Mixture Model
Abstract:
In this research, we propose an unsupervised grayscale image segmentation method based on a combination of the Firefly Algorithm and the Gaussian Mixture Model. First, the Firefly Algorithm is applied in a histogram-based search for cluster means. The Firefly Algorithm is a stochastic global optimization technique centered on the flashing characteristics of fireflies; in this context, it is used to determine the number of clusters and the related cluster means in a histogram-based segmentation approach. Subsequently, these means are used in the initialization step of the parameter estimation of a Gaussian Mixture Model. The parametric probability density function of a Gaussian Mixture Model is represented as a weighted sum of Gaussian component densities, whose parameters are evaluated by applying the iterative Expectation-Maximization technique. The coefficients of the linear superposition of Gaussians can be thought of as prior probabilities of each component. Applying Bayes' rule, the posterior probabilities of the grayscale intensities are evaluated, and their maxima are used to assign each pixel to a cluster according to its gray-level value. The proposed approach appears fairly solid and reliable even when applied to complex grayscale images. Validation was performed using several standard measures, namely the Root Mean Square Error (RMSE), the Structural Content (SC), the Normalized Correlation Coefficient (NK), and the Davies-Bouldin (DB) index. The achieved results strongly confirm the robustness of this grayscale segmentation method based on a metaheuristic algorithm. Another noteworthy advantage of this methodology is the use of the maxima of the responsibilities for pixel assignment, which implies a considerable reduction of the computational cost.
167
20532
Cyclostationary Gaussian Linearization for Analyzing Nonlinear System Response Under Sinusoidal Signal and White Noise Excitation
Abstract:
A cyclostationary Gaussian linearization method is formulated for investigating the time-averaged response of a nonlinear system under sinusoidal signal and white noise excitation. The quantitative measures of the cyclostationary mean, variance, spectrum of the mean amplitude, and mean power spectral density of the noise are analyzed. The qualitative response behaviors of stochastic jump and bifurcation are investigated. The validity of the present approach in predicting the quantitative and qualitative statistical responses is supported by Monte Carlo simulations. The present analysis, which does not impose restrictive analytical conditions, can be derived directly by solving nonlinear algebraic equations. The analytical solution gives reliable quantitative and qualitative predictions of the mean and noise response for the Duffing system subjected to both sinusoidal signal and white noise excitation.
166
46772
3D Shape Knitting: Loop Alignment on a Surface with Positive Gaussian Curvature
Abstract:
This paper aims at manipulating loop alignment in knitting a three-dimensional (3D) shape according to its geometry. Two loop alignment methods are introduced to handle a surface with positive Gaussian curvature. As weft knitting is a two-dimensional (2D) mechanism in which the knitting cam carrying the feeders moves in only two directions, left and right, the knitted fabric grows in width and length but not in depth. Therefore, a 3D shape must be flattened to a 2D plane, with its surface area preserved, for knitting. On this flattened plane, dimensional measurements are taken for loop alignment, and the way these measurements are taken gives rise to two different loop alignment methods. In this paper, only the plain knitted structure was considered. Each knitted loop was taken as a basic unit for loop alignment in order to achieve the required geometric dimensions, without the inclusion of other stitches, which give textural dimensions to the fabric. The two loop alignment methods were tested and compared; only one of them can successfully preserve the dimensions of the shape.
165
21359
Adaptive CFAR Analysis for Non-Gaussian Distribution
Abstract:
Automatic detection of targets in modern radar communication systems is based primarily on the concept of the adaptive CFAR detector. To achieve effective detection, the influence of clutter-induced disturbances must be minimized. The detection algorithm adapts the CFAR detection threshold, which is proportional to the average clutter power, maintaining a constant probability of false alarm. In this article, we analyze the performance of two variants of adaptive algorithms, CA-CFAR and OS-CFAR, and compare the thresholds of these detectors in a marine (non-Gaussian) environment with a Weibull distribution.
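As background, the cell-averaging variant admits a compact sketch. The closed-form threshold factor below assumes exponentially distributed power (the Gaussian-clutter baseline), not the Weibull case the article analyzes, and the window sizes and false-alarm probability are illustrative.

```python
import numpy as np

def ca_cfar(power, n_train=16, n_guard=2, pfa=1e-4):
    """Cell-averaging CFAR detector on a 1-D power profile.

    The threshold factor alpha keeps the false-alarm probability
    constant for exponentially distributed (Gaussian-clutter) power.
    """
    n = len(power)
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)
    detections = np.zeros(n, dtype=bool)
    half = n_train // 2
    for i in range(half + n_guard, n - half - n_guard):
        # Training cells on both sides of the cell under test, skipping guards.
        lead = power[i - half - n_guard : i - n_guard]
        lag = power[i + n_guard + 1 : i + n_guard + 1 + half]
        noise = np.mean(np.concatenate([lead, lag]))
        detections[i] = power[i] > alpha * noise
    return detections

rng = np.random.default_rng(0)
clutter = rng.exponential(1.0, 1000)
clutter[500] += 40.0  # inject a strong target
print(np.flatnonzero(ca_cfar(clutter)))
```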
164
16364
Video Foreground Detection Based on Adaptive Mixture Gaussian Model for Video Surveillance Systems
Abstract:
Modeling the background and moving objects are significant techniques for video surveillance and other video processing applications. This paper presents a foreground detection algorithm, based on an adaptive Gaussian mixture model (GMM), that is robust against illumination changes and noise, and provides a novel and practical choice for intelligent video surveillance systems using static cameras. In previous methods, the image of still objects (the background image) is given little significance. In contrast, this method is based on forming a meticulous background image and exploiting it to separate moving objects from their background. The background image is specified either manually, by taking an image without vehicles, or is detected in real time by forming a mathematical or exponential average of successive images. The proposed scheme offers low image degradation. The simulation results demonstrate a high degree of performance for the proposed method.
163
63441
Relative Entropy Used to Determine the Divergence of Cells in Single Cell RNA Sequence Data Analysis
Abstract:
Single cell RNA sequencing (scRNA-seq) is one of the most effective tools for studying the transcriptomics of biological processes. Currently, the similarity between cells is usually measured by Euclidean distance or its derivatives. However, the process of scRNA-seq follows a multivariate Bernoulli event model; thus, we hypothesized that it would be more efficient to evaluate the divergence between cells with relative entropy than with Euclidean distance. In this study, we compared the performance of Euclidean distance, Spearman correlation distance, and relative entropy using scRNA-seq data of the early, medial, and late stages of limb development generated in our lab. Relative entropy performed better than the other methods according to the cluster potential test. Furthermore, we developed KL-SNE, an algorithm that modifies t-SNE by replacing Euclidean distance with Kullback-Leibler divergence as the measure of divergence between cells. Results showed that KL-SNE was more effective than t-SNE at dissecting cell heterogeneity, indicating the better performance of relative entropy over Euclidean distance. Specifically, the chondrocytes expressing Comp were clustered together by KL-SNE but not by t-SNE. Surprisingly, cells in the early stage were surrounded by cells in the medial stage in the KL-SNE embedding, whereas medial cells neighbored late-stage cells in the t-SNE embedding. This result parallels the heatmap, which showed that cells in the medial stage were more heterogeneous than cells in the other stages. In addition, we found that the results of KL-SNE tend to follow a Gaussian distribution compared with those of t-SNE, which was also verified with the analysis of scRNA-seq data from another study on human embryo development. Therefore, it is also an effective way to convert a non-Gaussian distribution to a Gaussian distribution and facilitate subsequent statistical processing. Thus, relative entropy is potentially a better way to determine the divergence between cells in scRNA-seq data analysis.
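The core comparison, relative entropy versus Euclidean distance between normalized expression profiles, can be sketched as below; the Poisson-generated toy counts and the smoothing constant eps are illustrative, not the paper's data or preprocessing.

```python
import numpy as np
from scipy.stats import entropy
from scipy.spatial.distance import euclidean

rng = np.random.default_rng(0)
# Toy expression profiles for three cells over 1000 genes.
cell_a = rng.poisson(2.0, 1000).astype(float)
cell_b = rng.poisson(2.2, 1000).astype(float)
cell_c = rng.poisson(8.0, 1000).astype(float)

def relative_entropy(p_counts, q_counts, eps=1e-9):
    # Normalize counts to distributions; eps avoids zero-probability genes.
    p = (p_counts + eps) / (p_counts + eps).sum()
    q = (q_counts + eps) / (q_counts + eps).sum()
    return entropy(p, q)  # Kullback-Leibler divergence D(p || q)

print("KL(a,b)    =", relative_entropy(cell_a, cell_b))
print("KL(a,c)    =", relative_entropy(cell_a, cell_c))
print("Euclid(a,b) =", euclidean(cell_a, cell_b))
print("Euclid(a,c) =", euclidean(cell_a, cell_c))
```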
162
74780
Finding the Elastic Field in an Arbitrary Anisotropic Media by Implementing Accurate Generalized Gaussian Quadrature Solution
Abstract:
In the current study, the elastic field in an anisotropic elastic medium is determined by implementing a general semi-analytical method. In this methodology, the displacement field is computed as a sum of finitely many functions with unknown coefficients. These functions satisfy exactly both the homogeneous and inhomogeneous boundary conditions of the proposed medium. The unknown coefficients are determined by applying the principle of minimum potential energy, and the numerical integration is implemented by employing the generalized Gaussian quadrature solution. With the aid of the calculated coefficients, the displacement field, as well as the other parameters of the elastic field, is obtained. Finally, a comparison of a previous analytical method with the current semi-analytical method demonstrates the efficacy of the present methodology.
161
94151
Active Linear Quadratic Gaussian Secondary Suspension Control of Flexible Bodied Railway Vehicle
Abstract:
Passenger comfort is paramount in the design of suspension systems for high-speed railway cars. To analyze the effect of vibration on vehicle ride quality, a vertical model of a six-degree-of-freedom railway passenger vehicle, with front and rear suspension, is built. It includes car body flexibility effects and vertical rigid modes. A second-order linear shaping filter is constructed to model Gaussian white noise as random rail excitation, and the temporal correlation between the front and rear wheels is given by a second-order Pade approximation. The complete track and vehicle model is then assembled, and an active secondary suspension system based on a Linear Quadratic Gaussian (LQG) optimal control method is designed. The results show that the LQG control method reduces the vertical acceleration, pitching acceleration, and vertical bending vibration of the car body compared to the passive system.
160
21264
Edge Detection in Low Contrast Images
Abstract:
The edges of low contrast images are not clearly distinguishable to the human eye, making it difficult to find the edges and boundaries in them. The present work encompasses a new approach for low contrast images. A Chebyshev-polynomial-based fractional-order filter is used to perform the preprocessing filtering operation on the input image. The Laplacian of Gaussian method is then applied to the preprocessed image for edge detection. The algorithm was tested on two test images.
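A minimal version of the second stage is sketched below using SciPy's Laplacian of Gaussian followed by zero-crossing detection. The paper's Chebyshev-based fractional-order preprocessing is not publicly specified, so a plain Gaussian smoothing stands in for it, and the synthetic image and thresholds are illustrative.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
# Synthetic low-contrast test image: a dim square on a dark background.
img = np.full((128, 128), 50.0)
img[40:90, 40:90] = 60.0
img += rng.normal(0, 1.0, img.shape)

# Stand-in for the fractional-order preprocessing stage.
pre = ndimage.gaussian_filter(img, sigma=1.0)

# Laplacian of Gaussian; zero-crossings with a strong local slope mark edges.
log_resp = ndimage.gaussian_laplace(pre, sigma=2.0)
thresh = 0.5  # suppresses weak zero-crossings caused by noise (tunable)
edges = np.zeros(log_resp.shape, dtype=bool)
diff_v = log_resp[:-1, :] - log_resp[1:, :]
diff_h = log_resp[:, :-1] - log_resp[:, 1:]
edges[:-1, :] |= ((np.signbit(log_resp[:-1, :]) != np.signbit(log_resp[1:, :]))
                  & (np.abs(diff_v) > thresh))
edges[:, :-1] |= ((np.signbit(log_resp[:, :-1]) != np.signbit(log_resp[:, 1:]))
                  & (np.abs(diff_h) > thresh))
print("edge pixels:", edges.sum())
```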
159
17388
Local Spectrum Feature Extraction for Face Recognition
Abstract:
This paper presents two techniques, local feature extraction using the image spectrum and low-frequency spectrum modeling using GMMs, to capture the underlying statistical information and improve the performance of a face recognition system. Local spectrum features are extracted using overlapping sub-block windows mapped onto the face image. For each block, the spatial domain is transformed to the frequency domain using the DFT. Low-frequency coefficients are preserved, and high-frequency coefficients are discarded, by applying a rectangular mask to the spectrum of the facial image. Low-frequency information is non-Gaussian in the feature space, and by using a combination of several Gaussian functions with different statistical properties, the best feature representation can be modeled with a probability density function. The recognition process is performed using the maximum likelihood value computed from the pre-calculated GMM components. The method was tested on the FERET data sets and achieves a 92% recognition rate.
158
32479
Mixed Sub-Fractional Brownian Motion
Abstract:
We introduce a new extension of Brownian motion, which could serve as a good model of many natural phenomena. It is a linear combination of a finite number of sub-fractional Brownian motions; that is why we call it the mixed sub-fractional Brownian motion. We present some basic properties of this process. Among others, we check that our process is non-Markovian and that it has non-stationary increments. We also give the conditions under which it is a semimartingale. Finally, the main features of its sample paths are specified.
157
28337
Nanofocusing of Surface Plasmon Polaritons by Partially Metal-Coated Dielectric Conical Probe: Optimal Asymmetric Distance
Abstract:
Nanometric superfocusing of optical intensity near the tip of a partially metal-coated dielectric conical probe for the convergent surface plasmon polariton wave is investigated by the volume integral equation method. Nanofocusing with this probe is possible using both linearly and radially polarized Gaussian beams as the incident waves. Strongly localized and enhanced optical near-fields can be created at the probe tip for both incident Gaussian beams. However, the intensity distribution near the probe tip was found to be very sensitive to the shape of the tip.
156
36677
Mixed Sub-Fractional Brownian Motion
Abstract:
We introduce a new extension of Brownian motion, which could serve as a good model of many natural phenomena. It is a linear combination of a finite number of sub-fractional Brownian motions; that is why we call it the mixed sub-fractional Brownian motion. We present some basic properties of this process. Among others, we check that our process is non-Markovian and that it has non-stationary increments. We also give the conditions under which it is a semimartingale. Finally, the main features of its sample paths are specified.
155
7401
Temperature-Dependent Barrier Characteristics of Inhomogeneous Pd/n-GaN Schottky Barrier Diodes Surface
Abstract:
The current-voltage (I-V) characteristics of the Pd/n-GaN Schottky barrier were studied at temperatures above room temperature (300-470 K). The values of the ideality factor (n), zero-bias barrier height (φB0), flat-band barrier height (φBF), and series resistance (Rs) obtained from I-V-T measurements were found to be strongly temperature dependent: φB0 increases, while n, φBF, and Rs decrease with increasing temperature. The apparent Richardson constant was found to be 2.1×10⁻⁹ A cm⁻² K⁻², with a mean barrier height of 0.19 eV. After correcting for barrier height inhomogeneities, by assuming a Gaussian distribution (GD) of the barrier heights, the Richardson constant and the mean barrier height were obtained as 23 A cm⁻² K⁻² and 1.78 eV, respectively. The corrected Richardson constant is much closer to the theoretical value of 26 A cm⁻² K⁻².
154
5922
OFDM Radar for Detecting a Rayleigh Fluctuating Target in Gaussian Noise
Abstract:
We develop methods for detecting a target for orthogonal frequency division multiplexing (OFDM) based radars. As a preliminary step, we introduce the target and Gaussian noise models in discrete-time form. Then, resorting to the matched filter (MF), we derive a detector for two different scenarios: a non-fluctuating target and a Rayleigh fluctuating target. It is shown that an MF is not suitable for Rayleigh fluctuating targets, so we propose a reduced-complexity method based on the fast Fourier transform (FFT) for this situation. The proposed method has better detection performance.
153
116798
Upgraded Cuckoo Search Algorithm to Solve Optimisation Problems Using Gaussian Selection Operator and Neighbour Strategy Approach
Abstract:
An upgraded Cuckoo Search Algorithm is proposed here to solve optimization problems, based on improvements made to earlier versions of the Cuckoo Search Algorithm. Shortcomings of the earlier versions, such as slow convergence and trapping in local optima, are addressed in the proposed version by random initialization of solutions using a suggested Improved Lambda Iteration Relaxation method, a Random Gaussian Distribution Walk to improve local search, Greedy Selection to accelerate convergence to the optimized solution, and a "Study Nearby Strategy" to improve global search performance by avoiding trapping in local optima. It is further proposed to generate better solutions through a Crossover Operation. The strategy used in the proposed algorithm shows superiority in terms of high convergence speed over several classical algorithms. Three standard algorithms were tested on a 6-generator standard test system, and the results presented clearly demonstrate the proposed algorithm's superiority over the other established algorithms. The algorithm is also capable of handling larger unit systems.
152
6048
Statistical Analysis for Overdispersed Medical Count Data
Abstract:
Many researchers have suggested the use of zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models for modeling over-dispersed medical count data with extra variation caused by excess zeros and unobserved heterogeneity. Studies indicate that ZIP and ZINB always provide a better fit than the standard Poisson and negative binomial models for over-dispersed medical count data. In this study, we propose the use of zero-inflated inverse trinomial (ZIIT), zero-inflated Poisson inverse Gaussian (ZIPIG), and zero-inflated strict arcsine (ZISA) models for modeling over-dispersed medical count data. These models are not widely used by researchers, especially in the medical field. The results show that the three suggested models can serve as alternatives for modeling over-dispersed medical count data, which is supported by their application to a real-life medical data set. The inverse trinomial, Poisson inverse Gaussian, and strict arcsine are discrete distributions with a cubic variance function of the mean; therefore, ZIIT, ZIPIG, and ZISA are able to accommodate data with excess zeros and very heavy tails. They are recommended for modeling over-dispersed medical count data when ZIP and ZINB are inadequate.
151
106620
Non-Linear Causality Inference Using BAMLSS and Bi-CAM in Finance
Abstract:
Inferring causality from observational data is one of the fundamental subjects, especially in quantitative finance. So far, most papers analyze additive noise models with either linearity, nonlinearity, or Gaussian noise. We fill the gap by providing a nonlinear and non-Gaussian causal multiplicative noise model that aims to distinguish the cause from the effect, using a two-step method based on Bayesian additive models for location, scale, and shape (BAMLSS) and on causal additive models (CAM). We have tested our method on simulated and real data, reaching an accuracy of 0.86 on average. As real data, we considered the causality between financial indices, such as the S&P 500, Nasdaq, CAC 40, and Nikkei, and companies' log-returns. Our results can be useful for inferring causality when the data are heteroskedastic or non-injective.
150
87815
Nonlinear Interaction of Free Surface Sloshing of Gaussian Hump with Its Container
Abstract:
Movement of liquid with a free surface in a container is known as slosh. For instance, slosh occurs when water in a closed tank is set in motion by a free surface displacement, or when liquefied natural gas in a container is vibrated by an external driving force, such as an earthquake or movement induced by transport. Slosh can also derive from resonant switching of a natural basin. During sloshing, different types of motion are produced by energy exchange between the liquid and its container. In the present study, a numerical model is developed to simulate the nonlinear even-harmonic oscillations of free surface sloshing resulting from an initial disturbance to the free surface of a liquid in a closed square basin. The response of the liquid free surface is affected by the amplitude and motion frequencies of its container; therefore, sloshing involves complex fluid-structure interactions. Here, the nonlinear interaction of free surface sloshing of an initial Gaussian hump with its uneven container is predicted numerically. For this purpose, the Green-Naghdi (GN) equations are applied as the governing equations of the fluid field to capture nonlinear second-order and higher-order wave interactions. These equations reduce the dimensions from three to two, yielding equations that can be solved efficiently. The GN approach assumes a particular kinematic flow structure in the vertical direction for shallow and deep-water problems: the fluid velocity profile is a finite sum of coefficients, depending on space and time, multiplied by weighting functions. It should be noted that in GN theory the flow is rotational. In this study, GN numerical simulations of an initial Gaussian hump are compared with Fourier-series semi-analytical solutions of the linearized shallow water equations. The comparison reveals satisfactory agreement between the numerical simulation and the analytical solution for the overall free surface sloshing patterns. The resonant free surface motions driven by an initial Gaussian disturbance are obtained by a Fast Fourier Transform (FFT) of the components of the free surface elevation time history. Numerically predicted velocity vectors and magnitude contours for the free surface patterns indicate that the interaction of the Gaussian hump with its container has a localized effect. The results of this sloshing study are applicable to the design of stable liquefied oil containers in tankers and offshore platforms.
149
19199
Biologically Inspired Small Infrared Target Detection Using Local Contrast Mechanisms
Abstract:
In order to obtain higher small target detection accuracy, this paper presents an effective algorithm inspired by the local contrast mechanism. The proposed method can enhance the target signal and suppress background clutter simultaneously. In the first stage, an enhanced image is obtained using the proposed weighted Laplacian of Gaussian. In the second stage, an adaptive threshold is adopted to segment the target. Experimental results on two challenging image sequences show that the proposed method can detect both bright and dark targets simultaneously and is not sensitive to the sea-sky line of the infrared image, so it is well suited for small infrared target detection.
148
89000
Spectral Mixture Model Applied to Cannabis Parcel Determination
Abstract:
Many research projects require accurate delineation of the different land cover types of an agricultural area. This is especially critical for the identification of specific plants like cannabis. However, the complexity of the vegetation stand structure, the abundance of vegetation species, and the smooth transitions between different successional stages make vegetation classification difficult when using traditional approaches such as the maximum likelihood classifier. Most of the time, classification distinguishes only between trees, annuals, or grain, and it has been difficult to accurately identify cannabis mixed with other plants. In this paper, a mixture distribution model approach is applied to classify pure and mixed cannabis parcels using Worldview-2 imagery in the Lakes region of Turkey. Several land use types (e.g., sunflower, maize, bare soil, and cannabis) were identified in the image. A constrained Gaussian mixture discriminant analysis (GMDA) was used to unmix the image. In the study, 255 reflectance ratios derived from the spectral signatures of seven bands (Blue-Green-Yellow-Red-Rededge-NIR1-NIR2) were randomly split, 80% for training and 20% for test data. The Gaussian mixture distribution model approach proved to be an effective and convenient way to exploit very high spatial resolution imagery for distinguishing cannabis vegetation. Based on the overall classification accuracies, the Gaussian mixture distribution model was found to be very successful at the image classification task. This approach is sensitive enough to capture illegal cannabis planting areas in large plains and can also be used for monitoring and identification by spectral reflectance in illegal cannabis planting areas.
147
38262
Bayesian Structural Identification with Systematic Uncertainty Using Multiple Responses
Abstract:
Structural health monitoring is one of the most promising technologies for averting structural risk and achieving economic savings. Analysts often have to deal with a considerable variety of uncertainties that arise during a monitoring process. Namely, the widespread application of numerical (model-based) approaches is accompanied by widespread concern about quantifying the uncertainties prevailing in their use. Some of these uncertainties are related to the deterministic nature of the model (code uncertainty), others to the variability of its inputs (parameter uncertainty), and others to the discrepancy between model and experiment (systematic uncertainty). The actual process always exhibits random behaviour (observation error), even when conditions are set identically (residual variation). Bayesian inference assumes that the parameters of a model are random variables with an associated PDF, which can be inferred from experimental data. However, in many Bayesian methods the determination of systematic uncertainty can be problematic. In this work, systematic uncertainty is associated with a discrepancy function. The numerical model and the discrepancy function are approximated by Gaussian processes (a surrogate model). Finally, to avoid the computational burden of a fully Bayesian approach, the parameters that characterise the Gaussian processes were estimated in a four-stage process (a modular Bayesian approach). The proposed methodology has been successfully applied in fields such as geoscience, biomedicine, and particle physics, but never in the SHM context. This approach considerably reduces the computational burden, although the extent of the considered uncertainties is lower (second-order effects are neglected). To successfully identify the considered uncertainties, this formulation was extended to consider multiple responses. The efficiency of the algorithm has been tested on a small-scale aluminium bridge structure subjected to thermal expansion due to infrared heaters. A comparison of its performance with responses measured at different points of the structure, and the associated degrees of identifiability, is also carried out. A numerical FEM model of the structure was developed, and the stiffness of its supports is considered as a parameter to calibrate. Results show that the modular Bayesian approach performed best when responses of the same type had the lowest spatial correlation. Based on previous literature, using different types of responses (strain, acceleration, and displacement) should also improve the identifiability problem. Uncertainties due to parametric variability, observation error, residual variability, code variability, and systematic uncertainty were all recovered. For this example, the algorithm's performance was stable and considerably quicker than Bayesian methods that account for the full extent of uncertainties. Future research with real-life examples is required to fully assess the advantages and limitations of the proposed methodology.
146
58573
Normalizing Logarithms of Realized Volatility in an ARFIMA Model
Authors:
Abstract:
Modelling realized volatility with high-frequency returns is popular because realized volatility is an unbiased and efficient estimator of return volatility. A computationally simple model fits the logarithms of the realized volatilities with a fractionally integrated long-memory Gaussian process. The Gaussianity assumption simplifies the parameter estimation using the Whittle approximation. Nonetheless, this assumption may not be met in finite samples, and there may be a need to normalize the financial series. Based on the empirical indices S&P500 and DAX, this paper examines the performance of the linear volatility model pre-treated with normalization compared to its existing counterpart. The empirical results show that by including normalization as a pre-treatment procedure, the forecast performance outperforms the existing model in terms of statistical and economic evaluations.
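The paper's exact normalization procedure is not given here, so as one plausible pre-treatment the sketch applies a rank-based inverse normal (Blom-type) transform to a toy log realized volatility series.

```python
import numpy as np
from scipy.stats import norm, rankdata

def inverse_normal_transform(x):
    """Rank-based (Blom-type) normalization: map ranks through the
    standard normal quantile function. Illustrative only; the paper's
    normalization may differ."""
    ranks = rankdata(x)
    return norm.ppf((ranks - 0.375) / (len(x) + 0.25))

# Toy stand-in for a log realized volatility series.
log_rv = np.log(np.random.default_rng(1).lognormal(size=500))
z = inverse_normal_transform(log_rv)
print(round(z.mean(), 3), round(z.std(), 3))  # approximately 0 and 1
```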
145
18394
Distribution of Maximum Loss of Fractional Brownian Motion with Drift
Abstract:
In finance, the price of a volatile asset can be modeled using fractional Brownian motion (fBm) with Hurst parameter H>1/2. The Black-Scholes model for the values of returns of an asset using fBm is given as Y_t = Y_0 e^((r+μ)t + σB_t^H), 0 ≤ t ≤ T, where Y_0 is the initial value, r is the constant interest rate, μ is the constant drift, and σ is the constant diffusion coefficient of the fBm, which is denoted by B_t^H, t ≥ 0. The Black-Scholes model can also be constructed with Markov processes such as Brownian motion. The advantage of modeling with fBm over Markov processes is its capability of exposing the dependence between returns: real-life data for a volatile asset display the long-range dependence property, so fBm is a more realistic model than Markov processes. Investors are interested in any kind of information on the risk in order to manage it or hedge it. The maximum possible loss is one way to measure the highest possible risk; therefore, it is an important variable for investors. In our study, we give some theoretical bounds on the distribution of the maximum possible loss of fBm. We provide both asymptotic and strong estimates for the tail probability of the maximum loss of standard fBm and of fBm with drift and diffusion coefficients. From the investment point of view, these results explain how large values of possible loss behave and give their bounds.
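A small numerical companion to these bounds: the sketch simulates drifted fBm exactly via the Cholesky factor of its covariance and computes the realized maximum loss (the largest drop from a running maximum); H, μ, σ, and the grid are illustrative choices.

```python
import numpy as np

def fbm_cholesky(n, H, T=1.0, rng=None):
    """Exact simulation of fractional Brownian motion on a regular grid
    via the Cholesky factor of its covariance matrix (simple but O(n^3);
    illustrative only)."""
    rng = rng or np.random.default_rng(0)
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    return np.concatenate([[0.0], np.linalg.cholesky(cov) @ rng.standard_normal(n)])

H, mu, sigma = 0.7, 0.05, 0.2          # illustrative parameters
B = fbm_cholesky(500, H)
t = np.linspace(0, 1, 501)
X = mu * t + sigma * B                 # fBm with drift and diffusion

# Maximum loss over [0, T]: largest drop of X below its running maximum.
max_loss = np.max(np.maximum.accumulate(X) - X)
print("realized maximum loss:", round(float(max_loss), 4))
```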
144
31469
A New Framework for ECG Signal Modeling and Compression Based on Compressed Sensing Theory
Abstract:
The purpose of this paper is to exploit compressed sensing (CS) method in order to model and compress the electrocardiogram (ECG) signals at a high compression ratio. In order to obtain a sparse representation of the ECG signals, first a suitable basis matrix with Gaussian kernels, which are shown to nicely fit the ECG signals, is constructed. Then the sparse model is extracted by applying some optimization technique. Finally, the CS theory is utilized to obtain a compressed version of the sparse signal. Reconstruction of the ECG signal from the compressed version is also done to prove the reliability of the algorithm. At this stage, a greedy optimization technique is used to reconstruct the ECG signal and the Mean Square Error (MSE) is calculated to evaluate the precision of the proposed compression method.
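A minimal end-to-end sketch of the pipeline described: a dictionary of Gaussian kernels (assumed centres and widths), random Gaussian measurements, and orthogonal matching pursuit standing in for whichever greedy solver the authors used; the toy signal is not real ECG data.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n = 256
t = np.arange(n)

# Dictionary of Gaussian kernels at assumed centres and widths.
centres, widths = np.arange(0, n, 4), [3.0, 8.0, 20.0]
D = np.column_stack([np.exp(-0.5 * ((t - c) / w)**2)
                     for c in centres for w in widths])
D /= np.linalg.norm(D, axis=0)

# Toy signal built from a few atoms, hence sparse in this dictionary.
x = D[:, [10, 50, 120]] @ np.array([2.0, -1.0, 1.5])

# Compressed sensing: m << n random Gaussian measurements.
m = 64
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x

# Greedy recovery of the sparse code, then signal reconstruction.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10).fit(Phi @ D, y)
x_hat = D @ omp.coef_
print("MSE:", round(float(np.mean((x - x_hat)**2)), 6))
```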
143
80428
Nonclassical Photonic Nanojets in the Near Field of Metallic and Negative-Index Scatterers: Purely Electric and Magnetic Nanojets
Abstract:
We present the results of our analytical and computational study of Laguerre-Gaussian (LG) beam scattering by spherical homogeneous isotropic particles located on the axis of the beam. We consider different types of scatterers (dielectric, metallic, and double-negative metamaterials) and different polarizations of the LG beams. A possibility to generate photonic nanojets using metallic and double-negative metamaterial Mie scatterers is shown. We have studied the properties of such nonclassical nanojets and discovered new types of nanojets characterized by a zero on-axis magnetic (or electric) field with the electric (or magnetic) field polarized along the z-axis.
142
19105
Endocardial Ultrasound Segmentation Using the Level Set Method
Abstract:
This paper presents a fully automatic segmentation method for the left ventricle at end-systole (ES) and end-diastole (ED) in ultrasound images by means of an implicit deformable model (level set) based on the geodesic active contour model. A pre-processing Gaussian smoothing stage is applied to the image, which is essential for a good segmentation. Before the segmentation phase, we automatically locate the area of the left ventricle using a detection approach based on the Hough transform. The result obtained is then used to automate the initialization of the level set model, and this initial curve (zero level set) deforms to find the endocardial border in the image. Quantitative evaluation was performed on a data set composed of 15 subjects, with comparison to ground truth (manual segmentation).
141
127870
Enhancing Temporal Extrapolation of Wind Speed Using a Hybrid Technique: A Case Study in West Coast of Denmark
Abstract:
The demand for renewable energy is increasing significantly, and major investments are being made in the wind power generation industry as a leading source of clean energy. The wind energy sector is entirely dependent on and driven by the prediction of wind speed, which by nature is highly stochastic and random. This study employs deep multi-fidelity Gaussian process regression to predict wind speeds over medium-term time horizons. Data from the RUNE experiment on the west coast of Denmark were provided by the Technical University of Denmark and represent the wind speed across the study area between December 2015 and March 2016. The study investigates the effect of pre-processing the data by denoising the signal using the empirical wavelet transform (EWT), and of including the vector components of wind speed to increase the number of input data layers for data fusion using deep multi-fidelity Gaussian process regression (GPR). The outcomes were compared using the root mean square error (RMSE), and the results demonstrated a significant increase in the accuracy of predictions: using the vector components of the wind speed as additional predictors yields more accurate predictions than strategies that ignore them, reflecting the importance of including all sub-data and of pre-processing signals in wind speed forecasting models.
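As a simplified stand-in for the deep multi-fidelity GPR in the study, the sketch below fits an ordinary single-fidelity Gaussian process regressor with the wind-vector components included as extra input columns; the synthetic series and kernel choice are assumptions, and the EWT denoising step is omitted.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy stand-in for the RUNE data: predict wind speed from time plus the
# u/v vector components used as additional input layers in the paper.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
u, v = 5 + np.sin(t), 2 + 0.5 * np.cos(t)       # hypothetical components
speed = np.hypot(u, v) + 0.1 * rng.standard_normal(t.size)

X = np.column_stack([t, u, v])
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X[:150], speed[:150])                   # train on the first part

pred, std = gpr.predict(X[150:], return_std=True)
rmse = np.sqrt(np.mean((pred - speed[150:])**2))
print("RMSE:", round(float(rmse), 3))
```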
140
107546
Linear Quadratic Gaussian/Loop Transfer Recovery Flight Control on a Nonlinear Model
Abstract:
As part of the development of a 4D autopilot system for unmanned aerial vehicles (UAVs), i.e. a time-dependent robust trajectory generation and control algorithm, this work addresses the problem of optimal path control based on flight sensor data output that may be unreliable due to noise in data acquisition and/or transmission under certain circumstances. Although several filtering methods, such as the Kalman-Bucy filter or Linear Quadratic Gaussian/Loop Transfer Recovery (LQG/LTR) control, are available, the utter complexity of the control system, together with the robustness and reliability required of such a system on a UAV for airworthiness-certifiable autonomous flight, required the development of a proper robust filter for a nonlinear system, as a way of further mitigating error propagation to the control system and improving its performance. As such, a nonlinear algorithm based upon LQG/LTR, validated through computational simulation testing, is proposed in this paper.
139
22201
A Non-Parametric Based Mapping Algorithm for Use in Audio Fingerprinting
Abstract:
Over the past few years, online multimedia collections have grown at a fast pace. Several companies have shown interest in studying different ways to organize this volume of audio information without the need for human intervention to generate metadata. In the past few years, many applications have emerged on the market which are capable of identifying a piece of music in a short time. Various audio effects and degradations make it much harder to identify an unknown piece. In this paper, an audio fingerprinting system which makes use of a non-parametric based algorithm is presented. Parametric analysis is also performed using Gaussian Mixture Models (GMMs). The feature extraction methods employed are the Mel spectrum coefficients and the MPEG-7 basic descriptors. Bin numbers replaced the extracted feature coefficients during the non-parametric modelling. The results show that non-parametric analysis offers results comparable to those reported in the literature.
138
32650
A Background Subtraction Based Moving Object Detection Around the Host Vehicle
Abstract:
In this paper, we propose a moving object detection method which helps the driver safely take his/her car out of a parking lot. When moving objects such as motorbikes, pedestrians, other cars, and obstacles are detected at the rear side of the host vehicle, the proposed algorithm can warn the driver. We assume that the host vehicle is just about to depart. Gaussian Mixture Model (GMM) based background subtraction is applied as the basis. Pre-processing such as smoothing and post-processing such as morphological filtering are added. We examine which color space has better performance for the detection of moving objects: three color spaces, RGB, YCbCr, and Y, are applied and compared in terms of detection rate. Through simulation, we show that the RGB space is more suitable for moving object detection based on background subtraction.
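A minimal OpenCV sketch of the described pipeline, with Gaussian smoothing as pre-processing, MOG2 (a GMM background subtractor) for the foreground mask, and morphological filtering as post-processing; the file name, parameters, and the crude pixel-count warning rule are placeholders.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("rear_camera.avi")   # placeholder video source
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)           # pre-processing
    mask = subtractor.apply(blurred)                       # GMM foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # post-processing
    if np.count_nonzero(mask) > 500:                       # crude warning rule
        print("moving object detected behind the vehicle")
cap.release()
```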
137
17954
Image Compression Based on Regression SVM and Biorthogonal Wavelets
Abstract:
In this paper, we propose an effective method for image compression based on SVM regression (SVR), with three different kernels, and the biorthogonal 2D discrete wavelet transform. SVM regression learns dependencies from training data and uses fewer training points (support vectors) to represent the original data, eliminating redundancy. A biorthogonal wavelet is used to transform the image, and the coefficients acquired are then trained with SVMs using different kernels (Gaussian, polynomial, and linear). Run-length and arithmetic coders are used to encode the support vectors and their corresponding weights obtained from the SVM regression. The peak signal-to-noise ratios (PSNR) and compression ratios of several test images compressed with our algorithm using the different kernels are presented. Compared with the other kernels, the Gaussian kernel achieves better image quality. Experimental results show that the compression performance of our method gains much improvement.
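An illustrative fragment of the coding chain: a biorthogonal 2D DWT via PyWavelets, then an RBF (Gaussian) SVR fitted to one sub-band so that only support vectors and weights would need encoding; fitting coefficients against their index is a deliberate simplification of the paper's setup, and all parameters are assumptions.

```python
import numpy as np
import pywt
from sklearn.svm import SVR

# Toy image, transformed with a biorthogonal 2D discrete wavelet.
image = np.random.default_rng(0).random((64, 64))
cA, (cH, cV, cD) = pywt.dwt2(image, "bior2.2")

# Fit an RBF (Gaussian) SVR to the approximation sub-band; the support
# vectors and weights are what would be entropy-coded downstream.
coeffs = cA.ravel()
idx = np.arange(coeffs.size).reshape(-1, 1)
svr = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(idx, coeffs)

ratio = len(svr.support_) / coeffs.size   # fraction of coefficients kept
recon = svr.predict(idx)
print(f"support vectors: {ratio:.0%} of coefficients, "
      f"MSE: {np.mean((recon - coeffs)**2):.4f}")
```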
136
97753
Defect Profile Simulation of Oxygen Implantation into Si and GaAs
Abstract:
This study concerns the ion implantation of oxygen into two semiconductors, Si and GaAs, simulated using the SRIM tool. The goal of this study is to compare the effect of implantation energy on the distribution of implanted ions in the two targets and to examine the different processes resulting from the interaction between the oxygen ions and the target atoms (Si, GaAs). SRIM simulation results indicate that the implanted ions follow a Gaussian-type profile; oxygen produced more vacancies and implanted deeper in Si compared to GaAs. Also, most of the energy loss is due to ionization and phonon production, while vacancy production amounts to a few percent of the total energy.
135
105114
Regression for Doubly Inflated Multivariate Poisson Distributions
Abstract:
Dependent multivariate count data occur in several research studies. These data can be modeled by a multivariate Poisson or negative binomial distribution constructed using copulas. However, when some of the counts are inflated, that is, the numbers of observations in some cells are much larger than in other cells, the copula-based multivariate Poisson (or negative binomial) distribution may not fit well, and it is not an appropriate statistical model for the data. There is a need to modify or adjust the multivariate distribution to account for the inflated frequencies. In this article, we consider the situation where the frequencies of two cells are higher compared to the other cells, and we develop a doubly inflated multivariate Poisson distribution function using a multivariate Gaussian copula. We also discuss procedures for regression on covariates for the doubly inflated multivariate count data. To illustrate the proposed methodologies, we present a real data set containing bivariate count observations with inflation in two cells. Several models and linear predictors with log link functions are considered, and we discuss maximum likelihood estimation to estimate the unknown parameters of the models.
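The building block that the paper inflates at two cells can be sketched as follows: dependent bivariate Poisson counts sampled through a Gaussian copula (the double-inflation adjustment itself is not implemented here); the correlation and Poisson means are illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm, poisson

rng_seed = 0
rho, lam = 0.6, np.array([3.0, 5.0])   # copula correlation, Poisson means

# Correlated standard normals -> correlated uniforms via the normal CDF.
z = multivariate_normal(cov=[[1, rho], [rho, 1]]).rvs(10000, random_state=rng_seed)
u = norm.cdf(z)

# Poisson marginals via the inverse CDF, preserving the copula dependence.
counts = poisson.ppf(u, lam).astype(int)
print("empirical correlation:", round(float(np.corrcoef(counts.T)[0, 1]), 3))
```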
133
18951
The Spectroscopic, Molecular Structure and Electrostatic Potential, Polarizability, Hyperpolarizability, and HOMO–LUMO Analysis of Monomeric and Dimeric Structures of N-(2-Methylphenyl)-2-Nitrobenzenesulfonamide
Abstract:
The monomer and dimer structures of the title molecule have been obtained from density functional theory (DFT) B3LYP method with 6-31G(d,p) as basis set calculations. The optimized geometrical parameters obtained by B3LYP/6-31G(d,p) method show good agreement with experimental X-ray data. The polarizability and first order hyperpolarizability of the title molecule were calculated and interpreted. The intermolecular N–H•••O hydrogen bonds are discussed in dimer structure of the molecule. The vibrational wave numbers and their assignments were examined theoretically using the Gaussian 03 set of quantum chemistry codes. The predicted frontier molecular orbital energies at B3LYP/6-31G(d,p) method set show that charge transfer occurs within the molecule. The frontier molecular orbital calculations clearly show the inverse relationship of HOMO–LUMO gap with the total static hyperpolarizability. The results also show that N-(2-Methylphenyl)-2-nitrobenzenesulfonamide molecule may have nonlinear optical (NLO) comportment with non-zero values.
132
39606
Subjective versus Objective Assessment for Magnetic Resonance (MR) Images
Abstract:
Magnetic Resonance Imaging (MRI) is one of the most important medical imaging modalities. Subjective assessment of image quality is regarded as the gold standard for evaluating MR images. In this study, a database of 210 MR images, containing ten reference images and 200 distorted images, is presented. The reference images were distorted with four types of distortion: Rician noise, Gaussian white noise, Gaussian blur, and DCT compression. The 210 images were assessed by ten subjects, and the subjective scores are presented as Difference Mean Opinion Scores (DMOS). The DMOS values were compared with four FR-IQA metrics. We used the Pearson Linear Correlation Coefficient (PLCC) and the Spearman Rank Order Correlation Coefficient (SROCC) to validate the DMOS values. The high PLCC and SROCC values show that the DMOS values agree closely with the objective FR-IQA metrics.
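Computing the two validation statistics is a one-liner with SciPy; the DMOS and metric values below are toy numbers, not data from the study.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Validating subjective DMOS against an objective metric (toy numbers).
dmos = np.array([30.2, 45.1, 52.7, 61.3, 70.8])        # hypothetical scores
metric = np.array([38.0, 33.5, 30.1, 27.9, 24.2])      # e.g. PSNR in dB

plcc, _ = pearsonr(dmos, metric)
srocc, _ = spearmanr(dmos, metric)
print(f"PLCC = {plcc:.3f}, SROCC = {srocc:.3f}")       # near -1: good agreement
```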
131
61676
A Study of the Formation, Existence and Stability of Localised Pulses in PDE
Authors:
Abstract:
In this paper, we study the evolution of a deterministic variable over space and time. We analyse the behaviour of the model, which allows us to predict and understand the possible behaviour of the physical system. Bifurcation theory provides a basis for systematically investigating the models for invariant sets; exploring the behaviour of PDEs using bifurcation theory presents many challenges, both numerically and analytically. We use the derivation of a nonlinear partial differential equation which may be written in the form ∂u/∂t + c ∂u/∂x + ε ∂³u/∂x³ + γ u ∂u/∂x = 0, where ε and γ are constant coefficients. We show that as the temperature is increased, convection cells form. Through our work, we look for localised solutions, which are characterised by sudden bursts of aperiodic spatio-temporal evolution. Keywords: Gaussian pulses, aperiodic spatio-temporal evolution, convection cells, nonlinear optics.
130
50104
Towards Automatic Calibration of In-Line Machine Processes
Abstract:
In this presentation, preliminary results are given for the modeling and calibration of two different industrial winding MIMO (Multiple Input Multiple Output) processes using machine learning techniques. In contrast to previous approaches, which have typically used 'black-box' linear statistical methods together with a definition of the mechanical behavior of the process, we use non-linear machine learning algorithms together with a 'white-box' rule induction technique to create a supervised model of the fitting error between the expected and real force measures. The final objective is to build a precise model of the winding process in order to control the tension of the material being wound in the first case, and the friction of the material passing through the die in the second case. Case 1, tension control of a winding process: a plastic web is unwound from a first reel, goes over a traction reel, and is rewound on a third reel. The objectives are (i) to train a model to predict the web tension and (ii) calibration, to find the input values which result in a given tension. Case 2, friction force control of a micro-pullwinding process: a core plus resin passes through a first die, two winding units then wind an outer layer around the core, and there is a final pass through a second die. The objectives are (i) to train a model to predict the friction on die 2 and (ii) calibration, to find the input values which result in a given friction on die 2. Different machine learning approaches are tested to build the models: kernel ridge regression, support vector regression (with a radial basis function kernel), and MPART (rule induction with a continuous value as output). As a preliminary step, the MPART rule induction algorithm was used to build an explicative model of the error (the difference between expected and real friction on die 2); modeling the error behavior using explicative rules helps improve the overall process model. Once the models are built, the inputs are calibrated by generating Gaussian random numbers for each input (taking into account its mean and standard deviation) and comparing the output to a target (desired) output until the closest fit is found. The results of empirical testing show that high precision is obtained for the trained models and for the calibration process. The learning step is the slowest part of the process (max. 5 minutes for this data), but it can be done offline just once. The calibration step is much faster and, in under one minute, obtained a precision error of less than 1×10⁻³ for both outputs. To summarize, in the present work two processes have been modeled and calibrated. Fast processing times and high precision have been achieved, which can be further improved by using heuristics to guide the Gaussian calibration. Error behavior has been modeled to help improve overall process understanding. This is relevant for the quick, optimal set-up of many different industrial processes which use a pull-winding type process to manufacture fibre-reinforced plastic parts. Acknowledgements to the Openmind project, which is funded by the Horizon 2020 European Union funding programme for Research & Innovation, Grant Agreement number 680820.
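The Gaussian calibration loop described above can be sketched in a few lines: draw candidate inputs around each input's mean and standard deviation, score them through the trained model, and keep the best; the stand-in model and target are hypothetical.

```python
import numpy as np

def calibrate(model, target, means, stds, n_samples=10000, rng=None):
    """Gaussian random-search calibration: sample candidate inputs from
    per-input normal distributions and keep the candidate whose predicted
    output is closest to the target."""
    rng = rng or np.random.default_rng(0)
    candidates = rng.normal(means, stds, size=(n_samples, len(means)))
    errors = np.abs(model(candidates) - target)
    return candidates[np.argmin(errors)], errors.min()

# Hypothetical trained tension model (stand-in for the SVR/KRR model).
model = lambda X: 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2
inputs, err = calibrate(model, target=12.0,
                        means=np.array([3.0, 2.0]), stds=np.array([1.0, 0.5]))
print("calibrated inputs:", inputs.round(3), "error:", round(float(err), 5))
```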
129
97437
Multiscale Modelization of Multilayered Bi-Dimensional Soils
Abstract:
Soil moisture content is a key variable in many environmental sciences. Even though it represents a small proportion of the liquid freshwater on Earth, it modulates interactions between the land surface and the atmosphere, thereby influencing climate and weather. Accurate modeling of the above processes depends on the ability to provide a proper spatial characterization of soil moisture. The measurement of soil moisture content allows assessment of soil water resources in the fields of hydrology and agronomy. The second parameter interacting with the radar signal is the geometric structure of the soil. Most traditional electromagnetic models consider natural surfaces as single-scale, zero-mean, stationary Gaussian random processes, with roughness behavior characterized by statistical parameters such as the root mean square (RMS) height and the correlation length. The main problem is that the agreement between experimental measurements and theoretical values is usually poor, due to the large variability of the correlation function; as a consequence, backscattering models have often failed to predict backscattering correctly. In this study, surfaces are considered as band-limited fractal random processes, corresponding to a superposition of a finite number of one-dimensional Gaussian processes, each one having its own spatial scale. Multiscale roughness is characterized by two parameters: the first is proportional to the RMS height, and the other is related to the fractal dimension. Soil moisture is related to the complex dielectric constant. This multiscale description has been adapted to two-dimensional profiles, using the bi-dimensional wavelet transform and the Mallat algorithm, to describe natural surfaces more correctly. We characterize the soil surfaces and sub-surfaces by a three-layer geo-electrical model. The upper layer is described by its dielectric constant, its thickness, a multiscale bi-dimensional surface roughness model (using the wavelet transform and the Mallat algorithm), and volume scattering parameters. The lower layer is divided into three fictive layers separated by an assumed plane interface; these three layers are modeled by an effective medium characterized by an apparent effective dielectric constant taking into account the presence of air pockets in the soil. We have adopted the 2D multiscale three-layer small perturbation model, including first air pockets in the soil sub-structure and then a vegetation canopy in the soil surface structure, to simulate the radar backscattering. A sensitivity analysis of the dependence of the backscattering coefficient on the multiscale roughness and the new soil moisture has been performed. Later, we propose to change the dielectric constant of the multilayer medium so that it takes into account the different moisture values of each layer in the soil. A sensitivity analysis of the backscattering coefficient, including the air pockets in the volume structure, with respect to the multiscale roughness parameters and the apparent dielectric constant was carried out. Finally, we propose to study the behavior of the radar backscattering coefficient for a soil having a vegetation layer in its surface structure.
128
73795
Contrast Enhancement in Digital Images Using an Adaptive Unsharp Masking Method
Abstract:
Captured images may suffer from Gaussian blur due to poor lens focus or camera motion. Unsharp masking is a simple and effective technique to boost image contrast and to improve digital images suffering from Gaussian blur. The technique is based on sharpening object edges by adding the scaled high-frequency components of the image back to the original. The quality of the enhanced image is highly dependent on the characteristics of both the high-frequency components and the scaling/gain factor. Since the quality of an image may not be the same throughout, we propose an adaptive unsharp masking method in this paper. In this method, the gain factor is computed for individual pixels of the image, taking the gradient variations into account. Subjective and objective image quality assessments are used to compare the performance of the proposed method with both the classic and the recently developed unsharp masking methods. The experimental results show that the proposed method performs better than the other existing methods.
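A minimal sketch of gradient-adaptive unsharp masking in the spirit described; the exact gain rule of the paper may differ, so the linear mapping from normalized gradient magnitude to gain, and the parameter values, are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def adaptive_unsharp(image, sigma=2.0, g_min=0.5, g_max=3.0):
    """Output = original + per-pixel gain * high-frequency residual,
    where the gain grows with local gradient magnitude (illustrative
    gain rule, not necessarily the paper's)."""
    img = image.astype(float)
    low = gaussian_filter(img, sigma)
    high = img - low                                  # high-frequency part
    grad = np.hypot(sobel(img, 0), sobel(img, 1))
    grad = grad / (grad.max() + 1e-12)                # normalise to [0, 1]
    gain = g_min + (g_max - g_min) * grad             # stronger gain at edges
    return np.clip(img + gain * high, 0, 255)

sharp = adaptive_unsharp(np.random.default_rng(0).random((32, 32)) * 255)
print(sharp.shape)
```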
127
123383
Identifying Degradation Patterns of LI-Ion Batteries from Impedance Spectroscopy Using Machine Learning
Abstract:
Forecasting the state of health and remaining useful life of Li-ion batteries is an unsolved challenge that limits technologies such as consumer electronics and electric vehicles. Here we build an accurate battery forecasting system by combining electrochemical impedance spectroscopy (EIS) -- a real-time, non-invasive and information-rich measurement that is hitherto underused in battery diagnosis -- with Gaussian process machine learning. We collect over 20,000 EIS spectra of commercial Li-ion batteries at different states of health, states of charge and temperatures -- the largest dataset to our knowledge of its kind. Our Gaussian process model takes the entire spectrum as input, without further feature engineering, and automatically determines which spectral features predict degradation. Our model accurately predicts the remaining useful life, even without complete knowledge of past operating conditions of the battery. Our results demonstrate the value of EIS signals in battery management systems.
126
10900
Cash Flow Optimization on Synthetic CDOs
Abstract:
Collateralized Debt Obligations (CDOs) are not as widely used nowadays as they were before the 2007 subprime crisis. Nonetheless, there remains an enthralling challenge: to optimize the cash flows associated with synthetic CDOs. A Gaussian-based model is used here, in which default correlation and unconditional probabilities of default are highlighted. Numerous simulations are then performed based on this model for different scenarios in order to evaluate the associated cash flows given a specific number of defaults at different periods of time. Cash flows are not calculated solely on a single bought or sold tranche, but rather on a combination of bought and sold tranches. Under some assumptions, the simplex algorithm gives a way to find the maximum cash flow according to the correlation of defaults and the maturities. The Gaussian model used here is not realistic in crisis situations. Besides, the present system does not handle buying or selling a portion of a tranche, only the whole tranche. However, the work provides the investor with relevant elements on what and when to buy and sell.
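The Gaussian-based default model can be illustrated with the standard one-factor copula: simulate correlated defaults and the loss absorbed by a tranche between its attachment and detachment points; all parameters below are illustrative, and the simplex cash-flow optimization is not included.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_names, rho, pd = 100, 0.3, 0.02            # names, correlation, default prob
attach, detach, recovery = 0.03, 0.07, 0.4   # tranche bounds, recovery rate

def tranche_loss(n_sims=50000):
    """Fraction of the tranche notional wiped out, per simulated scenario."""
    z = rng.standard_normal((n_sims, 1))                 # common factor
    eps = rng.standard_normal((n_sims, n_names))         # idiosyncratic terms
    assets = np.sqrt(rho) * z + np.sqrt(1 - rho) * eps
    defaults = assets < norm.ppf(pd)                     # default indicators
    loss = defaults.mean(axis=1) * (1 - recovery)        # portfolio loss
    return np.clip(loss - attach, 0, detach - attach) / (detach - attach)

print("expected tranche loss:", round(float(tranche_loss().mean()), 4))
```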
125
29970
An Innovative Auditory Impulsed EEG and Neural Network Based Biometric Identification System
Abstract:
The prevalence of the internet and technology in our day-to-day lives is creating more security issues than ever. The need to protect and provide secure access to private and business data has led to the development of many security systems. One potential solution is to employ biometric authentication techniques. In this paper, we present an innovative biometric authentication method that utilizes a person's EEG signal, which is acquired in response to an auditory stimulus and transferred wirelessly to a computer running the necessary ANN algorithm: a multilayer perceptron neural network, chosen for its ability to differentiate between information which is not linearly separable. In order to determine the weights of the hidden layer, we use Gaussian random weight initialization. The MLP utilizes a supervised learning technique called backpropagation for training the network. The complex algorithm used for EEG classification reduces the chances of intrusion into the protected public or private data.
124
65416
Elasto-Plastic Analysis of Structures Using Adaptive Gaussian Springs Based Applied Element Method
Abstract:
The Applied Element Method (AEM) was developed to aid in the analysis of the collapse of structures. Currently available methods cannot deal with structural collapse accurately; however, AEM can simulate the behavior of a structure from an initial state of no loading until collapse. The elements in AEM are connected with sets of normal and shear springs along the edges of the elements that represent the stresses and strains of the element in that region. The elements are rigid, and the material properties are introduced through the spring stiffness. Nonlinear dynamic analysis has been widely modelled using the finite element method for the analysis of progressive collapse of structures; however, difficulties were found in the presence of excessively deformed elements with cracking or crushing, along with a high computational cost and difficulties in choosing appropriate material models for the analysis. The Applied Element Method is developed and coded here to significantly improve accuracy and also reduce the computational cost of the method. The scheme works for both linear elastic and nonlinear cases, including elasto-plastic materials. This paper focuses on elastic and elasto-plastic material behaviour, where the number of springs required for an accurate analysis is tested. A steel cantilever beam is used as the structural element for the analysis. The first modification of the method is based on Gaussian quadrature to distribute the springs. Usually, the springs are equally distributed along the face of the element, but it was found that with Gaussian springs only up to 2 springs were required for perfectly elastic cases, while with equal springs at least 5 springs were required. The method runs on a Newton-Raphson iteration scheme, and quadratic convergence was obtained. The second modification is based on adapting the number of springs required depending on the elasticity of the material. After the first Newton-Raphson iteration, the von Mises stress condition was used to calculate the stresses in the springs, and the springs are classified as elastic or plastic. Then transition springs, located exactly between the elastic and plastic regions, are interpolated between regions to strictly identify the elastic and plastic regions in the cross section. Since a rectangular cross-section was analyzed, there were two plastic regions (top and bottom) and one elastic region (middle). The results of the present study show that elasto-plastic cases require only 2 springs for the elastic region and 2 springs for each plastic region. This improves the computational cost, reducing the minimum number of springs in elasto-plastic cases to only 6. All the work is done using MATLAB, and the results are compared to finite element models of structural elements in ANSYS.
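The first modification is easy to reproduce: NumPy's Gauss-Legendre routine gives the spring positions and tributary weights along an element edge; the two-spring call and edge length are just an example.

```python
import numpy as np

def spring_positions(n_springs, edge_length):
    """Positions and tributary weights of springs along an element edge
    using Gauss-Legendre quadrature instead of equal spacing, in the
    spirit of the modified AEM described above."""
    xi, w = np.polynomial.legendre.leggauss(n_springs)   # nodes on [-1, 1]
    x = 0.5 * edge_length * (xi + 1.0)                   # map to [0, L]
    weights = 0.5 * edge_length * w                      # tributary lengths
    return x, weights

x, w = spring_positions(2, edge_length=0.1)              # 2 Gaussian springs
print("positions (m):", x.round(4), "weights (m):", w.round(4))
```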
123
122315
Powder Flow with Normalized Powder Particles Size Distribution and Temperature Analyses in Laser Melting Deposition: Analytical Modelling and Experimental Validation
Abstract:
Powder flow and temperature distributions are recognized as influencing factors in the laser melting deposition (LMD) process that affect not only the consolidation rate but also the characteristics of the deposited layers. Herewith, two simplified analytical models are presented: one simulating the powder flow, with the powder particle size distribution included in Gaussian form, under three powder jet nozzles, and one for the temperature analyses during the LMD process. The output of the first model serves as the input to the second. The models are validated with experimental data, i.e., the weight measurement method for the powder particle distribution and infrared imaging for the temperature analyses. This study increases the cost-efficiency of the LMD process by adjusting the operating parameters to reach the optimal powder flow rate and energy. This research has received funding under the Marie Sklodowska-Curie grant agreement No. 764935 from the European Union's Horizon 2020 research and innovation program.
122
87069
Hybrid Algorithm for Non-Negative Matrix Factorization Based on Symmetric Kullback-Leibler Divergence for Signal Dependent Noise: A Case Study
Abstract:
Non-negative matrix factorization approximates a high-dimensional non-negative matrix V as the product of two non-negative matrices, W and H, and allows only additive linear combinations of data, enabling it to learn parts-based representations. It has been successfully applied in the analysis and interpretation of high-dimensional data arising in neuroscience, computational biology, and natural language processing, to name a few. The objective of this paper is to assess a hybrid algorithm for non-negative matrix factorization with multiplicative updates. The method aims to minimize the symmetric version of the Kullback-Leibler divergence, known as intrinsic information, and assumes that the noise is signal-dependent and originates from an arbitrary distribution in the exponential family. It is a generalization of currently available algorithms for Gaussian, Poisson, gamma, and inverse Gaussian noise. We demonstrate the potential usefulness of the new generalized algorithm by comparing its performance to baseline methods which also aim to minimize symmetric divergence measures.
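For orientation, the sketch below implements the classical Lee-Seung multiplicative updates for the ordinary KL divergence, the baseline that the paper generalizes to the symmetric KL with signal-dependent noise; it is not the authors' algorithm.

```python
import numpy as np

def nmf_kl(V, r, n_iter=200, rng=None):
    """Multiplicative-update NMF minimising the (ordinary) KL divergence:
    the classical Lee-Seung scheme, shown here only as the baseline the
    paper's symmetric-KL method generalises."""
    rng = rng or np.random.default_rng(0)
    m, n = V.shape
    W, H = rng.random((m, r)) + 0.1, rng.random((r, n)) + 0.1
    for _ in range(n_iter):
        WH = W @ H + 1e-12
        H *= (W.T @ (V / WH)) / W.sum(axis=0, keepdims=True).T
        WH = W @ H + 1e-12
        W *= ((V / WH) @ H.T) / H.sum(axis=1, keepdims=True).T
    return W, H

V = np.abs(np.random.default_rng(1).random((20, 30)))   # toy data matrix
W, H = nmf_kl(V, r=5)
print("reconstruction error:", round(float(np.linalg.norm(V - W @ H)), 3))
```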
121
48156
Air Dispersion Model for Prediction Fugitive Landfill Gaseous Emission Impact in Ambient Atmosphere
Abstract:
This paper explores the formation of HCl aerosols in the atmospheric boundary layer and encourages the uptake of environmental modeling systems (EMSs) as a practical way to evaluate gaseous emissions ("framework measures") from small and medium-sized enterprises (SMEs). The conceptual model predicts greenhouse gas emissions at ecological points beyond the landfill site operations. It focuses on incorporating traditional knowledge into the baseline information, for both the measurement data and the mathematical results, regarding the parameters that influence the model's variable inputs. The paper uses simplified parameterizations of aerosol processes based on more complex aerosol process computations. The simple model can be implemented in both Gaussian and Eulerian rural dispersion models. The aerosol processes considered in this study were (i) the coagulation of particles, (ii) the condensation and evaporation of organic vapors, and (iii) dry deposition. The chemical transformation of gas-phase compounds is taken into account through a photochemical formulation, with exposure effects according to HCl concentrations as the starting point of the risk assessment. The discussion distinctly sets out the aspects of sustainability, reflecting inputs, outputs, and modes of impact on the environment. Thereby, the models incorporate abiotic and biotic species to broaden the scope of integration, for both quantifying impacts and assessing risks. The resulting environmental obligations suggest either a recommendation or a decision on what should ultimately be legislated as mitigation measures for landfill gas (LFG).
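Since the abstract mentions implementing the simplified aerosol scheme in a Gaussian dispersion model, a minimal sketch of the textbook Gaussian plume equation is given below; the source strength, wind speed, stack height, and dispersion parameters are illustrative placeholders, not values from the study.

```python
import numpy as np

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Textbook Gaussian plume concentration (g/m^3) with ground
    reflection: point source of strength Q (g/s), wind speed u (m/s),
    effective release height H (m). sigma_y and sigma_z (m) are the
    dispersion parameters at the receptor's downwind distance, normally
    taken from stability-class correlations."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2)) +
                np.exp(-(z + H)**2 / (2 * sigma_z**2)))  # reflected image term
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Hypothetical HCl release: 10 g/s in a 3 m/s wind, centreline receptor
# at breathing height, with sigmas typical of ~500 m downwind.
c = gaussian_plume(Q=10.0, u=3.0, y=0.0, z=1.5, H=20.0,
                   sigma_y=36.0, sigma_z=18.0)
print(f"ground-level concentration: {c * 1e6:.1f} ug/m^3")
```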
120
37296
The Optimum Mel-Frequency Cepstral Coefficients (MFCCs) Contribution to Iranian Traditional Music Genre Classification by Instrumental Features
Abstract:
An approach is proposed to find the optimum mel-frequency cepstral coefficients (MFCCs) for the Radif of Mirzâ Ábdollâh, which is the principal emblem and the heart of Persian music, performed by the most famous Iranian masters on two Iranian stringed instruments, the 'Tar' and the 'Setar'. While investigating the variance of the MFCCs for each record in the music database of 1500 gushe of the repertoire, belonging to 12 modal systems (dastgâh and âvâz), we applied the Fuzzy C-Means clustering algorithm to each of the 12 coefficients and to different combinations of those coefficients. We repeated the experiment while increasing the number of coefficients, but the clustering accuracy remained the same. Therefore, we can conclude that the first 7 MFCCs (V-7MFCC) are sufficient for classification of the Radif of Mirzâ Ábdollâh. Classical machine learning algorithms such as MLP neural networks, K-Nearest Neighbors (KNN), Gaussian Mixture Models (GMM), Hidden Markov Models (HMM), and Support Vector Machines (SVM) were employed. Finally, SVM shows the best performance in this study.
119
44704
Importance of Knowledge in the Interdisciplinary Production Processes of Innovative Medical Tools
Abstract:
Processes for the production of innovative medical tools have an interdisciplinary character. They consist of direct and indirect close cooperation between specialists of different scientific branches. The knowledge these specialists have is important for the design, construction, and manufacturing processes undertaken. Knowledge exchange between the participants of these processes is therefore crucial for the final result, which is innovative medical products. The paper draws attention to the necessity of feedback from the end user to the designer/manufacturer of medical tools, which allows for a more accurate understanding of user needs. The study describes the prerequisites of production processes of innovative medical (surgical) tools, including the participants and the categories of knowledge resources occurring in these processes. They are the result of research in selected Polish organizations involved in the production of medical instruments and are the basis for further work on the development of a knowledge sharing model for geographically dispersed interdisciplinary teams.
118
53707
The Morphological Processes of Bura Verbs
Abstract:
Bura refers to a kingdom, a people, and a language. The language is spoken in North-Eastern Nigeria and is classified in the Chadic group of languages, a subgroup of the Afro-Asiatic phylum. Three morphological processes were found to operate in the Bura language: affixation, reduplication, and modification. Affixation can be prefixation, infixation, or suffixation, while reduplication and modification are divided into complete and partial. Verbs, as well, can be formed through these various processes of affixation, reduplication, and modification. The aim of this paper is to examine the morphological processes found in the Bura language. In this study, research informants were selected by means of a sampling technique. The study shows that, as in other languages, the morphological formation of verbs is productive in Bura.
117
114393
Study of Proton-9,11Li Elastic Scattering at 60~75 MeV/Nucleon
Abstract:
The radial form of the nuclear matter distribution, the charge, and the shape of nuclei are essential properties of nuclei and hence are of great interest for several areas of research in nuclear physics. More than three decades have witnessed a range of experimental efforts employing leptonic probes (such as muons, electrons, etc.) to explore nuclear charge distributions, whereas hadronic probes (for example, alpha particles, protons, etc.) have been used to investigate nuclear matter distributions. In this paper, p-9,11Li elastic scattering differential cross sections in the energy range 60 to 75 MeV/nucleon have been studied by means of the Coulomb-modified Glauber scattering formalism. By applying the semi-phenomenological Bhagwat-Gambhir-Patil (BGP) nuclear density for the loosely bound, neutron-rich 11Li nucleus, the estimated matter radius is found to be 3.446 fm, which is quite large compared to the known experimental value of 3.12 fm. The results of a microscopic optical model based calculation applying the Bethe-Brueckner-Hartree-Fock (BHF) formalism have also been compared. It should be noted that in most of the phenomenological density models used to reproduce the p-11Li differential elastic scattering cross section data, the calculated matter radius lies between 2.964 and 3.55 fm. The calculated results with the phenomenological BGP model density and with the nucleon density calculated in the relativistic mean-field (RMF) approach reproduce the p-9Li and p-11Li experimental data quite nicely compared to Gaussian-Gaussian or Gaussian-oscillator densities at all energies under consideration. In the approach described here, no free/adjustable parameter has been employed to reproduce the elastic scattering data, as against the well-known optical model based studies that involve at least four to six adjustable parameters to match the experimental data. The calculated reaction cross sections σR for p-11Li at these energies are quite large compared to the estimated values reported in earlier works, though so far no experimental studies have been performed to measure them.
116
50470
Preliminary Study of Human Reliability of Control in Case of Fire Based on the Decision Processes and Stress Model of Human in a Fire
Abstract:
This paper presents the findings of a preliminary study on human control performance in case of fire. The relationship between human control and human decision-making is studied through a decision process and stress model of humans in a fire. Human behavioral aspects are involved in the decision process during a fire incident. The decision process comprises six individual perceptual processes: recognition, validation, definition, evaluation, commitment, and reassessment. Humans may then be stressed while trying to reach an optimal decision for their activity. This paper explores problems in human control processes and stresses in a catastrophic situation. Future work will thus be concerned with reducing stress and ambiguous irrelevant information.
115
20692
A New Approach to Interval Matrices and Applications
Abstract:
An interval may be defined as a convex combination as follows: I = [a,b] = {x_α = (1-α)a + αb : α ∈ [0,1]}. Consequently, we may define interval operations by applying the scalar operation pointwise to the corresponding interval points: I ∘ J = {x_α ∘ y_α : α ∈ [0,1], x_α ∈ I, y_α ∈ J}, with the usual restriction 0 ∉ J if ∘ = ÷. These operations are associative: I + (J + K) = (I + J) + K and I*(J*K) = (I*J)*K. These two properties, which are missing in the usual interval operations, will enable the extension of the usual linear system concepts to the interval setting in a seamless manner. The arithmetic introduced here avoids such vague terms as "interval extension", "inclusion function", and determinants, which we encounter in the engineering literature dealing with interval linear systems. On the other hand, these definitions were motivated by our attempt to arrive at a definition of interval random variables and investigate the corresponding statistical properties. We feel that they are the natural ones for handling interval systems, and they will enable the extension of many results from usual state space models to interval state space models. The interval state space model we consider here is of the form X_{t+1} = A X_t + W_t, Y_t = H X_t + V_t, t ≥ 0, where A ∈ IR^{k×k} and H ∈ IR^{p×k} are interval matrices and W_t ∈ IR^k, V_t ∈ IR^p are zero-mean Gaussian white-noise interval processes. This view is reinforced by the numerical results we obtained in simulation examples.
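To make the pointwise (same-α) operations concrete, here is a minimal sketch of the convex-combination intervals defined above; the class name and the sampling of the product curve are illustrative choices, since x_α·y_α is quadratic in α and is kept as a parametric set rather than forced back into an interval.

```python
import numpy as np

class Interval:
    """Interval I = [a, b] viewed as the convex combination
    x_alpha = (1 - alpha) * a + alpha * b, alpha in [0, 1], with
    operations applied pointwise at the same alpha, as defined above."""
    def __init__(self, a, b):
        self.a, self.b = a, b
    def point(self, alpha):
        return (1 - alpha) * self.a + alpha * self.b
    def __add__(self, other):
        # (x + y)_alpha is again linear in alpha, so the sum is an interval.
        return Interval(self.a + other.a, self.b + other.b)
    def mul_curve(self, other, n=5):
        # x_alpha * y_alpha is quadratic in alpha; return sampled values.
        alpha = np.linspace(0.0, 1.0, n)
        return self.point(alpha) * other.point(alpha)

I, J, K = Interval(1, 2), Interval(3, 5), Interval(-1, 1)
left, right = (I + J) + K, I + (J + K)
print(left.a == right.a and left.b == right.b)   # associativity of +
print(Interval(0, 2).mul_curve(Interval(1, 3)))  # parametric product curve
```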
114
83723
Movie Genre Preference Prediction Using Machine Learning for Customer-Based Information
Abstract:
Most movie recommendation systems have been developed for customers to find items of interest. This work introduces a predictive model usable by small and medium-sized enterprises (SMEs) who are in need of a data-based and analytical approach to stock proper movies for local audiences and retain more customers. We used classification models to extract features from thousands of customers' demographic, behavioral, and social information to predict their movie genre preference. In the implementation, a Gaussian kernel support vector machine (SVM) classification model and a logistic regression model were established to extract features from the sample data, and their in-sample test errors were compared. A comparison of out-of-sample errors was also made under different Vapnik-Chervonenkis (VC) dimensions in the machine learning algorithm to detect and prevent overfitting. The Gaussian kernel SVM prediction model correctly predicts movie genre preferences in 85% of positive cases. The accuracy of the algorithm increased to 93% with a smaller VC dimension and less overfitting. These findings advance our understanding of how to use machine learning approaches to predict customers' preferences with a small data set and how to design prediction tools for these enterprises.
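A compact sketch of the classifier described: an RBF (Gaussian) kernel SVM on customer features, with the kernel width playing the regularizing role that the VC-dimension discussion above assigns to model capacity; the features and labels are synthetic placeholders.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 8))          # demographic/behavioral features
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.5).astype(int)   # "likes genre" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
# Smaller gamma gives a smoother boundary, the analogue of shrinking the
# effective capacity (VC dimension) to curb overfitting.
clf = SVC(kernel="rbf", C=1.0, gamma=0.1).fit(X_tr, y_tr)
print("out-of-sample accuracy:", round(clf.score(X_te, y_te), 3))
```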
113
97274
SeCloudBPMN: A Lightweight Extension for BPMN Considering Security Threats in the Cloud
Abstract:
Business processes are crucial for organizations and help businesses evaluate and optimize their performance and processes against current and future-state business goals. Outsourcing business processes to the cloud is becoming popular due to a wide variety of benefits and cost savings. However, cloud outsourcing raises enterprise data security concerns, which must be incorporated into the Business Process Model and Notation (BPMN). This paper presents SeCloudBPMN, a lightweight extension of BPMN which extends BPMN to explicitly support security threats in the cloud as an outsourcing environment. SeCloudBPMN helps a business's security experts outsource business processes to the cloud while considering different threats from inside and outside the cloud. In this way, appropriate security countermeasures can be considered to preserve data security when outsourcing business processes to the cloud.
112
76200
Analysis of Nonlinear Dynamic Systems Excited by Combined Colored and White Noise Excitations
Abstract:
In this paper, single-degree-of-freedom (SDOF) systems subjected to white noise and colored noise excitations are investigated. By expressing the colored noise excitation as a second-order filtered white noise process and introducing the colored noise as an additional state variable, the equation of motion for the SDOF system under colored noise is artificially transformed into that of a multi-degree-of-freedom (MDOF) system under white noise excitations. As a consequence, the corresponding Fokker-Planck-Kolmogorov (FPK) equation governing the joint probability density function (PDF) of the state variables increases to four dimensions (4-D), and the solution procedure and computer programme become much more sophisticated. The exponential-polynomial closure (EPC) method, widely applied to SDOF systems under white noise excitations, is developed and improved for systems under colored noise excitations and for solving the complex 4-D FPK equation. In parallel, the Monte Carlo simulation (MCS) method is performed to test the approximate EPC solutions. Two examples associated with Gaussian and non-Gaussian colored noise excitations are considered, and the corresponding band-limited power spectral densities (PSDs) for the colored noise excitations are given separately. Numerical studies show that the developed EPC method provides relatively accurate estimates of the stationary probabilistic solutions. Moreover, the statistical parameter of the mean up-crossing rate (MCR) is taken into account, which is important for reliability and failure analysis.
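As a cross-check of the kind the MCS step performs, the following sketch integrates an SDOF oscillator driven by the output of a second-order filter on Gaussian white noise, i.e. the augmented-state idea described above; all parameter values are illustrative, and the simple Euler-Maruyama stepping is only a stand-in for whatever integrator the authors used.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps = 1e-3, 100_000
wn, zeta = 2 * np.pi, 0.05          # oscillator natural frequency, damping
wf, zf, S0 = 4 * np.pi, 0.3, 1.0    # filter frequency, damping, noise level

x = v = xf = vf = 0.0
response = np.empty(n_steps)
for k in range(n_steps):
    dW = rng.standard_normal() * np.sqrt(dt)
    # Second-order filter: white noise in, colored noise xf out.
    vf += (-2 * zf * wf * vf - wf**2 * xf) * dt + np.sqrt(2 * np.pi * S0) * dW
    xf += vf * dt
    # SDOF oscillator excited by the filtered (colored) process.
    v += (-2 * zeta * wn * v - wn**2 * x + xf) * dt
    x += v * dt
    response[k] = x

# Discard the transient, then estimate the stationary response statistics.
print("stationary std of response:", round(float(response[20_000:].std()), 4))
```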
111
62728
Hybrid Temporal Correlation Based on Gaussian Mixture Model Framework for View Synthesis
Abstract:
As 3D video has been explored as a hot research topic in the last few decades, free-viewpoint TV (FTV) is no doubt a promising field for its better visual experience and incomparable interactivity. View synthesis is obviously a crucial technology for FTV; it enables images to be rendered at an unlimited number of virtual viewpoints using information from a limited number of reference views. In this paper, a novel hybrid synthesis framework is proposed, and the blending priority is explored. In contrast to the commonly used View Synthesis Reference Software (VSRS), the presented synthesis process takes the temporal correlation of image sequences into consideration; the temporal correlations are exploited to produce fine synthesis results even near foreground boundaries. As for the blending priority, the scheme selects one of the two reference views as the main reference view based on the distance between the reference views and the virtual view; the other view is chosen as the auxiliary viewpoint, assisting only in filling hole pixels with the help of background information. A significant improvement of the proposed approach over the state-of-the-art pixel-based virtual view synthesis method is presented: the experimental results show that subjective gains can be observed, and the objective PSNR gains average from 0.5 to 1.3 dB, while the SSIM gains average from 0.01 to 0.05.
110
84759
Quasistationary States and Mean Field Model
Abstract:
Systems with long-range interactions are very common in nature. They are observed from the atomic scale to the astronomical scale and exhibit anomalies such as inequivalence of ensembles, negative heat capacity, ergodicity breaking, nonequilibrium phase transitions, quasistationary states, and anomalous diffusion. These anomalies are exacerbated when special initial conditions are imposed; in particular, we use the so-called water-bag initial conditions, which stand for a uniform distribution. Several theoretical and practical implications are discussed here. A potential energy inspired by dipole-dipole interactions is proposed to build the dipole-type Hamiltonian mean-field model. As expected, the dynamics is novel and general to the behavior of systems with long-range interactions, and it is obtained through the molecular dynamics technique. Two plateaus emerge sequentially before arriving at equilibrium, corresponding to two different quasistationary states. The first plateau is a type of quasistationary state whose lifetime depends on a power law of N, and the second plateau seems to be a true quasistationary state, as reported in the literature. The general behavior of the model according to its dynamics and thermodynamics is described. Using numerical simulation, we characterize the mean kinetic energy, the caloric curve, and the diffusion law through the mean square displacement. The present challenge is to characterize the distributions in phase space. Certainly, the equilibrium state is well characterized by the Gaussian distribution, but quasistationary states in general depart from any Gaussian function.
109
62368
Reliability Modeling on Drivers’ Decision during Yellow Phase
Abstract:
The random and heterogeneous behavior of vehicles in India poses a great challenge for researchers. Stop-and-go modeling at signalized intersections under heterogeneous traffic conditions has remained one of the most sought-after fields. Vehicles are often caught in the dilemma zone and are unable to decide quickly whether to stop or cross the intersection. This hampers traffic movement and may lead to accidents. The purpose of this work is to develop a stop/go prediction model that depicts the driver's decision during the yellow time at signalized intersections. To accomplish this, certain traffic parameters were taken into account to develop a surrogate model. This research investigated the stop/go behavior of drivers by collecting data from four signalized intersections located in two major Indian cities. A model was developed to predict the driver's decision-making during the yellow phase of the traffic signal. The parameters used for modeling included the distance to the stop line, the time to the stop line, the speed, and the length of the vehicle. A Kriging-based surrogate model was developed to investigate the driver's decision-making behavior in the amber phase. It is observed that the proposed approach yields a highly accurate result (97.4 percent) with the Gaussian correlation function. The accuracies for the crossing probability were 95.45, 90.90, and 86.36 percent as predicted by the Kriging models with Gaussian, exponential, and linear correlation functions, respectively.
108
53456
Proposing an Index for Determining Key Knowledge Management Processes in Decision Making Units Using Fuzzy Quality Function Deployment (QFD), Data Envelopment Analysis (DEA) Method
Abstract:
This paper proposes an approach to identify the key processes required by an organization in the field of knowledge management and to align them with organizational objectives. For this purpose, the organization's most important non-financial objectives which are impacted by knowledge management processes are first identified and then, using a quality house, linked with the knowledge management processes, which are regarded as technical elements. Using this method, processes that are in need of improvement and more attention are prioritized based on their significance. This means that if a process has more influence on the organization's objectives and is in a dire situation compared to others, it is prioritized for selection and improvement. In this research, process dominance is considered an influential element in process ranking (in addition to the communication matrix); this is the reason for utilizing DEA techniques for prioritizing processes in the quality house. The results of implementing the method in the Khuzestan steel company demonstrate the method's capability of identifying key processes that require improvement in the organization's knowledge management system.
107
6408
Practical Application of Business Process Simulation
Abstract:
Company managers are always looking for more and more opportunities to succeed in today's fiercely competitive market. Maintaining one's place among the successful companies on the market, or coming up with a revolutionary business idea, is much more difficult than before. Each new or improved method, tool, or approach that can improve the functioning of business processes or even of the entire system is worth examining and verifying. The use of simulation in the design of manufacturing systems and their management in practice is one way to find the optimal parameters of manufacturing processes and systems without increased risk. The paper presents an example of using simulation to solve the bottleneck problem in a concrete company.
106
10591
Evaluating the Logistic Performance Capability of Regeneration Processes
Abstract:
For years now, it has been recognized that logistic performance capability contributes enormously to a production enterprise's competitiveness and as such is a critical control lever. In this context, the orientation toward customer wishes (e.g., delivery dates) represents a key parameter not only in value-adding production but also in product regeneration. Since production and regeneration processes have different characteristics, production planning and control measures cannot be directly transferred to regeneration processes. As part of a special research project, the Institute of Production Systems and Logistics Hannover is focused on increasing the logistic performance capability of regeneration processes for complex capital goods. The aim is to ensure logistic targets are met by implementing a model specifically designed to align the capacities and load in regeneration processes.
105
78578
A Real-Time Moving Object Detection and Tracking Scheme and Its Implementation for Video Surveillance System
Abstract:
Detection and tracking of moving objects are very important in many application contexts, such as the detection and recognition of people, visual surveillance, and the automatic generation of video effects. However, the task of detecting the real shape of an object in motion becomes tricky due to various challenges like dynamic scene changes, the presence of shadow, and illumination variations due to light switches. Once the moving object is detected, tracking is also a crucial step for applications used in military defense, video surveillance, human-computer interaction, and medical diagnostics, as well as in commercial fields such as video games. In this paper, an object present in a dynamic background is detected using an adaptive Mixture-of-Gaussians analysis of the video sequences. The detected moving object is then tracked using region-based moving object tracking and inter-frame differential mechanisms to address the partial overlapping and occlusion problems. Firstly, the detection algorithm effectively detects and extracts the moving object target, enhanced by morphological post-processing operations. Secondly, region-based moving object tracking and inter-frame differencing are applied to the extracted object to improve the tracking speed of real-time moving objects across video frames. Finally, a plotting method is applied to display the detected moving objects and describe the motion of the object being tracked. The experiment has been performed on image sequences acquired in both indoor and outdoor environments, using one stationary camera and one web camera.
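A minimal sketch of the detection stage using OpenCV's adaptive Mixture-of-Gaussians background subtractor (MOG2) with morphological post-processing follows; the video path and the area threshold are placeholders, and this is an illustrative analogue rather than the authors' implementation.

```python
import cv2

cap = cv2.VideoCapture("video.avi")  # placeholder input path
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                         # per-pixel MOG model
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove speckle noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel) # fill small holes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 200:                       # keep plausible objects
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detection", frame)
    if cv2.waitKey(30) == 27:                              # Esc to quit
        break
cap.release()
```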
104
2631
Practical Application of Simulation of Business Processes
Abstract:
Company managers are always looking for more and more opportunities to succeed in today's fiercely competitive market. To maintain one's place among the successful companies on the market today, or to come up with a revolutionary business idea, is much more difficult than before. Each new or improved method, tool, or approach that can improve the functioning of business processes or even of the entire system is worth examining and verifying. The use of simulation in the design of manufacturing systems and their management in practice is one way, without increased risk, to find the optimal parameters of manufacturing processes and systems. The paper presents an example of the use of simulation for solving the bottleneck problem in a concrete company.
103
48352
A Review of Run-to-Run (R2R) Control in Manufacturing Processes
Abstract:
Run-to-Run (R2R) control was developed to monitor and control semiconductor manufacturing processes based upon fundamental engineering frameworks. This technology allows correction in the optimal direction between successive runs. R2R control has shown significant potential across a variety of processes. The term run-to-run refers to the case where the control action is taken between runs, with the aim of improving the batches of silicon wafers produced in a manufacturing process. The present work gives a brief review of run-to-run control and of the manufacturing processes in which it is mainly effective.
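As a concrete illustration of the run-to-run idea, the following minimal Python sketch implements one common R2R scheme, an EWMA offset-updating controller for an assumed linear process y = a + b*u; the gain, target, and noise values are hypothetical and not drawn from the reviewed literature.

```python
import random

b = 1.5          # assumed process gain
target = 10.0    # desired output (e.g., a film thickness)
lam = 0.3        # EWMA smoothing weight
a_hat = 0.0      # running estimate of the process offset

for run in range(20):
    u = (target - a_hat) / b                       # recipe for this run
    y = 2.0 + b * u + random.gauss(0, 0.1)         # true process (offset a = 2.0)
    a_hat = lam * (y - b * u) + (1 - lam) * a_hat  # EWMA offset update
    print(f"run {run:2d}: recipe u={u:6.3f}, output y={y:6.3f}")
```

After a few runs the recipe converges so that the output tracks the target, which is exactly the between-run rectification the abstract describes.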
102
63403
Fluid Catalytic Cracking: Zeolite Catalyzed Chemical Industry Processes
Abstract:
One of the major conversion technologies in the oil refinery industry is fluid catalytic cracking (FCC), which produces the majority of the world's gasoline. Useful products are generated from the vacuum gas oil, heavy gas oil, and residue feedstocks by the FCC unit in an oil refinery. Moreover, zeolite catalysts (zeo-catalysts) have found widespread applications and have proved to be substantial and paradigmatic in oil refining and petrochemical processes such as FCC because of their porous features. Several famous zeo-catalysts have been fabricated and applied in industrial processes as milestones in history and have brought about huge changes in petrochemicals. So far, more than twenty types of zeolites have been industrially applied, and their versatile porous architectures with their essential features have contributed to the catalytic efficiency. This poster depicts the evolution of pore models in zeolite catalysts, which has been accompanied by an increase in environmental and industrial demands. The crucial roles of modulating pore models are outlined for zeo-catalysts for the enhancement of their catalytic performance in various industrial processes. The development of industrial processes for FCC, aromatic conversions, and olefin production makes it obvious that the pore architecture plays a very important role in zeo-catalysis processes. In view of the different requirements of industrial processes, rational construction of the pore model is critically essential. Besides, the pore structure of the zeolite has a substantial and direct effect on the utilization efficiency of the zeo-catalyst.
101
85625
Gamification Using Stochastic Processes: Engage Children to Have Healthy Habits
Abstract:
This article is based on a dissertation that intends to analyze and model, intelligently, algorithms based on stochastic processes for a gamification application applied to marketing. Gamification is used in our daily lives to engage us in performing certain actions in order to achieve goals and gain rewards. This strategy is an increasingly adopted way to encourage and retain customers through game elements. The application of gamification aims to encourage children between 6 and 10 years of age to have healthy habits, and its purpose is to serve as a model for use in marketing. The application was developed in Unity; we implemented intelligent algorithms based on stochastic processes, web services to respond to all requests of the application, a back-office website to manage the application, and the database. A behavioral analysis of the use of game elements and stochastic processes in children's motivation was performed. The application of algorithms based on stochastic processes to game elements is very important to promote cooperation and to ensure fair and friendly competition between users, which consequently stimulates the users' interest and their involvement in the application and organization.
100
14381
Design Components and Reliability Aspects of Municipal Waste Water and SEIG Based Micro Hydro Power Plant
Abstract:
This paper presents design aspects and a probabilistic approach for the generation reliability evaluation of an alternative resource: a municipal waste water (MWW) based micro hydro power generation system. Annual and daily flow duration curves have been obtained for the design, installation, development, scientific analysis, and reliability evaluation of the micro hydro power plant (MHPP). The hydro potential of the waste water flowing through the sewage system of the BHU campus has been determined to produce annual and daily flow duration curves by ordering the recorded water flows from maximum to minimum values. Design pressure, the roughness of the pipe's interior surface, method of joining, weight, ease of installation, accessibility to the sewage system, design life, maintenance, weather conditions, availability of material, related cost, and likelihood of structural damage have been considered in the design of a particular penstock for reliable operation of the MHPP. A micro hydro power generation system based on MWW and a self-excited induction generator (SEIG) has been designed, developed, and practically implemented to provide reliable electric energy to a suitable load on the campus of the Banaras Hindu University, Varanasi, (UP), India. The generation reliability evaluation of the developed MHPP using a Gaussian distribution approach, the safety factor concept, peak load consideration, and Simpson's 1/3 rule is presented in this paper.
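To make the flow-duration-curve and Simpson's-rule steps concrete, here is a minimal Python sketch with synthetic daily flows standing in for the recorded sewage flows; the flow distribution is purely illustrative.

```python
import numpy as np
from scipy.integrate import simpson

# Hypothetical daily waste-water flows (m^3/s) recorded over a year.
flows = np.random.default_rng(0).gamma(shape=2.0, scale=0.05, size=365)

# Flow duration curve: flows ordered from maximum to minimum versus the
# percentage of time each flow is equalled or exceeded.
ordered = np.sort(flows)[::-1]
exceedance = 100.0 * np.arange(1, len(ordered) + 1) / (len(ordered) + 1)

# Simpson's 1/3 rule, as named in the reliability evaluation, applied
# here to the area under the flow duration curve.
area = simpson(ordered, x=exceedance)
print(f"area under FDC = {area:.3f}")
```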
99
27912
Identification and Selection of a Supply Chain Target Process for Re-Design
Abstract:
A supply chain consists of different processes, and when conducting supply chain re-design it is necessary to identify the relevant processes and select a target for re-design. A solution was developed which consists of first identifying the relevant processes using the Supply Chain Operations Reference (SCOR) model, and then using the Analytic Hierarchy Process (AHP) for target process selection. An application was conducted in an airline MRO supply chain re-design project, which shows that this combination can clearly aid the identification of relevant supply chain processes and the selection of a target process for re-design.
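The AHP step can be illustrated with a short Python sketch; the pairwise comparison matrix below is hypothetical, and the eigenvector method with Saaty's consistency ratio is one standard AHP computation, not necessarily the exact variant used by the authors.

```python
import numpy as np

# Hypothetical pairwise comparisons over three candidate SCOR processes.
A = np.array([[1.0,  3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)            # principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                           # priority vector over the processes

# Consistency ratio (random index RI = 0.58 for n = 3 from Saaty's table).
ci = (eigvals[k].real - len(A)) / (len(A) - 1)
cr = ci / 0.58
print("priorities:", np.round(w, 3), " CR =", round(cr, 3))
```

The process with the largest priority weight becomes the re-design target, provided the consistency ratio stays below the usual 0.1 cut-off.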
98
10104
Integrated Evaluation of Green Design and Green Manufacturing Processes Using a Mathematical Model
Abstract:
In this research, a mathematical model for the integrated evaluation of green design and green manufacturing processes is presented. When designing a product, there can be alternative options for designing the detailed components to fulfill the same product requirement. In these design alternative cases, the components of the product can be designed with different materials and detailed specifications. If several design alternative cases are proposed, the different materials and specifications can affect the manufacturing processes. In this paper, a new concept for integrating green design and green manufacturing processes is presented. A green design can be determined based on the manufacturing processes of the designed product by evaluating green criteria, including energy usage and environmental impact, in addition to the traditional criterion of manufacturing cost. With this concept, a mathematical model is developed to find the green design and the associated green manufacturing processes. In the mathematical model, the cost items include material cost, manufacturing cost, and green-related cost. The green-related cost items include energy cost and environmental cost. The objective is to find the decisions of green design and green manufacturing processes that achieve the minimized total cost. In practical applications, decisions can be made to select a good green design case and its green manufacturing processes. An example product is illustrated, showing that the model is practical and useful for the integrated evaluation of green design and green manufacturing processes.
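A minimal sketch of the selection logic follows, with hypothetical design alternatives and cost figures; the paper's actual model is an optimization over design and process decisions, of which this total-cost comparison is only the simplest instance.

```python
# Each hypothetical design alternative carries material, manufacturing,
# energy, and environmental costs; the green design choice is the
# alternative minimizing the total cost.
alternatives = {
    "design_A": {"material": 120.0, "manufacturing": 80.0,
                 "energy": 15.0, "environmental": 25.0},
    "design_B": {"material": 100.0, "manufacturing": 95.0,
                 "energy": 22.0, "environmental": 12.0},
    "design_C": {"material": 140.0, "manufacturing": 60.0,
                 "energy": 10.0, "environmental": 30.0},
}

totals = {name: sum(costs.values()) for name, costs in alternatives.items()}
best = min(totals, key=totals.get)
print(f"green design choice: {best} (total cost {totals[best]:.1f})")
```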
97
4098
Business Process Mashup
Abstract:
Recently, many companies have based their development on building processes from scratch to achieve their business goals. Process development is not trivial, and the main objective of enterprises managing processes is to decrease software development time. Several concepts have been proposed in the field of business-process-based reuse development, known as BP Mashup. This concept consists of reusing existing business processes which have been modeled, in order to respond to a particular goal. To meet user process requirements, our contribution is to mix parts of processes, as 'process fragment' components, to build a new process (i.e., a process mashup). The main idea of our paper is to offer a graphical framework tool for both creating and running process mashups. It allows users to perform a mixture of fragments, using a simple interface with a set of graphical mixture operators based on a proposed formal model. The process mashup and mixture behavior are described within a new specification of a high-level language, a language for process mashup (BPML).
96
19123
Flat-Top Apodization of Laser Beams by Means of Acousto-Optics
Abstract:
We demonstrate a method for the adaptive spatial shaping of laser beams by means of acousto-optic Bragg diffraction. The transformation of the angular spectrum during Bragg diffraction is used to convert a Gaussian intensity distribution into a flat-top one. The theoretical model is supported by experiment.
95
26936
Femtochemistry of Iron(III) Carboxylates in Aqueous Solutions
Abstract:
Photochemical reactions involving iron(III) carboxylates are important for environmental photochemistry and have great potential for application in water purification (advanced oxidation processes, photo-Fenton and Fenton-like processes). In spite of this, information about excited states and primary intermediates in the photochemistry of Fe(III) complexes with carboxylic acids is scarce. This talk presents and discusses the results of several of the authors' recent publications in the field of ultrafast spectroscopy of natural Fe(III) carboxylates.
94
80384
Tracking the Effect of Ibutilide on Amplitude and Frequency of Fibrillatory Intracardiac Electrograms Using the Regression Analysis
Abstract:
Background: Catheter ablation is an effective therapy for symptomatic atrial fibrillation (AF). The intracardiac electrogram (IEGM) collected during this procedure contains precious information that has not been explored to its full capacity. Novel processing techniques allow looking at these recordings from different perspectives, which can lead to improved therapeutic approaches. In our previous study, we showed that variation in amplitude, measured through Shannon entropy, could be used as an AF recurrence risk stratification factor in patients who received Ibutilide before the electrograms were recorded. The aim of this study is to further investigate the effect of Ibutilide on the characteristics of signals recorded from the left atrium (LA) of patients with persistent AF before and after administration of the drug. Methods: The IEGMs collected from different intra-atrial sites of 12 patients were studied and compared before and after Ibutilide administration. First, the before- and after-Ibutilide IEGMs that were recorded within a Euclidean distance of 3 mm in the LA were selected as pairs for comparison. For every selected pair of IEGMs, the probability distribution function (PDF) of the amplitude in the time domain and of the magnitude in the frequency domain was estimated using regression analysis. The PDF represents the relative likelihood of a variable falling within a specific range of values. Results: Our observations showed that in the time domain the PDF of amplitudes was fitted by a Gaussian distribution, while in the frequency domain it was fitted by a Rayleigh distribution. Our observations also revealed that after Ibutilide administration the IEGMs have significantly narrower short-tailed PDFs, both in the time and frequency domains. Conclusion: This study shows that the PDFs of the IEGMs before and after administration of Ibutilide exhibit significantly different properties, both in the time and frequency domains. Hence, by fitting the PDF of IEGMs in the time domain to a Gaussian distribution, or in the frequency domain to a Rayleigh distribution, the effect of Ibutilide can easily be tracked using the statistics of their PDF (e.g., standard deviation), while this is difficult through the waveform of the IEGMs itself.
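The distribution-fitting step can be sketched with SciPy as below; the synthetic signal is a placeholder for real IEGM data, and the fits shown (Gaussian in time, Rayleigh in frequency) mirror the distributions named in the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Placeholder for one IEGM channel; real data would be the recorded
# electrogram samples before/after Ibutilide.
amplitude = rng.normal(0.0, 0.4, 5000)          # time-domain amplitudes
magnitude = np.abs(np.fft.rfft(amplitude))      # frequency-domain magnitudes

# Fit the distributions named in the abstract and compare their spread.
mu, sigma = stats.norm.fit(amplitude)           # Gaussian fit (time domain)
loc, scale = stats.rayleigh.fit(magnitude)      # Rayleigh fit (frequency domain)
print(f"Gaussian sigma = {sigma:.3f}, Rayleigh scale = {scale:.3f}")
# Narrower PDFs after Ibutilide would appear as smaller sigma / scale values.
```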
93
39498
The Experimental and Numerical Analysis of TRIP Steel Wire Drawing Processes Drawn with Different Partial Reductions
Abstract:
The strain intensity and the redundant strains in multistage TRIP steel wire drawing processes, which depend on the values of the single partial reductions used, should influence the intensity of the transformation of retained austenite into martensite and thereby the mechanical properties of the drawn wires. A numerical analysis of the drawing processes, carried out with the Drawing 2D programme, for wires made from TRIP steel with 0.29 % is presented in this work. The change of the strain intensity εc and the values of the redundant strain εxy have been determined for particular draws in dependence on the single partial reductions used.
92
47616
Site Formation Processes at a New Kingdom Settlement at Sai Island, Sudan
Abstract:
The important Egyptian New Kingdom settlement at Sai Island, Sudan, presents a complex stratigraphic archaeological record. This study takes the theoretical stance that it is not just the archaeological material being retrieved from the deposits but the sediments themselves that reflect human agency. These anthropogenic sediments reflect the use-life of the buildings and the spaces between them, as well as the post-depositional processes which operate to complicate the archaeological record. Soil micromorphology is a technique that takes intact block samples of sediment and analyses them in thin section under a petrological microscope. A detailed understanding of site formation processes and a contextualized knowledge of the material culture can be obtained through careful and systematic observation of the changing facies. The major finding of the study is that soil and sedimentary information can provide valuable insights into the use of space during the New Kingdom and elucidate the complexities of site formation processes.
91
14980
Environmental Engineering Case Study of Waste Water Treatment
Abstract:
Wastewater treatment consists of applying known technology to improve or upgrade the quality of a wastewater. Usually wastewater treatment will involve collecting the wastewater in a central, segregated location (the Wastewater Treatment Plant) and subjecting the wastewater to various treatment processes. Most often, since large volumes of wastewater are involved, treatment processes are carried out on continuously flowing wastewaters (continuous flow or "open" systems) rather than as "batch" or a series of periodic treatment processes in which treatment is carried out on parcels or "batches" of wastewaters. While most wastewater treatment processes are continuous flow, certain operations, such as vacuum filtration, involving storage of sludge, the addition of chemicals, filtration and removal or disposal of the treated sludge, are routinely handled as periodic batch operations.
90
47525
Designing Information Systems in Education as Prerequisite for Successful Management Results
Abstract:
This research paper presents matrix technology models and examples of information systems in education (in the Republic of Croatia and in Germany) in support of business, education (learning and teaching), and e-learning. We researched and described the aims and objectives of the main processes in education and technology, along with the main matrix classes of data. The paper gives an example of matrix technology with a detailed description of the processes related to specific data classes in the processes of education, and an example module that supports the processes 'filling in the directory and the diary of work' and 'evaluation'. At the lower level, we also researched and described all activities which take place within the lower-level processes in education, together with the characteristics and functioning of the modules 'fill in the directory and the diary of work' and 'evaluation'. For the analysis of the affinity between the aforementioned processes and/or sub-processes, we used our application model created in Visual Basic, which was based on an algorithm for analyzing the affinity between the observed processes and/or sub-processes.
89
106885
A Critical Re-Evaluation of Knowledge Management Definitions and Terminologies
Abstract:
The last three decades have witnessed myriads of definitions of knowledge management proposed by researchers and industry practitioners. Despite the magnitude of research and available literature on knowledge management, there is yet to be a consensus on what constitutes a good definition. There exists a seemingly inexhaustible list of definitions which can appear confusing, conflicting, and overlapping. What is even more daunting is the lack of common terminology in describing knowledge management processes and the inconsistency in the sequence in which the processes take place. While newcomers to knowledge management research struggle to make sense of knowledge management definitions, industry practitioners struggle with their applicability. Against this backdrop, this study aimed to re-evaluate knowledge management definitions and terminologies. The objectives were threefold: (1) to conduct a critical review of the existing body of work around knowledge management concepts and definitions, (2) to analyse and synthesise the findings, and (3) to present conclusions and recommendations. The methodology for this study centres on a review of the literature and secondary data sources. A total of 48 knowledge management processes were found and extracted from various definitions (e.g., 'identify', 'capture', 'codify', 'store', …). A taxonomy of the processes was created based on the commonality of the entities. The 48 processes were classified under 8 headings, which were further converged into 3 main headings, namely 'acquire', 'exploit', and 'evaluate', on which all definitions therefore hinge. The study concludes that in the multitude of knowledge management definitions there is a consistent pattern by which the processes are organised and should be utilised. The contribution of this study is in the synthesis of previous work by various authors and the presentation of a more holistic approach to knowledge management definitions and terminologies.
88
59747
Parallel Transformation Processes of Historical Centres: The Cases of Sevilla and Valparaiso
Abstract:
The delimitation of heritage areas within cities implies strong processes of transformation, both social and material. The study shows how two seemingly different cities, Seville (Spain) and Valparaiso (Chile), share the same transformation process following their declaration as heritage cities. The methodology used in the research has been, on the one hand, analytic-critical, which revealed all the processes and their level of involvement; on the other hand, the direct observation methodology allowed us to confirm everything studied. Faced with these processes, the research shows the social resources that people have developed to address each of them. The study concludes that there is a need to strengthen the social and associative fabric in heritage areas as a resource to ensure the survival of heritage, not only material but also social and cultural. As examples, we have chosen Seville and Valparaiso: the gentrification of Seville prior to the universal exhibition of '92 --with pretty specific plans-- is paralleled by Valparaiso's plan to revitalize its port and its protected (UNESCO) area. The whole of our theoretical discourse will be based thereupon.
87
5611
Developing an Information Model of Manufacturing Process for Sustainability
Abstract:
Manufacturing companies use life-cycle inventory databases to analyze the sustainability of their manufacturing processes. Life-cycle inventory data provide reference data which may not be accurate for a specific company. Collecting accurate data on the manufacturing processes of a specific company requires enormous time and effort. An information model of typical manufacturing processes can reduce the time and effort needed to obtain appropriate reference data for a specific company. This paper presents an attempt to build an abstract information model which can be used to develop information models for specific manufacturing processes.
86
72560
Use of SUDOKU Design to Assess the Implications of the Block Size and Testing Order on Efficiency and Precision of Dulce De Leche Preference Estimation
Abstract:
This study aimed to evaluate the implications of the block size and testing order on the efficiency and precision of preference estimation for Dulce de leche samples. Efficiency was defined as the inverse of the average variance of pairwise comparisons among treatments. Precision was defined as the inverse of the variance of the estimates of treatment means (or effects). The experiment was originally designed to test 16 treatments as a series of 8 Sudoku 16x16 designs, 4 of which were randomized independently and 4 presented in the reverse order, to yield balance in testing order. Linear mixed models were fitted to the whole experiment, with 112 testers and all their grades, as well as to its partially balanced subgroups, namely: a) the experiment with the four initial EU; b) the experiment with EU 5 to 8; c) the experiment with EU 9 to 12; and d) the experiment with EU 13 to 16. Responses were recorded on a nine-point hedonic scale, and a linear mixed model analysis was assumed with random tester and treatment effects and a fixed test-order effect. The analysis with a cumulative random-effects probit link model was very similar, with essentially no different conclusions, so for simplicity we present the results under the Gaussian assumption. The R-CRAN library lme4 and its function lmer (Fit Linear Mixed-Effects Models) were used for the mixed models, and the libraries Bayesthresh (default Gaussian threshold function) and ordinal, with the function clmm (Cumulative Link Mixed Model), were used to check the Bayesian analysis of threshold models and the cumulative link probit models. It was noted that the number of samples tested in the same session can influence the acceptance level, underestimating the acceptance. However, tasting a large number of samples can help to improve sample discrimination.
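The abstract's mixed-model analysis uses R's lme4; a rough Python analogue with statsmodels is sketched below under simplifying assumptions (random tester intercept only, simulated grades), so it illustrates the model class rather than reproducing the analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_testers, n_treat = 112, 16
rows = []
for t in range(n_testers):
    tester_eff = rng.normal(0, 0.5)                 # random tester effect
    for order, treat in enumerate(rng.permutation(n_treat)):
        grade = 6 + 0.1 * treat + tester_eff - 0.02 * order + rng.normal(0, 1)
        rows.append({"tester": t, "treatment": treat,
                     "order": order, "grade": grade})
df = pd.DataFrame(rows)

# Mixed model: fixed treatment and testing-order effects with a random
# tester intercept (the lmer call in the abstract also allows random
# treatment effects; this simplified version keeps treatment fixed).
model = smf.mixedlm("grade ~ C(treatment) + order", df, groups=df["tester"])
print(model.fit().summary())
```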
85
29499
Mathematical Models for GMAW and FCAW Welding Processes for Structural Steels Used in the Oil Industry
Abstract:
With the increase in oil production and the ample expansion of gas transmission lines, medium and large transport industries have had to adapt to meet the manufacturing demand in this fabrication segment. In this context, two welding processes have been used more extensively: GMAW (Gas Metal Arc Welding) and FCAW (Flux Cored Arc Welding). In this work, welds using these processes were carried out in the flat position on ASTM A-36 carbon steel plates in order to make a comparative evaluation between them concerning mechanical and metallurgical properties. A statistical tool based on technical analysis and design of experiments (DOE), from the Minitab software, was adopted. For these analyses, the voltage, current, and welding speed were varied in both processes. As a result, it was observed that the welds made by the two processes have different characteristics in relation to metallurgical properties and performance, but both present good weldability and satisfactory mechanical strength, and mathematical models were developed.
84
42170
Performance Comparison of Non-Binary RA and QC-LDPC Codes
Abstract:
Repeat-Accumulate (RA) codes are a subclass of LDPC codes with fast encoder structures. In this paper, we consider a non-binary extension of binary LDPC codes over GF(q) and construct a non-binary RA code and a non-binary QC-LDPC code over GF(2^4): the non-binary RA codes are constructed with a linear encoding method, and the non-binary QC-LDPC codes with an algebraic construction method. The BER performance of the RA and QC-LDPC codes over GF(q) is then compared under BP decoding by simulation over additive white Gaussian noise (AWGN) channels.
83
8140
A Comparison of Neural Network and DOE-Regression Analysis for Predicting Resource Consumption of Manufacturing Processes
Abstract:
Artificial neural networks (ANN) as well as Design of Experiments (DOE) based regression analysis (RA) are mainly used for the modeling of complex systems. Both methodologies are commonly applied in process and quality control of manufacturing processes. Due to the fact that resource efficiency has become a critical concern for manufacturing companies, these models need to be extended to predict the resource consumption of manufacturing processes. This paper describes an approach to using neural networks as well as DOE-based regression analysis for predicting the resource consumption of manufacturing processes and gives a comparison of the achievable results, based on an industrial case study of a turning process.
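A minimal scikit-learn sketch of such a comparison follows, with synthetic turning-process data standing in for the industrial case study; the feature names, data-generating formula, and network size are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
# Hypothetical turning-process factors: cutting speed, feed, depth of cut.
X = rng.uniform([100, 0.05, 0.5], [300, 0.3, 3.0], size=(200, 3))
power = 0.02 * X[:, 0] * X[:, 1] * X[:, 2] + rng.normal(0, 0.05, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, power, random_state=0)

# DOE-style second-order regression model.
ra = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
ra.fit(X_tr, y_tr)

# Small feed-forward ANN.
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 16),
                                 max_iter=5000, random_state=0))
ann.fit(X_tr, y_tr)

print("RA  R^2:", r2_score(y_te, ra.predict(X_te)))
print("ANN R^2:", r2_score(y_te, ann.predict(X_te)))
```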
82
48716
The Roles of Education, Policies and Technologies in the Globalization Processes of Creative Industry
Abstract:
The creative industry has been recognized as a top priority in many nations for decades since, through globalization processes, culture can be economized by the creative industry to develop economies. From non-economic perspectives, the creative industry supports national identity, enhances global exposure, and improves international relations. In order to enable the globalization processes of the creative industry, a three-step approach is proposed to align education, policies, and technologies into a transformation platform, and eventually to achieve a common model of global collaboration.
81
57322
A Method to Enhance the Accuracy of Digital Forensics in the Absence of Sufficient Evidence in Saudi Arabia
Abstract:
Digital forensics seeks to achieve the successful investigation of digital crimes by obtaining acceptable evidence from digital devices that can be presented in a court of law. The digital forensics investigation is therefore normally performed through a number of phases in order to achieve the required level of accuracy in the investigation processes. Since 1984, a number of models and frameworks have been developed to support the digital investigation processes. In this paper, we review a number of the investigation process models that have been produced throughout the years and introduce a proposed digital forensic model which is based on the scope of the Saudi Arabian investigation process. The proposed model has been integrated with existing models of the investigation processes and introduces a new phase to deal with situations where there is initially insufficient evidence.
80
87268
Early Diagnosis of Myocardial Ischemia Based on Support Vector Machine and Gaussian Mixture Model by Using Features of ECG Recordings
Abstract:
Acute myocardial infarction is a major cause of death worldwide. Therefore, its fast and reliable diagnosis is a major clinical need. ECG is the most important diagnostic methodology used to make decisions about the management of cardiovascular diseases. In patients with acute myocardial ischemia, temporary chest pains, together with changes in the ST segment and T wave of the ECG, occur shortly before the start of myocardial infarction. In this study, a technique which detects changes in the ST/T sections of the ECG is developed for the early diagnosis of acute myocardial ischemia. For this purpose, a database of real ECG recordings was constituted, containing a set of records from 75 patients presenting symptoms of chest pain who underwent elective percutaneous coronary intervention (PCI). The 12-lead ECGs of the patients were recorded before and during the PCI procedure. Two ECG epochs are analyzed for each patient: the pre-inflation ECG, acquired before any catheter insertion, and the occlusion ECG, acquired during balloon inflation. Using the pre-inflation and occlusion recordings, ECG features that are critical in the detection of acute myocardial ischemia are identified, and the most discriminative features for the detection of acute myocardial ischemia are extracted. A classification technique based on the support vector machine (SVM) approach, operating with linear and radial basis function (RBF) kernels, is developed to detect ischemic events using ST-T derived joint features from the non-ischemic and ischemic states of the patients. The dataset is randomly divided into training and testing sets, and the training set is used to optimize the SVM hyperparameters using the grid-search method and 10-fold cross-validation. The SVMs are designed specifically for each patient by tuning the kernel parameters in order to obtain the optimal classification performance. Implementing the developed classification technique on real ECG recordings shows that the proposed technique provides highly reliable detection of anomalies in ECG signals. Furthermore, to develop a detection technique that can be used in the absence of an ECG recording obtained during the healthy stage, the detection of acute myocardial ischemia based on ECG recordings obtained during ischemia is also investigated. For this purpose, a Gaussian mixture model (GMM) is used to represent the joint PDF of the most discriminating ECG features of myocardial ischemia. Then, a Neyman-Pearson type of approach is developed to provide detection of outliers that would correspond to acute myocardial ischemia. The Neyman-Pearson decision strategy is applied by computing the average log-likelihood values of ECG segments and comparing them with a range of different threshold values. For different discrimination threshold values and numbers of ECG segments, the probability of detection and the probability of false alarm are computed, and the corresponding ROC curves are obtained. The results indicate that an increasing number of ECG segments provides higher performance for the GMM-based classification. Moreover, the comparison between the performances of the SVM- and GMM-based classifications showed that the SVM provides higher classification performance over the ECG recordings of a considerable number of patients.
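The GMM-based Neyman-Pearson detection step can be sketched as follows; the feature arrays are synthetic placeholders for the ST/T-derived features, and the thresholds are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Placeholder ST/T-derived feature vectors from ischemic-state ECG segments.
ischemic_feats = rng.normal(0.0, 1.0, size=(500, 4))

# Fit a GMM to the joint PDF of the discriminating features.
gmm = GaussianMixture(n_components=3, random_state=0).fit(ischemic_feats)

def detect(segments, threshold):
    """Neyman-Pearson style outlier test: flag segments whose average
    log-likelihood under the GMM falls below the chosen threshold."""
    avg_ll = gmm.score_samples(segments)   # per-segment log-likelihood
    return avg_ll < threshold

# Sweep thresholds to trace out a detection / false-alarm (ROC) curve.
for thr in np.linspace(-10, -2, 5):
    rate = detect(rng.normal(1.5, 1.0, size=(200, 4)), thr).mean()
    print(f"threshold {thr:6.2f}: detection rate {rate:.2f}")
```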
79
131031
Maturity Model for Agro-Industrial Logistics
Abstract:
This abstract presents the methodology for improving the logistics processes of agricultural production units belonging to the coffee, cocoa, and fruit sectors, starting from the fundamental concepts and detailing each of the phases of the diagnosis, which will be the basis for the formulation of the action plan and the implementation of the maturity model. As a result of this work, the maturity model is formulated to improve logistics processes. This model seeks to generate a progressive model that is useful for all productive units belonging to these sectors at the national level, regardless of their initial conditions; to focus on the improvement of logistics processes as a strategy that contributes to the competitiveness of the agricultural sector in Colombia; and to spread the implementation of good logistics practices in postharvest handling in all departments of the country through autonomous tools. This model has been built through a series of steps that allow the evaluation and improvement of the logistics dimensions or indicators. The potential improvements for each dimension provide the foundation on which to advance to the next level. Within the maturity model, a methodology is indicated for the design and execution of strategies to improve its logistics processes, taking into account the current state of each production unit.
78
13825
Application of Statistical Linearized Models for Investigations of Digital Dynamic Pulse-Frequency Control Systems
Abstract:
This paper is focused on dynamic pulse-frequency modulation (DPFM) control systems. Currently, the control law based on DPFM control signals is widely used in the direct digital control subsystems introduced in automated control systems for technological processes. Statistical analysis of automatic control systems reduces to the construction of functional relationships between the statistical characteristics of the error processes and the input processes. Structural and dynamic Volterra models of digital pulse-frequency control systems can be used to develop methods for generating these dependencies, which differ in accuracy, in the amount of information required about the statistical characteristics of the input processes, and in the computational labor intensity of their use.
77
78144
The Quality of Management: A Leadership Maturity Model to Leverage Complexity
Abstract:
Today's production processes experience a constant increase in complexity, paving new ways for progressive forms of leadership. In customized production, individual customer requirements drive companies to adapt their manufacturing processes constantly, while the pressure for smaller lot sizes, lower costs, and faster lead times grows simultaneously. As production processes become more dynamic and complex, conventional quality management approaches show certain limitations. This paper gives an introduction to complexity science from a quality management perspective. By analyzing and evaluating different characteristics of complexity, the critical complexity parameters are identified and assessed. We found that the quality of leadership plays a crucial role when dealing with increasing complexity. Therefore, we developed a concept for qualitative leadership customized for management within complex processes, based on a maturity model. The maturity model was then applied in industry to assess the leadership quality of several shop floor managers, with positive evaluation feedback. As a result, the maturity model proved to be a sustainable approach for leveraging the rising complexity in production processes more effectively.
76
85410
Psychosocial Processes and Strategies behind Islamic Deradicalisation: A Scoping Review
Abstract:
Due to the loss of territory, foreign terrorist fighters who joined Islamic State are returning to their home countries. In order to counter this threat to international security, it is important to implement deradicalisation programmes, through strategies and processes that can reverse radicalisation. The objectives of this scoping review - which is underway - are to provide a comprehensive overview of the programmes being implemented, their main characteristics, and the main motives and processes leading to deradicalisation, and to identify the key findings and the existing gaps in the literature. The methodology implemented in this scoping review follows the guidelines proposed by Arksey and O'Malley and by The Joanna Briggs Institute. The main results will be the development of a synthesis map of the deradicalisation programmes existing in the world and their main features, together with recommendations to policy-makers and professionals.
75
105943
The Importance of Analysis of Internal Quality Management Systems and Self-Examination Processes in Engineering Accreditation Processes
Abstract:
The accreditation process for engineering degree programmes is based on various reports evaluated by the relevant governing bodies of the institution of higher education. One of the aforementioned reports for the accreditation process is a self-assessment report, which is to be completed by the applying institution. This paper seeks to emphasise the importance of the analysis of internal quality management systems and self-examination processes in engineering accreditation processes. A description of how the programme fulfils the criteria should be given, and all relevant stakeholders need to contribute to the writing and structuring of the self-assessment report. The last step is to gather evidence in the form of supporting documentation. In conclusion, the paper also identifies learning outcomes in a case study of seeking accreditation from a relevant international professional body.
74
32037
A Comparative Study of Photo and Electro-Fenton Reactions Efficiency in Degradation of Cationic Dyes Mixture
Abstract:
The aim of this work was to compare the degradation of a mixture of three cationic dyes by advanced oxidation processes (electro-Fenton, photo-Fenton) in aqueous solution. These processes are based on the in situ production of the hydroxyl radical, a very strong oxidant, which allows the degradation of organic pollutants until their mineralization into CO2 and H2O. Under optimal operating conditions, the evolution of total organic carbon (TOC) and the electrical energy efficiency have been investigated for the two processes.
73
6490
Influence of Different Asymmetric Rolling Processes on Shear Strain
Abstract:
Materials with an ultrafine-grained structure and unique physical and mechanical properties can be obtained by methods of severe plastic deformation, which include processes of asymmetric rolling (AR). Asymmetric rolling is a very effective way to create ultrafine-grained structures in metals and alloys. Since asymmetric rolling is a continuous process, it has great potential for the industrial production of sheets with an ultrafine-grained structure. The basic principles of asymmetric rolling are described in detail in the scientific literature. In this work, finite element modeling of asymmetric rolling and of metal forming processes in a multiroll gauge was performed. Process parameters which allow significant values of shear strain to be achieved were defined. The results of the study will be useful for research on the evolution of ultrafine metal structures in asymmetric rolling.
72
62473
A General Framework for Measuring the Internal Fraud Risk of an Enterprise Resource Planning System
Abstract:
Internal corporate fraud, which is fraud carried out by internal stakeholders of a company, affects the well-being of the organisation just like its external counterpart. Even if such an act is carried out for the short-term benefit of a corporation, the act is ultimately harmful to the entity in the long run. Internal fraud is often carried out by exploiting aberrations from usual business processes. Business processes are the lifeblood of a company in the modern managerial context. Such processes are developed and fine-tuned over time as a corporation grows through its life stages. Modern corporations have embraced technological innovations in their business processes, and Enterprise Resource Planning (ERP) systems being at the heart of such business processes is a testimony to that. Since ERP systems record a huge amount of data in their event logs, the logs are a treasure trove for anyone trying to detect any sort of fraudulent activity hidden within day-to-day business operations and processes. This research utilises the ERP systems in place within corporations to assess the likelihood of prospective internal fraud by developing a framework for measuring the risks of fraud through process mining techniques, and hence finds risky designs and loose ends within these business processes. This framework helps not only in identifying existing cases of fraud in the records of the event log, but also signals the overall riskiness of certain business processes, and hence draws attention to redesigning such processes to reduce the chance of future internal fraud while improving internal control within the organisation. The research adds value by applying the concepts of process mining to the analysis of data from modern-day applications of business process records, namely ERP event logs, and develops a framework that should be useful to internal stakeholders for strengthening internal control, as well as providing external auditors with a tool to use in cases of suspicion. The research demonstrates its usefulness through case studies conducted on large corporations with complex business processes and an ERP system in place.
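One elementary process-mining-style check the framework implies, flagging rare control-flow variants in an ERP event log, can be sketched in plain Python; the log entries and the support threshold are hypothetical.

```python
from collections import Counter

# Simplified ERP event log: case id -> ordered list of activities.
event_log = {
    "PO-1001": ["create_po", "approve_po", "receive_goods", "pay_invoice"],
    "PO-1002": ["create_po", "approve_po", "receive_goods", "pay_invoice"],
    "PO-1003": ["create_po", "pay_invoice"],  # skips approval and receipt
    "PO-1004": ["create_po", "approve_po", "receive_goods", "pay_invoice"],
}

# Count how often each control-flow variant occurs across all cases.
variants = Counter(tuple(trace) for trace in event_log.values())
total = sum(variants.values())

for case, trace in event_log.items():
    support = variants[tuple(trace)] / total
    if support < 0.3:  # rare variant => elevated fraud risk, review the case
        print(f"{case}: unusual variant {trace} (support {support:.2f})")
```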
71
31104
Bias in the Estimation of Covariance Matrices and Optimality Criteria
Abstract:
The precision of parameter estimators in the Gaussian linear model is traditionally accounted for by the variance-covariance matrix of the asymptotic distribution. However, this measure can underestimate the true variance, especially for small samples. Traditionally, optimal design theory pays attention to this variance through its relationship with the model's information matrix. For this reason it seems convenient, at least in some cases, to adapt the optimality criteria in order to get the best designs for the actual variance structure; otherwise, the loss in efficiency of the designs obtained with the traditional approach may be very important.
70
41901
Student Perceptions on Administrative Support in the Delivering of Open Distance Learning Programmes – A Case Study
Abstract:
The Unit for Open Distance Learning (UODL) at the North-West University (NWU), South Africa, was established in 2013, with its main function being to deliver open distance learning (ODL) programmes to approximately 30,000 students from the Faculties of Education Sciences, Health Sciences, Theology, and Arts and Culture. Quality operational and administrative processes are key components in the delivery of these programmes, and they need to function optimally for students to be successful in their studies. Operational and administrative processes include aspects such as applications, registration, dissemination of study material, availability of electronic platforms, the management of assessment, and the dissemination of important information. To be able to ensure and enhance quality during these processes, it is vital to determine students' perceptions of them. A questionnaire was available online and was also distributed to the 63 tuition centres. The purpose of this research was to determine the perceptions of ODL students from the NWU regarding operational and administrative processes; 1,903 students completed and submitted the questionnaire. The data were quantitatively analysed and discussed. Results indicated that the majority of students are satisfied with the operational and administrative processes; however, the results also indicated some areas that need improvement. The data gathered are important for identifying strengths and areas for improvement, and form part of a bigger strategy of quality assurance at the UODL.
69
16253
An Intelligent Text-Independent Speaker Identification Using VQ-GMM Model Based Multiple Classifier System
Abstract:
Speaker Identification (SI) is the task of establishing the identity of an individual based on his/her voice characteristics. The SI task is typically achieved by two-stage signal processing: training and testing. The training process calculates speaker-specific feature parameters from the speech and generates speaker models accordingly. In the testing phase, speech samples from unknown speakers are compared with the models and classified. Even though the performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still need for improvement. In this paper, a Closed-Set Text-Independent Speaker Identification System (CISI) based on a Multiple Classifier System (MCS) is proposed, using Mel Frequency Cepstrum Coefficients (MFCC) for feature extraction and a suitable combination of vector quantization (VQ) and the Gaussian Mixture Model (GMM), together with the Expectation Maximization (EM) algorithm, for speaker modeling. The use of a Voice Activity Detector (VAD) with a hybrid approach based on Short Time Energy (STE) and statistical modeling of background noise in the pre-processing step of feature extraction yields a better and more robust automatic speaker identification system. Investigation of the Linde-Buzo-Gray (LBG) clustering algorithm for the initialization of the GMM, for estimating the underlying parameters in the EM step, also improved the convergence rate and system performance. A relative index is used as a confidence measure in case of contradiction between the GMM and VQ identification results. Simulation results carried out on the voxforge.org speech database using MATLAB highlight the efficacy of the proposed method compared to earlier work.
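A stripped-down sketch of the GMM branch of such a system is shown below using librosa and scikit-learn; the VAD, VQ, and LBG components of the full multiple classifier system are omitted, and the file paths are placeholders.

```python
import librosa
from sklearn.mixture import GaussianMixture

def train_speaker_model(wav_path, n_components=8):
    """Fit a GMM (EM algorithm) to MFCC features of one speaker's speech.
    wav_path is a placeholder for an enrollment recording."""
    y, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T  # frames x coeffs
    return GaussianMixture(n_components=n_components,
                           covariance_type="diag").fit(mfcc)

def identify(wav_path, models):
    """Closed-set identification: pick the speaker model with the highest
    average log-likelihood for the test utterance."""
    y, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T
    scores = {name: m.score(mfcc) for name, m in models.items()}
    return max(scores, key=scores.get), scores

# Usage with placeholder enrollment and test files:
# models = {"alice": train_speaker_model("alice.wav"),
#           "bob":   train_speaker_model("bob.wav")}
# print(identify("unknown.wav", models))
```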
68
122472
Blueprinting of Normalized Supply Chain Processes: Results in Implementing Normalized Software Systems
Abstract:
With technology evolving every day and with the increase in global competition, industries are always under pressure to be the best. They need to provide good quality products at competitive prices, when and how the customer wants them. In order to achieve this level of service, products and their respective supply chain processes need to be flexible and evolvable; otherwise changes will be extremely expensive and slow, and will have many combinatorial effects. Those combinatorial effects impact the whole organizational structure from a management, financial, documentation, and logistics perspective, and especially from the perspective of the information system, the Enterprise Resource Planning (ERP) system. By applying the normalized systems concept/theory to segments of the supply chain, we believe these effects can be minimized, especially at the time of launching an organization's global software project. The purpose of this paper is to point out that if an organization wants to develop software from scratch or implement an existing ERP software for its business needs, and if its business processes are normalized and modular, then most probably this will yield a normalized and modular software system that can be easily modified when the business evolves. Another important goal of this paper is to increase awareness regarding the design of business processes in a software implementation project. If the blueprints created are normalized, then the software developers and configurators will use those modular blueprints to map them into modular software. This paper only prepares the ground for further studies; the above concept will be supported by going through the steps of developing, configuring, and/or implementing a software system for an organization using two methods: the Software Development Lifecycle method (SDLC) and the Accelerated SAP implementation method (ASAP). Both methods start with the customer requirements, then the blueprinting of its business processes, and finally the mapping of those processes into a software system. Since those requirements and processes are the starting point of the implementation process, normalizing those processes will result in normalized software.
67
24549
BER Estimate of WCDMA Systems with MATLAB Simulation Model
Abstract:
Simulation plays an important role during all phases of the design and engineering of communication systems, from early stages of conceptual design through the various stages of implementation, testing, and fielding of the system. In the present paper, a simulation model has been constructed for the WCDMA system in order to evaluate its performance. This model describes multiuser effects and the calculation of the BER (Bit Error Rate) in 3G mobile systems, using Simulink in MATLAB 7.1. A Gaussian approximation defines the multi-user effect on system performance. The BER has been analyzed by comparing the transmitted and the received data.
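The Gaussian approximation for multi-user interference can be illustrated with the standard closed-form expression for DS-CDMA BER; this Python sketch is a generic textbook formula, not the paper's Simulink model, and the parameter values are hypothetical.

```python
from math import sqrt, erfc

def q(x):
    """Gaussian Q-function."""
    return 0.5 * erfc(x / sqrt(2.0))

def cdma_ber_sga(k_users, proc_gain, ebn0_db):
    """Standard Gaussian approximation of the DS-CDMA bit error rate:
    multi-user interference is treated as additional Gaussian noise,
    BER = Q( [ (K-1)/(3N) + N0/(2Eb) ]^(-1/2) )."""
    ebn0 = 10 ** (ebn0_db / 10.0)
    var = (k_users - 1) / (3.0 * proc_gain) + 1.0 / (2.0 * ebn0)
    return q(1.0 / sqrt(var))

# BER versus number of simultaneous users at Eb/N0 = 8 dB, N = 256 chips.
for k in (1, 8, 16, 32):
    print(f"K={k:2d}: BER = {cdma_ber_sga(k, 256, 8.0):.2e}")
```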
66
29742
Quantification of Dispersion Effects in Arterial Spin Labelling Perfusion MRI
Abstract:
Introduction: Arterial spin labelling (ASL) is an increasingly popular perfusion MRI technique, in which arterial blood water is magnetically labelled in the neck before flowing into the brain, providing a non-invasive measure of cerebral blood flow (CBF). The accuracy of ASL CBF measurements, however, is hampered by dispersion effects; the distortion of the ASL labelled bolus during its transit through the vasculature. In spite of this, the current recommended implementation of ASL – the white paper (Alsop et al., MRM, 73.1 (2015): 102-116) – does not account for dispersion, which leads to the introduction of errors in CBF. Given that the transport time from the labelling region to the tissue – the arterial transit time (ATT) – depends on the region of the brain and the condition of the patient, it is likely that these errors will also vary with the ATT. In this study, various dispersion models are assessed in comparison with the white paper (WP) formula for CBF quantification, enabling the errors introduced by the WP to be quantified. Additionally, this study examines the relationship between the errors associated with the WP and the ATT – and how this is influenced by dispersion. Methods: Data were simulated using the standard model for pseudo-continuous ASL, along with various dispersion models, and then quantified using the formula in the WP. The ATT was varied from 0.5s-1.3s, and the errors associated with noise artefacts were computed in order to define the concept of significant error. The instantaneous slope of the error was also computed as an indicator of the sensitivity of the error with fluctuations in ATT. Finally, a regression analysis was performed to obtain the mean error against ATT. Results: An error of 20.9% was found to be comparable to that introduced by typical measurement noise. The WP formula was shown to introduce errors exceeding 20.9% for ATTs beyond 1.25s even when dispersion effects were ignored. Using a Gaussian dispersion model, a mean error of 16% was introduced by using the WP, and a dispersion threshold of σ=0.6 was determined, beyond which the error was found to increase considerably with ATT. The mean error ranged from 44.5% to 73.5% when other physiologically plausible dispersion models were implemented, and the instantaneous slope varied from 35 to 75 as dispersion levels were varied. Conclusion: It has been shown that the WP quantification formula holds only within an ATT window of 0.5 to 1.25s, and that this window gets narrower as dispersion occurs. Provided that the dispersion levels fall below the threshold evaluated in this study, however, the WP can measure CBF with reasonable accuracy if dispersion is correctly modelled by the Gaussian model. However, substantial errors were observed with other common models for dispersion with dispersion levels similar to those that have been observed in literature.
65
37056
Numerical Determination of Transition of Cup Height between Hydroforming Processes
Abstract:
Various attempts to address the low formability of lightweight materials like aluminium and magnesium alloys are being investigated in many studies. Advanced forming processes such as hydroforming are one of these attempts. In recent decades the sheet hydroforming process has attracted increasing interest, particularly in the automotive and aerospace industries. This process has many advantages, such as enhanced formability, the capability to form complex parts, higher dimensional accuracy and surface quality, reduced tool costs, and reduced die wear compared to conventional sheet metal forming processes. There are two types of sheet hydroforming. One of them is hydromechanical deep drawing (HDD), a special drawing process in which a pressurized fluid medium is used instead of one of the die halves of the conventional deep drawing (CDD) process. The other is sheet hydroforming with die (SHF-D), in which the blank is formed by the action of fluid pressure and takes the shape of the die half. In this study, the transition of cup height as a function of cup diameter between the processes was determined by simulating the processes with finite element analysis. First, the SHF-D process was simulated for a 40 mm cup diameter at different cup heights, changing from 10 mm to 30 mm, and the cup height-to-diameter ratio at which a successful forming is no longer possible was determined. Then the same ratio was checked for a different cup diameter of 60 mm. The thickness distributions of the cups formed by the SHF-D and HDD processes were then compared for the cup heights. Consequently, it was found that the thickness distribution in the HDD process was more uniform in the analyses.
64
43309
A Study on How to Improve PMBOK (Project Management Body of Knowledge) Guidelines Performance by Simulation
Abstract:
Project-oriented organizations are more appropriate for sustainable environments. Any effective project-oriented organization should institutionalize its project management processes in such a manner as to yield the greatest possible profits. The aim of this paper is to study the relationship between the PMBOK guideline (Project Management Body of Knowledge) and simulation technology in project-oriented organizations. The methodology involves five steps for applying these two tools, aimed at enhancing project management processes in the Lorestan Gas Corporation as a project-oriented organization. Results show that the implementation of such a management approach leads to a 5% performance improvement and that using the PMBOK can be instrumental in effective delay management. The implementation of the aforementioned improvement package was effective in improving the efficiency of organizational processes, in terms of optimizing resource utilization, which has manifested itself in reduced resource losses and costs.
63
86905
Nonlinear Modelling of Sloshing Waves and Solitary Waves in Shallow Basins
Abstract:
The earliest theories of sloshing waves and solitary waves, based on potential theory idealisations and irrotational flow, have been extended to be applicable to more realistic domains. To this end, computational fluid dynamics (CFD) methods are widely used. Three-dimensional CFD methods, such as Navier-Stokes solvers with volume-of-fluid treatment of the free surface and Navier-Stokes solvers with mappings of the free surface, inherently impose high computational expense; therefore, considerable effort has gone into developing depth-averaged approaches. Examples of such approaches include the Green–Naghdi (GN) equations. In a Cartesian system, the GN velocity profile depends on the horizontal directions, x and y. The effect of the vertical direction (z) is also taken into consideration by applying a weighting function in the approximation. GN theory considers the effect of vertical acceleration and the consequent non-hydrostatic pressure. Moreover, in GN theory, the flow is rotational. The present study illustrates the application of the GN equations to the propagation of sloshing waves and solitary waves. For this purpose, a GN equations solver is verified against the benchmark tests of Gaussian hump sloshing and solitary wave propagation in shallow basins. Analysis of the free surface sloshing of even harmonic components of an initial Gaussian hump demonstrates that the GN model gives predictions in satisfactory agreement with the linear analytical solutions. Discrepancies between the GN predictions and the linear analytical solutions arise from wave nonlinearities associated with the wave amplitude itself and with wave-wave interactions. Numerically predicted solitary wave propagation indicates that the GN model produces simulations in good agreement with the analytical solution of linearised wave theory. Comparison between the GN numerical prediction and the result from perturbation analysis confirms that the nonlinear interaction between a solitary wave and a solid wall is satisfactorily modelled. Moreover, solitary wave propagation at an angle to the x-axis and the interaction of solitary waves with each other are simulated to validate the developed model.
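For reference, the analytical solitary wave used in such benchmark comparisons is commonly taken as the first-order sech-squared profile; a minimal sketch (assumed form and parameters, not the authors' GN solver):

    import numpy as np

    g, h, a = 9.81, 1.0, 0.1            # gravity, still-water depth, wave amplitude (assumed)
    c = np.sqrt(g * (h + a))            # solitary wave celerity (first-order theory)
    x = np.linspace(-20, 20, 401)

    def eta(x, t):
        # First-order solitary wave free-surface elevation
        return a / np.cosh(np.sqrt(3 * a / (4 * h ** 3)) * (x - c * t)) ** 2

    print(eta(x, t=0.0).max())          # peak elevation equals the amplitude a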
62
9370
Mining Diagnostic Investigation Process
Abstract:
In complex healthcare diagnostic investigation processes, medical practitioners have to focus on ways to standardize their processes in order to deliver high-quality care and optimize time and costs. Process mining techniques can be applied to extract process-related knowledge from data without prior assumptions about the causal and dynamic dependencies in the business domain and its processes. The application of process mining is effective in diagnostic investigation. It is particularly helpful where the evidence favoring a given treatment is not dispositive. In this paper, we applied process mining to discover the important process flow of diagnostic investigation for hepatitis patients. This approach has benefits which can enhance the quality and efficiency of diagnostic investigation processes.
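As a rough illustration of the discovery step (assuming the open-source pm4py library and a hypothetical XES log of hepatitis investigations; not the authors' pipeline):

    import pm4py

    # Hypothetical event log of diagnostic investigations (one trace per patient),
    # with standard case id / activity / timestamp attributes.
    log = pm4py.read_xes("hepatitis_investigations.xes")

    # Directly-follows graph: the dominant flow between investigation steps.
    dfg, start_acts, end_acts = pm4py.discover_dfg(log)

    # Inductive miner: a structured Petri net model of the investigation process.
    net, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)
    pm4py.view_petri_net(net, initial_marking, final_marking)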
61
99714
Diagnosis of Logistics Processes: Bibliometric Review and Analysis
Abstract:
Diagnostic processes have been consolidated as fundamental tools for gaining adequate knowledge of organizations and their processes. Diagnosis is concerned with the interpretation of data, findings and relevant information to determine problems, their causes, or simply the state and behavior of a process, without including a solution to the problems detected. The objective of this work is to identify the stages necessary to diagnose the logistics processes in a metalworking company, based on a literature review across different disciplines. A total of 62 articles were chosen to identify, through bibliometric analysis, the most cited articles as well as the most frequent authors and journals. The results allowed the two fundamental stages in the diagnostic process to be identified: a primary (general) phase based on the subjective judgment of the person who evaluates, and a secondary (specific) phase related to the interpretation of the results, findings or data. In addition, two further phases were identified: one related to the definition of the scope of the actions to be developed, and the other an initial description of what was observed in the process.
60
22563
Environmental Pollution and Treatment Technology
Abstract:
Water pollution is nowadays a serious problem, due to the increasing scarcity of water and to the impact of such pollution on human health. Various techniques are used to deal with water pollution. Among the most widely used are the bacterial bed, activated sludge and lagooning as biological processes, and coagulation-flocculation as a physico-chemical process. These processes are very expensive, and their treatment efficiency decreases as the initial pollutant concentration increases. This is the reason why research has been reoriented towards the use of an adsorption process as an alternative to the traditional processes. In our study, we have attempted to exploit the characteristics of two metallic hydroxides, Al and Fe, to purify water contaminated by two industrial dyes, SBL blue and SRL-150 orange. Results have shown the efficiency of the two materials on the SBL blue dye.
59
79973
Data Modeling and Calibration of In-Line Pultrusion and Laser Ablation Machine Processes
Abstract:
In this work, preliminary results are given for the modeling and calibration of two in-line processes, pultrusion and laser ablation, using machine learning techniques. The end product of the processes is the core of a medical guidewire, manufactured to comply with a user specification of diameter and flexibility. An ensemble approach is followed, which requires training several models. Two state-of-the-art machine learning algorithms are benchmarked: Kernel Recursive Least Squares (KRLS) and Support Vector Regression (SVR). The final objective is to build a precise digital model of the pultrusion and laser ablation processes in order to calibrate the resulting diameter and flexibility of the guidewire, while taking into account the friction on the forming die. The result is an ensemble of models whose output is within a strict required tolerance and which covers the required range of diameter and flexibility of the guidewire end product. The modeling and automatic calibration of complex in-line industrial processes is a key aspect of the Industry 4.0 movement for cyber-physical systems.
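A minimal sketch of the SVR side of such a benchmark, using scikit-learn on synthetic stand-in data (feature names and values are assumptions, not the authors' process data):

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    # Hypothetical process features: [line speed, die temperature, laser power, friction proxy]
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(500, 4))
    diameter = 0.3 + 0.05 * X[:, 0] - 0.03 * X[:, 2] + 0.005 * rng.standard_normal(500)

    X_tr, X_te, y_tr, y_te = train_test_split(X, diameter, random_state=0)
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.001))
    model.fit(X_tr, y_tr)
    print("held-out R^2:", model.score(X_te, y_te))

An ensemble for the real application would train one such model per output (diameter, flexibility) and per operating regime.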
58
32500
Adsorption Tests of Two Industrial Dyes by Hydroxides of Metals
Abstract:
Water pollution is nowadays a serious problem, due to the increasing scarcity of water and to the impact of such pollution on human health. Various techniques are used to deal with water pollution. Among the most widely used are the bacterial bed, activated sludge and lagoons as biological processes, and coagulation-flocculation as a physico-chemical process. These processes are very expensive, and their treatment efficiency decreases as the initial pollutant concentration increases. This is the reason why research has been reoriented towards the use of the adsorption process as an alternative to the traditional processes. In our study, we have attempted to explore the characteristics of hydroxides of Al and Fe to purify water contaminated by two industrial dyes, SBL blue and SRL-150 orange. Results have shown the efficiency of the two materials on the SBL blue dye.
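Adsorption studies of this kind are commonly summarized by fitting an equilibrium isotherm; a small sketch with hypothetical data (not the authors' measurements), fitting the Langmuir model with scipy:

    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical equilibrium data for dye uptake (Ce in mg/L, qe in mg/g)
    ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
    qe = np.array([12.0, 20.0, 30.0, 38.0, 43.0])

    def langmuir(c, q_max, k_l):
        # Langmuir isotherm: monolayer capacity q_max, affinity constant k_l
        return q_max * k_l * c / (1 + k_l * c)

    (q_max, k_l), _ = curve_fit(langmuir, ce, qe, p0=(50.0, 0.05))
    print(f"q_max = {q_max:.1f} mg/g, K_L = {k_l:.3f} L/mg")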
57
87033
The Energy Efficient Water Reuse by Combination of Nano-Filtration and Capacitive Deionization Processes
Abstract:
High-energy-consuming processes such as advanced oxidation and reverse osmosis are commonly used for water reuse. This study aims at developing an energy-efficient reuse process by combining nanofiltration (NF) and capacitive deionization (CDI) processes. Lab-scale experiments were conducted using effluents from a wastewater treatment plant located at Koyang city in Korea. A commercial NF membrane (NE4040-70, Toray Ltd.) and a CDI module (E40, Siontech Inc.) were tested in series. The pollutant removal efficiencies were evaluated on the basis of the Korean water quality criteria for water reuse. In addition, the energy consumptions were also calculated. As a result, the hybrid process showed lower energy consumption than the conventional reverse osmosis process, while its effluent met the Korean standard. Consequently, this study suggests that the hybrid process is feasible for energy-efficient water reuse.
56
17632
The Effect of Measurement Distribution on System Identification and Detection of Behavior of Nonlinearities of Data
Abstract:
In this paper, we considered and applied parametric modeling to experimental data from a dynamical system. We investigated different distributions of the output measurements from several dynamical systems. By processing the variance of the experimental data, we obtained the region of nonlinearity in the data, and identification of the output section was then applied for different situations and data distributions. Finally, the effect of the spread of the measurements, such as the variance, on identification, together with the limitations of this approach, is explained.
55
14553
The Evaluation of the Performance of Different Filtering Approaches in Tracking Problem and the Effect of Noise Variance
Abstract:
The performance of different filtering approaches depends on the modeling of the dynamical system and on the algorithm structure. For modeling and smoothing the data, the evaluation of the posterior distribution in each filtering approach should be chosen carefully. In this paper, different filtering approaches, namely the Kalman filter, EKF, UKF and EKS, together with the RTS smoother, are simulated on several trajectory-tracking problems, and the accuracy and limitations of these approaches are explained. The probability of the model under the different filters is then compared, and finally the effect of the noise variance on the estimation is described with simulation results.
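A minimal constant-velocity Kalman filter sketch (noise levels assumed; the EKF/UKF/RTS variants build on the same predict-update cycle):

    import numpy as np

    dt = 0.1
    F = np.array([[1, dt], [0, 1]])      # constant-velocity state transition
    H = np.array([[1.0, 0.0]])           # position is observed, velocity is not
    Q = 1e-3 * np.eye(2)                 # process noise covariance (assumed)
    R = np.array([[0.25]])               # measurement noise variance (assumed)

    rng = np.random.default_rng(1)
    truth = np.array([0.0, 1.0])
    x, P = np.zeros(2), np.eye(2)        # initial estimate and covariance

    for _ in range(100):
        truth = F @ truth
        z = H @ truth + rng.normal(0, 0.5)      # noisy position measurement
        # predict
        x, P = F @ x, F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P

    print("final position error:", abs(x[0] - truth[0]))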
54
76362
Descriptive Analysis of Variations in Maguindanaon Language
Authors:
Abstract:
People who live in the same region and who seem to speak the same language still vary in some aspects of their language. The variation may occur in terms of pronunciation, lexicon, morphology, and syntax. This qualitative study described the phonological, morphological, and lexical variations of the Maguindanaon language among the ten Maguindanao municipalities. Purposive sampling, in-depth interviews, focus group discussions, and the sorting and classifying of words according to phonological, morphological and lexical structures were employed in the data analysis. The variations occurred through phonemic changes and other phonological processes, as well as morphological processes. The phonological processes consisted of vowel lengthening and deletion, while the morphological processes included affixation, borrowing, and coinage. In the phonological variation, it was observed that there were phonemic changes from one dialect to another. For example, there was a change of the phoneme /r/ to /l/. The phoneme /r/ was most likely to occur in Kabuntalan, as in /biru/, /kurIt/, and /kɘmɅr/, whereas in the rest of the dialects these were /bilu/, /kuIɪt/, and /kɘmɅl/ respectively. Morphologically, affixation was the main way to mark tense. For example, the root sarig (expect), when infixed with im, becomes simarig, i.e. s + im + arig = simarig (expected). Lexical variation also existed in the Maguindanaon language. Results revealed that the variation in phonology, morphology, and lexicon was associated primarily with geographic distribution.
53
10245
Educational Innovation and ICT: Before and during 21st Century
Abstract:
Educational innovation is a quality factor in teaching-learning processes and institutional accreditation. These change processes have been increasing, especially after 2000; however, publications about this topic have been more strongly associated with ICTs in the current century. The main aim of the study was to determine the tendency of educational innovations around ICTs. The method used was a mixed research design (content analysis, review of the scientific literature, and a descriptive, comparative and correlational study) with 649 papers. In summary, the results indicated that educational innovation is progressively associated with ICTs, in comparison with change processes of this type without ICTs. In conclusion, despite this tendency, the scientific literature should disseminate more kinds of pedagogical innovation with the aim of deepening engagement with other new resources.
52
21745
Design of Visual Repository, Constraint and Process Modeling Tool Based on Eclipse Plug-Ins
Abstract:
Master Data Management requires the creation of a central repository, the application of constraints on the repository, and the design of processes to manage data. Designing the repository, the constraints on it and the business processes is a very tedious and time-consuming task for a large enterprise. Hence, visual repository, constraint and process (workflow) modeling is the most critical step in Master Data Management. In this paper, we realize a visual modeling tool for implementing repositories, constraints and processes based on Eclipse plug-ins using GMF/EMF, following the principles of Model Driven Engineering (MDE).
51
126498
Brain Networks and Mathematical Learning Processes of Children
Abstract:
Neurological findings provide foundational results for many different disciplines. In this article, we discuss these with a special focus on mathematics education. The intention is to make neuroscience research useful for the description of cognitive mathematical learning processes. A key issue of mathematics education is that students often behave as if their mathematical knowledge were constructed in isolated compartments tied to the specific context of the original learning situation; supporting students to link these compartments to form a coherent mathematical society of mind is a fundamental task, and not only for mathematics teachers. This aspect goes hand in hand with the question of whether there is such a thing as abstract general mathematical knowledge detached from concrete reality. Educational neuroscience may give answers to the questions of why students develop their mathematical knowledge in isolated subjective domains of experience and whether it is generally possible to think in abstract terms. To address these questions, we provide examples from different fields of mathematics education, e.g., students' development and understanding of the general concept of variables, or the mathematical notion of universal proofs. We discuss these aspects in the light of functional studies which elucidate the role of specific brain regions in mathematical learning processes. In doing so, the paper addresses the concept formation processes of students in the mathematics classroom and how to support them adequately in view of the results of (educational) neuroscience.
50
22868
Adsorption Tests of Two Industrial Dyes by Metallic Hydroxides
Abstract:
Water pollution is nowadays a serious problem, due to the increasing scarcity of water and to the impact of such pollution on human health. Various techniques are used to deal with water pollution. Among the most widely used are the bacterial bed, activated sludge and lagooning as biological processes, and coagulation-flocculation as a physico-chemical process. These processes are very expensive, and their treatment efficiency decreases as the initial pollutant concentration increases. This is the reason why research has been reoriented towards the use of an adsorption process as an alternative to the traditional processes. In our study, we have attempted to exploit the characteristics of two metallic hydroxides, Al and Fe, to purify water contaminated by two industrial dyes, SBL blue and SRL-150 orange. Results have shown the efficiency of the two materials on the SBL blue dye.
49
17529
Non-Universality in Barkhausen Noise Signatures of Thin Iron Films
Abstract:
We discuss changes to the Barkhausen noise signatures of thin epitaxial Fe films as the angle of the applied field is altered. We observe a sub-critical to critical phase transition in the hysteresis loop of the sample upon increasing the out-of-plane component of the applied field. The observations are discussed in the light of simulations of a 2D Gaussian Random Field Ising Model, with reference to a reducible form of the Random Anisotropy Ising Model.
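A minimal Metropolis sketch of a 2D Ising model with quenched Gaussian random fields (illustrative parameters; the authors' simulation details are not reproduced here):

    import numpy as np

    rng = np.random.default_rng(0)
    L, J, sigma_h, T = 32, 1.0, 1.0, 1.5      # lattice size, coupling, disorder, temperature (assumed)
    spins = rng.choice([-1, 1], size=(L, L))
    h = rng.normal(0.0, sigma_h, size=(L, L)) # quenched Gaussian random fields

    def local_field(i, j):
        # Sum of nearest-neighbour spins (periodic boundaries) plus the random field
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        return J * nn + h[i, j]

    for _ in range(50 * L * L):               # Metropolis sweeps
        i, j = rng.integers(L), rng.integers(L)
        dE = 2 * spins[i, j] * local_field(i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

    print("magnetisation per spin:", spins.mean())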
48
63592
Monte Carlo Methods and Statistical Inference of Multitype Branching Processes
Abstract:
A parametric estimation of multitype branching processes (MBPs) with the power series offspring distribution family is considered in this paper. The MLE for the parameters is obtained in the case when the observable data are incomplete and consist only of the generation sizes of the family tree of the MBP. The parameter estimates are calculated using the Monte Carlo EM algorithm. The estimates of the posterior distribution and of the offspring distribution parameters are calculated using the Bayesian approach and the Gibbs sampler. The article presents various examples with bivariate branching processes, together with computational results, simulations and an implementation in R.
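As a single-type analogue of the estimation setting (assumed offspring law; the paper's multitype case generalizes this), the sketch below simulates generation sizes and applies the Harris-type estimator of the offspring mean, which coincides with the MLE for the Poisson family:

    import numpy as np

    rng = np.random.default_rng(42)
    m_true, n_gen = 1.4, 12                  # offspring mean and number of generations (assumed)

    # Simulate generation sizes Z_0, ..., Z_n of a Galton-Watson process
    # with Poisson(m) offspring -- the incomplete data of the abstract.
    Z = [5]
    for _ in range(n_gen):
        Z.append(rng.poisson(m_true, Z[-1]).sum() if Z[-1] > 0 else 0)

    # Harris-type estimator of the offspring mean from generation sizes alone
    m_hat = sum(Z[1:]) / sum(Z[:-1])
    print("generation sizes:", Z, " estimated mean:", round(m_hat, 3))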
47
23718
The Relationship Study between Topological Indices in Contrast with Thermodynamic Properties of Amino Acids
Abstract:
In this study, thermodynamic properties such as specific heat capacity, enthalpy, entropy and Gibbs free energy are computed for 10 different amino acids using the Gaussian software with the DFT method and the 6-311G basis set. Then, topological indices such as the Wiener and Schultz indices are calculated for the mentioned molecules. Finally, the relationship between the thermodynamic properties and the above topological indices is shown, and different curves demonstrate a good correlation between some of the quantum properties and the topological indices. This instructive example is directed at the design of a structure-property model for predicting the thermodynamic properties of the amino acids discussed here.
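A small sketch of the topological index computations on a toy graph (assumed structure, not one of the amino acids; "Schultz" is taken here in its common degree-distance form):

    import networkx as nx

    # Toy hydrogen-suppressed skeleton (a chain with one branch), for illustration only
    G = nx.Graph([(0, 1), (1, 2), (2, 3), (2, 4)])

    # Wiener index: sum of shortest-path distances over all unordered vertex pairs
    print("Wiener:", nx.wiener_index(G))

    # One common form of the Schultz index (degree distance):
    # sum over pairs of (deg(u) + deg(v)) * d(u, v)
    dist = dict(nx.all_pairs_shortest_path_length(G))
    schultz = sum((G.degree(u) + G.degree(v)) * dist[u][v]
                  for u in G for v in G if u < v)
    print("Schultz (degree distance):", schultz)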
46
37526
Detection Characteristics of the Random and Deterministic Signals in Antenna Arrays
Abstract:
In this paper, approaches to incoherent signal detection in multi-element antenna arrays are investigated and modeled. Two types of useful signals with unknown wavefronts were considered: the first is deterministic (a Barker code), and the second is random (Gaussian distributed). The derivation of the sufficient statistics took into account the linearity of the antenna array. The performance characteristics and detection curves were modeled and compared for different useful signal parameters and for different numbers of elements of the antenna array. Under some additional conditions, the results of this research can be applied to digital communication systems.
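A minimal Monte Carlo sketch of incoherent (energy) detection with an unknown wavefront (array size and SNR assumed; the paper's sufficient statistics are more refined):

    import numpy as np

    rng = np.random.default_rng(0)
    n_elem, n_trials, snr_db = 8, 20000, -5.0     # array size, trials, per-element SNR (assumed)
    amp = 10 ** (snr_db / 20)

    # Unknown wavefront: sum |x|^2 over elements and compare against a threshold.
    noise = (rng.standard_normal((n_trials, n_elem))
             + 1j * rng.standard_normal((n_trials, n_elem))) / np.sqrt(2)
    phase = np.exp(2j * np.pi * rng.random((n_trials, n_elem)))   # unknown per-element phase
    h0 = np.sum(np.abs(noise) ** 2, axis=1)                       # noise only
    h1 = np.sum(np.abs(amp * phase + noise) ** 2, axis=1)         # signal present

    thr = np.quantile(h0, 0.99)                   # threshold set for Pfa = 1e-2
    print("Pd at Pfa = 0.01:", (h1 > thr).mean())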
45
21000
Positioning Organisational Culture in Knowledge Management Research
Authors:
Abstract:
This paper proposes a conceptual model for understanding the impact of organisational culture on knowledge management processes and their link with organisational performance. It is suggested that organisational culture should be assessed as a multi-level construct comprising artifacts, espoused beliefs and values, and underlying assumptions. A holistic view of organisational culture and knowledge management processes, and their link with organisational performance, is presented. A comprehensive review of previous literature was undertaken in the development of the conceptual model. Taken together, the literature and the proposed model reveal possible relationships between organisational culture, knowledge management processes, and organisational performance. Potential implications of organisational culture levels for the creation, sharing, and application of knowledge are elaborated. In addition, the paper offers possible new insight into the impact of organisational culture on various knowledge management processes and their link with organisational performance. A number of possible relationships between organisational culture factors, knowledge management processes, and organisational performance were identified for examination. The research model highlights the multi-level components of organisational culture: the artifacts, the espoused beliefs and values, and the underlying assumptions. Through a conceptualisation of the relationships between organisational culture, knowledge management processes, and organisational performance, the study provides practical guidance for practitioners during the implementation of knowledge management processes. The focus of previous research on knowledge management has been on understanding organisational culture from the limited perspective of promoting knowledge creation and sharing. This paper proposes a more comprehensive approach to understanding organisational culture in that it draws on artifacts, espoused beliefs and values, and underlying assumptions, and reveals their impact on the creation, sharing, and application of knowledge, which can affect overall organisational performance.
44
67953
Impacting the Processes of Freight Logistics at Upper Austrian Companies by the Use of Mobility Management
Abstract:
Traffic is induced by companies as a result of their economic behavior. Basically, two different types of traffic occur at company sites: freight traffic and commuting traffic. Because these traffic types are connected to each other in various ways, an integrated approach to managing them is useful. Mobility management is a proven method for companies to handle the traffic processes caused by their business activities. According to recent trend analyses in Austria, freight traffic, as well as individual traffic as part of commuting traffic, will continue to increase. More traffic jams, as well as negative environmental impacts, are expected in the future. Mobility management is a tool to control traffic behavior with the aim of reducing emissions and other negative effects caused by traffic. Until now, mobility management has mainly been used for optimizing commuting traffic without taking freight logistics processes into consideration. However, the method of mobility management can be used to improve the freight traffic of a company as well. The focus of this paper is on analyzing to what extent companies are already using mobility management to influence not only the commuting traffic they produce but also their freight logistics processes. A further objective is to acquire knowledge about the motivating factors which persuade companies to introduce and apply mobility management. Additionally, the advantages and disadvantages of this tool are defined, and limitations and success factors, with a special focus on freight logistics, are depicted. The first step of this paper is a literature review on the issue of mobility management with a special focus on freight logistics processes. To compare the theoretical findings with practice, interviews following a structured interview guideline will be undertaken with mobility managers of different companies in Upper Austria. A qualitative analysis of these surveys will first show the motivation behind using mobility management to improve traffic processes and how far this approach is already being used to influence, in particular, the freight traffic of the companies. One outcome of this publication will be an evaluation of the extent to which the method of mobility management is already being applied at Upper Austrian companies to regulate freight logistics processes. Furthermore, the results of the theoretical and practical analysis will reveal not only the possibilities but also the limitations of using mobility management to influence the processes of freight logistics.
43
10148
Cutting Tools in Finishing Operations for CNC Rapid Manufacturing Processes: Experimental Studies
Abstract:
This paper reports an advanced approach in the application of CNC machining to rapid manufacturing processes (CNC-RM). The aim of this study is to improve the quality of machined parts by introducing different cutting tools during finishing operations. As the cutting is performed in different directions, the surfaces present on a part can be classified into several categories. Therefore, suitable cutting tools are assigned to machine particular surfaces in order to improve the quality. Experimental studies have been carried out by fabricating several parts based on the suggested approach. The results provide further support for implementing this approach in rapid machining processes.
42
25163
A Learning-Based EM Mixture Regression Algorithm
Abstract:
The mixture likelihood approach to clustering is a popular clustering method, in which the expectation-maximization (EM) algorithm is the most widely used mixture likelihood method. In the literature, the EM algorithm has been used for mixture regression models. However, these EM mixture regression algorithms are sensitive to initial values and require an a priori number of clusters. In this paper, to resolve these drawbacks, we construct a learning-based schema for the EM mixture regression algorithm such that it is free of initializations and can automatically obtain an approximately optimal number of clusters. Some numerical examples and comparisons demonstrate the superiority and usefulness of the proposed learning-based EM mixture regression algorithm.
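For concreteness, a compact EM sketch for a two-component mixture of linear regressions (the standard algorithm the paper's learning-based schema builds on; data and initialization are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 300
    x = rng.uniform(-3, 3, n)
    z = rng.random(n) < 0.5                          # latent component labels
    y = np.where(z, 2.0 * x + 1.0, -1.0 * x + 4.0) + 0.3 * rng.standard_normal(n)
    X = np.column_stack([np.ones(n), x])             # design matrix with intercept

    K = 2
    beta = rng.standard_normal((K, 2))               # per-component regression coefficients
    sigma2 = np.ones(K)
    pi = np.full(K, 1.0 / K)                         # mixing proportions

    for _ in range(100):
        # E-step: responsibilities of each component for each point
        dens = np.stack([
            pi[k] * np.exp(-0.5 * (y - X @ beta[k]) ** 2 / sigma2[k])
            / np.sqrt(2 * np.pi * sigma2[k]) for k in range(K)])
        r = dens / dens.sum(axis=0)
        # M-step: weighted least squares per component
        for k in range(K):
            W = r[k]
            XtW = X.T * W
            beta[k] = np.linalg.solve(XtW @ X, XtW @ y)
            resid = y - X @ beta[k]
            sigma2[k] = (W * resid ** 2).sum() / W.sum()
            pi[k] = W.mean()

    print("estimated coefficients:", beta)

The sensitivity to the random starting beta is exactly the drawback the abstract's learning-based schema is designed to remove.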
41
44284
Airborne Molecular Contamination in Clean Room Environment
Authors:
Abstract:
In a clean room environment, molecular contamination at very small concentrations can cause significant harm to components and processes. This is commonly referred to as airborne molecular contamination (AMC). There is a shortage of high-sensitivity continuous measurement data on the existence and behavior of several of these contaminants. Accordingly, in most cases the correlation between the concentration of harmful molecules and their effect on processes is not known. In addition, the formation and distribution of contaminating molecules are unclear. In this work, sensitive optical techniques are applied in clean room facilities for the investigation of the concentrations, forming mechanisms and effects of contaminating molecules. Special emphasis is placed on the reactive acid and base gases ammonia (NH3) and hydrogen fluoride (HF), which are key chemicals in several operations taking place in clean room processes.
40
8577
Application of Discrete-Event Simulation to the Optimization of Business Processes in Trading Companies
Abstract:
The optimization of business processes in trading companies is reviewed in this paper. A business process model of wholesale customer order handling, applicable to small and medium businesses, is presented. It is proposed to apply an algorithm for the automation of customer order processing, which will significantly reduce labor costs and time expenditures and increase the profitability of companies. The optimized business process is an element of the accounting information system of a spare parts trading network. The considered algorithm may find application elsewhere in the trading industry as well.
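A minimal discrete-event sketch of the order handling process using the simpy library (arrival and service rates are assumptions):

    import random
    import simpy

    random.seed(0)

    def order_flow(env, clerks, stats):
        # One wholesale customer order: wait for a clerk, then get processed
        arrived = env.now
        with clerks.request() as req:
            yield req                                        # queue for a free clerk
            yield env.timeout(random.expovariate(1 / 20.0))  # processing, ~20 min mean
        stats.append(env.now - arrived)                      # total handling time

    def generator(env, clerks, stats):
        # Orders arrive roughly every 15 minutes (assumed rate)
        for _ in range(200):
            env.process(order_flow(env, clerks, stats))
            yield env.timeout(random.expovariate(1 / 15.0))

    stats = []
    env = simpy.Environment()
    clerks = simpy.Resource(env, capacity=2)   # try capacity=1 vs 2 to compare staffing
    env.process(generator(env, clerks, stats))
    env.run()
    print(f"mean order handling time: {sum(stats)/len(stats):.1f} min")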
39
130750
Basic One-Dimensional Modelica®-Model for Simulation of Gas-Phase Adsorber Dynamics
Abstract:
Industrial adsorption processes are characterized by a high level of complexity, mainly due to simultaneous heat and mass transfer. The design of such processes often does not take place systematically; instead, scale-up/down or number-up/down methods based on existing systems are used. This paper shows how Modelica® can be used to develop a transient model enabling a more systematic design of such ad- and desorption components and processes. The core of this model is a lumped-element submodel of a single adsorbent grain, in which the thermodynamic equilibria and the kinetics of the ad- and desorption processes are implemented and solved on the basis of mass, momentum and energy balances. For validation of this submodel, a fixed bed adsorber, whose characteristics are described in detail in the literature, was modeled and simulated. The simulation results are in good agreement with the experimental results from the literature. Therefore, the model development will be continued, and the extended model will be applied to further adsorber types such as rotor adsorbers and moving bed adsorbers.
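The lumped-element grain idea can be sketched outside Modelica® as a small ODE system (isothermal simplification with Langmuir equilibrium and linear-driving-force kinetics; all parameters assumed, and the energy balance of the full model is omitted):

    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative parameters for a single adsorbent grain in a closed gas volume
    q_max, b = 5.0, 2.0        # Langmuir capacity (mol/kg) and affinity (m3/mol)
    k_ldf = 0.05               # linear driving force coefficient (1/s)
    m_s, V = 0.001, 1e-3       # grain mass (kg) and gas volume (m3)

    def rhs(t, y):
        c, q = y                               # gas concentration, grain loading
        q_eq = q_max * b * c / (1 + b * c)     # Langmuir equilibrium loading
        dq = k_ldf * (q_eq - q)                # LDF adsorption kinetics
        dc = -m_s / V * dq                     # mass balance on the gas phase
        return [dc, dq]

    sol = solve_ivp(rhs, (0, 600), [1.0, 0.0])
    print("final loading q =", sol.y[1, -1], "mol/kg")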
38
34193
Proposing an Improved Managerial-Based Business Process Framework
Abstract:
Modeling business processes based on BPMN (Business Process Modeling Notation) helps analysts and managers to understand business processes and identify their shortcomings. These models provide a context for making rational decisions about organizing business process activities in an understandable manner. The purpose of this paper is to provide a framework for a better understanding of business processes and their problems by reducing the cognitive load of the displayed information for audiences at different managerial levels, while keeping the essential information they need. For this reason, we integrate business process diagrams across the different managerial levels to develop a framework that improves the performance of business process management (BPM) projects. The proposed framework is entitled 'Business process improvement framework based on managerial levels (BPIML)'. This framework determines the types of business process diagrams (BPDs), based on BPMN, appropriate to the objectives and tasks of the various managerial levels of organizations and their roles in BPM projects, and thereby provides the necessary support for making decisions about business processes. The framework was evaluated with a case study in a real business process improvement project to demonstrate its superiority over the conventional method. A questionnaire consisting of 10 Likert-scale questions was designed and given to the participants (managers at the three managerial levels of Bank Refah Kargaran). The results of the questionnaire indicate that the proposed framework supports correct and timely decisions by increasing the clarity and transparency of the business processes, which leads to success in BPM projects.
37
66878
Stakeholder Management for Successful Software Projects
Authors:
Abstract:
An alarming number of software projects fail to deliver the required functionalities within the provided budget and timeframe and with the required quality. Among the main reasons for this problem are bad stakeholder management, poor communications and informal change management, which in turn stem from informal processes to identify, engage and control stakeholders. Recently, to emphasize its importance, the Project Management Institute (PMI) updated the Project Management Body of Knowledge (PMBoK) to explicitly include the stakeholder management knowledge area. This knowledge area consists of four processes: identify stakeholders, plan stakeholder management, manage stakeholder engagement, and control stakeholder engagement. The use of appropriate techniques for stakeholder management in software projects will lead to higher-quality, more successful software. In this paper, we describe some of the proven techniques that can be used during the execution of these four processes. The development of collaboration tools for automating these processes is recommended, and such tools need to be integrated into available software project management tools.
36
93068
Modeling Spatio-Temporal Variation in Rainfall Using a Hierarchical Bayesian Regression Model
Abstract:
Rainfall is a critical component of climate governing vegetation growth and production, and forage availability and quality for herbivores. However, reliable rainfall measurements are not always available, making it necessary to predict rainfall values for particular locations through time. Predicting rainfall in space and time can be a complex and challenging task, especially where the rain gauge network is sparse and measurements are not recorded consistently for all rain gauges, leading to many missing values. Here, we develop a flexible Bayesian model for predicting rainfall in space and time and apply it to Narok County, situated in southwestern Kenya, using data collected at 23 rain gauges from 1965 to 2015. Narok County encompasses the Maasai Mara ecosystem, the northern-most section of the Mara-Serengeti ecosystem, famous for its diverse and abundant large mammal populations and spectacular migration of enormous herds of wildebeest, zebra and Thomson's gazelle. The model incorporates geographical and meteorological predictor variables, including elevation, distance to Lake Victoria and minimum temperature. We assess the efficiency of the model by comparing it empirically with the established Gaussian process, kriging, simple linear and Bayesian linear models. We use the model to predict total monthly rainfall and its standard error for all 5 × 5 km grid cells in Narok County. Using the Monte Carlo integration method, we estimate seasonal and annual rainfall and their standard errors for 29 sub-regions in Narok. Finally, we use the predicted rainfall to predict large herbivore biomass in the Maasai Mara ecosystem on a 5 × 5 km grid for both the wet and dry seasons. We show that herbivore biomass increases with rainfall in both seasons. The model can handle data from a sparse network of observations with many missing values and performs at least as well as or better than four established and widely used models on the Narok data set. The model produces rainfall predictions consistent with expectation and in good agreement with the blended station and satellite rainfall values. The predictions are precise enough for most practical purposes. The model is very general and applicable to other variables besides rainfall.
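A minimal hierarchical sketch of this kind of model in PyMC, with synthetic stand-in data (gauge-level intercepts plus the three covariates; names, shapes and priors are assumptions, not the authors' specification):

    import numpy as np
    import pymc as pm

    # Synthetic stand-in for the rain gauge data
    rng = np.random.default_rng(0)
    n, n_gauges = 400, 23
    gauge = rng.integers(0, n_gauges, n)
    X = rng.standard_normal((n, 3))            # elevation, distance to lake, min temperature
    y = 2.0 + X @ np.array([0.8, -0.5, 0.3]) + rng.normal(0, 0.4, n)

    with pm.Model() as model:
        # Hierarchical gauge-level intercepts shrink sparse gauges toward the mean,
        # which is what lets the model cope with many missing records.
        mu_a = pm.Normal("mu_a", 0.0, 5.0)
        sigma_a = pm.HalfNormal("sigma_a", 1.0)
        a = pm.Normal("a", mu_a, sigma_a, shape=n_gauges)
        beta = pm.Normal("beta", 0.0, 1.0, shape=3)
        sigma = pm.HalfNormal("sigma", 1.0)
        pm.Normal("rain", a[gauge] + pm.math.dot(X, beta), sigma, observed=y)
        idata = pm.sample(1000, tune=1000, chains=2, random_seed=1)

    print(idata.posterior["beta"].mean(dim=("chain", "draw")).values)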
35
17098
Negative Pressure Waves in Hydraulic Systems
Abstract:
The negative pressure phenomenon appears in many thermodynamic, geophysical and biophysical processes in nature and in technological systems. In more than 100 years of laboratory research, beginning with F. M. Donny's tests, large values of negative pressure have been achieved. But this phenomenon has not been practically applied, remaining only a nice laboratory toy due to the special demands on the purity and homogeneity of the liquids required for its appearance. The possibility of creating a direct wave of negative pressure in real heterogeneous liquid systems was confirmed experimentally under certain kinetic and hydraulic conditions. Negative pressure can be considered a factor of both useful and destructive energy. The new approach to generating negative pressure waves in impure, unclean fluids has allowed the creation of fundamentally new energy-saving technologies and installations that increase the effectiveness and efficiency of different production processes. It was shown that negative pressure is one of the main factors causing serious problems in some technological and natural processes. The results obtained emphasize the necessity of taking into account the role of negative pressure as an energy factor in the evaluation of many transient thermohydrodynamic processes in nature and in production systems.
34
67608
Methodology for the Selection of Chemical Textile Products
Abstract:
The development of new processes in the textile industry entails designing methodologies for selecting adequate supplies that fit the requirements of these new processes. This paper presents a methodology for selecting chemicals that fulfil the technical specifications of a new process. The proposed methodology involves three major phases: (1) data collection on chemical products, (2) qualitative pre-selection, and (3) laboratory tests. We have applied this methodology to the selection of a binder which forms a protective film above the textile fibers and bonds them. Our findings were that five possible products can be used in our new process: Arkofil, Elvanol, Size plus A, Size plus AC and Starch. This new methodology involves both qualitative and experimental variables and can be used to select supplies for new textile processes.
33
67316
A Case from China on the Situation of Knowledge Management in Government
Authors:
Abstract:
Organizational scholars have paid enormous attention to how local governments have managed their knowledge during the past two decades. Government knowledge management (KM) research recognizes that the management of knowledge flows and networks is critical to reforms of government service efficiency and administrative effectiveness. When dealing with complex affairs, the limitations resulting from a lack of KM concepts, processes and technologies among all the involved organizations begin to be exposed and further compound the difficulty of processing the affair. As a result, the challenges for individual or group knowledge sharing, knowledge discovery and organizational collaboration in government activities are diverse and immense. This analysis presents the recent situation of government KM in China, drawing on a total of more than 300 questionnaires, and highlights important challenges that remain. The causes of the lapses in KM processes within and across the government agencies are discussed.
32
20234
Parallel Fuzzy Rough Support Vector Machine for Data Classification in Cloud Environment
Abstract:
Classification of data has been actively used as one of the most effective and efficient means of conveying knowledge and information to users. The emphasis has always been on techniques for extracting useful knowledge from data such that the returns are maximized. With the emergence of huge datasets, existing classification techniques often fail to produce desirable results. The challenge lies in analyzing and understanding the characteristics of massive data sets by retrieving useful geometric and statistical patterns. We propose a supervised parallel fuzzy rough support vector machine (PFRSVM) for data classification in a cloud environment. The classification is performed by PFRSVM using a hyperbolic tangent kernel. The fuzzy rough set model takes care of the sensitiveness of noisy samples and handles impreciseness in the training samples, bringing robustness to the results. The membership function is a function of the center and radius of each class in feature space and is represented with the kernel. It plays an important role in sampling the decision surface. The success of PFRSVM is governed by choosing appropriate parameter values. The training samples are either linearly or nonlinearly separable. The different input points make unique contributions to the decision surface. The algorithm is parallelized with a view to reducing training times. The system is built on a support vector machine library using the Hadoop implementation of MapReduce. The algorithm is tested on large data sets to check its feasibility and convergence. The performance of the classifier is also assessed in terms of the number of support vectors. The challenges encountered in implementing big data classification in machine learning frameworks are also discussed. The experiments were done on the cloud environment available at the University of Technology and Management, India. The results are illustrated for Gaussian RBF and Bayesian kernels. The effect of variability in prediction and generalization of PFRSVM is examined with respect to values of the parameter C. It effectively resolves outlier effects and imbalance and overlapping class problems, generalizes to unseen data, and relaxes the dependency between features and labels. The average classification accuracy for PFRSVM is better than that of other classifiers for both Gaussian RBF and Bayesian kernels. The experimental results on both synthetic and real data sets clearly demonstrate the superiority of the proposed technique.
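A rough sketch of the weighting idea with scikit-learn (per-sample weights stand in for fuzzy memberships; this is not the authors' PFRSVM, which additionally uses rough approximations and a hyperbolic tangent kernel):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=1000, weights=[0.8, 0.2], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Crude membership proxy: down-weight points far from their class centroid,
    # mimicking how a fuzzy membership function softens the influence of noisy samples.
    w = np.ones(len(y_tr))
    for c in np.unique(y_tr):
        idx = y_tr == c
        d = np.linalg.norm(X_tr[idx] - X_tr[idx].mean(axis=0), axis=1)
        w[idx] = 1.0 / (1.0 + d / d.mean())

    clf = SVC(kernel="rbf", C=10.0, class_weight="balanced")
    clf.fit(X_tr, y_tr, sample_weight=w)
    print("accuracy:", clf.score(X_te, y_te), "support vectors:", clf.n_support_)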
31
55360
Potential of Aerodynamic Feature on Monitoring Multilayer Rough Surfaces
Abstract:
In order to assess water availability in the soil, it is crucial to have information about the distributed soil moisture content; this parameter helps in understanding the effect of humidity on the exchange between soil, plant cover and atmosphere, in addition to fully understanding surface processes and the hydrological cycle. On the other hand, the aerodynamic roughness length is a surface parameter that scales the vertical profile of the horizontal component of the wind speed and characterizes the surface's ability to absorb the momentum of the airflow. In numerous applications in surface hydrology and meteorology, the aerodynamic roughness length is an important parameter for estimating the momentum, heat and mass exchange between the soil surface and the atmosphere. It is important, in this respect, to consider the impact of atmospheric factors in general, and natural erosion in particular, in the process of soil evolution, its characterization, and the prediction of its physical parameters. The study of wind-induced movements over vegetated soil surfaces, whether spaced plants or plant cover, is motivated by significant research efforts in agronomy and biology. A major known problem in this area concerns crop damage by wind, which is a booming field of research. Obviously, most models of the soil surface require information about the aerodynamic roughness length and its temporal and spatial variability. We have used a bi-dimensional multi-scale (2D MLS) roughness description, in which the surface is considered as a superposition of a finite number of one-dimensional Gaussian processes, each with its own spatial scale, using the wavelet transform and the Mallat algorithm to describe natural surface roughness. We have introduced the multi-layer aspect of the humidity of the soil surface in order to take a volume component into account in the problem of radar signal backscattering. As humidity increases, the dielectric constant of the soil-water mixture increases, and this change is detected by microwave sensors. Nevertheless, many existing models in the field of radar imagery cannot be applied directly to areas covered with vegetation due to the vegetation's backscattering. Thus, the radar response corresponds to the combined signature of the vegetation layer and the underlying soil surface. Therefore, the key issue in the numerical estimation of soil moisture is to separate the two contributions and calculate the scattering behaviors of the two layers by defining the scattering of the vegetation and of the soil below. This paper presents a synergistic methodology for estimating roughness and soil moisture from C-band radar measurements. The methodology employs a microwave/optical model to calculate the scattering behavior of the vegetation-covered area by defining the scattering of the vegetation and the soil below.
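The multi-scale split itself can be sketched with a Mallat-style wavelet decomposition of a 1-D profile (synthetic data, and a 1-D stand-in for the paper's two-dimensional description):

    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    profile = np.cumsum(rng.standard_normal(1024)) * 0.01   # synthetic 1-D surface profile (m)

    # Mallat-style multiresolution split: each detail level is one "scale"
    # of the multi-scale roughness description mentioned above.
    coeffs = pywt.wavedec(profile, "db2", level=5)
    for lvl, d in enumerate(coeffs[1:], start=1):
        print(f"detail level {lvl}: rms roughness contribution {np.sqrt(np.mean(d**2)):.4f}")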
30
111054
The Relationship between Knowledge Management Processes and Strategic Thinking at the Organization Level
Abstract:
The role of knowledge management processes in achieving the strategic goals of organizations is crucial. To this end, the relationship between knowledge management processes and the different aspects of strategic thinking (followed by long-term organizational planning) should be understood. This research examines the relationship between each of the five knowledge management processes (creation, storage, transfer, audit, and deployment) and each dimension of strategic thinking (vision, creativity, thinking, communication and analysis) in one of the major sectors of the food industry in Iran. In this research, knowledge management and its dimensions (knowledge acquisition, knowledge storage, knowledge transfer, knowledge auditing, and finally knowledge utilization) are considered as the independent variables, and strategic thinking and its dimensions (creativity, systematic thinking, vision, strategic analysis, and strategic communication) are considered as the dependent variable. The statistical population of this study consisted of 245 managers and employees of the Minoo Food Industrial Group in Tehran. A simple random sampling method was used, and data were collected by a questionnaire designed by the research team. Data were analyzed using SPSS 21 software; LISREL software was used for calculating and drawing the models and graphs. Among the factors investigated in the present study, knowledge storage, with a coefficient of 0.78, had the greatest effect, and knowledge transfer, with 0.62, had the least effect on knowledge management and thus on strategic thinking.
29
5005
Optimal Control of Volterra Integro-Differential Systems Based on Legendre Wavelets and Collocation Method
Abstract:
In this paper, the numerical solution of the optimal control problem (OCP) for systems governed by Volterra integro-differential (VID) equations is considered. The method is developed by means of the Legendre wavelet approximation and the collocation method. The properties of Legendre wavelets, together with the Gaussian integration method, are utilized to reduce the problem to the solution of a nonlinear programming one. Some numerical examples are given to confirm the accuracy and ease of implementation of the method.
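The Gaussian (Gauss-Legendre) integration step can be sketched directly with NumPy; n nodes integrate polynomials of degree up to 2n - 1 exactly, which is what makes the reduction to a finite nonlinear program accurate for smooth integrands (the example integrand is assumed):

    import numpy as np

    # Gauss-Legendre nodes and weights on the reference interval [-1, 1]
    nodes, weights = np.polynomial.legendre.leggauss(8)

    f = lambda t: np.exp(-t) * np.cos(t)          # example integrand
    approx = np.sum(weights * f(nodes))

    # Map to a general interval [a, b] with a linear change of variables
    a, b = 0.0, 2.0
    approx_ab = (b - a) / 2 * np.sum(weights * f((b - a) / 2 * nodes + (a + b) / 2))
    print(approx, approx_ab)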
28
111625
Methodical Approach for the Integration of a Digital Factory Twin into the Industry 4.0 Processes
Authors:
Abstract:
Flexibility and adaptability in factory planning are currently oriented toward the machine and process level; factory buildings are not the focus of current research. Factory planning has the task of designing the products, plants, processes, organization, areas, and construction of a factory. The adaptability of a factory can be divided into three types: spatial, organizational and technical adaptability. Spatial adaptability indicates the ability to expand and reduce the size of a factory. Here, the area-related breathing capacity plays the essential role. It mainly concerns the factory site, the plant layout and the production layout. The organizational ability to change enables the change and adaptation of organizational structures and processes. This includes the structural and process organization as well as logistical processes and principles. New and reconfigurable operating resources, processes and factory buildings are referred to as technical adaptability. These three types of adaptability can be regarded independently of each other as undirected potentials of different characteristics. If there is a need for change, the types of changeability in the change process are combined to form a directed, complementary variable that makes change possible. When planning adaptability, importance must be attached to a balance between the types of adaptability. The vision of the intelligent factory building and the 'Internet of Things' presupposes the comprehensive digitalization of the spatial and technical environment. Through connectivity, the factory building must be empowered to support a company's value creation process by providing media such as light, electricity, heat, refrigeration, etc. In the future, communication with the surrounding factory building will take place on a digital or automated basis. In the area of Industry 4.0, the function of the building envelope belongs to secondary or even tertiary processes, but these processes must also be included in the communication cycle. An integrative view of continuous communication between primary, secondary and tertiary processes is not yet available and is being developed with the aid of methods in this research work. A comparison of the digital twin from the point of view of production and from that of the factory building will be developed. Subsequently, a tool will be elaborated to classify digital twins from the perspective of data, degree of visualization, and the trades. Thus, a contribution is made to better integrating the secondary and tertiary processes of a factory into value creation.
27
131006
Object-Centric Process Mining Using Process Cubes
Abstract:
Process mining provides ways to analyze business processes. Common process mining techniques consider the process as a whole. However, in real-life business processes, different behaviors exist that make the overall process too complex to interpret. Process comparison is a branch of process mining that isolates different behaviors of the process from each other by using process cubes. Process cubes organize event data using different dimensions. Each cell contains a set of events that can be used as an input to apply process mining techniques. Existing work on process cubes assumes a single case notion. However, in real processes, several case notions (e.g., order, item, package, etc.) are intertwined. Object-centric process mining is a new branch of process mining addressing multiple case notions in a process. To build a bridge between object-centric process mining and process comparison, we propose a process cube framework which supports process cube operations such as slice and dice on object-centric event logs. To facilitate the comparison, the framework is integrated with several object-centric process discovery approaches.
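A toy slice-and-dice over an object-centric log can be sketched with pandas (hypothetical log; real frameworks operate on OCEL data):

    import pandas as pd

    # Hypothetical object-centric event log: each event may relate to several
    # case notions (order, item, package) at once.
    log = pd.DataFrame({
        "activity":  ["create order", "pick item", "pick item", "pack", "ship"],
        "timestamp": pd.to_datetime(["2023-01-01 09:00", "2023-01-01 10:00",
                                     "2023-01-01 10:05", "2023-01-02 08:00",
                                     "2023-01-03 12:00"]),
        "order":   ["o1", "o1", "o1", "o1", "o1"],
        "item":    [None, "i1", "i2", None, None],
        "package": [None, None, None, "p1", "p1"],
    })

    # Dice: organise events into cube cells, here by month (add columns for more dimensions)
    log["month"] = log["timestamp"].dt.to_period("M")
    cells = log.groupby("month")
    print(cells.size())

    # Slice: fix one dimension value and hand the cell to a discovery algorithm
    cell = log[log["order"] == "o1"]
    print(cell["activity"].tolist())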
26
3403
Productivity and Structural Design of Manufacturing Systems
Abstract:
The productivity of manufacturing systems depends on the technological processes, the technical data of the machines, and the structure of the systems. The technology is represented by the machining modes and data; the technical data comprise reliability parameters and auxiliary times for discrete production processes. The term 'structure of manufacturing systems' covers the number of serial and parallel production machines and the links between them. The structures of manufacturing systems depend on the complexity of the technological processes. Mathematical models of the productivity rate for manufacturing systems are important attributes that enable the best structure to be defined by the criterion of productivity rate. These models are an important tool in evaluating the economic efficiency of production systems.
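As a toy illustration only (an assumed simplification, not the paper's models), a serial line's productivity rate can be written as availability over cycle time:

    # Toy productivity-rate model: q serial stations split the machining time t_m,
    # each station adds auxiliary time t_a, and availability discounts for failures.
    def productivity_rate(t_m, t_a, q, mttr=0.0, mtbf=float("inf")):
        cycle = t_m / q + t_a                    # cycle time of the bottleneck station
        availability = mtbf / (mtbf + mttr) if mtbf != float("inf") else 1.0
        return availability / cycle              # parts per unit time

    # More serial stations shorten the cycle, but in a real line each station
    # would also add failure sources -- the trade-off behind structure selection.
    for q in (1, 2, 4):
        print(q, "stations:", round(productivity_rate(60.0, 5.0, q), 4), "parts/s")
    print("with downtime:", round(productivity_rate(60.0, 5.0, 2, mttr=2.0, mtbf=50.0), 4))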
25
30649
Ubiquitous Collaborative Learning Activities with Virtual Teams Using CPS Processes to Develop Creative Thinking and Collaboration Skills
Abstract:
This study is research and development intended to: 1) design ubiquitous collaborative learning activities with virtual teams using CPS processes to develop creative thinking and collaboration skills, and 2) assess the suitability of the ubiquitous collaborative learning activities. Its methods are divided into two phases. Phase 1 is the design of the ubiquitous collaborative learning activities with virtual teams using CPS processes; phase 2 is the assessment of the suitability of the learning activities. The sample used in this study consists of 5 professionals in the fields of learning activity design, ubiquitous learning, information technology, creative thinking, and collaboration skills. The results showed that ubiquitous collaborative learning activities with virtual teams using CPS processes to develop creative thinking and collaboration skills consist of 3 main steps: 1) preparation before learning, 2) processing of the learning activities, and 3) performance appraisal. The professionals' assessment rated the suitability of the learning activities at the highest level.
24
14836
River Bank Erosion Studies: A Review on Investigation Approaches and Governing Factors
Abstract:
This paper provides a detailed review of river bank erosion studies with respect to their processes, methods of measurement, and the factors governing river bank erosion. Bank erosion processes are commonly associated with the initiation and development of river changes, through width adjustment and planform evolution. Bank erosion consists of two main types of processes: basal erosion due to fluvial hydraulic forces, and bank failure under the influence of gravity. Most studies have focused on only one factor rather than integrating both. Evidence from previous works has shown the interaction between the two processes of fluvial hydraulic force and bank failure. Bank failure is often treated as a probabilistic phenomenon, without considering the physical characteristics and geotechnical aspects of the bank. This review summarizes the findings of previous investigators with respect to measurement techniques and predicted rates of river bank erosion from field investigations, physical models and numerical model approaches. Factors governing river bank erosion, considering the physical characteristics of fluvial erosion, are defined.
23
17216
Advanced Simulation of Power Consumption of Electric Vehicles
Abstract:
Electric vehicles are among the most complicated electric devices to simulate due to the significant number of different processes involved in their electrical structure. There are concurrent processes of energy consumption and generation in the different onboard systems, which makes simulation tasks more complicated to perform. More accurate simulation of energy consumption can provide a better understanding of overall energy management for electric transport. As a result of all those processes, electric transport can enable a more sustainable future and become more convenient with respect to distance range and recharging time. This paper discusses the problems of energy consumption simulation for electric vehicles using different software packages and provides ideas on how to make this process more precise, which can help engineers create better energy management strategies for electric vehicles.
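A minimal longitudinal-dynamics sketch of battery energy over a speed profile (the standard road-load equation; all vehicle parameters and the regeneration efficiency are assumptions):

    import numpy as np

    m, g = 1600.0, 9.81              # vehicle mass (kg), gravity
    c_rr, c_d, A = 0.01, 0.28, 2.3   # rolling resistance, drag coefficient, frontal area (m2)
    rho, eta = 1.2, 0.9              # air density (kg/m3), drivetrain efficiency

    def battery_power(v, a):
        # Tractive power at the battery for speed v (m/s) and acceleration a (m/s2);
        # braking power is recovered at a reduced (assumed) regeneration efficiency.
        force = m * a + m * g * c_rr + 0.5 * rho * c_d * A * v ** 2
        p_wheel = force * v
        return p_wheel / eta if p_wheel >= 0 else p_wheel * 0.6

    # Energy over a simple trapezoidal speed profile, integrated at 1 s steps
    t = np.arange(0, 300)
    v = np.clip(np.minimum(t, 300 - t) * 0.3, 0, 25.0)
    a = np.gradient(v.astype(float))
    power = np.array([battery_power(vi, ai) for vi, ai in zip(v, a)])
    print(f"trip energy: {power.sum() / 3.6e6:.2f} kWh")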
22
91441
The Grinding Influence on the Strength of Fan-Out Wafer-Level Packages
Abstract:
To build a thin fan-out wafer-level package, the package has to be ground to a thin level. In this work, the influence of the grinding processes on the strength of fan-out wafer-level packages was investigated. After different grinding processes, all specimens were placed on a three-point-bending fixture installed on a universal tester, and the strength of the fan-out wafer-level packages was measured. The experiments revealed that the average flexure strength increased with decreasing surface roughness height of the tested packages. The grinding processes thus had a significant influence on the strength of the fan-out wafer-level packages investigated.
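For reference, three-point-bending flexure strength is conventionally computed as sigma = 3FL / (2bd^2); a one-line sketch with assumed coupon dimensions (not the paper's measurements):

    def flexure_strength(force_n, span_mm, width_mm, thickness_mm):
        # Three-point-bending flexure strength sigma = 3FL / (2 b d^2), in MPa
        return 3 * force_n * span_mm / (2 * width_mm * thickness_mm ** 2)

    # Illustrative values for a thin ground package coupon
    print(f"{flexure_strength(12.0, 40.0, 5.0, 0.3):.0f} MPa")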
21
14039
Levy Model for Commodity Pricing
Abstract:
The aim of the present paper is to construct an affordable and reliable commodity price model, based on a recalculation of cost through time, which makes it possible to visualize the potential risks and thus take more appropriate decisions regarding forecasts. Attention has been focused here on the Levy model, which is more reliable and realistic than the classical Gaussian one, as it takes into consideration the abrupt jumps observed in cases of sudden price variation. In an application to the energy trading sector, where it had never been used before, the equations corresponding to the Levy model were written for electricity pricing in the European market. Parameters were set in order to predict and simulate the price and its evolution through time to remarkable accuracy. As predicted by the Levy model, the results show significant spikes which reach unconventional levels, contrary to the currently used Brownian model.
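A minimal Monte Carlo sketch of a jump-diffusion of the Merton type, one common Levy model (all market parameters are assumptions, not calibrated values):

    import numpy as np

    rng = np.random.default_rng(0)
    s0, mu, sigma = 50.0, 0.05, 0.3      # initial price, drift, diffusion volatility
    lam, mu_j, sig_j = 4.0, 0.0, 0.25    # jump intensity (per year) and jump-size law
    T, n_steps, n_paths = 1.0, 252, 2000
    dt = T / n_steps

    # Brownian motion plus compound-Poisson log-normal jumps; the drift is
    # compensated so the jumps do not bias the expected growth rate.
    k = np.exp(mu_j + 0.5 * sig_j ** 2) - 1
    log_s = np.full(n_paths, np.log(s0))
    for _ in range(n_steps):
        jumps = rng.poisson(lam * dt, n_paths)
        jump_sizes = rng.normal(mu_j * jumps, sig_j * np.sqrt(jumps))
        log_s += ((mu - lam * k - 0.5 * sigma ** 2) * dt
                  + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
                  + jump_sizes)

    s_T = np.exp(log_s)
    print("mean terminal price:", s_T.mean(), " 99th percentile:", np.percentile(s_T, 99))

The jump term is what produces the spike behaviour that a pure Brownian model cannot reproduce.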
20
92371
Spatial Scale of Clustering of Residential Burglary and Its Dependence on Temporal Scale
Abstract:
Research has long focused on two main spatial aspects of crime: spatial patterns and spatial processes. When analyzing these patterns and processes, a key issue has been to determine the proper spatial scale. In addition, it is important to consider the possibility that these patterns and processes might differ appreciably for different temporal scales and might vary across geographic units of analysis. We examine the spatial-temporal dependence of residential burglary. This dependence is tested at varying geographical scales and temporal aggregations. The analyses are based on recorded incidents of crime in Columbus, Ohio during the 1994-2002 period. We implement point pattern analysis on the crime points using Ripley’s K function. The results indicate that spatial point patterns of residential burglary reveal spatial scales of clustering relatively larger than the average size of census tracts of the study area. Also, spatial scale is independent of temporal scale. The results of our analyses concerning the geographic scale of spatial patterns and processes can inform the development of effective policies for crime control.
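A naive sketch of the K-function estimate (no edge correction, synthetic points; production analyses use corrected estimators), compared against the complete-spatial-randomness benchmark K(r) = pi r^2:

    import numpy as np

    rng = np.random.default_rng(0)
    n, area = 500, 1.0
    pts = rng.random((n, 2))                     # synthetic events on the unit square

    def ripley_k(points, r, area):
        # Naive Ripley's K estimate (one common normalization, no edge correction)
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)              # exclude self-pairs
        m = len(points)
        return area * (d <= r).sum() / (m * (m - 1))

    for r in (0.05, 0.1, 0.2):
        print(f"r={r}: K={ripley_k(pts, r, area):.4f} vs CSR {np.pi * r * r:.4f}")

Values above the CSR benchmark at a given r indicate clustering at that spatial scale.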
19
78734
Ideological Stance in Political Discourse: A Transitivity Analysis of Nawaz Sharif's Address at 71st UN Assembly
Authors:
Abstract:
The present study uses Halliday’s transitivity model to analyze and interpret ideological stance in PM Nawaz Sharif’s political discourse. His famous speech at the 71st UN assembly was analyzed qualitatively using a clausal analysis approach to investigate the communicative functions of the linguistic choices made in the address. The study discovers that among the six process types under the transitivity model, material, relational and mental processes appear most frequently in the speech, making up almost 86% of the whole. Verbal processes rank fourth, whereas existential and behavioral processes are the least frequent, covering only 2 and 1 percent respectively. The dominant use of material processes suggests that Nawaz Sharif and his government are the main actors working on several concrete projects, producing a sense of developmental progression and continuity. Through relational and mental processes, the PM establishes proximity with the masses, and especially with Kashmiris, while giving guarantees and promises. The linguistic analysis identifies the Kashmir dispute as the central theme of the address, since it covers more than half of the discourse. The address calls for strong action instead of formal assurances and wishful thinking. The study establishes that language structures can carry connotations and ideologies that are not overt to readers, affirming the supposition that language form performs a communicative function and is not merely fortuitous.
18
12972
Applying a Noise Reduction Method to Reveal Chaos in the River Flow Time Series
Abstract:
Chaotic analysis was performed on river flow time series before and after applying wavelet-based de-noising techniques, in order to investigate the effect of noise content on the chaotic nature of the flow series. In this study, 38 years of monthly runoff data from three gauging stations were used. The gauging stations were located in the Ghar-e-Aghaj river basin, Fars province, Iran. The noise level of the time series was estimated with the aid of a Gaussian kernel algorithm. This step was found to be crucial in preventing the removal of vital information, such as memory, correlation and trend, from the time series along with the noise during the de-noising process.
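The paper estimates the noise level with a Gaussian kernel algorithm; as a rough illustration of the wavelet de-noising step itself, the sketch below uses PyWavelets with a common MAD-based universal threshold instead (a substitution, not the authors' exact procedure), on a synthetic monthly runoff series.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(2)
t = np.arange(456)                             # ~38 years of monthly values
flow = 50 + 20 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 5, t.size)

coeffs = pywt.wavedec(flow, 'db4', level=3)
# MAD-based noise estimate from the finest detail coefficients.
sigma = np.median(np.abs(coeffs[-1])) / 0.6745
thr = sigma * np.sqrt(2 * np.log(flow.size))   # universal threshold
coeffs[1:] = [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, 'db4')
print(f"estimated noise std: {sigma:.2f}")
```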
17
54571
An Ontology Model for Systems Engineering Derived from ISO/IEC/IEEE 15288: 2015: Systems and Software Engineering - System Life Cycle Processes
Abstract:
ISO/IEC/IEEE 15288:2015, Systems and Software Engineering - System Life Cycle Processes, is an international standard that provides generic top-level process descriptions to support systems engineering (SE). However, the processes defined in the standard need improvement in integrity and consistency. The goal of this research is to address this by building an ontology model for the SE standard to manage SE knowledge. The ontology model gives a whole picture of the SE knowledge domain by building connections between SE concepts. Moreover, it creates a hierarchical classification of the concepts to fulfil different requirements for displaying and analysing SE knowledge.
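A minimal sketch of how such a hierarchical concept classification might be expressed as an ontology with rdflib; the namespace and class names are hypothetical illustrations, not the paper's actual model.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

# Hypothetical namespace and class names, purely for illustration.
SE = Namespace("http://example.org/se15288#")
g = Graph()
g.bind("se", SE)

# A hierarchical classification of process concepts with connections.
g.add((SE.Process, RDF.type, RDFS.Class))
g.add((SE.TechnicalProcess, RDFS.subClassOf, SE.Process))
g.add((SE.StakeholderNeedsDefinition, RDFS.subClassOf, SE.TechnicalProcess))
g.add((SE.StakeholderNeedsDefinition, RDFS.label,
       Literal("Stakeholder needs and requirements definition process")))

print(g.serialize(format="turtle"))
```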
16
42492
Removal of Oxytetracycline Using Sonophotocatalysis: Parametric Study
Abstract:
Water treatment, and pharmaceutical pollutants in particular, is an important contemporary problem. Degradation of oxytetracycline (OTC) was carried out using a combined process of low-frequency ultrasound (US), ultraviolet (UV) irradiation and a catalyst. The effectiveness of the coupled processes was evaluated by studying the effects of various operating parameters, including initial OTC concentration, solution pH and catalyst mass. For the photolysis process, a monochromatic ultraviolet wavelength of 365 nm was used. The sonolysis experiments were performed with ultrasound at a frequency of 40 kHz. The heterogeneous photocatalysis was studied in the presence of TiO2. The processes were employed individually and simultaneously to examine their details and to investigate the contribution of each process. Low UV intensity (12 W), low pH and high TiO2 mass enhanced the sono-photocatalytic degradation of OTC. The results showed that the individual contributions of the sonochemical and photochemical reactions are very low; however, their coupling increases the degradation rate by a factor of 8 compared to photolysis and a factor of 2 compared to sonolysis. There is a synergistic effect between the two modes of irradiation, UV and US, leading to a degradation yield of 82.04%. An application of these combined processes to the treatment of a real pharmaceutical wastewater was also examined.
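The synergy between coupled processes is often quantified by comparing the rate constant of the combined process with the sum of the individual ones, assuming pseudo-first-order kinetics. A back-of-envelope sketch with illustrative rate constants, not the paper's data:

```python
# Illustrative pseudo-first-order rate constants (min^-1), not measured data.
k_uv, k_us, k_combined = 0.002, 0.008, 0.016

print(f"rate vs photolysis: {k_combined / k_uv:.0f}x")        # 8x
print(f"rate vs sonolysis:  {k_combined / k_us:.0f}x")        # 2x
synergy = k_combined / (k_uv + k_us)
print(f"synergy index: {synergy:.1f} (>1 indicates synergy)")  # 1.6
```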
15
11373
Compare Hot Forming and Cold Forming in Rolling Process
Abstract:
In metalworking, rolling is a metal forming process in which metal stock is passed through a pair of rolls. Rolling is classified according to the temperature of the metal rolled. If the temperature of the metal is above its recrystallization temperature, the process is termed hot rolling; if it is below its recrystallization temperature, the process is termed cold rolling. In terms of usage, hot rolling processes more tonnage than any other manufacturing process, and cold rolling processes the most tonnage of all cold working processes. This article also describes the use of advanced tubing inspection NDT methods for boiler and heat exchanger equipment in the petrochemical industry to supplement major turnaround inspections. The methods presented include remote field eddy current, magnetic flux leakage, internal rotary inspection system and eddy current.
14
52244
Compounding and Blending in English and Hausa Languages
Abstract:
Words are the basic building blocks of a language. In everyday usage of a language, words are used, and new words are formed and reformed, in order to accommodate all entities, phenomena, qualities and every aspect of human life. This research study seeks to examine and compare some of the word formation processes and how they are used to form new words in the English and Hausa languages. The study focuses on blending and compounding as word formation processes and on how these processes are used in the formation of words in both English and Hausa. The research aims to find out how compounding and blending are used as processes of word formation in these two languages, and to investigate the word formation processes involved in compounding and blending and the nature of the words that are formed. The research therefore addresses the following questions: What types of compound and blended forms are found, and how are they formed, in the English and Hausa languages? How do these compounded and blended forms function in both English and Hausa in different contexts, such as phrase and sentence structures? Findings of the study reveal that there exist new kinds of blended words in Hausa and English which previous studies did not reveal or explain in detail. Similarly, there are many similarities in the way blends and compounds are formed in the two languages; however, the available data show that blends are more numerous in Hausa than in English. The data of this study were gathered from discourse found in newspapers, articles, novels, and the written literature of the Hausa and English languages.
13
66711
Investigating Transformative Processes through Personal, Social, Professional and Educational Development of Adult Graduates in Second Chance Schools in Greece: A Quantitative and Qualitative Survey throughout the Country
Abstract:
The object of this research is to explore the views of graduates of Greek Second Chance Schools (SCS) regarding their personal, social, professional and educational development after graduation. SCS are addressed to adults who failed to complete their studies in the nine-year compulsory education. Furthermore, the research focuses on their motives as well as on any achievement of transformative processes. The quantitative survey involved a total of 426 graduates, while 38 persons participated in the qualitative survey, all of whom graduated in the period 2010-2012 from 27 schools throughout the country. The survey was conducted through a structured questionnaire and semi-structured interviews. As regards the results, the respondents decided to attend the SCS primarily to acquire knowledge, and most of them feel that they managed to meet their goals. Also, graduates recognize that studying in SCS contributed primarily to their social and personal development. In addition, an encouraging finding is that some of the graduates recognize the transformative processes which they experienced during their studies in SCS.
12
3929
Data-Mining Approach to Analyzing Industrial Process Information for Real-Time Monitoring
Abstract:
This work presents a data-mining empirical monitoring scheme for industrial processes with partially unbalanced data. Measurement data from good operations are relatively easy to gather, but for unusual special events or faults it is generally difficult to collect process information, and some noisy industrial process data are almost impossible to analyze. Noise filtering techniques can then be used to enhance process monitoring performance on a real-time basis. In addition, pre-processing of raw process data helps eliminate unwanted variation in industrial process data. In this work, the performance of various monitoring schemes was tested and demonstrated on discrete batch process data. The results showed that monitoring performance improved significantly in terms of the monitoring success rate for the given process faults.
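The abstract does not name its exact monitoring statistic, so the sketch below uses one common empirical choice, PCA with a Hotelling T² statistic, on synthetic "good operation" data; treat it as an assumed illustration, not the authors' scheme.

```python
import numpy as np

rng = np.random.default_rng(3)
normal_ops = rng.normal(0, 1, (500, 8))       # training data from good operations

mean, std = normal_ops.mean(0), normal_ops.std(0)
z = (normal_ops - mean) / std                 # pre-processing of raw data
eigval, eigvec = np.linalg.eigh(np.cov(z, rowvar=False))
pcs, var = eigvec[:, -3:], eigval[-3:]        # retain 3 principal components

def t2(sample):
    """Hotelling T^2 statistic of one sample in the retained subspace."""
    score = ((sample - mean) / std) @ pcs
    return float((score**2 / var).sum())

fault = np.full(8, 3.0)                       # a shifted, fault-like sample
print(f"T2 normal: {t2(normal_ops[0]):.1f}, T2 fault: {t2(fault):.1f}")
```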
11
58498
Complex Learning Tasks and Their Impact on Cognitive Engagement for Undergraduate Engineering Students
Abstract:
This paper presents preliminary results from a two-year funded research program that analyzes the relationship between high cognitive engagement, the higher-order cognitive processes employed in complex learning tasks, and the use of active learning pedagogies in undergraduate engineering programs. A mixed-methods approach was used to gauge student engagement and students' cognitive processes when accomplishing complex tasks. Quantitative data collected from a self-report cognitive engagement scale show that a deep learning approach is positively correlated with high levels of complex learning tasks and with the level of student engagement, in the context of active learning pedagogies in the classroom. Qualitative analyses of in-depth face-to-face interviews reveal insights into the mechanisms influencing students’ cognitive processes when confronted with open-ended problem solving. Findings also support evidence that students adjust their level of cognitive engagement according to the specific didactic environment.
10
17106
Selection of Variogram Model for Environmental Variables
Abstract:
The present study investigates the selection of a variogram model for analyzing spatial variations of environmental variables with a trend. Sometimes, the automatically fitted theoretical variogram does not really capture the true nature of the empirical semivariogram, so proper exploration and analysis are needed to select the best variogram model. For this study, an open-source dataset from the California Soil Resource Lab is used to illustrate the problems that arise when fitting a theoretical variogram. Five of the most commonly used variogram models (linear, Gaussian, exponential, Matérn, and spherical) were fitted to the experimental semivariogram. Ordinary kriging was used to evaluate the accuracy of the selected variograms through cross-validation. This study is helpful for selecting an appropriate theoretical variogram model for environmental variables.
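As an illustration of fitting one candidate model to an empirical semivariogram, the sketch below fits a spherical model with scipy; the lag and semivariance values are synthetic stand-ins, not the California Soil Resource Lab data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic empirical semivariogram (lag distances in m, semivariances).
lags = np.array([50, 100, 200, 300, 400, 500.0])
gamma = np.array([0.12, 0.20, 0.31, 0.36, 0.38, 0.38])

def spherical(h, nugget, sill, rng_):
    """Spherical variogram model, flat beyond the range parameter."""
    h = np.minimum(h / rng_, 1.0)
    return nugget + (sill - nugget) * (1.5 * h - 0.5 * h**3)

popt, _ = curve_fit(spherical, lags, gamma, p0=[0.05, 0.4, 300.0])
print(f"nugget={popt[0]:.3f}, sill={popt[1]:.3f}, range={popt[2]:.0f} m")
# Cross-validation with ordinary kriging (as in the paper) would then
# compare the candidate models against held-out observations.
```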
9
8308
Adopting Collaborative Business Processes to Prevent the Loss of Information in Public Administration Organisations
Abstract:
Recently, the use of Web 2.0 tools has increased in companies and public administration organizations. This phenomenon, known as "Enterprise 2.0", has, de facto, modified common organizational and operative practices. It has led "knowledge workers" to change their working practices through the use of Web 2.0 communication tools. Unfortunately, these tools have not been integrated with existing enterprise information systems, a situation that could potentially lead to a loss of information. This is an important problem in an organizational context, because knowledge of the information exchanged within the organization is needed to increase its efficiency and competitiveness. In this article, we demonstrate that it is possible to capture this knowledge using collaboration processes, which are processes of abstraction created in accordance with design patterns and applied to new organizational operative practices.
8
38599
Review: Wavelet New Tool for Path Loss Prediction
Abstract:
In this work, GSM signal strength (power) was monitored in an indoor environment. Samples of the GSM signal strength were measured on mobile equipment (ME). A one-dimensional multilevel wavelet was used to predict the fading of the measured GSM signal, and neural network clustering was used to determine the average power received in the study area. The wavelet prediction revealed that the GSM signal is attenuated by fast fading, which fades about 7 times faster than the radio wavelength, while the neural network clustering determined that -75 dBm appeared most frequently, followed by -85 dBm. The work revealed that a significant part of the measured signal is dominated by weak signal, and that the signal follows a Rayleigh distribution more closely than a Gaussian one. This confirmed the wavelet prediction.
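The Rayleigh-versus-Gaussian comparison can be sketched as a simple maximum-likelihood fit of both distributions; the samples below are synthetic, not the measured GSM data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Synthetic envelope samples standing in for measured signal power.
samples = stats.rayleigh.rvs(scale=2.0, size=2000, random_state=rng)

for name, dist in [("Rayleigh", stats.rayleigh), ("Gaussian", stats.norm)]:
    params = dist.fit(samples)                 # maximum-likelihood fit
    loglik = dist.logpdf(samples, *params).sum()
    print(f"{name}: log-likelihood = {loglik:.1f}")  # higher means better fit
```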
7
90950
Investigation of the Effects of Simple Heating Processes on the Crystallization of Bi₂WO₆
Abstract:
In this study, the synthesis of photocatalytic Bi₂WO₆ was carried out with simple heating processes, and the effects of these treatments on the production of the desired compound were investigated. For this purpose, experiments with Bi(NO₃)₃.5H₂O and H₂WO₄ precursors were carried out to synthesize Bi₂WO₆ by four different combinations. These four combinations were grouped into two main sets, ‘treated in microwave reactor’ and ‘directly filtrated’; each main set was further divided into two subsets, ‘calcined’ and ‘not calcined’. Calcination was conducted at temperatures of 400°C, 600°C, and 800°C. X-ray diffraction (XRD) and environmental scanning electron microscopy (ESEM) analyses were performed in order to investigate the crystal structure of the powdered product synthesized with each combination. The highest crystallization of the produced compounds was observed for calcination at 600°C in each main group.
6
30253
A Polynomial Approach for a Graph-Based Integrated Production and Transport Scheduling with Capacity Restrictions
Authors:
Abstract:
The performance of global manufacturing supply chains depends on the interaction of production and transport processes. Currently, the scheduling of these processes is done separately, without considering mutual requirements, which leads to suboptimal solutions. An integrated scheduling of both processes enables improved supply chain performance. The integrated production and transport scheduling problem (PTSP) is NP-hard, so heuristic methods are necessary to efficiently solve large problem instances, as in the case of global manufacturing supply chains. This paper presents a heuristic scheduling approach that handles the integration of flexible production processes with intermodal transport, incorporating flexible land transport. The method is based on a graph that allows a reformulation of the PTSP as a shortest path problem for each job, which can be solved in polynomial time. The proposed method is applied to a supply chain scenario with a manufacturing facility in South Africa and shipments of finished product to customers within the country. The results show that the approach is suitable for scheduling large-scale problems and can be flexibly adapted to different scenarios.
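The core idea, reformulating each job's schedule as a shortest path through a graph of production steps and transport legs, reduces to a standard polynomial-time shortest path computation. A minimal Dijkstra sketch on a toy graph (the nodes and weights are illustrative, not the paper's instance):

```python
import heapq

def shortest_path(graph, source, target):
    """Dijkstra's algorithm on an adjacency dict {node: [(neighbor, weight)]}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            return d
        if d > dist.get(node, float("inf")):
            continue                          # stale heap entry
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return float("inf")

# Toy job graph: production options, then transport legs; weights in hours.
graph = {
    "start": [("plantA", 4.0), ("plantB", 6.0)],
    "plantA": [("truck", 12.0), ("rail", 9.0)],
    "plantB": [("rail", 7.0)],
    "truck": [("customer", 2.0)],
    "rail": [("customer", 3.0)],
}
print(shortest_path(graph, "start", "customer"))  # 16.0 via plantA -> rail
```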
5
57270
Retraction Free Motion Approach and Its Application in Automated Robotic Edge Finishing and Inspection Processes
Abstract:
In this paper, a motion generation algorithm for a six-degrees-of-freedom (DoF) robotic hand in a static environment is presented. The purpose of this method is to generate the end-effector path for edge finishing and inspection processes from the CAD model of the workpiece under consideration. The proposed algorithm may also be extended to other, similar manufacturing processes. A software package programmed in the application programming interface (API) of SolidWorks generates the tool path data for the robot. The proposed method significantly simplifies the given problem, reduces the CPU time needed to generate the path, and offers an efficient overall solution. The ABB IRB2000 robot is chosen to execute the generated tool path.
4
36138
Modelling Kinetics of Colour Degradation in American Pokeweed (Phytolacca americana) Extract Concentration
Abstract:
The kinetics of colour change in American Pokeweed extract during concentration by various heating methods was studied. Three different heating/evaporation processes were employed for the production of American Pokeweed extract concentrate. The extract was concentrated from an initial 4 °Brix to a final 40 °Brix by microwave heating, a rotary vacuum evaporator, and evaporation at atmospheric pressure. The final concentration of 40 °Brix was achieved in 188, 216 and 320 min using the microwave, rotary vacuum and atmospheric heating processes, respectively. The colour change during the concentration processes was investigated. Total colour difference (TCD) and the Hunter L, a and b parameters were used to estimate the extent of colour loss. All Hunter colour parameters decreased with time. Zero-order, first-order and combined kinetics models were applied to the changes in the colour parameters. All models were found to describe the L, a and b data adequately. The results indicated that variation in TCD followed both the first-order and the combined kinetics models, implying that colour formation and pigment destruction occurred during the concentration of American Pokeweed extract.
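Fitting such kinetics models typically means least-squares regression of a colour parameter against heating time. The sketch below fits the zero- and first-order models with scipy; the data points are illustrative, not the measured Hunter L values, and the combined model is fitted the same way with an extra parameter.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative Hunter L values versus heating time (min), not measured data.
t = np.array([0, 40, 80, 120, 160, 200.0])
L = np.array([42.0, 35.1, 29.5, 24.9, 21.0, 17.8])

def zero_order(t, L0, k):
    return L0 - k * t                          # linear decrease

def first_order(t, L0, k):
    return L0 * np.exp(-k * t)                 # exponential decrease

for name, model, p0 in [("zero", zero_order, (42.0, 0.1)),
                        ("first", first_order, (42.0, 0.005))]:
    popt, _ = curve_fit(model, t, L, p0=p0)
    rss = float(((L - model(t, *popt)) ** 2).sum())
    print(f"{name}-order: k = {popt[1]:.4f}, RSS = {rss:.2f}")
```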
3
25158
Application of Rapid Prototyping to Create Additive Prototype Using Computer System
Abstract:
Rapid prototyping is a comparatively new group of manufacturing processes that allows the fabrication of physical parts of any complexity using a layer-by-layer deposition technique driven directly from a computer system. Rapid prototyping greatly reduces the time and cost necessary to bring a new product to market. The prototypes made by these systems are used in a range of industrial applications, including design evaluation, verification, testing, and as patterns for casting processes. These processes employ a variety of materials and mechanisms to build up the layers of the part. The present work aimed to build an FDM prototyping machine that could control X-Y motion and material deposition in order to generate two-dimensional and three-dimensional complex shapes. This study focused on the deposition of wax material and determined the properties of the wax used, in order to enable better control of the FDM process. The study also looked at the integration of a computer-controlled electro-mechanical system with the traditional FDM additive prototyping process. The characteristics of the wax, including its phase change temperature, viscosity and droplet shape during processing, were analysed in order to optimize the model production process.
2
36135
Production of Plum (Prunus cerasifera) Concentrate as Edible Color and Evaluation of Color Change Kinetics
Abstract:
Improvement of colour, as a quality attribute of plum concentrate, has been made possible by increased knowledge of the kinetics of colour change. Three different heating/evaporation processes were employed for the production of plum juice concentrate. The plum juice was concentrated from an initial 15 °Brix to a final 55 °Brix by microwave heating, a rotary vacuum evaporator, and evaporation at atmospheric pressure. The final concentration of 55 °Brix was achieved in 17, 24 and 57 min using the microwave, rotary vacuum and atmospheric heating processes, respectively. The colour change during the concentration processes was investigated. Total colour difference (TCD) and the Hunter L, a and b parameters were used to estimate the extent of colour loss. All Hunter colour parameters decreased with time. Zero-order, first-order and combined kinetics models were applied to the changes in the colour parameters. The results indicated that variation in TCD followed both the first-order and the combined kinetics models, while the parameters L, a and b followed only the combined model. This implies that colour formation and pigment destruction occurred during the concentration of plum juice.
1
85319
Implementation of an Open Source ERP for SMEs in the Automotive Sector in Peru: A Case Study
Abstract:
Enterprise Resource Planning (ERP) systems allow the integration of all the business processes of a company's functional areas in order to automate and standardize processes, obtain accurate information, and improve decision-making in real time. In Peru, 79% of small and medium-sized enterprises (SMEs) do not use any management software, because ERPs are believed to be expensive, complex and difficult to implement. However, open source ERPs, which are more accessible and provide the same benefits as proprietary ERPs, have existed for more than 20 years, yet there is little information on their implementation process. This work presents a case study showing the implementation of an open source ERP, Odoo, based on the ASAP (Accelerated SAP) methodology and applied to a company providing corrective and preventive vehicle maintenance services. The ERP allowed the SME to standardize its business processes and increase its productivity, shortening certain processes by up to 40%. This case study shows that it is feasible and profitable to implement an open source ERP in SMEs in the automotive sector in Peru. In addition, it shows that the ASAP methodology is adequate for carrying out open source ERP implementation projects.