Open Science Research Excellence

Open Science Index

Commenced in January 2007 · Frequency: Monthly · Edition: International · Abstract Count: 62262

200
90311
Output-Feedback Control Design for a General Class of Systems Subject to Sampling and Uncertainties
Abstract:
The synthesis of output-feedback control laws has been investigated by many researchers since the last century. While many results exist for Linear Time Invariant systems whose measurements are continuously available, control laws are nowadays usually implemented on micro-controllers, so the measurements are discrete-time by nature. This fact has to be taken into account explicitly in order to obtain satisfactory behavior of the closed-loop system. We consider here a general class of systems in an observability normal form, subject to uncertainties in the dynamics and to sampling of the output. Indeed, in practice, the modeling of the system is never perfect, which results in unknown uncertainties in the dynamics of the model. We propose an output-feedback algorithm based on a linear state feedback and a continuous-discrete time observer. The main feature of the proposed control law is that only discrete-time measurements of the output are needed. Furthermore, it is formally proven that the state of the closed-loop system converges exponentially toward the origin despite the unknown uncertainties. Finally, the performance of this control scheme is illustrated with simulations.
199
87913
Observer-Based Control Design for Double Integrators Systems with Long Sampling Periods and Actuator Uncertainty
Abstract:
The design of control laws for engineering systems has been investigated for many decades. While many results concern continuous systems with continuous output, many controlled systems nowadays have to transmit their output measurements through a network, which makes them discrete-time. It is well known that sampling the output of a system whose control law is based on continuous measurements may render the system unstable, especially when the sampling period is long compared to the system dynamics. The control design then has to be adapted to cope with this issue. In this paper, we consider systems that can be modeled as a double integrator with uncertainty on the input, since many mechanical systems can be put in this form. We present a control scheme based on an observer that uses only discrete-time measurements and provides a continuous-time estimate of the state, combined with a continuous control law, which stabilizes a system with second-order dynamics even in the presence of uncertainty. It is further shown that arbitrarily long sampling periods can be dealt with by properly setting the control scheme parameters.
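As a rough illustration of the continuous-discrete observer idea described above, the following Python sketch simulates a double integrator with an unknown input disturbance, corrects the observer only at the (long) sampling instants, and predicts in open loop between samples. The gains, sampling period and disturbance value are hypothetical choices, not taken from the paper.

```python
import numpy as np

# Minimal sketch (not the paper's exact scheme): double integrator with an
# unknown constant input disturbance, state feedback computed from an observer
# that is corrected only at the sampling instants and propagated by open-loop
# prediction in between.

dt = 1e-3          # integration step
Ts = 0.5           # long output sampling period (hypothetical value)
T_end = 20.0
K = np.array([2.0, 3.0])     # state-feedback gain (hypothetical tuning)
L = np.array([1.2, 0.5])     # observer correction gain (hypothetical tuning)

x = np.array([1.0, 0.0])     # true state [position, velocity]
xhat = np.zeros(2)           # observer state
d = 0.2                      # unknown actuator disturbance
next_sample = 0.0

for k in range(int(T_end / dt)):
    t = k * dt
    u = -K @ xhat                          # continuous control from the estimate
    # True plant: double integrator with input uncertainty.
    x = x + dt * np.array([x[1], u + d])
    # Observer prediction (no knowledge of the disturbance).
    xhat = xhat + dt * np.array([xhat[1], u])
    # Discrete-time measurement: only the position, only every Ts seconds.
    if t >= next_sample:
        y = x[0]
        xhat = xhat + L * (y - xhat[0])    # correction at the sampling instant
        next_sample += Ts

print("final position estimation error:", abs(x[0] - xhat[0]))
```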
198
25199
Optimization of Fourth Order Discrete-Approximation Inclusions
Abstract:
The paper concerns the necessary and sufficient conditions of optimality for the Cauchy problem of fourth-order discrete (PD) and discrete-approximate (PDA) inclusions. The main problem is the formulation of the fourth-order adjoint discrete and discrete-approximate inclusions and of the transversality conditions, which are peculiar to problems involving fourth-order derivatives and approximate derivatives. The necessary and sufficient conditions of optimality are thus obtained, incorporating the Euler-Lagrange and Hamiltonian forms of inclusions. The derivation of the optimality conditions is based on the apparatus of the locally adjoint mapping (LAM). Moreover, as an application of these results, we consider the fourth-order linear discrete and discrete-approximate inclusions.
197
85408
Estimation of Population Mean under Random Non-Response in Two-Occasion Successive Sampling
Abstract:
In this paper, we consider the problem of estimating the population mean on the current (second) occasion in two-occasion successive sampling under random non-response. Some modified exponential-type estimators are proposed, and their properties are studied under the assumption that the number of sampling units follows a discrete distribution due to the random non-response. The performance of the proposed estimators is compared with linear combinations of two estimators, (a) the sample mean estimator for the fresh sample and (b) the ratio estimator for the matched sample, under complete response. Results are demonstrated through empirical studies, which show the effectiveness of the proposed estimators. Suitable recommendations are made to survey practitioners.
196
41583
Multidimensional Integral and Discrete Opial–Type Inequalities
Abstract:
Over the last five decades, an enormous amount of work has been done on Opial’s integral inequality, dealing with new proofs, various generalizations, extensions and discrete analogs. The Opial inequality is recognized as a fundamental result in the analysis of qualitative properties of solutions of differential equations. We use submultiplicative convex functions, appropriate representations of functions, and inequalities involving means to obtain generalizations and extensions of certain known multidimensional integral and discrete Opial-type inequalities.
195
47740
Numerical Modelling of Dry Stone Masonry Structures Based on Finite-Discrete Element Method
Abstract:
This paper presents a numerical model based on the finite-discrete element method for the analysis of the structural response of dry stone masonry structures under static and dynamic loads. More precisely, each discrete stone block is discretized by finite elements. Material non-linearity, including fracture and fragmentation of the discrete elements as well as cyclic behavior during dynamic loading, is considered through contact elements implemented within the finite element mesh. The model was applied to several examples of such structures. The performed analysis shows high accuracy of the numerical results in comparison with the experimental ones and demonstrates the potential of the finite-discrete element method for modelling the response of dry stone masonry structures.
194
99922
Fundamental Solutions for Discrete Dynamical Systems Involving the Fractional Laplacian
Abstract:
In this work, we obtain representation results for solutions of a time-fractional differential equation involving the discrete fractional Laplace operator in terms of generalized Wright functions. Such equations arise in the modeling of many physical systems, for example, chain processes in chemistry and radioactivity. The focus is on the linear problem of the simplified Moore-Gibson-Thompson equation, where the discrete fractional Laplacian and the Caputo fractional derivative of order in (0,2] are involved. As particular cases, we obtain the explicit solutions of the discrete heat equation and the discrete wave equation. Furthermore, we show the explicit solution of the equation involving the Laplacian perturbed by the identity operator. The main tools for obtaining the explicit solution are the Laplace and discrete Fourier transforms together with Stirling's formula. The methodology is mainly to apply both transforms to the equation, to invert each transform, and to prove that the resulting solution is well defined using Stirling's formula.
193
16927
A Hybrid Watermarking Scheme Using Discrete and Discrete Stationary Wavelet Transformation For Color Images
Abstract:
This paper presents a new robust and invisible digital watermarking method for color images, with color images also used as watermarks. Watermarking is carried out in the frequency domain, using the discrete wavelet transform and the discrete stationary wavelet transform for the frequency-domain transformation. Low-, medium- and high-frequency coefficients are obtained by applying the two-level discrete wavelet transform to the original image. Low-frequency coefficients are then obtained by applying a one-level discrete stationary wavelet transform separately to each frequency sub-band of the two-level discrete wavelet transform of the original image. A watermark is added to each set of low-frequency coefficients obtained from the one-level discrete stationary wavelet transform, i.e., to all frequency sub-bands of the two-level discrete wavelet transform; in total, four watermarks are added to the original image. To recover the watermark, the two-level discrete wavelet transform and the one-level discrete stationary wavelet transform are applied to both the original and the watermarked images. Each watermark is obtained from the difference of the discrete stationary wavelet transforms of the low-frequency coefficients, so four watermarks are recovered, one from each frequency sub-band of the two-level discrete wavelet transform. The recovered watermarks are compared with the true watermark, a similarity measure is computed, and the final watermark is taken from the highest similarity values. The proposed watermarking method is tested against geometric and image-processing attacks, and the results show that it is robust and invisible. All frequency sub-bands of the two-level discrete wavelet transform are combined to recover the watermark from the watermarked image, and the watermarks are embedded after conversion to binary images. These operations yield better watermark recovery from the watermarked image under geometric and image-processing attacks.
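A much-simplified sketch of additive wavelet-domain embedding is given below; it uses only a single watermark in the level-2 approximation sub-band rather than the four-watermark DWT+SWT scheme described above, and the host image and embedding strength are placeholders.

```python
import numpy as np
import pywt

# Simplified additive DWT-domain watermarking sketch: the watermark is added to
# the level-2 approximation sub-band and recovered (non-blind) by differencing
# against the original coefficients.

rng = np.random.default_rng(0)
host = rng.uniform(0, 255, size=(256, 256))        # stand-in for a host image channel
alpha = 5.0                                        # embedding strength (hypothetical)

coeffs = pywt.wavedec2(host, 'haar', level=2)      # [cA2, (cH2,cV2,cD2), (cH1,cV1,cD1)]
cA2 = coeffs[0]
watermark = rng.integers(0, 2, size=cA2.shape)     # binary watermark, same size as cA2

coeffs_wm = [cA2 + alpha * watermark] + coeffs[1:]
watermarked = pywt.waverec2(coeffs_wm, 'haar')

# Extraction: compare the approximation sub-bands of original and watermarked images.
cA2_wm = pywt.wavedec2(watermarked, 'haar', level=2)[0]
recovered = np.rint((cA2_wm - cA2) / alpha).astype(int)
print("bit errors:", int(np.sum(recovered != watermark)))
```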
192
22260
Superconvergence of the Iterated Discrete Legendre Galerkin Method for Fredholm-Hammerstein Equations
Abstract:
In this paper, we analyse the iterated discrete Legendre Galerkin method for Fredholm-Hammerstein integral equations with smooth kernels. Using a sufficiently accurate numerical quadrature rule, we obtain superconvergence rates for the iterated discrete Legendre Galerkin solutions in both the infinity and $L^2$ norms. Numerical examples are given to illustrate the theoretical results.
191
15632
Discrete Tracking Control of Nonholonomic Mobile Robots: Backstepping Design Approach
Abstract:
In this paper, we propose a discrete tracking control for nonholonomic mobile robots with two degrees of freedom. The electro-mechanical model of a mobile robot moving on a horizontal surface without slipping is considered, with two rear wheels controlled by two independent DC electric motors and one free-rolling front wheel. We present a backstepping design based on the Euler approximate discrete-time model of the continuous-time plant. Theoretical considerations are verified by numerical simulation. The work was supported by RFFI (15-01-08482).
190
121074
Analytical Solution of Non–Autonomous Discrete Non-Linear Schrodinger Equation With Saturable Non-Linearity
Abstract:
It is shown here that the non-autonomous discrete non-linear Schrödinger equation with saturable non-linearity arises in photorefractive media. We investigate localized solutions of non-autonomous saturable discrete non-linear Schrödinger equations. A similarity transformation is used to convert the non-autonomous saturable discrete non-linear Schrödinger equation into the constant-coefficient saturable discrete non-linear Schrödinger equation (SDNLSE), whose exact solution is already known; by back substitution, the solution of the non-autonomous version is obtained. We analyse our solution for hyperbolic and periodic forms of the gain/loss term, and interesting results are obtained. Most importantly, the solution helps us analyse the propagation of electromagnetic waves in glass fibres and other optical wave media. The SDNLSE is also used in tight-binding descriptions of Bose-Einstein condensates in optical media. The solutions are interrelated, and their properties are prominently used in various physical settings such as optical waveguides, Bose-Einstein (B-E) condensates in optical media, non-linear optics in photonic crystals, Kerr-type non-linearity, and photorefractive media.
189
58116
Numerical Investigation of the Effect of Blast Pressure on Discrete Model in Shock Tube
Abstract:
Blast waves are generated by the explosion of high-energy materials. An explosion yielding a blast wave has the potential to cause severe damage to buildings and their personnel. In order to understand the physics of the effects of blast pressure on buildings, shock-tube studies on generic configurations are carried out at various pressures on discrete models. The strength of the shock wave is systematically varied by using different driver gases and diaphragm thicknesses; the basic diaphragm material is aluminium. A shock tube is used to simulate the effect of shock waves on the discrete models. The generic models selected for this study are suitably scaled cylinder, cone and cubical blocks. The experiments were carried out with a 2 mm diaphragm and burst pressures ranging from 28 to 31 bar. Numerical analysis was carried out for these discrete models, using a 3D model of the shock tube with the different discrete models inside the tube for the CFD computation. It was found that the cone dissipated the most shock pressure compared with the cylinder and the cubical block. The robustness and accuracy of the numerical model were validated against the analytical and experimental data.
188
74319
Number of Parametrization of Discrete-Time Systems without Unit-Delay Element: Single-Input Single-Output Case
Abstract:
In this paper, we consider the parametrization of discrete-time systems without a unit-delay element within the framework of the factorization approach. In the parametrization, we investigate the number of required parameters, restricting attention to single-input single-output systems. The investigation shows that, for discrete-time systems without a unit-delay element, three cases occur: (1) plants that require only one parameter, (2) plants that require two parameters, and (3) plants for which the number of parameters is at most three.
187
4113
Discrete-Time Bulk Queue with Service Capacity Depending on Previous Service Time
Abstract:
This paper considers a discrete-time bulk-arrival, bulk-service queueing system in which the service capacity varies depending on the previous service time. Using the generating function technique and the supplementary variable method, we compute the distributions of the queue length at an arbitrary slot boundary and at a departure time.
186
37341
2.5D Face Recognition Using Gabor Discrete Cosine Transform
Abstract:
In this paper, we present a novel 2.5D face recognition method based on the Gabor Discrete Cosine Transform (GDCT). In the proposed method, the Gabor filter is applied to extract feature vectors from the texture and the depth information. Then, the Discrete Cosine Transform (DCT) is used for dimensionality and redundancy reduction to improve computational efficiency. The system combines texture and depth information at the decision level, which gives higher performance compared with methods that use texture and depth information separately. The proposed algorithm is evaluated on the publicly available Bosphorus database, which includes models with pose variation. The experimental results show that the proposed method outperforms the benchmark.
185
25207
Estimating The Population Mean by Using Stratified Double Extreme Ranked Set Sample
Abstract:
The stratified double extreme ranked set sampling (SDERSS) method is introduced and considered for estimating the population mean. The SDERSS is compared with simple random sampling (SRS), stratified ranked set sampling (SRSS) and stratified simple random sampling (SSRS). It is shown that the SDERSS estimator is an unbiased estimator of the population mean and is more efficient than the estimators based on SRS, SRSS and SSRS when the underlying distribution of the variable of interest is symmetric or asymmetric.
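To make the efficiency comparison concrete, the following Python sketch contrasts basic ranked set sampling with simple random sampling by Monte Carlo; it illustrates only the plain RSS idea, not the stratified double extreme variant proposed above, and the set size and distribution are arbitrary choices.

```python
import numpy as np

# Compare the Monte Carlo variance of the basic RSS mean with the SRS mean,
# both based on the same number of quantified units.

rng = np.random.default_rng(1)
m, reps = 4, 20000            # set size and Monte Carlo replications

def rss_mean(rng, m):
    # Draw m sets of m units; from set i keep the i-th order statistic.
    sets = rng.normal(size=(m, m))
    ranked = np.sort(sets, axis=1)
    return ranked[np.arange(m), np.arange(m)].mean()

rss = np.array([rss_mean(rng, m) for _ in range(reps)])
srs = rng.normal(size=(reps, m)).mean(axis=1)   # SRS mean of the same size m

print("var(SRS mean) / var(RSS mean) =", srs.var() / rss.var())
```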
184
13351
Use of Six-sigma Concept in Discrete Manufacturing Industry
Abstract:
Efficiency in manufacturing is critical in raising the value of exports so as to trade gainfully on regional and international markets. Continuous improvement strategies appear to be increasingly popular among manufacturing entities, but this research study established that the Six Sigma methodology has not enjoyed a similar popularity. This work was therefore conducted to investigate the applicability, effectiveness, usefulness and suitability of the Six Sigma methodology as a competitiveness option for discrete manufacturing entities. The development of a Six Sigma centre in the country, providing continuous improvement information, would go a long way towards benefiting the entire industry.
183
20283
Stability of Hybrid Stochastic Systems
Abstract:
This paper is concerned with robust mean square stability of uncertain stochastic switched discrete time-delay systems. The system to be considered is subject to interval time-varying delays, which allows the delay to be a fast time-varying function and the lower bound is not restricted to zero. Based on the discrete Lyapunov functional, a switching rule for the robust mean square stability for the uncertain stochastic discrete time-delay system is designed via linear matrix inequalities. Finally, some examples are exploited to illustrate the effectiveness of the proposed schemes.
182
19809
New Results on Stability of Hybrid Stochastic Systems
Abstract:
This paper is concerned with robust mean square stability of uncertain stochastic switched discrete time-delay systems. The system to be considered is subject to interval time-varying delays, which allows the delay to be a fast time-varying function and the lower bound is not restricted to zero. Based on the discrete Lyapunov functional, a switching rule for the robust mean square stability for the uncertain stochastic discrete time-delay system is designed via linear matrix inequalities. Finally, some examples are exploited to illustrate the effectiveness of the proposed schemes.
181
78603
Optimal ECG Sampling Frequency for Multiscale Entropy-Based HRV
Abstract:
Multiscale entropy (MSE) is an extensively used index that provides a general understanding of the multiple complexity of the physiologic mechanisms of heart rate variability (HRV), which operate over a wide range of time scales. Accurate selection of the electrocardiogram (ECG) sampling frequency is an essential concern for clinically significant HRV quantification; a high ECG sampling rate increases memory requirements and processing time, whereas a low sampling rate degrades signal quality and results in clinically misinterpreted HRV. In this work, the impact of the ECG sampling frequency on MSE-based HRV has been quantified. The MSE measures are found to be sensitive to the ECG sampling frequency, and the effect of the sampling frequency is a function of the time scale.
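For reference, a minimal Python sketch of the MSE computation is given below: the RR series is coarse-grained at several scales and sample entropy is evaluated at each scale. The series is synthetic and the parameter choices (m = 2, r = 0.15 times the standard deviation) are conventional defaults, not values taken from the paper.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy of a 1-D series (minimal O(n^2) implementation)."""
    x = np.asarray(x, float)
    r = 0.15 * x.std() if r is None else r
    def count(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)
        np.fill_diagonal(d, np.inf)          # exclude self-matches
        return np.sum(d <= r)
    B, A = count(m), count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.nan

def multiscale_entropy(x, scales=(1, 2, 3, 4, 5)):
    out = []
    for s in scales:
        n = len(x) // s
        coarse = np.asarray(x[:n * s]).reshape(n, s).mean(axis=1)  # coarse-graining
        out.append(sample_entropy(coarse))
    return out

rr = np.random.default_rng(2).normal(0.8, 0.05, size=1000)   # synthetic RR series [s]
print(multiscale_entropy(rr))
```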
180
25249
Discretization of Cuckoo Optimization Algorithm for Solving Quadratic Assignment Problems
Abstract:
The Quadratic Assignment Problem (QAP) is a combinatorial optimization problem, studied in many organizations, that concerns allocating a set of facilities to a set of locations. The quantity of particular importance in this process is the cost of the allocation, and the aim is to minimize this cost. Since QAPs are NP-hard, they cannot in general be solved by exact methods. The Cuckoo Optimization Algorithm is a meta-heuristic method with a strong capability to find globally optimal points, but it was originally designed to search a continuous space. The Quadratic Assignment Problem, in contrast, is defined on a discrete space, so the standard arithmetic operators of the Cuckoo Optimization Algorithm need to be redefined on the discrete space before the algorithm can be applied to it. This paper presents a way of discretizing the Cuckoo Optimization Algorithm for solving the quadratic assignment problem.
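The sketch below shows, in Python, the discrete ingredients such a discretization needs: a permutation encoding, the QAP cost function, and a swap move replacing continuous arithmetic steps. The driver loop is plain random descent used only to exercise the operators; it is not the Cuckoo Optimization Algorithm itself, and the flow and distance data are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
F = rng.integers(1, 10, size=(n, n))      # flow between facilities (demo data)
D = rng.integers(1, 10, size=(n, n))      # distance between locations (demo data)

def qap_cost(perm):
    # perm[i] = location assigned to facility i; cost = sum F[i,j] * D[perm[i],perm[j]]
    return int(np.sum(F * D[np.ix_(perm, perm)]))

def swap(perm, i, j):
    # discrete "step": exchange the locations of two facilities
    p = perm.copy()
    p[i], p[j] = p[j], p[i]
    return p

best = rng.permutation(n)
best_cost = qap_cost(best)
for _ in range(2000):                      # plain random descent, for illustration only
    i, j = rng.choice(n, size=2, replace=False)
    cand = swap(best, i, j)
    c = qap_cost(cand)
    if c < best_cost:
        best, best_cost = cand, c
print("assignment:", best, "cost:", best_cost)
```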
179
30733
Bayesian Approach for Moving Extremes Ranked Set Sampling
Abstract:
In this paper, Bayesian estimation of the mean of the exponential distribution is considered using Moving Extremes Ranked Set Sampling (MERSS). Three priors are used, namely the Jeffreys, conjugate and constant priors, under both MERSS and Simple Random Sampling (SRS). Some properties of the proposed estimators are investigated. It is found that the suggested estimators under MERSS are more efficient than their counterparts based on SRS.
178
83342
A Study of General Attacks on Elliptic Curve Discrete Logarithm Problem over Prime Field and Binary Field
Abstract:
This paper begins by describing some basic properties of finite fields and elliptic curve cryptography over prime and binary fields. We then discuss the discrete logarithm problem for elliptic curves and its properties. We study the leading general attacks on the elliptic curve discrete logarithm problem, such as the Baby-Step Giant-Step method, Pollard's rho method and the Pohlig-Hellman method, and describe in detail experiments with these attacks over a prime field and a binary field. The paper finishes by describing the expected running times of the attacks and suggesting strong elliptic curves that are not susceptible to them.
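As a concrete illustration of the square-root attacks discussed above, the following Python sketch implements Baby-Step Giant-Step for the discrete logarithm; for brevity it works in the multiplicative group of a prime field rather than in an elliptic curve group, but the same O(sqrt(n)) idea carries over by replacing modular multiplication with point addition. The prime, generator and exponent are toy values.

```python
from math import isqrt

def bsgs(g, h, p):
    """Return x such that g**x == h (mod p), assuming such an x exists."""
    m = isqrt(p - 1) + 1
    baby = {pow(g, j, p): j for j in range(m)}      # baby steps g^j
    factor = pow(g, -m, p)                          # g^(-m) mod p (Python 3.8+)
    gamma = h % p
    for i in range(m):                              # giant steps h * g^(-i*m)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = (gamma * factor) % p
    return None

p, g, secret = 104729, 5, 4242                      # toy prime-field instance
h = pow(g, secret, p)
x = bsgs(g, h, p)
print("recovered exponent:", x, "| check:", pow(g, x, p) == h)
```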
172
36311
Numerical Solution of Integral Equations by Using Discrete GHM Multiwavelet
Abstract:
In this paper, a numerical method based on discrete GHM multiwavelets is presented for solving Fredholm integral equations of the second kind. There is hardly any article in the literature in which integral equations are solved numerically using discrete GHM multiwavelets. A number of examples are presented to demonstrate the applicability of the method. In GHM multiwavelets, the values of the scaling and wavelet functions are calculated only at t = 0, 0.5 and 1. The numerical solution obtained by the present approach is compared with the traditional quadrature method, and it is observed that the present approach is more accurate and computationally efficient.
171
112408
An Efficient Discrete Chaos in Generalized Logistic Maps with Applications in Image Encryption
Abstract:
In the last few decades, the discrete chaos of difference equations has gained massive attention from academicians and scholars due to its tremendous applications in almost every branch of science, such as cryptography, traffic control models, secure communications, weather forecasting and engineering. In this article, a generalized logistic discrete map is established, and discrete chaos is reported through period-doubling bifurcation, a period-three orbit and the Lyapunov exponent. It is interesting to see that the generalized logistic map exhibits superior chaos due to the extra degree of freedom provided by an order parameter. The period-doubling bifurcation and Lyapunov exponent are demonstrated for some particular parameter values, and the discrete chaos is established in the sense of Devaney's definition of chaos both theoretically and numerically. Moreover, the study discusses an extended chaos-based image encryption and decryption scheme in cryptography using this novel system. A larger key space and more sensitive dependence on initial conditions are observed for the encryption and decryption of text messages, images and videos, which strongly secures the system against external cyber attacks, coding attacks, statistical attacks and differential attacks.
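As a baseline only, the Python sketch below iterates the classical logistic map and estimates its Lyapunov exponent; the paper's generalized map with the extra order parameter is not reproduced here, and the parameter values are illustrative.

```python
import numpy as np

# Iterate x_{n+1} = r x_n (1 - x_n) and estimate the Lyapunov exponent as the
# average of log|f'(x_n)| = log|r (1 - 2 x_n)| after discarding a transient.

def lyapunov_logistic(r, x0=0.4, n_transient=500, n_iter=5000):
    x = x0
    for _ in range(n_transient):          # discard transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        acc += np.log(abs(r * (1 - 2 * x)))
    return acc / n_iter

for r in (2.8, 3.5, 3.9):                 # fixed-point, periodic and chaotic regimes
    print(f"r = {r}: Lyapunov exponent ~ {lyapunov_logistic(r):+.3f}")
```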
170
14891
A New Approach to the Digital Implementation of Analog Controllers for a Power System Control
Abstract:
In this paper, a comparison of discrete-time PID and PSS controllers is presented through the small-signal stability of a power system comprising one machine connected to an infinite bus. The comparison is achieved by using a new discretization approach, which converts the s-domain model of the analog controllers to a z-domain model in order to enhance the damping of the single-machine power system. The new method utilizes the Plant Input Mapping (PIM) algorithm. The proposed algorithm is stable for any sampling rate and takes the closed-loop characteristics into consideration. By contrast, traditional discretization methods such as Tustin's method produce satisfactory results only when the sampling period is sufficiently small.
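For context, the Python sketch below shows the conventional baseline the paper argues against: a continuous PID transfer function discretized with Tustin's (bilinear) method at a short and a long sampling period using scipy. The controller gains and sampling periods are hypothetical, and the PIM algorithm itself is not reproduced.

```python
import numpy as np
from scipy import signal

# C(s) = Kp + Ki/s + Kd*s/(tau*s + 1) written as a single rational transfer
# function, then mapped to the z-domain with the bilinear (Tustin) method.

Kp, Ki, Kd, tau = 2.0, 1.0, 0.1, 0.01                # hypothetical gains
num = [Kd + Kp * tau, Kp + Ki * tau, Ki]             # (Kd+Kp*tau)s^2 + (Kp+Ki*tau)s + Ki
den = [tau, 1.0, 0.0]                                # tau*s^2 + s

for Ts in (0.01, 0.2):                               # short vs long sampling period
    numd, dend, _ = signal.cont2discrete((num, den), Ts, method='bilinear')
    print(f"Ts = {Ts} s -> numerator {np.round(numd.ravel(), 4)}, "
          f"denominator {np.round(dend, 4)}")
```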
169
69379
A Robust Hybrid Blind Digital Image Watermarking System Using Discrete Wavelet Transform and Contourlet Transform
Abstract:
In this paper, a hybrid blind digital watermarking system using Discrete Wavelet Transform (DWT) and Contourlet Transform (CT) has been implemented and tested. The implemented combined digital watermarking system has been tested against five common types of image attacks. The performance evaluation shows improved results in terms of imperceptibility, robustness, and high tolerance against these attacks; accordingly, the system is very effective and applicable.
168
82214
Application of the Micropolar Beam Theory for the Construction of the Discrete-Continual Model of Carbon Nanotubes
Abstract:
Together with the study of the electron-optical properties of nanostructures, and proceeding from experiment-based data, the study of the mechanical properties of nanostructures has become quite topical. For studying the mechanical properties of fullerene, carbon nanotubes, graphene and other nanostructures, one of the crucial issues is the construction of adequate mathematical models. Among the mathematical models of graphene and carbon nanotubes, the so-called discrete-continual model is particularly important: it substitutes the interactions between atoms with elastic beams or springs. The present paper demonstrates the construction of the discrete-continual beam model for carbon nanotubes and graphene, where a micropolar beam model based on the theory of moment elasticity is adopted. Using the energy balance principle, the elastic moment constants of the beam model, expressed by the physical and geometrical parameters of the carbon nanotube or graphene, are determined. By passing from the discrete-continual beam model to a continual one, the micropolar elastic cylindrical shell and the micropolar elastic plate are confirmed as continual models for the carbon nanotube and graphene, respectively.
167
43831
Discrete Element Modeling on Bearing Capacity Problems
Abstract:
In this paper, the classical bearing capacity problem is re-considered using discrete element analysis. In the discrete element approach, the bearing capacity problem is followed from the elastic stage through the plastic stage to the rupture stage (large displacement). The bearing capacity failure mechanism of a strip footing on soil is investigated, and the influence of the micro-parameters on the bearing capacity of the soil is also observed. It is found that the distinct element method (DEM) gives very well visualized results that basically coincide with those derived by the classical methods.
166
48966
Exact Solutions of Discrete Sine-Gordon Equation
Abstract:
Two families of exact travelling-wave solutions for the discrete sine-Gordon equation are constructed based on the variable-coefficient Jacobian elliptic function method and different transformations. When the modulus of the Jacobian elliptic function solutions tends to 1, soliton solutions are obtained; some of these soliton solutions degenerate into solutions known in the literature. Moreover, the dynamical properties of the exact solutions are investigated. Our analysis and results may have potential value for certain applications in modern nonlinear science and textile engineering.
165
116555
Speech Intelligibility Improvement Using Variable Level Decomposition Discrete Wavelet Transform
Abstract:
Intelligibility is an essential characteristic of a speech signal, as it supports the understanding of the information the signal carries. Background noise in the environment can deteriorate the intelligibility of recorded speech. In this paper, we present a simple variance-subtracted, variable-level discrete wavelet transform that improves speech intelligibility. The proposed algorithm does not require an explicit estimate of the noise, i.e., prior knowledge of the noise; hence, it is easy to implement and reduces the computational burden. The algorithm decides a separate decomposition level for each frame based on signal-dominant and noise-dominant criteria. Its performance is evaluated with the short-time objective intelligibility measure (STOI), and the results are compared with universal discrete wavelet transform (DWT) thresholding and minimum mean square error (MMSE) methods. The experimental results reveal that the proposed scheme outperforms the competing methods.
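A minimal Python sketch of the fixed-level baseline that such schemes improve upon is shown below: a DWT of a noisy frame with the universal threshold applied to the detail coefficients. The signal is a synthetic stand-in for a speech frame, the wavelet and level are arbitrary choices, and the variance-subtracted, per-frame variable-level rule itself is not reproduced.

```python
import numpy as np
import pywt

rng = np.random.default_rng(4)
fs = 8000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 200 * t) * np.hanning(fs)      # stand-in for a speech frame
noisy = clean + 0.3 * rng.normal(size=fs)

level = 4
coeffs = pywt.wavedec(noisy, 'db8', level=level)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745             # noise estimate from finest detail
thr = sigma * np.sqrt(2 * np.log(len(noisy)))               # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
enhanced = pywt.waverec(coeffs, 'db8')[:len(noisy)]

print("noisy SNR  :", 10 * np.log10(np.sum(clean**2) / np.sum((noisy - clean)**2)))
print("output SNR :", 10 * np.log10(np.sum(clean**2) / np.sum((enhanced - clean)**2)))
```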
162
74571
Design of Bayesian MDS Sampling Plan Based on the Process Capability Index
Abstract:
In this paper, a variable multiple dependent state (MDS) sampling plan is developed based on the process capability index using a Bayesian approach. The optimal parameters of the developed sampling plan, subject to constraints related to the consumer's and producer's risks, are presented. Two comparison studies are carried out. First, the double sampling plan, the sampling plan for resubmitted lots and the repetitive group sampling (RGS) plan are described, and the average sample numbers of the developed MDS plan and these classical methods are compared. Second, a comparison study between the developed MDS plan based on the Bayesian approach and the exact probability distribution is carried out.
161
27531
Discrete Choice Modeling in Education: Evaluating Early Childhood Educators’ Practices
Abstract:
Discrete choice models belong to the family of conjoint analysis methods and are applied to respondents' preferences over a set of scenarios that describe alternative choices. The scenarios are pre-designed to cover all the attributes of the alternatives that may affect the choices. In this study, we examine how preschool educators integrate physical activities into their everyday teaching practices through the use of discrete choice models. One advantage of discrete choice models over more traditional data collection methods (e.g., questionnaires and interviews that use ratings) is that the respondent is asked to select among competitive and realistic alternatives, rather than to objectively rate each attribute that the alternatives may have. We present the effort to construct and choose representative attributes that cover all possible choices of the respondents, and the scenarios that have arisen. For the purposes of the study, we used a sample of 50 preschool educators in Greece who responded to 4 scenarios (out of the 16 scenarios produced by the orthogonal design), with each scenario offering three alternative teaching practices. Seven attributes of the alternatives were used in the scenarios. For the analysis of the data, we used a multinomial logit model with random effects, a multinomial probit model and a generalized mixed logit model. The conclusions drawn from the estimated parameters of the models are discussed.
160
20725
Optimizing Coal Yard Management Using Discrete Event Simulation
Abstract:
A coal-fired power plant has integrated facilities to handle coal from three separate coal yards to the bunkers of its eight generating units, but at present these facilities are not reliable enough to support the system. Management therefore planned to invest in additional facilities to increase reliability, and also planned to use a single coal specification for all units, called Single Quality Coal (SQC). The simulation compares the situation before and after the improvement under two scenarios, First In First Out (FIFO) and Last In First Out (LIFO). Parameters such as stay time, reorder point and safety stock are determined by the simulation, which is carried out with the discrete event simulation software Flexsim 5.0. Based on the simulation, the Single Quality Coal with FIFO scenario has the shortest stay time, at 8.38 days.
159
84418
Continuous Plug Flow and Discrete Particle Phase Coupling Using Triangular Parcels
Abstract:
Various processes are modelled using a discrete phase in which particles are seeded from a source. Such particles can represent liquid water droplets, which affect the continuous phase by exchanging thermal energy, momentum, species, etc. Discrete phases are typically modelled using parcels, each of which represents a collection of particles sharing properties such as temperature and velocity. When coupling the phases, the exchange rates are integrated over the cell in which the parcel is located, which can cause spikes and fluctuating exchange rates. This paper presents an alternative method of coupling a discrete phase and a continuous plug-flow phase using triangular parcels, which span between nodes that follow the dynamics of single droplets; the triangular parcels are thus propagated using their corner nodes. At each time step, the exchange rates are spatially integrated over the surface of the triangular parcels, which yields a smooth continuous exchange rate to the continuous phase. The results show that the method is more stable, converges slightly faster and yields smoother exchange rates compared with the steam tube approach. However, the computational requirements are about five times greater, so the applicability of the alternative method should be limited to processes where the exchange rates are important. The overall balances of the exchanged properties did not change significantly with the new approach.
158
36087
Methods of Variance Estimation in Two-Phase Sampling
Abstract:
Two-phase sampling, also known as double sampling, was introduced in 1938. In two-phase sampling, samples are selected in phases. In the first phase, a relatively large sample is selected by some suitable sampling design, and only information on the auxiliary variable is collected. During the second phase, a sample is selected either from the sample selected in the first phase or from the entire population by using a suitable sampling design, and information regarding both the study and auxiliary variables is collected. Evidently, two-phase sampling is useful if the auxiliary information is relatively easy and cheaper to collect than the study variable, and if the strength of the relationship between the study and auxiliary variables is high. If the sample is selected in more than two phases, the resulting sampling design is called multi-phase sampling. In this article, we consider how one can use the data collected in the first phase at the stages of parameter estimation, stratification, sample selection and their combinations in the second phase, in a unified setup applicable to any sampling design and to wide classes of estimators. The problem of variance estimation is also considered. The variance of an estimator is essential for estimating the precision of survey estimates, calculating confidence intervals, determining optimal sample sizes and testing hypotheses, among other uses. Although the variance is a non-negative quantity, its estimators may not be non-negative; if an estimator of variance is negative, it cannot be used for the estimation of confidence intervals, testing of hypotheses or as a measure of sampling error. The non-negativity properties of the variance estimators are therefore also studied in detail.
157
36200
Discrete Group Search Optimizer for the Travelling Salesman Problem
Abstract:
In this study, we apply the Discrete Group Search Optimizer (DGSO) to the Traveling Salesman Problem (TSP). The DGSO is a nature-inspired optimization algorithm that imitates animal behavior, especially animal searching behavior. The proposed DGSO uses a vector representation and several discrete operators, such as destruction, construction, differential evolution, swap and insert. The TSP is a well-known hard combinatorial optimization problem that seeks the shortest tour through a number of cities. The performance of the proposed DGSO is evaluated on benchmark instances from the TSPLIB dataset. The experimental results show that the performance of the proposed DGSO is comparable with other state-of-the-art methods on some instances; DGSO outperforms Ant Colony System (ACS) on some instances and outperforms other metaheuristics on most instances. In addition, the algorithm obtained a number of optimal solutions and some best-known results, and it was able to obtain feasible, good-quality solutions across all datasets.
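The following Python sketch shows the kind of discrete operators listed above, namely tour-length evaluation plus swap and insert moves on a permutation; the driver loop is plain random descent used only to exercise the operators, not the DGSO itself, and the city coordinates are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)
cities = rng.uniform(0, 100, size=(30, 2))
dist = np.linalg.norm(cities[:, None] - cities[None, :], axis=2)

def tour_length(tour):
    # distance of consecutive city pairs, including the return to the start
    return dist[tour, np.roll(tour, -1)].sum()

def swap(tour, i, j):
    t = tour.copy(); t[i], t[j] = t[j], t[i]; return t

def insert(tour, i, j):
    t = list(tour); c = t.pop(i); t.insert(j, c); return np.array(t)

tour = rng.permutation(len(cities))
best = tour_length(tour)
for _ in range(20000):                         # random descent, for illustration only
    i, j = rng.choice(len(cities), size=2, replace=False)
    cand = swap(tour, i, j) if rng.random() < 0.5 else insert(tour, i, j)
    c = tour_length(cand)
    if c < best:
        tour, best = cand, c
print("best tour length:", round(best, 1))
```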
156
28561
Assessment of Seismic Behavior of Masonry Minarets by Discrete Element Method
Abstract:
Mosques and minarets can be severely damaged by earthquakes. The non-linear behavior of the minarets of the Mihrimah Sultan and Süleymaniye Mosques and of the minaret of St. Sophia is analyzed to investigate the seismic response, damage and failure mechanisms of minarets during earthquakes. The selected minarets have different heights and diameters. The discrete element method was used to create the numerical minaret models, and the analyses were performed using sine waves. Two parameters were used for evaluating the results: the maximum relative dislocation of adjacent drums and the maximum displacement at the top of the minaret, both normalized by the drum diameter. The effects of minaret geometry on seismic behavior were evaluated by comparing the results of the analyses.
155
51597
Secure Proxy Signature Based on Factoring and Discrete Logarithm
Abstract:
A digital signature is an electronic form of signature used by an original signer to sign a specific document. When the original signer is away from the office or travelling, he or she delegates the signing capability to a proxy signer, who then generates a signature on a message on behalf of the original signer. The two parties must be able to authenticate one another and agree on a secret encryption key in order to communicate securely over an unreliable public network, and authenticated key agreement protocols play an important role in building such a secure communication channel. In this paper, we present a secure proxy signature scheme built on an efficient and secure authenticated key agreement protocol based on the factoring and discrete logarithm problems.
154
56985
Investigating the Efficiency of Stratified Double Median Ranked Set Sample for Estimating the Population Mean
Abstract:
The stratified double median ranked set sampling (SDMRSS) method is suggested for estimating the population mean. The SDMRSS is compared with simple random sampling (SRS), stratified simple random sampling (SSRS) and stratified ranked set sampling (SRSS). It is shown that the SDMRSS estimator is an unbiased estimator of the population mean and is more efficient than those based on SRS, SSRS and SRSS. Moreover, with SDMRSS we can increase the efficiency of the mean estimator for a given sample size. SDMRSS is applied to real-life examples, and the results agree with the theoretical findings.
153
81916
High Performance Electrocardiogram Steganography Based on Fast Discrete Cosine Transform
Abstract:
Based on the fast discrete cosine transform (FDCT), the authors present a high-capacity, high-perceived-quality data hiding method for electrocardiogram (ECG) signals. By applying a simple adjustment policy to the one-dimensional (1-D) DCT coefficients, a large volume of secret message can be effectively embedded in an ECG host signal and successfully extracted at the intended receiver. Simulations confirm that the resulting perceived quality is good, while the hiding capacity of the proposed method significantly outperforms that of existing techniques. In addition, the proposed method has a certain degree of robustness. Since the computational complexity is low, the method is feasible for real-time applications.
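A rough Python sketch of DCT-domain hiding in an ECG segment follows; it embeds one bit per block by quantizing a mid-frequency coefficient to an even or odd multiple of a step size, which is a generic quantization-index approach rather than the paper's specific adjusting policy, and the signal, step size and coefficient index are placeholders.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(6)
ecg = np.cumsum(rng.normal(size=1024))     # stand-in for an ECG segment
bits = rng.integers(0, 2, size=1024 // 64)
delta, coeff_idx = 0.5, 10                 # hypothetical step size and coefficient index

blocks = ecg.reshape(-1, 64).copy()
for b, bit in zip(blocks, bits):
    c = dct(b, norm='ortho')
    q = np.floor(c[coeff_idx] / delta)
    if int(q) % 2 != bit:                  # force the parity of the quantization index
        q += 1
    c[coeff_idx] = (q + 0.5) * delta       # place the coefficient mid-cell
    b[:] = idct(c, norm='ortho')
stego = blocks.ravel()

# Extraction: read back the parity of the quantized coefficient per block.
recovered = [int(np.floor(dct(b, norm='ortho')[coeff_idx] / delta)) % 2
             for b in stego.reshape(-1, 64)]
print("bit errors:", int(np.sum(np.array(recovered) != bits)))
```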
152
17539
Using Discrete Event Simulation Approach to Reduce Waiting Times in Computed Tomography Radiology Department
Abstract:
The purpose of this study was to reduce patient waiting times, improve system throughput and improve resource utilization in the radiology department. A discrete event simulation model was developed using Arena simulation software to investigate different alternatives for improving overall system delivery, based on scenarios that add resources, given the linkage between patient waiting times and resource availability. The study revealed that no additional investment is needed to procure an additional scanner; instead, hospital management should deploy managerial tactics to enhance machine utilization and reduce the long waiting times in the department.
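To illustrate the kind of question such a model answers, the Python sketch below runs a minimal queueing simulation of a CT scanner with the open-source SimPy library (standing in for Arena); the arrival and scan-time distributions and the one-versus-two-scanner comparison are hypothetical, not taken from the study.

```python
import random
import simpy

def patient(env, scanner, waits):
    arrival = env.now
    with scanner.request() as req:
        yield req
        waits.append(env.now - arrival)                    # waiting time before the scan
        yield env.timeout(random.expovariate(1 / 12.0))    # ~12 min scan time

def arrivals(env, scanner, waits):
    while True:
        yield env.timeout(random.expovariate(1 / 10.0))    # ~1 patient every 10 min
        env.process(patient(env, scanner, waits))

for capacity in (1, 2):
    random.seed(7)
    env = simpy.Environment()
    scanner = simpy.Resource(env, capacity=capacity)
    waits = []
    env.process(arrivals(env, scanner, waits))
    env.run(until=8 * 60)                                  # one 8-hour shift
    print(f"{capacity} scanner(s): mean wait {sum(waits)/len(waits):.1f} min "
          f"over {len(waits)} patients")
```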
151
17856
Investigation of the Effects of Sampling Frequency on the THD of 3-Phase Inverters Using Space Vector Modulation
Abstract:
This paper presents simulation results on the effect of sampling frequency on the total harmonic distortion (THD) of three-phase inverters using the space vector pulse width modulation (SVPWM) and space vector control (SVC) algorithms. The relationship between the variables was studied using curve-fitting techniques, and it has been shown that, for 50 Hz inverters, there is an exponential relation between the sampling frequency and the THD up to around 8500 Hz, beyond which the behaviour of the model becomes irregular, and a negative exponential relation between the sampling frequency and the marginal improvement in THD. It has also been found that, at the same sampling frequency, the performance of SVPWM is better than that of SVC over most of the frequency range, including the range where the former's behaviour is irregular.
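For reference, the Python sketch below shows how THD is typically computed from a sampled waveform: one fundamental period of a 50 Hz signal with a few harmonics is synthesized at several candidate sampling frequencies, and THD is taken as the ratio of harmonic RMS to fundamental RMS. The harmonic amplitudes are illustrative and unrelated to the inverter model.

```python
import numpy as np

f0 = 50.0
for fs in (2000, 8500, 20000):                    # candidate sampling frequencies
    n = int(fs / f0)                              # samples in one fundamental period
    t = np.arange(n) / fs
    v = (np.sin(2 * np.pi * f0 * t)
         + 0.05 * np.sin(2 * np.pi * 5 * f0 * t)  # 5th harmonic (illustrative level)
         + 0.03 * np.sin(2 * np.pi * 7 * f0 * t)) # 7th harmonic (illustrative level)
    spectrum = np.abs(np.fft.rfft(v)) / n
    fund = spectrum[1]                            # bin 1 = fundamental (one full period captured)
    harmonics = spectrum[2:]
    thd = np.sqrt(np.sum(harmonics ** 2)) / fund
    print(f"fs = {fs:6d} Hz  ->  THD = {100 * thd:.2f} %")
```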
150
50507
Crack Width Analysis of Reinforced Concrete Members under Shrinkage Effect by Pseudo-Discrete Crack Model
Abstract:
Cracking caused by the shrinkage movement of concrete is a serious problem, especially when restraint is provided, and it may cause severe serviceability and durability problems. The existing methods for predicting the crack width of concrete due to shrinkage movement are mainly numerical methods developed under simplified circumstances, and they do not agree with each other. To obtain a more unified prediction method applicable to more sophisticated circumstances, finite element crack width analysis for the shrinkage effect should be developed. However, no existing finite element analysis can predict the crack width of concrete due to shrinkage movement, because of unresolved limitations of conventional finite element analysis. In this paper, a crack width analysis implemented by finite element analysis is presented with a pseudo-discrete crack model, which combines the traditional smeared crack model with a newly proposed crack queuing algorithm. The proposed pseudo-discrete crack model is capable of simulating separate, individual cracks without adopting discrete crack elements, and the improved finite element analysis can successfully simulate the stress redistribution when the concrete cracks, which is crucial for predicting crack width, crack spacing and crack number.
149
35462
Computational Simulations on Stability of Model Predictive Control for Linear Discrete-Time Stochastic Systems
Abstract:
Model predictive control is a kind of optimal feedback control in which control performance over a finite future is optimized with a performance index that has a moving initial time and a moving terminal time. This paper examines the stability of model predictive control for linear discrete-time systems with additive stochastic disturbances. A sufficient condition for the stability of the closed-loop system with model predictive control is derived by means of a linear matrix inequality. The objective of this paper is to show the results of computational simulations in order to verify the validity of the obtained stability condition.
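A minimal Python sketch of the receding-horizon computation described above is given below, using the cvxpy modelling library; the system matrices, horizon, weights, input bound and disturbance level are hypothetical, and no terminal ingredients guaranteeing the stability condition studied in the paper are included.

```python
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q, R = np.eye(2), 0.1 * np.eye(1)
N = 10                                             # prediction horizon
rng = np.random.default_rng(8)

x = np.array([2.0, 0.0])
for k in range(40):
    X = cp.Variable((2, N + 1))
    U = cp.Variable((1, N))
    cost = 0
    cons = [X[:, 0] == x]
    for i in range(N):
        cost += cp.quad_form(X[:, i], Q) + cp.quad_form(U[:, i], R)
        cons += [X[:, i + 1] == A @ X[:, i] + B @ U[:, i],
                 cp.abs(U[:, i]) <= 1.0]           # input constraint
    cp.Problem(cp.Minimize(cost), cons).solve()
    u = U.value[:, 0]                              # apply only the first input
    w = 0.01 * rng.normal(size=2)                  # additive stochastic disturbance
    x = A @ x + B @ u + w
print("state after 40 steps:", np.round(x, 3))
```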
148
63970
Stochastic Model Predictive Control for Linear Discrete-Time Systems with Random Dither Quantization
Abstract:
Recently, feedback control systems using random dither quantizers have been proposed for linear discrete-time systems. However, the constraints imposed on state and control variables have not yet been taken into account for the design of feedback control systems with random dither quantization. Model predictive control is a kind of optimal feedback control in which control performance over a finite future is optimized with a performance index that has a moving initial and terminal time. An important advantage of model predictive control is its ability to handle constraints imposed on state and control variables. Based on the model predictive control approach, the objective of this paper is to present a control method that satisfies probabilistic state constraints for linear discrete-time feedback control systems with random dither quantization. In other words, this paper provides a method for solving the optimal control problems subject to probabilistic state constraints for linear discrete-time feedback control systems with random dither quantization.
147
31450
Fault Diagnosis in Induction Motors Using the Discrete Wavelet Transform
Abstract:
This paper deals with the problem of stator fault diagnosis in induction motors. Using the discrete wavelet transform (DWT) for the analysis of the current Park's vector modulus (CPVM), inter-turn short-circuit faults can be diagnosed. The method is based on the decomposition of the CPVM signal, from which wavelet approximation and detail coefficients are extracted. The energy evaluation of a detail in a known bandwidth permits the definition of a fault severity factor (FSF). The method has been tested through the simulation of an induction motor using a mathematical model based on the winding-function approach. Simulation as well as experimental results show the effectiveness of the method.
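A toy Python sketch of the energy-based indicator is given below: a DWT of the Park's vector modulus, the energy of one detail band, and a fault severity factor formed as the ratio to a healthy reference. The signals are synthetic stand-ins, and the wavelet, level and band are arbitrary choices.

```python
import numpy as np
import pywt

fs = 10_000
t = np.arange(fs) / fs
healthy = np.abs(np.sin(2 * np.pi * 50 * t))                 # idealized CPVM
faulty = healthy + 0.05 * np.sin(2 * np.pi * 100 * t)        # extra oscillation from a fault

def band_energy(signal, level=5, band=3):
    details = pywt.wavedec(signal, 'db4', level=level)[1:]    # detail coeffs d_level .. d_1
    return float(np.sum(details[band] ** 2))

E_ref = band_energy(healthy)                                  # healthy reference energy
E_now = band_energy(faulty)
fsf = E_now / E_ref                                           # fault severity factor
print(f"FSF = {fsf:.2f}  (values near 1 indicate a healthy machine)")
```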
146
76811
Different Sampling Schemes for Semi-Parametric Frailty Model
Abstract:
The frailty model is a survival model that takes unobserved heterogeneity into account when exploring the relationship between the survival of an individual and several covariates. In recent years, proposed survival models have become more complex, which causes convergence problems, especially in large data sets. Therefore, selecting a sample from such big data sets is very important for correct parameter estimation, and several authors in the sampling literature have defined new sampling schemes for this purpose. To this end, we examine the effect of the sampling design in the semi-parametric frailty model. We conducted a simulation study in R to estimate the parameters of the semi-parametric frailty model for different sample sizes and censoring rates under classical simple random sampling and ranked set sampling schemes. In the simulation study, we used as the population a data set recording 17,260 male civil servants aged 40-64 years with complete 10-year follow-up. Time to death from coronary heart disease is treated as the survival time, and age and systolic blood pressure are used as covariates. We selected 1000 samples from the population using the different sampling schemes and estimated the parameters. From the simulation study, we concluded that the ranked set sampling design performs better than simple random sampling in each scenario.
144
105620
Morphology Operation and Discrete Wavelet Transform for Blood Vessels Segmentation in Retina Fundus
Abstract:
Vessel segmentation of the retinal fundus is important in the biomedical sciences for diagnosing ailments of the eye, since segmentation helps medical experts assess the state of a retinal fundus image. In this study, we therefore designed MATLAB software that segments the retinal blood vessels in retinal fundus images. There are two main steps in the segmentation process. The first step is image preprocessing, which aims to improve the quality of the image so that it can be optimally segmented. The second step is image segmentation, which extracts the retinal blood vessels from the eye fundus image. The segmentation methods analyzed in this study are morphology operations, the discrete wavelet transform and their combination. The data used in this project comprise 40 retinal images and 40 manually segmented images. Over several testing scenarios, the average accuracy of the morphology operation method is 88.46% and that of the discrete wavelet transform is 89.28%; by combining the two methods, the average accuracy increases to 89.53%. The result of this study is an image processing system that can segment the blood vessels in the retinal fundus with high accuracy and low computation time.
143
41630
Optimal Linear Quadratic Digital Tracker for the Discrete-Time Proper System with an Unknown Disturbance
Abstract:
In this paper, we first construct a new state and disturbance estimator using a discrete-time proportional-plus-integral observer to estimate the system state and the unknown external disturbance for a discrete-time system with an input-to-output direct-feedthrough term. Then, the generalized optimal linear quadratic digital tracker design is applied to construct a proportional-plus-integral observer-based tracker for the system with an unknown external disturbance so that a desired tracking performance is achieved. Finally, a numerical simulation is given to demonstrate the effectiveness of this new application of our proposed approach.
142
27296
Some Generalized Multivariate Estimators for Population Mean under Multi Phase Stratified Systematic Sampling
Abstract:
Generalized multivariate ratio and regression type estimators for the population mean are suggested under multi-phase stratified systematic sampling (MPSSS) using multi-auxiliary information. The estimators are developed for two different situations regarding the availability of auxiliary information, and expressions for their bias and mean square error (MSE) are derived. Special cases of the suggested estimators are also discussed, and a simulation study is conducted to observe the performance of the estimators.
141
15719
Bridging Stress Modeling of Composite Materials Reinforced by Fiber Using Discrete Element Method
Abstract:
The problem of toughening in brittle materials reinforced by fibers is complex, involving all the mechanical properties of fibers, matrix, the fiber/matrix interface, as well as the geometry of the fiber. An appropriate method applicable to the simulation and analysis of toughening is essential. In this work, we performed simulations and analysis of toughening in brittle matrix reinforced by randomly distributed fibers by means of the discrete elements method. At first, we put forward a mechanical model of the contribution of random fibers to the toughening of composite. Then with numerical programming, we investigated the stress, damage and bridging force in the composite material when a crack appeared in the brittle matrix. From the results obtained, we conclude that: (i) fibers with high strength and low elasticity modulus benefit toughening; (ii) fibers with relatively high elastic modulus compared to the matrix may result in considerable matrix damage (spalling effect); (iii) employment of high-strength synthetic fiber is a good option. The present work makes it possible to optimize the parameters in order to produce advanced ceramic with desired performance. We believe combination of the discrete element method (DEM) with the finite element method (FEM) can increase the versatility and efficiency of the software developed.
140
78538
Numerical Simulations on Feasibility of Stochastic Model Predictive Control for Linear Discrete-Time Systems with Random Dither Quantization
Abstract:
The random dither quantization method enables us to achieve much better performance than the simple uniform quantization method in the design of quantized control systems. Motivated by this fact, the stochastic model predictive control method, in which a performance index is minimized subject to probabilistic constraints imposed on the state variables, has been proposed for linear feedback control systems with random dither quantization. In other words, a method for solving optimal control problems subject to probabilistic state constraints for linear discrete-time control systems with random dither quantization has already been established. To the best of our knowledge, however, the feasibility of this kind of optimal control problem has not yet been studied. Our objective in this paper is to investigate the feasibility of stochastic model predictive control problems for linear discrete-time control systems with random dither quantization. To this end, we provide numerical simulation results that verify this feasibility.
139
66150
Distributed Control Strategy for Dispersed Energy Storage Units in the DC Microgrid Based on Discrete Consensus
Abstract:
SOC (state of charge) based droop control has limitations in sharing the load power among different energy storage units due to the line impedance. In this paper, a distributed control strategy for dispersed energy storage units in a DC microgrid based on discrete consensus is proposed. First, a sparse communication network is built so that local controllers can exchange voltage, current and SOC information with their neighbors. The average grid voltage can then be evaluated to compensate the voltage offset introduced by droop control, and a target virtual resistance fulfilling this requirement can be dynamically calculated to distribute the load power according to the SOC of the energy storage units. Next, the stability of the whole system and the influence of communication delay are analyzed. It can be concluded that this control strategy improves robustness and flexibility, since no central controller is needed. Finally, a model of a DC microgrid with dispersed energy storage units and loads is built, the discrete distributed algorithm is established and the communication protocol is developed. Co-simulation between Matlab/Simulink and JADE (Java agent development framework) has verified the effectiveness of the proposed control strategy.
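A minimal sketch of the discrete consensus step that each local controller could run to estimate the network-wide average voltage is given below; the four-unit ring topology, weights and voltage values are illustrative assumptions and not the paper's network or protocol.

# Minimal sketch of discrete consensus averaging over a sparse communication graph.
# Each unit exchanges values only with its neighbours, yet every local estimate
# converges to the network-wide average voltage.
import numpy as np

# Doubly stochastic weight matrix for a 4-unit ring network (illustrative),
# so the consensus value equals the arithmetic mean of the initial values.
W = np.array([
    [0.5,  0.25, 0.0,  0.25],
    [0.25, 0.5,  0.25, 0.0 ],
    [0.0,  0.25, 0.5,  0.25],
    [0.25, 0.0,  0.25, 0.5 ],
])

v = np.array([47.8, 48.3, 47.5, 48.6])   # locally measured bus voltages (V), illustrative
est = v.copy()
for _ in range(50):                      # discrete consensus iterations
    est = W @ est

print("local estimates:", np.round(est, 3))
print("true average   :", round(v.mean(), 3))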
138
19429
A Comparison between Empirical and Theoretical OC Curves Related to Acceptance Sampling for Attributes
Abstract:
Many companies use the technique known as acceptance sampling, which consists of inspecting products and making decisions about them: according to the results derived from this method, the company decides to accept or reject the product. Acceptance sampling can also be applied to technology management, since it can be seen as a tool to improve the design, planning, operation and control of technological products. Theoretical operating characteristic (OC) curves are widely used when dealing with acceptance sampling. In this paper, we carry out Monte Carlo simulation studies to numerically compare the empirical OC curves derived from simulated results with the customary theoretical OC curves. We analyze various possible scenarios so that the differences between the empirical and theoretical curves can be observed under different situations.
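As a hedged sketch of the kind of comparison described (the plan parameters and number of replications are illustrative, not those of the paper), the following Python code computes the theoretical OC curve of a single sampling plan from the binomial distribution and an empirical curve from Monte Carlo simulation.

# Minimal sketch: theoretical vs. Monte Carlo (empirical) OC curve for a single
# sampling plan with sample size n = 50 and acceptance number c = 2 (illustrative).
import numpy as np
from scipy.stats import binom

n, c = 50, 2
p_values = np.linspace(0.001, 0.15, 30)     # candidate lot fractions defective
rng = np.random.default_rng(0)

theoretical = binom.cdf(c, n, p_values)      # P(accept) = P(number of defectives <= c)

reps = 5000
empirical = np.array([
    np.mean(rng.binomial(n, p, size=reps) <= c) for p in p_values
])

for p, th, em in zip(p_values[::6], theoretical[::6], empirical[::6]):
    print(f"p = {p:.3f}  theoretical Pa = {th:.3f}  empirical Pa = {em:.3f}")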
137
20238
Discrete State Prediction Algorithm Design with Self Performance Enhancement Capacity
Abstract:
This work presents a discrete quantitative state prediction algorithm with intelligent behavior that allows it to self-improve some aspects of its performance. The distinguishing feature of this algorithm is its capacity to self-rectify the prediction strategy before the final decision. The auto-rectification mechanism is based on two parallel mathematical models. On the one hand, the algorithm predicts the next state from an event transition matrix updated after each observation. On the other hand, the algorithm extracts the trend of its residuals with a linear regression over the historical residual data points in order to rectify the first decision if needed. For a normal distribution, the interaction between the two models allows the algorithm to self-optimize its performance and thus make better predictions. A designed key performance indicator, computed during a Monte Carlo simulation, shows the advantages of the proposed approach compared with a traditional one.
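The sketch below illustrates the two-model idea in simplified form only: a first prediction from an updated transition matrix, then a correction based on a linear trend fitted to recent residuals. The state coding, window length and update rule are assumptions and do not reproduce the paper's exact algorithm.

# Minimal sketch of the two parallel models: transition-matrix prediction
# followed by a residual-trend correction. All quantities are illustrative.
import numpy as np

n_states = 3
counts = np.ones((n_states, n_states))        # Laplace-smoothed transition counts
residuals = []                                 # history of (observed - predicted)

def predict(prev_state):
    probs = counts[prev_state] / counts[prev_state].sum()
    first_guess = float(np.dot(probs, np.arange(n_states)))   # expected next state
    if len(residuals) >= 5:                    # rectify using the residual trend
        t = np.arange(len(residuals))
        slope, intercept = np.polyfit(t, residuals, 1)
        first_guess += slope * len(residuals) + intercept
    return first_guess

def update(prev_state, new_state, prediction):
    counts[prev_state, new_state] += 1         # update the event transition matrix
    residuals.append(new_state - prediction)

# toy usage on a short synthetic state sequence
seq = [0, 1, 2, 2, 1, 0, 1, 2, 2, 1]
for prev, nxt in zip(seq[:-1], seq[1:]):
    pred = predict(prev)
    update(prev, nxt, pred)
print("last prediction:", round(pred, 2), "actual:", seq[-1])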
136
51477
Sampling Effects on Secondary Voltage Control of Microgrids Based on Network of Multiagent
Abstract:
This paper studies a secondary voltage control framework for microgrids based on consensus over a multiagent communication network. The proposed control is designed for a communication network with one-way links, modeled by a directed graph. Sampling is considered as the communication constraint among the distributed generators in the microgrid. To analyze the sampling effects on the secondary voltage control of the microgrid, a sufficient condition for this problem is established in terms of a linear matrix inequality (LMI) by using Lyapunov theory and some mathematical techniques. Finally, simulation results are given to illustrate the necessity of considering the sampling effects on the secondary voltage control of microgrids.
135
92744
A Discrete Logit Survival Model with a Smooth Baseline Hazard for Age at First Alcohol Intake among Students at Tertiary Institutions in Thohoyandou, South Africa
Abstract:
We employ a discrete logit survival model to investigate the risk factors for early alcohol intake among students at two tertiary institutions in Thohoyandou, South Africa. Data were collected from a sample of 744 students using a self-administered questionnaire. Significant covariates were arrived at through a regularization algorithm implemented using the glmmLasso package. The tuning parameter was determined using a five-fold cross-validation algorithm. The baseline hazard was modelled as a smooth function of time through the use of spline functions. The results show that the hazard of initial alcohol intake peaks at the age of about 16 years and that at any given time, being of a male gender, prior use of other drugs, having drinking peers, having experienced negative life events and physical abuse are associated with a higher risk of alcohol intake debut.
134
118023
Discrete Estimation of Spectral Density for Alpha Stable Signals Observed with an Additive Error
Abstract:
This paper addresses two difficulties encountered in practice when observing a continuous-time process. The first is that we cannot observe the process over a whole time interval; we only take discrete observations. The second is that the process is frequently observed with a constant additive error. It is important to give an estimator of the spectral density of such a process that takes into account the additive observation error and the choice of the discrete observation times. In this work, we propose an estimator based on spectral smoothing of the periodogram by the polynomial Jackson kernel, which reduces the additive error. In order to address the aliasing phenomenon, this estimator is constructed from observations taken at well-chosen times so as to restrict the estimator to the domain where the spectral density is non-zero. We show that the proposed estimator is asymptotically unbiased and consistent. We thus obtain an estimate that resolves the two difficulties concerning the choice of the observation instants of a continuous-time process and observations affected by a constant error.
133
45295
The Variable Sampling Interval Xbar Chart versus the Double Sampling Xbar Chart
Abstract:
The Shewhart Xbar control chart is a useful process monitoring tool in manufacturing industries to detect the presence of assignable causes. However, it is insensitive in detecting small process shifts. To circumvent this problem, adaptive control charts are suggested. An adaptive chart enables at least one of the chart’s parameters to be adjusted to increase the chart’s sensitivity. Two common adaptive charts that exist in the literature are the double sampling (DS) Xbar and variable sampling interval (VSI) Xbar charts. This paper compares the performances of the DS and VSI Xbar charts, based on the average time to signal (ATS) criterion. The ATS profiles of the DS Xbar and VSI Xbar charts are obtained using the Mathematica and Statistical Analysis System (SAS) programs, respectively. The results show that the VSI Xbar chart is generally superior to the DS Xbar chart.
132
34239
Investigating the Shear Behaviour of Fouled Ballast Using Discrete Element Modelling
Abstract:
For several hundred years, the design of railway tracks has remained practically unchanged. Traditionally, rail tracks are placed on a ballast layer for several reasons, including economy, rapid drainage and high load-bearing capacity. The primary function of ballast is to distribute dynamic track loads to the sub-ballast and subgrade layers, while also providing lateral resistance and allowing rapid drainage. Under repeated train loads, the ballast becomes fouled due to ballast degradation and the intrusion of fines, which adversely affects the strength and deformation behaviour of the ballast. This paper presents the use of the three-dimensional discrete element method (DEM) in studying the shear behaviour of fouled ballast subjected to direct shear loading. Irregularly shaped ballast particles were modelled by grouping many spherical balls together in appropriate sizes to simulate representative ballast aggregates. Fouled ballast was modelled by injecting a specified number of miniature spherical particles into the void spaces. The DEM simulation highlights that the peak shear stress of the ballast assembly decreases and the dilation of the fouled ballast increases with an increasing level of fouling. Additionally, the distributions of the contact force chains and particle displacement vectors were captured during the shearing process, explaining the formation of the shear band and the evolution of the volumetric change of the fouled ballast.
131
36267
A Coupled Extended-Finite-Discrete Element Method: On the Different Contact Schemes between Continua and Discontinua
Abstract:
Recently, advanced geotechnical engineering problems related to soil movement, particle loss, and modeling of local failure (i.e. discontinua) as well as modeling the in-contact structures (i.e. continua) are of the great interest among researchers. The aim of this research is to meet the requirements with respect to the modeling of the above-mentioned two different domains simultaneously. To this end, a coupled numerical method is introduced based on Discrete Element Method (DEM) and eXtended-Finite Element Method (X-FEM). In the coupled procedure, DEM is employed to capture the interactions and relative movements of soil particles as discontinua, while X-FEM is utilized to model in-contact structures as continua, which may consist of different types of discontinuities. For verification purposes, the new coupled approach is utilized to examine benchmark problems including different contacts between/within continua and discontinua. Results are validated by comparison with those of existing analytical and numerical solutions. This study proves that extended-finite-discrete element method can be used to robustly analyze not only contact problems, but also other types of discontinuities in continua such as (i) crack formations and propagations, (ii) voids and bimaterial interfaces, and (iii) combination of previous cases. In essence, the proposed method can be used vastly in advanced soil-structure interaction problems to investigate the micro and macro behaviour of the surrounding soil and the response of the embedded structure that contains discontinuities.
130
22006
Stator Short-Circuits Fault Diagnosis in Induction Motors Using Extended Park’s Vector Approach through the Discrete Wavelet Transform
Abstract:
This paper deals with the problem of stator fault diagnosis in induction motors. Using the discrete wavelet transform (DWT) for the analysis of the current Park’s vector modulus (CPVM), the diagnosis of inter-turn short-circuit faults can be achieved. The method is based on the decomposition of the CPVM signal, from which the wavelet approximation and detail coefficients are extracted. The energy evaluation of a detail covering a known frequency band makes it possible to define a fault severity factor (FSF). The method has been tested through the simulation of an induction motor using a mathematical model based on the winding-function approach. Both simulation and experimental results show the effectiveness of the method.
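The signal-processing chain described above can be sketched as follows in Python; the synthetic CPVM signal, wavelet family, decomposition level, chosen detail band and reference healthy-machine energy are all assumptions made purely for illustration, and the PyWavelets package is required.

# Minimal sketch: DWT decomposition of a Park's vector modulus signal, energy of
# one detail band, and a normalised fault severity factor (all values illustrative).
import numpy as np
import pywt

fs = 10_000                                    # sampling frequency (Hz), assumed
t = np.arange(0, 1, 1 / fs)
cpvm = 1.0 + 0.02 * np.sin(2 * np.pi * 100 * t)    # synthetic Park's vector modulus

coeffs = pywt.wavedec(cpvm, "db8", level=5)    # [cA5, cD5, cD4, cD3, cD2, cD1]
d5 = coeffs[1]                                 # detail band chosen for illustration

energy_d5 = float(np.sum(d5 ** 2))
energy_healthy = 1e-4                          # reference energy of a healthy machine (assumed)
fsf = energy_d5 / energy_healthy               # illustrative definition of a severity factor
print(f"detail-band energy = {energy_d5:.4e}, FSF = {fsf:.2f}")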
129
81818
Chaotic Semiflows with General Acting Topological Monoids
Authors:
Abstract:
A semiflow is a triple consisting of a Hausdorff topological space $X$, a commutative topological monoid $T$ and a continuous monoid action of $T$ on $X$. The acting monoid $T$ is usually either the discrete monoid $\N_0$ of nonnegative integers (in which case the semiflow can be defined as a pair $(X,f)$ consisting of a phase space $X$ and a continuous function $f:X\to X$), or the monoid $\R_+$ of nonnegative real numbers (the so-called one-parameter monoid). However, it turns out that there are real-life situations where it is useful to consider acting monoids that are a combination of discrete and continuous monoids. That happens, for example, when we are observing a certain dynamical system at discrete moments, but after some time realize that it would be beneficial to continue our observations in real time. The acting monoid in that case would be $T=\{0, t_0, 2t_0, \dots, (n-1)t_0\} \cup [nt_0,\infty)$ with the operation and topology induced from the real numbers. This partly explains the motivation for the level of generality pursued in our research. We introduce the PSP monoids, which include all but ``pathological'' monoids, and most of our statements hold for them. The topics of our presentation are some recent results about chaos-related properties in semiflows, and the indecomposability and sensitivity of semiflows in the described general context.
128
74116
The Impact of Inpatient New Boarding Policy on Emergency Department Overcrowding: A Discrete Event Simulation Study
Abstract:
In this study, we investigate the effect of a new boarding policy, short stay, on overcrowding in the emergency department (ED). The decision variable is the number of short-stay beds for the least-acuity ED patients. The performance measures used are the national emergency department overcrowding score (NEDOCS) and the ED retention rate (the percentage of patients who stay in the ED for more than 48 hours in one month). Discrete event simulation (DES) is used as an analysis tool to evaluate the strategy, and the common random number (CRN) technique is applied to enhance the simulation precision. The DES model was based on a census of six months of patients treated in the ED of the National Taiwan University Hospital Yunlin Branch. Our results show that the new short-stay boarding policy significantly impacts both the NEDOCS and the ED retention rate when the number of short-stay beds is more than three.
127
68071
Coarse-Grained Computational Fluid Dynamics-Discrete Element Method Modelling of the Multiphase Flow in Hydrocyclones
Abstract:
Hydrocyclones are widely used to classify particles by size in industries such as mineral processing and chemical processing. The particles to be handled usually have a broad range of size distributions and sometimes density distributions, which has to be properly considered and causes challenges in the modelling of hydrocyclones. The combined approach of Computational Fluid Dynamics (CFD) and the Discrete Element Method (DEM) offers a convenient way to model particle size and density distributions. However, its direct application to hydrocyclones is computationally prohibitive because billions of particles are involved. In this work, a CFD-DEM model based on the concept of the coarse-grained (CG) model is developed to model the solid-fluid flow in a hydrocyclone. The DEM is used to model the motion of discrete particles by applying Newton’s laws of motion. Here, a particle assembly containing a certain number of particles with the same properties is treated as one CG particle. The CFD is used to model the liquid flow by numerically solving the locally averaged Navier-Stokes equations, together with the Volume of Fluid (VOF) model to capture the air core. The results are analyzed in terms of fluid and solid flow structures, and particle-fluid, particle-particle and particle-wall interaction forces. Furthermore, the calculated separation performance is compared with measurements. The results obtained from the present study indicate that this approach can offer an alternative way to examine the flow and performance of hydrocyclones.
126
59289
Artificial Neural Network in Predicting the Soil Response in the Discrete Element Method Simulation
Abstract:
This paper attempts to bridge soil properties and the mechanical response of soil in discrete element method (DEM) simulations. An artificial neural network (ANN) was therefore adopted, aiming to reproduce the stress-strain-volumetric response when the soil properties are given. 31 biaxial shearing tests with varying soil parameters (e.g., initial void ratio and interparticle friction coefficient) were generated using DEM simulations. Based on these training data, a three-layer neural network was established which can output the entire stress-strain-volumetric curve during the shearing process from the input soil parameters. Beyond the training data, two additional sets of data were generated to examine the validity of the network, and the stress-strain-volumetric curves for both cases were well reproduced using this network. Overall, the ANN was found promising in predicting the soil behavior and reducing repetitive simulation work.
125
51438
Variational Evolutionary Splines for Solving a Model of Temporomandibular Disorders
Abstract:
The aim of this work is to model the occlusion of a person with temporomandibular disorders as an evolutionary equation and to approximate its solution by constructing and characterizing discrete variational splines. To formulate the problem, certain boundary conditions have been considered. After establishing the existence and uniqueness of the solution of such a problem, a convergence result for a discrete variational evolutionary spline is shown. A stress analysis of the occlusion of a human jaw with temporomandibular disorders by finite elements is carried out in FreeFem++ in order to demonstrate the validity of the presented method.
124
49404
Blind Watermarking Using Discrete Wavelet Transform Algorithm with Patchwork
Abstract:
This study is about blind watermarking of images of different categories and properties using two algorithms, namely the Discrete Wavelet Transform and the Patchwork algorithm. A program is created to perform watermark embedding, extraction and evaluation. The evaluation is based on three watermarking criteria: image quality degradation, perceptual transparency and security. Image quality is measured by comparing the properties of the original image with those of the processed one. Perceptual transparency is measured through a visual-inspection survey. Security is measured by implementing geometrical and non-geometrical attacks with pass-or-fail testing. The values used to measure these criteria are mostly based on the Mean Squared Error (MSE) and the Peak Signal to Noise Ratio (PSNR). The results are based on statistical methods used to interpret and collect data, such as averaging, the z-test and the survey. The study concluded that the combined DWT and Patchwork algorithms were less efficient and less capable of watermarking than the DWT algorithm alone.
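The two quality measures cited above are standard and can be computed as in the following Python sketch; the synthetic 8-bit image and the size of the perturbation are assumptions used only to make the example runnable.

# Minimal sketch of MSE and PSNR between an original and a watermarked image.
import numpy as np

def mse(original, processed):
    diff = original.astype(np.float64) - processed.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original, processed, peak=255.0):
    m = mse(original, processed)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

# toy usage with a synthetic image and a slightly perturbed copy
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
marked = np.clip(img.astype(int) + rng.integers(-2, 3, size=img.shape), 0, 255).astype(np.uint8)
print(f"MSE = {mse(img, marked):.3f}, PSNR = {psnr(img, marked):.2f} dB")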
123
109363
Effects of Tool State on the Output Parameters of Front Milling Using Discrete Wavelet Transform
Abstract:
The state of the cutting tool is an important factor to consider during machining in order to achieve good surface quality. The vibration generated during material cutting can also directly affect the surface quality and the life of the cutting tool. In this work, the effect of mechanical broken failure (MBF) on carbide insert tools during face milling of AISI 304 stainless steel was evaluated using three levels of feed rate and two spindle speeds for each tool condition: three carbide inserts with perfect geometry and three carbide inserts with MBF. The axial and radial depths remained constant. The cutting forces were determined through a sensing system consisting of a piezoelectric dynamometer and a data acquisition system. The Discrete Wavelet Transform was used to separate the static part of the force and vibration signals. The roughness of the machined surface was analyzed for each machining condition. The MBF of the tool increased the intensity and force of the vibration and worsened the roughness factors.
122
25870
Dynamical Systems and Fibonacci Numbers
Abstract:
The concept of a dynamical system is a mathematical formalization for any fixed rule that describes the time dependence of a point’s position in its ambient space, e.g., the pendulum of a clock, the number of fish each spring in a lake, or the number of rabbits each spring in an enclosure. Dynamical system theory is used to describe such complex behaviour: a dynamical system defined by differential equations is called a continuous dynamical system, while one defined by difference equations is called a discrete dynamical system. The concept of a dynamical system has its origin in Newtonian mechanics.
121
86638
Modelling of Relocation and Battery Autonomy Problem on Electric Cars Sharing Dynamic by Using Discrete Event Simulation and Petri Net
Abstract:
Electric car sharing systems, as an ecological mode of transportation, are increasing worldwide. The complexity of managing electric car sharing systems, especially one-way trips and battery autonomy, has a direct influence on the supply and demand of the system. One must be able to precisely model the demand and supply of these systems to better operate electric car sharing and to estimate its effect on mobility management and the accessibility it provides in urban areas. In this context, our work focuses on developing a performance optimization model of the system based on discrete event simulation and stochastic Petri nets. The objective is to search for optimal decisions and management parameters of the system in order to best fulfil demand while minimizing undesirable situations. In this paper, we present a new model of electric car sharing with relocation based on a monitoring system. The proposed approach also helps to specify the influence of the battery charging level on the behaviour of the system, as an important decision parameter of this complex and dynamic system.
120
47802
Implementation of Invisible Digital Watermarking
Abstract:
Over the last decade, multimedia applications have developed rapidly. With the communication field advancing at an ever faster pace, it is necessary to protect data during transmission. Thus, the security of multimedia content becomes a vital issue, and there is a need to protect digital content against malfunctions. Digital watermarking provides a solution for the copyright protection and authentication of data in the network. In multimedia applications, embedded watermarks should be robust and imperceptible. To improve robustness, the discrete wavelet transform is used. Both the encoding and extraction algorithms can be implemented in MATLAB R2012a. The watermarking algorithm operates in the discrete wavelet transform (DWT) domain of the digital image, and the hardware implementation can be carried out on a Xilinx-based FPGA.
119
26867
Temporal Variation of PM10-Bound Benzo(a)Pyrene Concentration in an Urban and a Rural Site of Northwestern Hungary
Abstract:
The main objective of this study was to assess the annual concentration and seasonal variation of benzo(a)pyrene (BaP) associated with PM10 in an urban site of Győr and in a rural site of Sarród in the sampling period of 2008–2012. A total of 280 PM10 aerosol samples were collected in each sampling site and analyzed for BaP by gas chromatography method. The BaP concentrations ranged from undetected to 8 ng/m3 with the mean value of 1.01 ng/m3 in the sampling site of Győr, and from undetected to 4.07 ng/m3 with the mean value of 0.52 ng/m3 in the sampling site of Sarród, respectively. Relatively higher concentrations of BaP were detected in samples collected in both sampling sites in the heating seasons compared with non-heating periods. The annual mean BaP concentrations were comparable with the published data of different other Hungarian sites.
118
31946
A Double Acceptance Sampling Plan for Truncated Life Test Having Exponentiated Transmuted Weibull Distribution
Abstract:
The main purpose of this paper is to design a double acceptance sampling plan under the time truncated life test when the product lifetime follows an exponentiated transmuted Weibull distribution. Here, the motive is to meet both the consumer’s risk and producer’s risk simultaneously at the specified quality levels, while the termination time is specified. A comparison between the results of the double and single acceptance sampling plans is conducted. We demonstrate the applicability of our results to real data sets.
117
24714
A Comparative Study on Sampling Techniques of Polynomial Regression Model Based Stochastic Free Vibration of Composite Plates
Abstract:
This paper presents an exhaustive comparative investigation of sampling techniques for polynomial regression model based stochastic natural frequency analysis of composite plates. Both individual and combined variations of the input parameters are considered to map the computational time and accuracy of each modelling technique. The finite element formulation of composites is capable of dealing with both correlated and uncorrelated random input variables such as fibre parameters and material properties. The results obtained by polynomial regression (PR) using different sampling techniques are compared, and the suitability of sampling techniques such as 2k factorial design, central composite design, A-optimal, I-optimal and D-optimal designs, Taguchi’s orthogonal array design, Box-Behnken design, Latin hypercube sampling and the Sobol sequence is illustrated. A statistical analysis of the first three natural frequencies is presented to compare the results and their performance.
116
79636
Comparative Study of Estimators of Population Means in Two Phase Sampling in the Presence of Non-Response
Abstract:
A comparative study of estimators of population means in two-phase sampling in the presence of non-response is carried out for the case where the population means of the auxiliary variable(s) are unknown and the information on the study variable y, as well as on the auxiliary variable(s), is incomplete. Three real data sets, on university students, hospital records and unemployment, are used to compare all the available two-phase sampling techniques in the presence of non-response with the newly generalized ratio estimators.
115
8577
Application of the Discrete-Event Simulation When Optimizing of Business Processes in Trading Companies
Abstract:
The optimization of business processes in trading companies is reviewed in this report. A “Wholesale Customer Order Handling Process” business process model applicable to small and medium businesses is presented. It is proposed to apply an algorithm for the automation of customer order processing, which will significantly reduce labor costs and time expenditures and increase the profitability of companies. The optimized business process is an element of the information system for accounting the activity of a spare parts trading network. The considered algorithm may also find application elsewhere in the trading industry.
114
58045
Robust H∞ State Feedback Control for Discrete Time T-S Fuzzy Systems Based on Fuzzy Lyapunov Function Approach
Authors:
Abstract:
This paper addresses the problem of robust H∞ state feedback control for discrete-time nonlinear systems represented by Takagi-Sugeno fuzzy systems. Based on a fuzzy Lyapunov function, a condition, expressed in the form of Linear Matrix Inequalities (LMIs), guarantees the H∞ performance of the T-S fuzzy system with uncertainties. Compared with the recent literature, this approach yields a more relaxed condition. Finally, an example is given to illustrate the proposed result.
113
3426
Numerical Treatment of Block Method for the Solution of Ordinary Differential Equations
Authors:
Abstract:
A discrete linear multistep block method of uniform order for the solution of first-order Initial Value Problems (IVPs) in Ordinary Differential Equations (ODEs) is presented in this paper. The approach of interpolation and collocation approximation is adopted in the derivation of the method, which is then applied to first-order ordinary differential equations with associated initial conditions. The continuous hybrid formulation enables us to differentiate and evaluate at some grid and off-grid points to obtain four discrete schemes, which are used in block form for parallel or sequential solution of the problems. Furthermore, the stability and efficiency of the block method are tested on ordinary differential equations, and the results obtained compare favorably with the exact solution.
112
78773
Estimation of Population Mean under Random Non-Response in Two-Phase Successive Sampling
Abstract:
In this paper, we consider the problem of estimating the population mean on the current (second) occasion in the presence of random non-response in two-occasion successive sampling under a two-phase set-up. Modified exponential type estimators are proposed, and their properties are studied under the assumption that the number of sampling units follows a distribution arising from the random non-response situations. The performances of the proposed estimators are compared with linear combinations of two estimators, (a) the sample mean estimator for the fresh sample and (b) the ratio estimator for the matched sample, under complete response. The results are demonstrated through empirical studies which show the effectiveness of the proposed estimators, and suitable recommendations are made to survey practitioners.
111
52891
Fractal Analysis of Some Bifurcations of Discrete Dynamical Systems in Higher Dimensions
Abstract:
The main purpose of this paper is to study the box dimension as fractal property of bifurcations of discrete dynamical systems in higher dimensions. The paper contains the fractal analysis of the orbits near the hyperbolic and non-hyperbolic fixed points in discrete dynamical systems. It is already known that in one-dimensional case the orbit near the hyperbolic fixed point has the box dimension equal to zero. On the other hand, the orbit near the non-hyperbolic fixed point has strictly positive box dimension which is connected to the non-degeneracy condition of certain bifurcation. One of the main results in this paper is the generalisation of results about box dimension near the hyperbolic and non-hyperbolic fixed points to higher dimensions. In the process of determining box dimension, the restriction of systems to stable, unstable and center manifolds, Lipschitz property of box dimension and the notion of projective box dimension are used. The analysis of the bifurcations in higher dimensions with one multiplier on the unit circle is done by using the normal forms on one-dimensional center manifolds. This specific change in box dimension of an orbit at the moment of bifurcation has already been explored for some bifurcations in one and two dimensions. It was shown that specific values of box dimension are connected to appropriate bifurcations such as fold, flip, cusp or Neimark-Sacker bifurcation. This paper further explores this connection of box dimension as fractal property to some specific bifurcations in higher dimensions, such as fold-flip and flip-Neimark-Sacker. Furthermore, the application of the results to the unit time map of continuous dynamical system near hyperbolic and non-hyperbolic singularities is presented. In that way, box dimensions which are specific for certain bifurcations of continuous systems can be obtained. The approach to bifurcation analysis by using the box dimension as specific fractal property of orbits can lead to better understanding of bifurcation phenomenon. It could also be useful in detecting the existence or nonexistence of bifurcations of discrete and continuous dynamical systems.
110
48950
Spatially Random Sampling for Retail Food Risk Factors Study
Authors:
Abstract:
In 2013 and 2014, the U.S. Food and Drug Administration (FDA) collected data from selected fast food restaurants and full service restaurants to track changes in the occurrence of foodborne illness risk factors. This paper discusses how we customized the spatial random sampling method by considering the financial position and availability of FDA resources, and how we enriched the restaurant data with location information. Location information of restaurants provides an opportunity for quantitatively determining random sampling within non-governmental units (e.g., 240 kilometers around each data collector). Spatial analysis could also optimize data collectors’ work plans and resource allocation. A spatial analytics and processing platform helped us handle the challenges of spatial random sampling. Our method fits the FDA’s ability to pinpoint features of foodservice establishments and reduced both the time and expense of data collection.
109
77680
Sampling and Characterization of Fines Created during the Shredding of Non Hazardous Waste
Abstract:
Fines are heterogeneous residues created during the shredding of non-hazardous waste. They are one of the most challenging issues faced by recyclers, because they are at present considered non-sortable and non-reusable mixtures destined for landfill. However, fines contain a large amount of recoverable materials that could be recycled or reused for the production of solid recovered fuel. This research is conducted within a project named ValoRABES. The aim is to characterize fines and establish a suitable sorting process in order to extract the materials contained in the mixture and define their suitable recovery paths. This paper highlights the importance of good sampling and proposes a sampling methodology for fines characterization. First results of the characterization are also presented.
108
33624
Efficient DCT Architectures
Abstract:
This paper presents area- and delay-efficient architectures for the implementation of the one-dimensional and two-dimensional discrete cosine transform (DCT). Different lengths (4, 8, 16, and 32) are supported. DCT blocks are used in different video coding standards for image compression. The 2D-DCT calculation exploits the 2D-DCT separability property, such that the whole architecture is divided into two 1D-DCT calculations connected by a transpose buffer. Based on the existing 1D-DCT architecture, two different types of 2D-DCT architectures, folded and parallel, are implemented, and both structures use the same transpose buffer. The proposed transpose buffer occupies less area and achieves higher speed than the existing transpose buffer. Hence the area, power and delay of both 2D-DCT architectures are reduced.
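The separability property mentioned above can be verified numerically with the short Python sketch below: a 2-D DCT of a block is obtained as a 1-D DCT of every row, a transpose (the role of the transpose buffer in hardware), and a 1-D DCT of every row again. SciPy is used here purely for illustration; it is not part of the hardware architecture described in the paper.

# Minimal sketch: 2-D DCT via the separability property (row DCT, transpose, row DCT).
import numpy as np
from scipy.fft import dct

def dct2_separable(block):
    rows = dct(block, axis=1, norm="ortho")        # 1-D DCT of each row
    return dct(rows.T, axis=1, norm="ortho").T     # transpose, 1-D DCT again, transpose back

rng = np.random.default_rng(2)
block = rng.random((8, 8))
direct = dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")
print("max difference vs direct 2-D DCT:", np.max(np.abs(dct2_separable(block) - direct)))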
107
51519
Effects of Various Wavelet Transforms in Dynamic Analysis of Structures
Abstract:
Time history dynamic analysis of structures is considered an exact method, but it is computationally intensive. Filtering earthquake strong ground motions by applying the wavelet transform is an approach towards reducing the computational effort, particularly in the optimization of structures against seismic effects. Wavelet transforms are categorized into continuous and discrete transforms. Since earthquake strong ground motion is a discrete function, the discrete wavelet transform is applied in the present paper. The wavelet transform reduces the analysis time by filtering out non-effective frequencies of the strong ground motion. The filtering process may be repeated several times, although each approximation induces more error. In this paper, the strong ground motion is filtered once with each wavelet. The strong ground motion of the Northridge earthquake is filtered with various wavelets, and dynamic analysis of sample shear and moment frames is carried out. The error associated with each wavelet is computed by comparing the dynamic responses of the sample structures with the exact responses. The exact responses are computed by dynamic analysis of the structures using the non-filtered strong ground motion.
106
32529
A Survey of Discrete Facility Location Problems
Abstract:
Facility location is a complex real-world problem which needs a strategic management decision. This paper provides a general review on studies, efforts and developments in Facility Location Problems which are classical optimization problems having a wide-spread applications in various areas such as transportation, distribution, production, supply chain decisions and telecommunication. Our goal is not to review all variants of different studies in FLPs or to describe very detailed computational techniques and solution approaches, but rather to provide a broad overview of major location problems that have been studied, indicating how they are formulated and what are proposed by researchers to tackle the problem. A brief, elucidative table based on a grouping according to “General Problem Type” and “Methods Proposed” used in the studies is also presented at the end of the work.
105
43018
Testing a Flexible Manufacturing System Facility Production Capacity through Discrete Event Simulation: Automotive Case Study
Abstract:
In the age of automation and computation aiding manufacturing, it is clear that manufacturing systems have become more complex than ever before. Although technological advances provide the capability to gain more value with fewer resources, sometimes utilisation of the manufacturing capabilities available to organisations is difficult to achieve. Flexible manufacturing systems (FMS) provide a unique capability to manufacturing organisations where there is a need for product range diversification by providing line efficiency through production flexibility. This is very valuable in trend driven production set-ups or niche volume production requirements. Although FMS provides flexible and efficient facilities, its optimal set-up is key in achieving production performance. As many variables are interlinked due to the flexibility provided by the FMS, analytical calculations are not always sufficient to predict the FMS’ performance. Simulation modelling is capable of capturing the complexity and constraints associated with FMS. This paper demonstrates how discrete event simulation (DES) can address complexity in an FMS to optimise the production line performance. A case study of an automotive FMS is presented. The DES model demonstrates different configuration options depending on prioritising objectives: utilisation and throughput. Additionally, this paper provides insight into understanding the impact of system set-up constraints on the FMS performance and demonstrates the exploration into the optimal production set-up.
104
67045
Discrete-Event Modeling and Simulation Methodologies: Past, Present and Future
Abstract:
Modeling and Simulation methods have been used to better analyze the behavior of complex physical systems, and it is now common to use simulation as a part of the scientific and technological discovery process. M&S advanced thanks to the improvements in computer technology, which, in many cases, resulted in the development of simulation software using ad-hoc techniques. Formal M&S appeared in order to try to improve the development task of very complex simulation systems. Some of these techniques proved to be successful in providing a sound base for the development of discrete-event simulation models, improving the ease of model definition and enhancing the application development tasks; reducing costs and favoring reuse. The DEVS formalism is one of these techniques, which proved to be successful in providing means for modeling while reducing development complexity and costs. DEVS model development is based on a sound theoretical framework. The independence of M&S tasks made possible to run DEVS models on different environments (personal computers, parallel computers, real-time equipment, and distributed simulators) and middleware. We will present a historical perspective of discrete-event M&S methodologies, showing different modeling techniques. We will introduce DEVS origins and general ideas, and compare it with some of these techniques. We will then show the current status of DEVS M&S, and we will discuss a technological perspective to solve current M&S problems (including real-time simulation, interoperability, and model-centered development techniques). We will show some examples of the current use of DEVS, including applications in different fields. We will finally show current open topics in the area, which include advanced methods for centralized, parallel or distributed simulation, the need for real-time modeling techniques, and our view in these fields.
103
35653
Hand Motion Trajectory Analysis for Dynamic Hand Gestures Used in Indian Sign Language
Abstract:
Dynamic hand gestures are an intrinsic component of sign language communication, and extracting spatio-temporal features of the hand gesture trajectory plays an important role in a dynamic gesture recognition system. Finding a discrete feature descriptor for the motion trajectory based on the orientation feature is the main concern of this paper. The Kalman filter algorithm and Hidden Markov Models (HMMs) are incorporated into this recognition system for hand trajectory tracking and for spatio-temporal classification, respectively.
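A minimal sketch of the trajectory-tracking step is given below using a standard constant-velocity Kalman filter; the frame rate, noise covariances and the synthetic circular motion are assumptions, not the paper's configuration.

# Minimal sketch: constant-velocity Kalman filter smoothing a 2-D hand trajectory.
import numpy as np

dt = 1 / 30                                       # camera frame interval (s), assumed
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)        # state: [x, y, vx, vy]
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)         # only position is observed
Q = 1e-3 * np.eye(4)                              # process noise (assumed)
R = 5e-2 * np.eye(2)                              # measurement noise (assumed)

x = np.zeros(4)
P = np.eye(4)
rng = np.random.default_rng(3)

track = []
for k in range(60):
    true_pos = np.array([np.cos(k * dt), np.sin(k * dt)])   # synthetic hand motion
    z = true_pos + rng.normal(scale=0.05, size=2)            # noisy detection
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    track.append(x[:2].copy())

print("last smoothed position:", np.round(track[-1], 3))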
102
51599
An Efficient Proxy Signature Scheme Over a Secure Communications Network
Abstract:
A proxy signature scheme permits an original signer to delegate his or her signing capability to a proxy signer, who then generates a signed message on behalf of the original signer. The two parties must be able to authenticate one another and agree on a secret encryption key in order to communicate securely over an unreliable public network. Authenticated key agreement protocols play an important role in building a secure communications network between the two parties. In this paper, we present a secure proxy signature scheme over an efficient and secure authenticated key agreement protocol based on the discrete logarithm problem.
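To illustrate only the discrete-logarithm setting that such protocols rely on (this is not the authors' scheme), the following Python sketch performs a plain Diffie-Hellman key agreement over a deliberately small prime; the modulus and generator are toy values and far too small for real use.

# Toy Diffie-Hellman key agreement illustrating the discrete-logarithm assumption.
# Insecure parameters, for demonstration only; not the paper's protocol.
import secrets

p = 0xFFFFFFFB                       # small prime modulus (2**32 - 5), toy value
g = 5                                # public generator (illustrative choice)

a = secrets.randbelow(p - 2) + 1     # original signer's secret exponent
b = secrets.randbelow(p - 2) + 1     # proxy signer's secret exponent

A = pow(g, a, p)                     # exchanged over the public network
B = pow(g, b, p)

shared_original = pow(B, a, p)       # both parties derive the same shared value
shared_proxy = pow(A, b, p)
assert shared_original == shared_proxy
print("shared secret established:", hex(shared_original))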
101
50749
A Continuous Boundary Value Method of Order 8 for Solving the General Second Order Multipoint Boundary Value Problems
Authors:
Abstract:
This paper deals with the numerical integration of the general second order multipoint boundary value problems. This has been achieved by the development of a continuous linear multistep method (LMM). The continuous LMM is used to construct a main discrete method to be used with some initial and final methods (also obtained from the continuous LMM) so that they form a discrete analogue of the continuous second order boundary value problems. These methods are used as boundary value methods and adapted to cope with the integration of the general second order multipoint boundary value problems. The convergence, the use and the region of absolute stability of the methods are discussed. Several numerical examples are implemented to elucidate our solution process.
100
19699
A Simulation Modeling Approach for Optimization of Storage Space Allocation in Container Terminal
Abstract:
Container handling problems at container terminals are NP-hard problems. This paper presents an approach using discrete-event simulation modeling to optimize the solution of the storage space allocation problem, taking into account the various interrelated container terminal handling activities. The proposed approach is applied to real case-study data from the container terminal at Alexandria port. The computational results show the effectiveness of the proposed model for the optimization of storage space allocation in the container terminal, where a 54% reduction in container handling time in the port is achieved.
99
24612
Probability Sampling in Matched Case-Control Study in Drug Abuse
Abstract:
Background: Although random sampling is generally considered to be the gold standard for population-based research, the majority of drug abuse research is based on non-random sampling, despite the well-known limitations of this kind of sampling. Method: We compared the statistical properties of two surveys of drug abuse in the same community: one using snowball sampling of drug users who then identified “friend controls” and the other using a random sample of non-drug users (controls) who then identified “friend cases.” Models to predict drug abuse based on risk factors were developed for each data set using conditional logistic regression. We compared the precision of each model using the bootstrapping method and the predictive properties of each model using receiver operating characteristic (ROC) curves. Results: Analysis of 100 random bootstrap samples drawn from the snowball-sample data set showed wide variation in the standard errors of the beta coefficients of the predictive model, none of which achieved statistical significance. On the other hand, bootstrap analysis of the random-sample data set showed less variation and did not change the significance of the predictors at the 5% level when compared to the non-bootstrap analysis. The area under the ROC curve for the model derived from the random-sample data set was similar when the model was fitted to either data set (0.93 for the random-sample data vs. 0.91 for the snowball-sample data, p=0.35); however, when the model derived from the snowball-sample data set was fitted to each of the data sets, the areas under the curve were significantly different (0.98 vs. 0.83, p < .001). Conclusion: The proposed method of random sampling of controls appears to be superior from a statistical perspective to snowball sampling and may represent a viable alternative to snowball sampling.
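The bootstrap comparison described above can be sketched in simplified form as follows; the data are synthetic, an ordinary (not conditional) logistic regression is used, and scikit-learn is required, so this is only a hedged illustration of inspecting coefficient variability over bootstrap resamples, not a reproduction of the study's analysis.

# Minimal sketch: variability of a logistic-regression coefficient over bootstrap resamples.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 300
x = rng.normal(size=(n, 1))                       # one synthetic risk factor
p_case = 1 / (1 + np.exp(-(0.8 * x[:, 0] - 0.2))) # synthetic case probability
y = rng.binomial(1, p_case)

betas = []
for _ in range(100):                              # 100 bootstrap resamples
    idx = rng.integers(0, n, size=n)
    model = LogisticRegression().fit(x[idx], y[idx])
    betas.append(model.coef_[0, 0])

print(f"bootstrap mean beta = {np.mean(betas):.3f}, std error = {np.std(betas):.3f}")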
98
94150
Effective Supply Chain Coordination with Hybrid Demand Forecasting Techniques
Authors:
Abstract:
An effective supply chain, the outcome of strategic corporate investment and deliberate management action, is a main priority of every organization. A value-driven supply chain is defined through development, procurement and the configuration of appropriate resources, metrics and processes, and its responsiveness can be improved by proper coordination. The Bullwhip effect (BWE) and Net stock amplification (NSAmp) values were therefore predicted and used for inventory control in organizations by both a discrete wavelet transform-artificial neural network (DWT-ANN) and an adaptive network-based fuzzy inference system (ANFIS). This work presents a comparative forecasting methodology for customer demand, which is nonlinear in nature, in a multilevel supply chain structure using hybrid techniques, namely artificial intelligence techniques, including artificial neural networks (ANN) and the adaptive network-based fuzzy inference system (ANFIS), and discrete wavelet theory (DWT). The effectiveness of these forecasting models is shown by computing the Bullwhip effect and Net stock amplification for data from real-world problems. The results show that these parameters are comparatively smaller in the case of the discrete wavelet transform-artificial neural network (DWT-ANN) model and when using the adaptive network-based fuzzy inference system (ANFIS).
97
72418
Discrete Element Modeling of the Effect of Particle Shape on Creep Behavior of Rockfills
Abstract:
Rockfills are widely used in civil engineering, for example in dams, railways, and airport foundations in mountain areas. A significant long-term post-construction settlement may affect the serviceability or even the safety of rockfill infrastructures. The creep behavior of rockfills is influenced by a number of factors, such as particle size, strength and shape, water condition and stress level. However, the effect of particle shape on rockfill creep still remains poorly understood and deserves careful investigation. The particle-based discrete element method (DEM) was used to simulate the creep behavior of rockfills under different boundary conditions. Both angular and rounded particles were considered in this numerical study in order to investigate the influence of particle shape. The preliminary results showed that angular particles experience more breakage and larger creep strains under one-dimensional compression than rounded particles. On the contrary, larger creep strains were observed in the rounded specimens in the direct shear test. The mechanism responsible for this difference is that the possibility of the existence of key particles is higher for rounded particles than for angular particles. The above simulations demonstrate that the influence of particle shape on the creep behavior of rockfills can be properly simulated by DEM. This DEM simulation method may facilitate our understanding of the deformation properties of rockfill materials.
96
47346
CFD Simulation for Thermo-Hydraulic Performance V-Shaped Discrete Ribs on the Absorber Plate of Solar Air Heater
Abstract:
A computational investigation of various flow characteristics with artificial roughness in the form of V-type discrete ribs on the heated wall of a rectangular duct has been carried out for turbulent flow in the Reynolds number range 3800-15000 and relative roughness pitch p/e from 5 to 12; the k-epsilon turbulence model was selected by comparing the predictions of different turbulence models with experimental results available in the literature. The current study evaluates the thermal performance, heat transfer and fluid flow behavior in a duct with V-shaped discrete ribs mounted on one of the principal walls (the solar plate) using computational fluid dynamics software (Fluent 6.3.26 solver). A three-dimensional model of the experimental solar air heater was designed and used to perform a numerical simulation of the enhanced turbulent heat transfer, with Reynolds-averaged Navier-Stokes analysis as the numerical technique and the k-epsilon model with near-wall treatment as the turbulence model. The thermal efficiency enhancement due to the selected roughness is found to be 16-24%. The results predict a significant enhancement of heat transfer compared with a smooth surface for different values of p/e and various Reynolds numbers.
95
69736
Oracle JDE Enterprise One ERP Implementation: A Case Study
Abstract:
The paper intends to bring out a real life experience encountered during actual implementation of a large scale Tier-1 Enterprise Resource Planning (ERP) system in a multi-location, discrete manufacturing organization in India, involved in manufacturing of auto components and aggregates. The business complexities, prior to the implementation of ERP, include multi-product with hierarchical product structures, geographically distributed multiple plant locations with disparate business practices, lack of inter-plant broadband connectivity, existence of disparate legacy applications for different business functions, and non-standardized codifications of products, machines, employees, and accounts apart from others. On the other hand, the manufacturing environment consisted of processes like Assemble-to-Order (ATO), Make-to-Stock (MTS), and Engineer-to-Order (ETO) with a mix of discrete and process operations. The paper has highlighted various business plan areas and concerns, prior to the implementation, with specific focus on strategic issues and objectives. Subsequently, it has dealt with the complete process of ERP implementation, starting from strategic planning, project planning, resource mobilization, and finally, the program execution. The step-by-step process provides a very good learning opportunity about the implementation methodology. At the end, various organizational challenges and lessons emerged, which will act as guidelines and checklist for organizations to successfully align and implement ERP and achieve their business objectives.
94
27109
Symbiotic Organism Search (SOS) for Solving the Capacitated Vehicle Routing Problem
Abstract:
This paper introduces symbiotic organism search (SOS) for solving the capacitated vehicle routing problem (CVRP). SOS is a new approach in the field of metaheuristics and has not previously been used to solve discrete problems. A sophisticated decoding method to deal with the discrete problem setting of the CVRP is applied within the basic symbiotic organism search (SOS) framework. The performance of the algorithm was evaluated on a set of benchmark instances, and the results were compared with the best known solutions. The computational results show that the proposed algorithm can produce good solutions in preliminary testing. These results indicate that the proposed SOS can be applied as an alternative method for solving the capacitated vehicle routing problem.
93
17097
Calibration of the Discrete Element Method Using a Large Shear Box
Abstract:
One of the main challenges in using the Discrete Element Method (DEM) is to specify the correct input parameter values. In general, the models are sensitive to the input parameter values and accurate results can only be achieved if the correct values are specified. For the linear contact model, micro-parameters such as the particle density, stiffness, coefficient of friction, as well as the particle size and shape distributions are required. There is a need for a procedure to accurately calibrate these parameters before any attempt can be made to accurately model a complete bulk materials handling system. Since DEM is often used to model applications in the mining and quarrying industries, a calibration procedure was developed for materials that consist of relatively large (up to 40 mm in size) particles. A coarse crushed aggregate was used as the test material. Using a specially designed large shear box with a diameter of 590 mm, the confined Young’s modulus (bulk stiffness) and internal friction angle of the material were measured by means of the confined compression test and the direct shear test respectively. DEM models of the experimental setup were developed and the input parameter values were varied iteratively until a close correlation between the experimental and numerical results was achieved. The calibration process was validated by modelling the pull-out of an anchor from a bed of material. The model results compared well with experimental measurement.
92
36824
Lossless Secret Image Sharing Based on Integer Discrete Cosine Transform
Abstract:
This paper proposes a new secret image sharing method based on the integer discrete cosine transform (IntDCT). The original image is first transformed into the frequency domain (DCT coefficients) with IntDCT, applied to each 8×8 block. Shares are then generated from the DCT coefficients occupying the same position in each block: all the DC components are used to generate the DC shares, the ith AC component of each block is used to generate the ith AC shares, and so on. The DC and AC share components with the same index are combined to form the DCT shadows. Experimental results and analyses show that the proposed method recovers the original image losslessly, unlike methods based on the traditional DCT, and is more sensitive to tiny changes in both the coefficients and the content of the image.
91
92612
A Discrete Element Method-Based Simulation of Toppling Failure Considering Block Interaction
Abstract:
The toppling failure mode in a rock mass is considerably different from the more common sliding failure along an existing or induced slip plane. Block toppling is observed in a rock mass that contains both a widely-spaced basal cross-joint set and a closely-spaced discontinuity set dipping into the slope. In this case, failure occurs when the structure cannot bear the tensile portion of the bending stress, and the columns or blocks overturn under their own weight. This paper presents a particle-based discrete element model of rock blocks subjected to toppling failure, in which the geometric conditions and the interaction among blocks are investigated. A series of parametric studies was conducted on the size and arrangement of the particles and on the bond contacts among the particles that make up the blocks. First, a numerical model of a one-block system was verified. Afterwards, a slope consisting of multiple blocks was developed to study toppling failure and the interaction forces between blocks. The results show that the formation of the blocks, especially between a block and the basal plane surface, can change the failure process. The results also demonstrate that the initial configuration of the particles used to form the blocks plays a significant role in achieving accurate simulation results. The particle size and the bond contacts have a considerable influence on the progress of the toppling failure.
90
57601
Simulation of Nonlinear Behavior of Reinforced Concrete Slabs Using Rigid Body-Spring Discrete Element Method
Abstract:
Most analysis procedures for reinforced concrete (RC) slabs are based on elastic theory. When subjected to large forces, however, slabs deform beyond the elastic range, and the study of their behavior and performance requires nonlinear analysis. This paper presents a numerical model to simulate the nonlinear behavior of RC slabs using a rigid body-spring discrete element method. The proposed slab model, composed of rigid plate elements and nonlinear springs, is based on the yield line theory, which assumes that the nonlinear behavior of an RC slab subjected to transverse loads is concentrated in plastic or yield lines. In this model, the displacement of the slab is completely described by the rigid elements, and the deformation energy is concentrated in the flexural springs uniformly distributed along the potential yield lines. The spring parameters are determined by comparing the transverse displacements and stresses developed in the slab obtained using FEM and the proposed model with an assumed homogeneous material. Numerical models of typical RC slabs with varying geometry, reinforcement, support conditions, and loading conditions show reasonable agreement with available experimental data. The model is also shown to be useful in investigating the dynamic behavior of slabs.
89
105793
Spatiotemporal Variability in Rainfall Trends over Sinai Peninsula Using Nonparametric Methods and Discrete Wavelet Transforms
Authors:
Abstract:
Knowledge of the temporal and spatial variability of rainfall trends is of great concern for efficient water resource planning and management. In this study, annual, seasonal, and monthly rainfall trends over the Sinai Peninsula were analyzed using absolute homogeneity tests, the nonparametric Mann-Kendall (MK) test, and Sen’s slope estimator. The homogeneity of the rainfall time series was examined using four absolute homogeneity tests, namely the Pettitt test, the standard normal homogeneity test, the Buishand range test, and the von Neumann ratio test. Further, the sequential change in the trend of annual and seasonal rainfall was investigated using the sequential MK (SQMK) method. A trend analysis based on the discrete wavelet transform (DWT) in conjunction with the SQMK method was then performed. The spatial patterns of the detected rainfall trends were investigated using geostatistical and deterministic spatial interpolation techniques. The results of the Mann-Kendall test applied to the data series (at the 5% significance level) show that rainfall was generally decreasing in January, February, March, November, December, the wet season, and annually. A significant decreasing trend in the winter and annual rainfall was inferred from the Mann-Kendall rank statistics and the linear trend. The DWT analysis further reveals that, in general, intra- and inter-annual events (up to 4 years) are more influential in shaping the observed trends. The nature of the trend captured by both methods is similar in all cases. The spatial trend analysis also showed significant rainfall decreases at the investigated stations. Overall, significant downward trends in winter and annual rainfall over the Sinai Peninsula were observed during the study period.
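As a rough illustration of the trend statistics used above, the following Python sketch computes the Mann-Kendall Z statistic (without tie correction) and Sen's slope for a rainfall series; it is a generic textbook implementation under these assumed simplifications, not the authors' processing chain.

    import numpy as np
    from scipy.stats import norm

    def mann_kendall(x, alpha=0.05):
        # S statistic: sum of the signs of all forward differences.
        x = np.asarray(x, dtype=float)
        n = len(x)
        s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
        var_s = n * (n - 1) * (2 * n + 5) / 18.0      # variance without tie correction
        z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
        p = 2 * (1 - norm.cdf(abs(z)))
        return z, p, p < alpha                        # trend is significant if p < alpha

    def sens_slope(x):
        # Sen's slope: median of all pairwise slopes.
        x = np.asarray(x, dtype=float)
        slopes = [(x[j] - x[i]) / (j - i)
                  for i in range(len(x) - 1) for j in range(i + 1, len(x))]
        return np.median(slopes)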
88
110786
Aliasing Free and Additive Error in Spectra for Alpha Stable Signals
Authors:
Abstract:
This work focuses on the continuous-time symmetric alpha stable process, which is frequently used to model signals with indefinitely growing variance and is often observed with an unknown additive error. The objective of this paper is to estimate this error from discrete observations of the signal. To that end, we propose a method based on smoothing the observations with a Jackson polynomial kernel while taking into account the width of the interval on which the spectral density is non-zero. This technique avoids the aliasing phenomenon encountered when the estimation is made from discrete observations of a continuous-time process. We study the convergence rate of the estimator and show that it improves when the spectral density is zero at the origin. We thus obtain an estimator of the additive error that can be subtracted to approach the original, error-free signal.
87
12987
The Application of Variable Coefficient Jacobian Elliptic Function Method to Differential-Difference Equations
Authors:
Abstract:
In modern nonlinear science and textile engineering, nonlinear differential-difference equations are often used to describe some nonlinear phenomena. In this paper, we extend the variable coefficient Jacobian elliptic function method, which was used to find new exact travelling wave solutions of nonlinear partial differential equations, to nonlinear differential-difference equations. As illustration, we derive two series of Jacobian elliptic function solutions of the discrete sine-Gordon equation.
86
12268
Democratic Political Socialization of the 5th and 6th Graders under the Authority of Dusit District Office, Bangkok
Abstract:
This research aims to study the democratic political socialization of 5th and 6th graders under the authority of the Dusit District Office, Bangkok. Stratified sampling was used for probability sampling and purposive sampling for non-probability sampling, and data were collected by distributing questionnaires to 300 respondents, covering all of the schools under the authority of the Dusit District Office. The data were analyzed using descriptive statistics, including the arithmetic mean and standard deviation. The results show that 5th and 6th graders under the authority of the Dusit District Office, Bangkok, display characteristics of democratic political socialization both inside and outside the classroom, as well as outside school. However, democratic political socialization in the classroom, through group work and class participation, is emphasized much more.
85
6930
Video Compression Using Contourlet Transform
Abstract:
Video compression is used for channels with limited bandwidth and for storage devices with limited capacity. One of the most popular approaches to video compression is the use of transforms. The discrete cosine transform is a common choice, but it suffers from problems such as blocking and noise artifacts and high distortion, which adversely affect the compression ratio. The wavelet transform is another approach that balances compression and quality better than the cosine transform, but its ability to capture curved contours is limited. Because of the importance of compression and the shortcomings of the cosine and wavelet transforms, the contourlet transform has become popular in video compression. In the proposed method, we use the contourlet transform for video image compression. The contourlet transform can preserve image details better than the earlier transforms because it is both multi-scale and directional, and it can capture discontinuities such as edges; as a result, less data is lost than with previous approaches. The contourlet transform captures the discrete spatial structure of an image and is well suited to representing smooth two-dimensional images, producing compressed images with a high compression ratio while preserving texture and edges. Finally, the results show that for the majority of images the mean square error and peak signal-to-noise ratio of the proposed contourlet-based method are improved compared with the wavelet transform, whereas for most images the cosine transform still yields better mean square error and peak signal-to-noise ratio than the contourlet-based method.
84
24792
Patient Service Improvement in Public Emergency Department Using Discrete Event Simulation
Abstract:
We study patient service performance at the emergency department of a major Kuwaiti public hospital using discrete event simulation and lean concepts. In addition to the problems common to such health care systems (overcrowding, facilities planning and usage, scheduling and staffing, capacity planning), the emergency department suffered from several cultural and patient behavioural issues. These contributed significantly to the system problems and constituted major obstacles to keeping performance under control, leading to overly long waiting times and the potential of delaying help to critical cases. We used visual management tools to mitigate the impact of patients’ behaviours and attitudes and to improve logistics inside the system. In addition, a proposal is made to automate data collection and communication within the department using an RFID-based barcoding system. Discrete event simulation models were developed as decision support systems to study the operational problems and assess the achieved improvements. The simulation analysis showed that patient delays can be cut to about 35% of their current values by reallocating and rescheduling the medical staff. Combined with the application of the visual management concepts, this provides the basis for improving patient service without any major investment.
83
86040
Robust and Transparent Spread Spectrum Audio Watermarking
Abstract:
In this paper, we propose a blind and robust audio watermarking scheme based on spread spectrum in the Discrete Wavelet Transform (DWT) domain. Watermarks are embedded in the low-frequency coefficients, where they are less audible. The key idea is to divide the audio signal into small frames and modify the magnitude of the 6th-level DWT approximation coefficients according to the Direct Sequence Spread Spectrum (DSSS) technique. A psychoacoustic model is used to enhance imperceptibility, and a Savitzky-Golay filter is used to increase the extraction accuracy. The experimental results show high robustness against the most common attacks, i.e., Gaussian noise addition, low-pass filtering, resampling, requantization, and MP3 compression, without significant perceptual distortion (the ODG is higher than -1). The proposed scheme has a data payload of about 83 bps.
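The embedding step can be pictured with the short Python sketch below, which spreads each watermark bit over the level-6 approximation coefficients using PyWavelets; the wavelet, embedding strength, and frame handling are assumptions made for illustration and are not the exact settings of the proposed scheme.

    import numpy as np
    import pywt

    def embed(audio, bits, key=0, alpha=0.01, wavelet='db4', level=6):
        # Decompose, add a pseudo-noise sequence scaled by each bit to the
        # level-6 approximation coefficients, then reconstruct.
        coeffs = pywt.wavedec(audio, wavelet, level=level)
        cA = coeffs[0]
        pn = np.random.default_rng(key).choice([-1.0, 1.0], size=cA.size)
        chip = cA.size // len(bits)                 # coefficients carrying one bit
        for i, b in enumerate(bits):
            s = slice(i * chip, (i + 1) * chip)
            cA[s] += alpha * (1.0 if b else -1.0) * pn[s]
        coeffs[0] = cA
        return pywt.waverec(coeffs, wavelet)

    marked = embed(np.random.randn(2 ** 14), bits=[1, 0, 1, 1])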
82
83476
Assessment and Control for Oil Aerosol
Abstract:
This study assessed the sampling performance of a newly developed rotation filtration device (RFD) filled with porous media filters, which integrates the method of cyclone centrifugal spinning. The testing system established for the experiment used corn oil and potassium sodium tartrate tetrahydrate (PST) as challenge aerosols, which were produced using an ultrasonic atomizing nozzle, a syringe pump, and a Collison nebulizer. The collection efficiency of the RFD for oil aerosol was assessed using an Aerodynamic Particle Sizer (APS) and a Fidas® Frog. For liquid particles, the cutoff size of the RFD was 1.65 µm and 1.02 µm at rotation speeds of 0 rpm and 9000 rpm, respectively, with an 80 PPI (pores per inch) foam of 80 mm thickness and a sampling velocity of 13.5 cm/s. When the foam thickness of the RFD was increased, the cutoff size decreased from 1.62 µm to 1.02 µm. When the foam porosity was increased, the cutoff size decreased from 1.26 µm to 0.96 µm. Moreover, when the sampling velocity was increased, the cutoff size decreased from 1.02 µm to 0.76 µm. All of these differences in cutoff size were statistically significant (P < 0.05). The cutoff size of the RFD for the three experimental conditions of generated liquid oil particles, solid PST particles, or both liquid oil and solid PST particles was 1.03 µm, 1.02 µm, and 0.99 µm, respectively, with an 80 PPI foam of 80 mm thickness, a rotation speed of 9000 rpm, and a sampling velocity of 13.5 cm/s. In addition, under the best experimental condition, with two hours of sampling loading, the RFD showed better collection efficiency for particle diameters greater than 0.45 µm with a 94 PPI nickel mesh of 68 mm thickness, a rotation speed of 9000 rpm, and a sampling velocity of 108.3 cm/s. The experiment concluded that increasing the thickness of the porous media, the face velocity, and the porosity of the porous media of the RFD increases the collection efficiency of the porous media for sampling oil particles. Increasing the rotation speed of the RFD also increases the collection efficiency for sampling oil particles. Further investigation of these operating parameters of the RFD is required in the future.
81
98539
The Potential of On-Demand Shuttle Services to Reduce Private Car Use
Abstract:
Findings of an ongoing discrete choice study of future transport mode choice will be presented. Many urban centers face the triple challenge of having to cope with ever increasing traffic congestion, environmental pollution, and greenhouse gas emission brought about by private car use. In principle, private car use may be diminished by extending public transport systems like bus lines, trams, tubes, and trains. However, there are limits to increasing the (perceived) spatial and temporal flexibility and reducing peak-time crowding of classical public transport systems. An emerging new type of system, publicly or privately operated on-demand shuttle bus services, seem suitable to ameliorate the situation. A fleet of on-demand shuttle busses operates without fixed stops and schedules. It may be deployed efficiently in that each bus picks up passengers whose itineraries may be combined into an optimized route. Crowding may be minimized by limiting the number of seats and the inter-seat distance for each bus. The study is conducted as a discrete choice experiment. The choice between private car, public transport, and shuttle service is registered as a function of several push and pull factors (financial costs, travel time, walking distances, mobility tax/congestion charge, and waiting time/parking space search time). After the completion of the discrete choice items, the study participant is asked to rate the three modes of transport with regard to the pull factors of comfort, safety, privacy, and opportunity to engage in activities like reading or surfing the internet. These ratings are entered as additional predictors into the discrete choice experiment regression model. The study is conducted in the region of Stuttgart in southern Germany. N=1000 participants are being recruited. Participants are between 18 and 69 years of age, hold a driver’s license, and live in the city or the surrounding region of Stuttgart. In the discrete choice experiment, participants are asked to assume they lived within the Stuttgart region, but outside of the city, and were planning the journey from their apartment to their place of work, training, or education during the peak traffic time in the morning. Then, for each item of the discrete choice experiment, they are asked to choose between the transport modes of private car, public transport, and on-demand shuttle in the light of particular values of the push and pull factors studied. The study will provide valuable information on the potential of switching from private car use to the use of on-demand shuttles, but also on the less desirable potential of switching from public transport to on-demand shuttle services. Furthermore, information will be provided on the modulation of these switching potentials by pull and push factors.
80
50405
Joint Discrete Hartley Transform-Clipping for Peak to Average Power Ratio Reduction in Orthogonal Frequency Division Multiplexing System
Abstract:
Orthogonal frequency division multiplexing (OFDM) is a promising technique for modern wireless communication systems due to its robustness in multipath environments. The high peak to average power ratio (PAPR) of the transmitted signal is one of the major drawbacks of OFDM; a high PAPR degrades the bit error rate (BER) performance and affects the linear characteristics of the high power amplifier (HPA). In this paper, we propose a DHT-Clipping reduction technique that reduces the high PAPR by combining the discrete Hartley transform (DHT) with clipping. The simulation results show that the DHT-Clipping technique offers better PAPR reduction than DHT or clipping alone, and that it also achieves better BER performance than clipping.
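A minimal numerical picture of the idea is sketched below in Python: a DHT precoding stage before the IFFT followed by amplitude clipping, with the PAPR measured before and after. The per-rail application of the DHT and the clipping level are assumptions made for the sketch, not the paper's exact configuration.

    import numpy as np

    def dht(x):
        # Discrete Hartley transform via the FFT: Re(X) - Im(X).
        X = np.fft.fft(x)
        return X.real - X.imag

    def papr_db(x):
        p = np.abs(x) ** 2
        return 10 * np.log10(p.max() / p.mean())

    N = 256
    data = (np.random.choice([-1, 1], N) + 1j * np.random.choice([-1, 1], N)) / np.sqrt(2)
    precoded = dht(data.real) + 1j * dht(data.imag)      # DHT applied to each rail (assumed)
    tx = np.fft.ifft(precoded)                           # OFDM time-domain symbol
    cr = 1.4 * np.sqrt(np.mean(np.abs(tx) ** 2))         # assumed clipping level
    clipped = np.where(np.abs(tx) > cr, cr * tx / np.abs(tx), tx)
    print(papr_db(tx), papr_db(clipped))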
79
76853
Attitude Stabilization of Satellites Using Random Dither Quantization
Abstract:
Recently, the effectiveness of random dither quantization method for linear feedback control systems has been shown in several papers. However, the random dither quantization method has not yet been applied to nonlinear feedback control systems. The objective of this paper is to verify the effectiveness of random dither quantization method for nonlinear feedback control systems. For this purpose, we consider the attitude stabilization problem of satellites using discrete-level actuators. Namely, this paper provides a control method based on the random dither quantization method for stabilizing the attitude of satellites using discrete-level actuators.
78
72569
Temporal Effects on Chemical Composition of Treated Wastewater and Borehole Water Used for Irrigation in Limpopo Province, South Africa
Abstract:
Increasing incidents of drought spells in most of Sub-Saharan Africa call for using alternative sources of water for irrigation in arid and semi-arid regions. A study was conducted to investigate the chemical composition of borehole water and treated wastewater from different sampling disposal sites at the University of Limpopo Experimental Farm (ULEF). A 4 × 5 factorial experiment, with the borehole as a reference sampling site and three other sampling sites along the wastewater disposal system, was conducted over five months. Water samples were collected at four sites, namely (a) the exit from Pond 16 into the furrow, (b) the entry into the night-dam, (c) the exit from the night-dam to the irrigated fields, and (d) the exit from the borehole to the irrigated fields. Water samples were collected in the middle of each month from July to November 2016 and analysed for pH, EC, Ca, Mg, Na, K, Al, B, Zn, Cu, Cr, Pb, Cd, and As. The site × time interactions were highly significant for the Ca, Mg, Zn, Cu, Cr, Pb, Cd, and As variables, but not for Na and K. Sampling site had a highly significant effect on all variables, while sampling period was not significant for K and Na. Relative to water from the borehole, the Na concentration in wastewater samples from the night-dam exit, night-dam entry, and Pond 16 exit was lower by 69, 34, and 55%, respectively. Relative to borehole water, Al was higher at the wastewater sampling sites. In conclusion, both sampling site and period affected the chemical composition of the treated wastewater.
77
18113
Statistical Wavelet Features, PCA, and SVM-Based Approach for EEG Signals Classification
Abstract:
The study of the electrical signals produced by the neural activity of the human brain is called electroencephalography (EEG). In this paper, we propose an automatic and efficient approach for classifying EEG signals into two classes: epileptic seizure or not. In the proposed approach, features are first extracted by applying the Discrete Wavelet Transform (DWT) to decompose the EEG signals into sub-bands. These features, extracted from the detail and approximation coefficients of the DWT sub-bands, are used as input to Principal Component Analysis (PCA). The classification is based on reducing the feature dimension using PCA and deriving the support vectors using a Support Vector Machine (SVM). The experiments are performed on a real, standard dataset, and a very high classification accuracy is obtained.
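The processing chain (DWT features, PCA reduction, SVM classification) can be expressed compactly with PyWavelets and scikit-learn, as in the hedged sketch below; the statistics chosen as features, the wavelet, and the random toy data are assumptions, not the paper's dataset or settings.

    import numpy as np
    import pywt
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def dwt_features(sig, wavelet='db4', level=4):
        # Simple statistics of each DWT sub-band (approximation and details).
        feats = []
        for c in pywt.wavedec(sig, wavelet, level=level):
            feats += [c.mean(), c.std(), np.abs(c).max(), np.mean(c ** 2)]
        return feats

    X_raw = np.random.randn(40, 1024)                 # placeholder EEG segments
    y = np.random.randint(0, 2, 40)                   # placeholder seizure labels
    X = np.array([dwt_features(s) for s in X_raw])
    clf = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel='rbf'))
    clf.fit(X, y)
    print(clf.score(X, y))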
76
16405
Analysis of Nonlinear Bertrand Duopoly Game with Heterogeneous Players
Authors:
Abstract:
The dynamics of a Bertrand duopoly game are analyzed, where the players use different production methods and choose their prices with bounded rationality. The equilibria of the corresponding discrete dynamical system are investigated, and the stability conditions of the Nash equilibrium under a local adjustment process are studied. The stability of the Nash equilibrium, as some parameters of the model are varied, gives rise to complex dynamics such as cycles of higher order and chaos. On this basis, we show that an increase in the adjustment speed of the boundedly rational player can drive the Bertrand market into a chaotic state. Finally, the complex dynamics, bifurcations, and chaos are displayed by numerical simulation.
75
21936
Ratio Type Estimators for the Estimation of Population Coefficient of Variation under Two-Stage Sampling
Abstract:
In this paper, we propose two ratio and ratio-type exponential estimators for the estimation of the population coefficient of variation using auxiliary information under two-stage sampling. The properties of these estimators are derived up to the first order of approximation. The efficiency conditions under which the suggested estimators are more efficient are obtained. Numerical and simulation studies are conducted to support the superiority of the estimators. Theoretically and numerically, we find that our proposed estimator is always more efficient than its competitor.
74
72754
Metropolis-Hastings Sampling Approach for High Dimensional Testing Methods of Autonomous Vehicles
Abstract:
As recently stated by the National Highway Traffic Safety Administration (NHTSA), to demonstrate the expected performance of a highly automated vehicle system, test approaches should include a combination of simulation, test track, and on-road testing. In this paper, we propose a new validation method for autonomous vehicles involving on-road tests (Field Operational Tests), test track (Test Matrix), and simulation (Worst Case Scenarios). We concentrate our discussion on the simulation aspects; in particular, we extend recent work based on Importance Sampling by using a Metropolis-Hastings algorithm (MHS) to sample data collected from the Safety Pilot Model Deployment (SPMD) in lane-change scenarios. Our proposed MH sampling method is compared to the Importance Sampling method, which does not perform well in high-dimensional problems. The importance of this study is to obtain a sampler that can be applied to high-dimensional simulation problems in order to reduce and optimize the number of test scenarios that are necessary for the validation and certification of autonomous vehicles.
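The core sampler can be summarized by the generic random-walk Metropolis-Hastings sketch below in Python; the target density, step size, and dimensionality are placeholders, since the abstract does not specify how the SPMD lane-change data enter the target.

    import numpy as np

    def metropolis_hastings(log_target, x0, n_samples, step=0.5, seed=0):
        # Random-walk MH: propose a Gaussian perturbation, accept with the
        # usual ratio of target densities (symmetric proposal).
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        samples = []
        for _ in range(n_samples):
            prop = x + step * rng.standard_normal(x.shape)
            if np.log(rng.uniform()) < log_target(prop) - log_target(x):
                x = prop
            samples.append(x.copy())
        return np.array(samples)

    # Toy usage: sample a 10-dimensional standard normal target.
    draws = metropolis_hastings(lambda v: -0.5 * np.sum(v ** 2), np.zeros(10), 5000)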
68
37350
Dynamic of Nonlinear Duopoly Game with Heterogeneous Players
Abstract:
The dynamics of a Bertrand duopoly game are analyzed, where the players use different production methods and choose their prices with bounded rationality. The equilibria of the corresponding discrete dynamical system are investigated, and the stability conditions of the Nash equilibrium under a local adjustment process are studied. The stability of the Nash equilibrium, as some parameters of the model are varied, gives rise to complex dynamics such as cycles of higher order and chaos. On this basis, we show that an increase in the adjustment speed of the boundedly rational player can drive the Bertrand market into a chaotic state. Finally, the complex dynamics, bifurcations, and chaos are displayed by numerical simulation.
67
89088
A Discrete Event Simulation Model to Manage Bed Usage for Non-Elective Admissions in a Geriatric Medicine Speciality
Abstract:
Over the past decade, non-elective admissions in the UK have increased significantly. Taking into account limited resources (i.e., beds), the related service managers are obliged to manage their resources effectively, since non-elective admissions are mostly admitted to inpatient specialities via A&E departments. Geriatric medicine is one of the specialities with a long length of stay for non-elective admissions. This study aims to develop a discrete event simulation model to understand how possible increases in non-elective demand over the next 12 months affect the bed occupancy rate, and to determine the required number of beds in a geriatric medicine speciality in a UK hospital. In our validated simulation model, we take into account observed frequency distributions for the non-elective admissions and the length of stay, derived from a large dataset covering the period from April 2009 to January 2013. An experimental analysis, consisting of 16 experiments, is carried out to better understand the possible effects of case studies and scenarios related to increases in demand and the number of beds. As a result, the speciality does not achieve the target level in the base model, although the bed occupancy rate decreases from 125.94% to 96.41% when the number of beds is increased by 30%. In addition, the number of required beds is greater than the number of beds considered in the scenario analysis. This paper sheds light on bed management for service managers in geriatric medicine specialities.
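A stripped-down version of such a bed-occupancy model can be written with SimPy, as sketched below in Python; the exponential arrival and length-of-stay distributions and the parameter values are placeholders and do not reflect the hospital data used in the study.

    import random
    import simpy

    RNG = random.Random(1)
    N_BEDS, SIM_DAYS = 80, 365
    MEAN_IAT_H, MEAN_LOS_D = 3.0, 9.0   # assumed inter-arrival time (h) and length of stay (d)

    def patient(env, beds):
        with beds.request() as req:
            yield req                                                     # wait for a free bed
            yield env.timeout(RNG.expovariate(1.0 / (MEAN_LOS_D * 24)))   # occupy it

    def arrivals(env, beds):
        while True:
            yield env.timeout(RNG.expovariate(1.0 / MEAN_IAT_H))
            env.process(patient(env, beds))

    def monitor(env, beds, log):
        while True:
            log.append(beds.count / N_BEDS)                               # fraction of beds in use
            yield env.timeout(24)

    env = simpy.Environment()
    beds = simpy.Resource(env, capacity=N_BEDS)
    occupancy = []
    env.process(arrivals(env, beds))
    env.process(monitor(env, beds, occupancy))
    env.run(until=SIM_DAYS * 24)
    print(sum(occupancy) / len(occupancy))                                # mean occupancy rate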
66
25228
Real-Time Image Encryption Using a 3D Discrete Dual Chaotic Cipher
Abstract:
In this paper, an encryption algorithm is proposed for real-time image encryption. The scheme employs a dual chaotic generator based on a three dimensional (3D) discrete Lorenz attractor. Encryption is achieved using non-autonomous modulation where the data is injected into the dynamics of the master chaotic generator. The second generator is used to permute the dynamics of the master generator using the same approach. Since the data stream can be regarded as a random source, the resulting permutations of the generator dynamics greatly increase the security of the transmitted signal. In addition, a technique is proposed to mitigate the error propagation due to the finite precision arithmetic of digital hardware. In particular, truncation and rounding errors are eliminated by employing an integer representation of the data which can easily be implemented. The simple hardware architecture of the algorithm makes it suitable for secure real-time applications.
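To give a flavor of how a discretized Lorenz system can drive an image cipher, the Python sketch below builds a byte keystream from an Euler-discretized Lorenz map and XORs it with the pixel stream; this is a deliberately simplified stand-in and does not reproduce the paper's non-autonomous data injection, the second permuting generator, or its integer arithmetic.

    import numpy as np

    def lorenz_keystream(n, x0=(0.1, 0.0, 0.0), sigma=10.0, rho=28.0, beta=8.0 / 3.0, dt=0.005):
        # Euler-discretized Lorenz iterations, quantized to one byte per step.
        x, y, z = x0
        out = np.empty(n, dtype=np.uint8)
        for i in range(n):
            x, y, z = (x + dt * sigma * (y - x),
                       y + dt * (x * (rho - z) - y),
                       z + dt * (x * y - beta * z))
            out[i] = int(abs(x) * 1e6) % 256
        return out

    pixels = np.random.randint(0, 256, 64 * 64, dtype=np.uint8)
    cipher = pixels ^ lorenz_keystream(pixels.size)
    plain = cipher ^ lorenz_keystream(pixels.size)   # the same keystream recovers the image
    assert np.array_equal(plain, pixels)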
65
42513
Discrete PID and Discrete State Feedback Control of a Brushed DC Motor
Abstract:
Today, digital servo systems are extensively used in industrial manufacturing processes, robotic applications, vehicles, and other areas. In such control systems, the control action is provided by digital controllers with different compensation algorithms, designed to meet specific requirements for a given application. Due to the constant search for optimization in industrial processes, it is of interest to design digital controllers that offer ease of realization, improved computational efficiency, affordable return rates, and ease of tuning, ultimately improving the performance of the controlled actuators. There is a vast range of compensation algorithms that could be used, although in industry most controllers are based on a PID structure. This research article compares different types of digital compensators implemented in a servo system for DC motor position control. PID compensation is evaluated in its two most common architectures: the PID position form (1 DOF) and the PID speed form (2 DOF). State feedback algorithms are also evaluated, testing two modern control theory techniques: a discrete state observer for tracking non-measurable variables, and a linear quadratic method which allows a compromise between the theoretical optimal control and the realization that most closely matches it. The performance of the compared control systems is evaluated through simulations in the Simulink platform, in which each of the system’s hardware components is modeled as accurately as possible. The criteria by which the control systems are compared are reference tracking and disturbance rejection. In this investigation, accurate tracking of the reference signal is considered particularly important for a position control system because of the frequency and suddenness with which the control signal can change in position control applications, while disturbance rejection is considered essential because the torque applied to the motor shaft due to sudden load changes can be modeled as a disturbance that must be rejected to ensure reference tracking. Results show that 2 DOF PID controllers exhibit high performance in terms of the benchmarks mentioned, as long as they are properly tuned. As for controllers based on state feedback, given the advantages that the state-space formulation provides for modelling MIMO systems, such controllers are expected to be easier to tune for disturbance rejection, assuming that their designer is experienced. An in-depth multi-dimensional analysis of the preliminary research results indicates that the state feedback control method is more satisfactory, but the PID control method is easier to implement in most control applications.
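As a concrete reference point for the position-form PID mentioned above, here is a minimal discrete PID in Python with output saturation; the gains, sample time, and voltage limits are placeholder values, not the tuned settings used in the article's simulations.

    class DiscretePID:
        # Position form: u[k] = Kp*e[k] + Ki*Ts*sum(e) + Kd*(e[k] - e[k-1])/Ts
        def __init__(self, kp, ki, kd, ts, u_min=-12.0, u_max=12.0):
            self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
            self.u_min, self.u_max = u_min, u_max
            self.integ, self.e_prev = 0.0, 0.0

        def update(self, ref, meas):
            e = ref - meas
            self.integ += e * self.ts
            u = self.kp * e + self.ki * self.integ + self.kd * (e - self.e_prev) / self.ts
            self.e_prev = e
            return max(self.u_min, min(self.u_max, u))   # saturate to the supply limits

    pid = DiscretePID(kp=2.0, ki=0.5, kd=0.05, ts=0.001)
    u = pid.update(ref=1.0, meas=0.0)                    # one control update per sample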
64
73293
A Ground Structure Method to Minimize the Total Installed Cost of Steel Frame Structures
Abstract:
This paper presents a ground structure method to optimize the topology and discrete member sizing of steel frame structures in order to minimize the total installed cost, including material, fabrication, and erection components. The proposed method improves upon existing cost-based ground structure methods by incorporating constructability considerations as well as satisfying both strength and serviceability constraints. The architecture of the method is a bi-level Multidisciplinary Feasible (MDF) architecture in which the discrete member sizing optimization is nested within the topology optimization process. For each structural topology generated, the sizing optimization process seeks a set of discrete member sizes that results in the lowest total installed cost while satisfying strength (member utilization) and serviceability (node deflection and story drift) criteria. To assess cost accurately, the connection details for the structure are generated automatically using accurate site-specific cost information obtained directly from fabricators and erectors. Member continuity rules are also applied to each node in the structure to improve constructability. The proposed optimization method is benchmarked against conventional weight-based ground structure optimization methods, resulting in average cost savings of up to 30% with comparable computational efficiency.
63
75617
Virtual Approach to Simulating Geotechnical Problems under Both Static and Dynamic Conditions
Abstract:
Recent studies on the numerical simulation of geotechnical problems show the importance of considering the soil micro-structure. At this scale, soil is a discrete particle medium where the particles can interact with each other and with water flow under external forces, structural loads, or natural events. This paper presents research conducted in a virtual laboratory named SiGran, developed at IREQ (Institut de recherche d’Hydro-Quebec) for the purpose of investigating a broad range of problems encountered in geotechnics. Using the Discrete Element Method (DEM), SiGran simulates granular materials directly by applying Newton’s laws to each particle. The water flow is simulated using the Marker and Cell (MAC) method to solve the full form of the Navier-Stokes equations for an incompressible viscous liquid. In this paper, examples of numerical simulations and their comparisons with real experiments have been selected to show the complexity of geotechnical research at the micro level. These examples describe transient flows into a porous medium, the interaction of particles in a viscous flow, the compaction of saturated and unsaturated soils, and the phenomenon of liquefaction under seismic load. They also provide an opportunity to present SiGran’s capacity to compute the distribution and evolution of energy by type (particle kinetic energy, particle internal elastic energy, energy dissipated by friction or as a result of viscous interaction with the flow, and so on). This work also includes the first attempts to apply micro-scale discrete results at a macro continuum level, where the Smoothed Particle Hydrodynamics (SPH) method was used to resolve the system of governing equations. The material behavior equation is based on the results of simulations carried out at the micro level. The possibility of combining the three methods (DEM, MAC and SPH) is discussed.
62
82115
Stator Short-Circuits Fault Diagnosis in Induction Motors
Abstract:
This paper deals with the problem of stator fault diagnosis in induction motors. Using the discrete wavelet transform (DWT) to analyze the current Park’s vector modulus (CPVM), inter-turn short-circuit faults can be diagnosed. The method is based on the decomposition of the CPVM signal, from which the wavelet approximation and detail coefficients are extracted. Evaluating the energy of a detail coefficient in a known bandwidth allows a fault severity factor (FSF) to be defined. The method has been tested through the simulation of an induction motor using a mathematical model based on the winding-function approach. Simulation, as well as experimental results, shows the effectiveness of the method.
61
76473
Particle Swarm Optimization Algorithm vs. Genetic Algorithm for Image Watermarking Based Discrete Wavelet Transform
Abstract:
Over communication networks, images can be easily copied and distributed in an illegal way, so copyright protection for authors and owners is necessary. Therefore, digital watermarking techniques play an important role as a valid solution for authority problems. Digital image watermarking techniques are used to hide watermarks in images to achieve copyright protection and prevent illegal copying. Watermarks need to be robust to attacks and maintain data quality. In this paper, we discuss two approaches for image watermarking: the first is based on Particle Swarm Optimization (PSO) and the second on a Genetic Algorithm (GA). The discrete wavelet transform (DWT) is used separately with the two approaches to transform the cover image for the embedding process. Both PSO and GA use the correlation coefficient to detect the high-energy coefficients of the original image suitable for the watermark bits and then hide the watermark in the original image. Many experiments were conducted for the two approaches with different values of the PSO and GA parameters. From the experiments, the PSO approach obtained better results, with a PSNR equal to 53 and an MSE equal to 0.0039, whereas the GA approach obtained a PSNR equal to 50.5 and an MSE equal to 0.0048 when using a population size of 100, 150 iterations, and a 3×3 block. According to the results, a small block size can affect the quality of PSO/GA-based image watermarking because it increases the search area in the watermarked image. Better PSO results were obtained when using a swarm size of 100.
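The two quality figures quoted above are computed as in the short Python sketch below; the toy images are placeholders, but the MSE and PSNR formulas themselves are standard.

    import numpy as np

    def mse(a, b):
        return np.mean((a.astype(float) - b.astype(float)) ** 2)

    def psnr(a, b, peak=255.0):
        m = mse(a, b)
        return float('inf') if m == 0 else 10 * np.log10(peak ** 2 / m)

    original = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
    noise = np.random.randint(-2, 3, original.shape)                 # mimic slight embedding changes
    watermarked = np.clip(original.astype(int) + noise, 0, 255).astype(np.uint8)
    print(mse(original, watermarked), psnr(original, watermarked))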
60
26668
Discrete Crack Modeling of Side Face FRP-Strengthened Concrete Beam
Abstract:
Shear strengthening of concrete structures can be carried out with external fibre reinforced polymer (FRP). In the present investigation, a new fracture mechanics model is developed to model the side face of a concrete beam strengthened by external FRP. The discrete crack is simulated by a spring element with softening behavior ahead of the crack tip to model the cohesive zone in the concrete. A truss element, parallel to the spring element, is used to simulate the energy dissipation rate of the FRP. The strain energy release rate is calculated directly using a virtual crack closure technique, and the crack propagation criterion is then presented. The results are found to be acceptable when compared with previous experimental results and ABAQUS software data. It is observed that the length of the fracture process zone (FPZ) increases when FRP is applied to the side face, at the same load, in comparison with the control beam.
59
58605
Petri Net Modeling and Simulation of a Call-Taxi System
Authors:
Abstract:
A call-taxi system is a type of taxi service where a taxi could be requested through a phone call or mobile app. A schematic functioning of a call-taxi system is modeled using Petri net, which provides the necessary conditions for a taxi to be assigned by a dispatcher to pick a customer as well as the conditions for the taxi to be released by the customer. A Petri net is a graphical modeling tool used to understand sequences, concurrences, and confluences of activities in the working of discrete event systems. It uses tokens on a directed bipartite multi-graph to simulate the activities of a system. The Petri net model is translated into a simulation model and a call-taxi system is simulated. The simulation model helps in evaluating the operation of a call-taxi system based on the fleet size as well as the operating policies for call-taxi assignment and empty call-taxi repositioning. The developed Petri net based simulation model can be used to decide the fleet size as well as the call-taxi assignment policies for a call-taxi system.
58
22392
A Hybrid Watermarking Model Based on Frequency of Occurrence
Abstract:
Ownership of multimedia such as text, image, audio, or video files can be proved by embedding a watermark in them. This is achieved by introducing modifications into these files that are imperceptible to the human senses but easily recoverable by a computer program. These modifications may be in the time domain, the frequency domain, or both. This paper presents a watermarking procedure that mixes amplitude modulation with a frequency transformation histogram; namely, a specific value is used to modulate the intensity component Y of the YIQ components of the carrier image. This scheme is referred to as the histogram embedding technique (HET). Comparison with other techniques, such as the discrete wavelet transform (DWT), discrete cosine transform (DCT), and singular value decomposition (SVD), has shown enhanced efficiency in terms of ease and performance. The method has manifested a good degree of robustness against various environmental effects such as resizing, rotation, and different kinds of noise, and would prove a very useful technique for copyright protection and ownership judgment.
57
70394
UWB Channel Estimation Using an Efficient Sub-Nyquist Sampling Scheme
Abstract:
Recently, low-complexity sub-Nyquist sampling schemes based on the Finite Rate of Innovation (FRI) theory have been introduced to sample parametric signals at minimum rates. The multichannel modulating waveforms (MCMW) is such an efficient scheme, where the received signal is mixed with an appropriate set of arbitrary waveforms, integrated and sampled at rates far below the Nyquist rate. In this paper, the MCMW scheme is adapted to the special case of ultra wideband (UWB) channel estimation, characterized by dense multipaths. First, an appropriate structure, which accounts for the bandpass spectrum feature of UWB signals, is defined. Then, a novel approach to decrease the number of processing channels and reduce the complexity of this sampling scheme is presented. Finally, the proposed concepts are validated by simulation results, obtained with real filters, in the framework of a coherent Rake receiver.
56
20588
Frequency of Occurrence Hybrid Watermarking Scheme
Abstract:
Generally, a watermark is information that identifies the ownership of multimedia (text, image, audio, or video files). It is embedded by introducing modifications into these files that are imperceptible to the human senses but easily recoverable by a computer program. These modifications are made according to a secret key in a descriptive model that may operate in the time domain, the frequency domain, or both. This paper presents a watermarking procedure that mixes amplitude modulation with a frequency transformation histogram; namely, a specific value is used to modulate the intensity component Y of the YIQ components of the carrier image. This scheme is referred to as the histogram embedding technique (HET). Comparison with other techniques, such as the discrete wavelet transform (DWT), discrete cosine transform (DCT), and singular value decomposition (SVD), has shown enhanced efficiency in terms of ease and performance. The method has manifested a good degree of robustness against various environmental effects such as resizing, rotation, and different kinds of noise, and would prove a very useful technique for copyright protection and ownership judgment.
55
17954
Image Compression Based on Regression SVM and Biorthogonal Wavelets
Abstract:
In this paper, we propose an effective method for image compression based on SVM regression (SVR), with three different kernels, and the biorthogonal 2D discrete wavelet transform. SVM regression learns the dependency in the training data and compresses it by using only a few training points (the support vectors) to represent the original data and eliminate redundancy. A biorthogonal wavelet is used to transform the image, and the coefficients obtained are then trained with SVMs using different kernels (Gaussian, polynomial, and linear). Run-length and arithmetic coders are used to encode the support vectors and their corresponding weights obtained from the SVM regression. The peak signal-to-noise ratio (PSNR) and compression ratios of several test images compressed with our algorithm, using the different kernels, are presented. Compared with the other kernels, the Gaussian kernel achieves better image quality. Experimental results show that the compression performance of our method gains much improvement.
54
8876
A New Method to Estimate the Low Income Proportion: Monte Carlo Simulations
Abstract:
Estimation of a proportion has many applications in economics and social studies. A common application is the estimation of the low income proportion, which gives the proportion of people classified as poor within a population. In this paper, we present this poverty indicator and propose to use the logistic regression estimator for the problem of estimating the low income proportion. Various sampling designs are presented. Using a real data set obtained from the European Survey on Income and Living Conditions, Monte Carlo simulation studies are carried out to analyze the empirical performance of the logistic regression estimator under the various sampling designs considered in this paper. The results derived from the Monte Carlo simulation studies indicate that the logistic regression estimator can be more accurate than the customary estimator under these sampling designs, and that the stratified sampling design can provide even more accurate results.
53
30174
A Novel Integration of Berth Allocation, Quay Cranes and Trucks Scheduling Problems in Container Terminals
Abstract:
As maritime container transport is developing fast, the need arises for efficient operations at container terminals. One of the most important determinants of container handling efficiency is the productivity of the quay cranes and internal transportation vehicles, which are responsible for transporting containers during the unloading and loading operations for container vessels. For this reason, this paper presents an integrated mathematical model formulation for discrete berths with quay cranes and internal transportation vehicles. These problems have received increasing attention in the literature, and the present paper deals with their integration. A new mixed integer linear formulation is developed for the Berth Allocation Problem (BAP), the Quay Crane Assignment and Scheduling Problem (QCASP), and Internal Transportation Scheduling (ITS), which accounts for crane and truck positioning conditions.
52
19532
A Similar Image Retrieval System for Auroral All-Sky Images Based on Local Features and Color Filtering
Abstract:
The aurora is an attractive phenomenon, but its whole mechanism is difficult to understand. A data-intensive science approach might be effective for elucidating such a difficult phenomenon, but it requires labeled data showing when and what types of auroras have appeared. In this paper, we propose an image retrieval system for auroral all-sky images, some of which include discrete and diffuse aurora while the others do not include any aurora. The proposed system retrieves images similar to the query image using a popular image recognition method. Using 300 all-sky images obtained at Tromsø, Norway, we evaluate two image recognition methods with and without our original color filtering method. The best performance is achieved when SIFT is used with the color filtering, and its accuracy is 81.7% for discrete auroras and 86.7% for diffuse auroras.
51
43377
Image Compression on Region of Interest Based on SPIHT Algorithm
Abstract:
Image compression is used to reduce the size of a file without degrading the quality of the image to an objectionable level. The reduction in file size permits more images to be stored in a given amount of space and also reduces the time required for images to be transferred. The storage of medical images is a heavily researched area at present. To store a medical image, the image is divided into two parts: regions of interest and non-regions of interest. The best way to store an image is to compress it in such a way that no important information is lost. Compression can be done in two ways, namely lossy and lossless compression, for which several compression algorithms exist. In this paper, two algorithms are used: the discrete cosine transform, applied to the non-regions of interest (lossy), and the discrete wavelet transform, applied to the regions of interest (lossless). The paper introduces the SPIHT (set partitioning in hierarchical trees) algorithm, which is applied to the wavelet transform to obtain a good compression ratio so that an image can be stored efficiently.
50
29799
Wavelet Based Signal Processing for Fault Location in Airplane Cable
Abstract:
Wavelet analysis is an exciting method for solving difficult problems in mathematics, physics, and engineering, with modern applications as diverse as wave propagation, data compression, signal processing, image processing, and pattern recognition. Wavelets allow complex information such as signals, images, and patterns to be decomposed into elementary forms at different positions and scales and subsequently reconstructed with high precision. In this paper, a wavelet-based signal processing algorithm for locating faults in airplane cables is proposed. An orthogonal discrete wavelet decomposition and reconstruction algorithm is used to eliminate the noise in the aircraft cable fault signal. The experimental results show that the characteristics of the emission pulse and the reflected pulse used to locate the aircraft cable fault point are preserved, while the high-frequency noise is eliminated by the proposed algorithm.
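A generic version of the decompose-threshold-reconstruct step is sketched below with PyWavelets; the wavelet, decomposition level, and universal soft threshold are assumptions, since the paper does not state its exact thresholding rule.

    import numpy as np
    import pywt

    def wavelet_denoise(signal, wavelet='db4', level=5):
        # Decompose, soft-threshold the detail coefficients, reconstruct.
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise estimate from finest details
        thr = sigma * np.sqrt(2 * np.log(len(signal)))       # universal threshold
        coeffs[1:] = [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)

    # Toy cable response: an emission pulse and a reflected pulse plus noise.
    t = np.linspace(0, 1, 2048)
    pulse = np.exp(-((t - 0.3) / 0.01) ** 2) + 0.6 * np.exp(-((t - 0.7) / 0.01) ** 2)
    denoised = wavelet_denoise(pulse + 0.05 * np.random.randn(t.size))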
49
23382
Parameter Fitting of the Discrete Element Method When Modeling the DISAMATIC Process
Abstract:
In sand casting of metal parts for the automotive industry such as brake disks and engine blocks, the molten metal is poured into a sand mold to get its final shape. The DISAMATIC molding process is a way to construct these sand molds for casting of steel parts and in the present work numerical simulations of this process are presented. During the process green sand is blown into a chamber and subsequently squeezed to finally obtain the sand mould. The sand flow is modelled with the Discrete Element method (DEM) and obtaining the correct material parameters for the simulation is the main goal. Different tests will be used to find or calibrate the DEM parameters needed; Poisson ratio, Young modulus, rolling friction coefficient, sliding friction coefficient and coefficient of restitution (COR). The Young modulus and Poisson ratio are found from compression tests of the bulk material and subsequently used in the DEM model according to the Hertz-Mindlin model. The main focus will be on calibrating the rolling resistance and sliding friction in the DEM model with respect to the behavior of “real” sand piles. More specifically, the surface profile of the “real” sand pile will be compared to the sand pile predicted with the DEM for different values of the rolling and sliding friction coefficients. When the DEM parameters are found for the particle-particle (sand-sand) interaction, the particle-wall interaction parameter values are also found. Here the sliding coefficient will be found from experiments and the rolling resistance is investigated by comparing with observations of how the green sand interacts with the chamber wall during experiments and the DEM simulations will be calibrated accordingly. The coefficient of restitution will be tested with different values in the DEM simulations and compared to video footages of the DISAMATIC process. Energy dissipation will be investigated in these simulations for different particle sizes and coefficient of restitution, where scaling laws will be considered to relate the energy dissipation for these parameters. Finally, the found parameter values are used in the overall discrete element model and compared to the video footage of the DISAMATIC process.
48
67655
Analytical Technique for Definition of Internal Forces in Links of Robotic Systems and Mechanisms with Statically Indeterminate and Determinate Structures Taking into Account the Distributed Dynamical Loads and Concentrated Forces
Abstract:
The distributed inertia forces of complex nature appear in the links of rod mechanisms during motion. Such loads raise a number of problems, such as destruction caused by a large inertia force; the elastic deformation of the mechanism can also be considerable and can put the mechanism out of action. In this work, a new analytical approach is proposed for the definition of internal forces in links of robotic systems and mechanisms with statically indeterminate and determinate structures, taking into account distributed inertial and concentrated forces. The relations between the intensity of the distributed inertia forces and the link weight, on the one hand, and the geometrical, physical and kinematic characteristics, on the other, are determined in this work. The distribution laws of the inertia forces and dead weight make it possible, at each position of the links, to deduce the laws of distribution of the internal forces along the axis of the link, so that the loads are found at any point of the link. The approximation matrices of the forces of an element under the action of distributed inertia loads with trapezoidal intensity are defined. The obtained approximation matrices establish the dependence between the force vector in any cross-section of the element and the force vectors in the calculated cross-sections, and also allow defining the physical characteristics of the element, i.e., the compliance matrix of discrete elements. Hence, the compliance matrices of an element under the action of distributed inertial loads of trapezoidal shape along the axis of the element are determined. The internal loads of each continual link are unambiguously determined by a set of internal loads in its separate cross-sections and by the approximation matrices. Therefore, the task is reduced to the calculation of internal forces in a finite number of cross-sections of the elements. Consequently, this leads to a discrete model for the elastic calculation of links of rod mechanisms. The discrete models of the elements of mechanisms and robotic systems, and their discrete model as a whole, are constructed. The dynamic equilibrium equations for the discrete model of the elements are also obtained in this work, as well as the equilibrium equations of the pin and rigid joints expressed through the required parameters of the internal forces. The obtained systems of dynamic equilibrium equations are sufficient for the definition of internal forces in links of mechanisms whose structure is statically determinate. For the determination of internal forces in statically indeterminate mechanisms, it is necessary to build a compliance matrix for the entire discrete model of the rod mechanism, which is achieved in this work. As a result, by means of the developed technique, programs are written in the MAPLE18 system, and animations are obtained of the motion of fourth-class mechanisms of statically determinate and statically indeterminate structure, with the intensities of the transverse and axial distributed inertial loads, the bending moments, and the transverse and axial forces plotted on the links as functions of the kinematic characteristics of the links.
47
8251
Content Based Face Sketch Images Retrieval in WHT, DCT, and DWT Transform Domain
Abstract:
Content-based face sketch retrieval can be used to find images of criminals from their sketches for 'Crime Prevention'. This paper investigates the problem of content-based image retrieval (CBIR) of face sketch images in the transform domain. Face sketch images that are similar to the query image are retrieved from the face sketch database. Features of the face sketch image are extracted in the spectral domain of selected transforms: the Discrete Cosine Transform (DCT), the Discrete Wavelet Transform (DWT), and the Walsh Hadamard Transform (WHT). For the performance analysis of the feature selection methods, three face image databases are used: the 'Sheffield face database', the 'Olivetti Research Laboratory (ORL) face database', and the 'Indian face database'. The City block distance measure is used to evaluate the performance of the retrieval process. The investigation concludes that the retrieval rate is database dependent, but in general the DCT is the best, while the WHT is the best with respect to the speed of retrieving images.
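A minimal sketch of transform-domain retrieval with the City block distance is shown below; it assumes that the low-frequency DCT coefficients serve as the feature vector and uses a synthetic database, so it is an illustration of the general technique rather than the authors' system.

```python
import numpy as np
from scipy.fft import dctn

def dct_features(image, k=8):
    """Keep the top-left k x k block of 2-D DCT coefficients as the feature vector."""
    return dctn(image, norm='ortho')[:k, :k].ravel()

def retrieve(query, database, k=8):
    """Rank database images by City block (L1) distance to the query features."""
    q = dct_features(query, k)
    dists = [np.sum(np.abs(q - dct_features(img, k))) for img in database]
    return np.argsort(dists)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    db = [rng.random((64, 64)) for _ in range(5)]           # stand-in for a sketch database
    noisy_query = db[2] + 0.01 * rng.random((64, 64))       # slightly perturbed copy of entry 2
    print(retrieve(noisy_query, db)[:3])                    # entry 2 should rank first
```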
46
79163
Direct Approach in Modeling Particle Breakage Using Discrete Element Method
Abstract:
The current study aims to develop an in-house discrete element method (DEM) code and link it with a direct breakage event, so that it becomes possible to determine particle breakage, and then the fragment size distribution, simultaneously with the DEM simulation. Particle breakage is applied directly inside the DEM computation algorithm, and if any breakage happens the original particle is replaced with daughter fragments. In this way, the calculation is continued based on a new, updated particle list, which is very similar to the real grinding environment. To validate the developed model, a grinding ball impacting an unconfined particle bed was simulated. Since considering an entire ball mill would be too computationally demanding, this method provided a simplified environment to test the model. Accordingly, a representative volume of the ball mill was simulated inside a box, which could emulate media (ball)–powder bed impacts in a ball mill and during particle bed impact tests. Mono, binary and ternary particle beds were simulated to determine the effects of granular composition on breakage kinetics. The results obtained from the DEM simulations showed a reduction in the specific breakage rate for coarse particles in binary mixtures. The origin of this phenomenon, commonly known as cushioning or decelerated breakage in dry milling processes, was explained by the DEM simulations. Fine particles in a particle bed increase mechanical energy loss and reduce and distribute interparticle forces, thereby inhibiting the breakage of the coarse component. On the other hand, the specific breakage rate of fine particles increased due to contacts associated with coarse particles. Such a phenomenon, known as acceleration, was shown to be less significant, but should be considered in future attempts to accurately quantify non-linear breakage kinetics in the modeling of dry milling processes.
45
5007
Fault Detection of Pipeline in Water Distribution Network System
Abstract:
Water pipe networks are installed underground, and once in place, it is difficult to recognize the state of the pipes when a leak or burst happens. Accordingly, remedial action is often delayed after a fault occurs. Therefore, a systematic fault management system for the water pipe network is required to prevent accidents and minimize losses. In this work, we develop an online fault detection system for a water pipe network using pipe data such as flow rate or pressure. The transient model describing water flow in pipelines is presented and simulated using Matlab. Fault situations such as a leak or burst can also be simulated, and the flow rate or pressure data when the fault happens are collected. Faults are detected using the fast Fourier transform and the discrete wavelet transform, and the two methods are compared to find which shows the better fault detection performance.
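The following minimal sketch, on synthetic pressure data, illustrates the two kinds of features compared in the text: an FFT-based spectral measure and a DWT detail-energy measure. The signal shape, wavelet and levels are illustrative assumptions, not the simulated pipeline model.

```python
import numpy as np
import pywt

def fft_feature(pressure):
    """Magnitude of the strongest non-DC spectral component of the pressure record."""
    spectrum = np.abs(np.fft.rfft(pressure - pressure.mean()))
    return float(spectrum[1:].max())

def dwt_feature(pressure, wavelet='db4'):
    """Total energy in the wavelet detail bands (sensitive to transients)."""
    _, *details = pywt.wavedec(pressure, wavelet, level=3)
    return sum(float(np.sum(d ** 2)) for d in details)

if __name__ == "__main__":
    t = np.linspace(0, 10, 2000)
    normal = 5.0 + 0.01 * np.random.randn(t.size)                     # steady pressure, bar
    burst = normal.copy()
    burst[1000:] -= 0.8 * (1 - np.exp(-(t[1000:] - t[1000])))         # pressure drop after a burst
    for name, sig in [("normal", normal), ("burst", burst)]:
        print(name, round(fft_feature(sig), 3), round(dwt_feature(sig), 3))
```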
44
14790
Time Truncated Group Acceptance Sampling Plans for Exponentiated Half Logistic Distribution
Abstract:
In this article, we consider group acceptance sampling plans for the exponentiated half logistic distribution when the life test is truncated at a pre-specified time. It is assumed that the index parameter of the exponentiated half logistic distribution is known. The design parameters, such as the number of groups and the acceptance number, are obtained by satisfying the producer's and consumer's risks at the specified quality levels in terms of medians and 10th percentiles, under the assumption that the termination time and the number of items in each group are pre-fixed. Finally, an example is given to illustrate the methodology.
43
115264
Lung Cancer Detection and Multi Level Classification Using Discrete Wavelet Transform Approach
Abstract:
Uncontrolled growth of abnormal cells in the lung in the form of a tumor can be either benign (non-cancerous) or malignant (cancerous). Patients with Lung Cancer (LC) have an average life span expectancy of five years, provided diagnosis, detection, and prediction reduce the many treatment options to the risk of invasive surgery, increasing the survival rate. Computed Tomography (CT), Positron Emission Tomography (PET), and Magnetic Resonance Imaging (MRI) are commonly used for earlier detection of cancer. A Gaussian filter along with a median filter is used for smoothing and noise removal, and Histogram Equalization (HE) for image enhancement gives the best results without inviting further opinions. Lung cavities are extracted, the background other than the two lung cavities is completely removed, and the right and left lungs are segmented separately. Region properties (area, perimeter, diameter, centroid and eccentricity) are measured for the segmented tumor image, while texture is characterized by Gray-Level Co-occurrence Matrix (GLCM) functions; feature extraction provides the Region of Interest (ROI) given as input to the classifier. Two levels of classification are employed: K-Nearest Neighbor (KNN) is used for determining the patient condition as normal or abnormal, while Artificial Neural Networks (ANN) are used for identifying the cancer stage. The Discrete Wavelet Transform (DWT) algorithm is used for the main feature extraction, leading to the best efficiency. The developed technology shows encouraging results for real-time information and online detection in future research.
42
111311
Lung Cancer Detection and Multi Level Classification Using Discrete Wavelet Transform Approach
Abstract:
Uncontrolled growth of abnormal cells in the lung in the form of a tumor can be either benign (non-cancerous) or malignant (cancerous). Patients with Lung Cancer (LC) have an average life span expectancy of five years, provided diagnosis, detection and prediction reduce the many treatment options to the risk of invasive surgery, increasing the survival rate. Computed Tomography (CT), Positron Emission Tomography (PET), and Magnetic Resonance Imaging (MRI) are commonly used for earlier detection of cancer. A Gaussian filter along with a median filter is used for smoothing and noise removal, and Histogram Equalization (HE) for image enhancement gives the best results without inviting further opinions. Lung cavities are extracted and the background other than the two lung cavities is completely removed, with the right and left lungs segmented separately. Region properties (area, perimeter, diameter, centroid and eccentricity) are measured for the segmented tumor image, while texture is characterized by Gray-Level Co-occurrence Matrix (GLCM) functions; feature extraction provides the Region of Interest (ROI) given as input to the classifier. Two levels of classification are employed: K-Nearest Neighbor (KNN) is used for determining the patient condition as normal or abnormal, while Artificial Neural Networks (ANN) are used for identifying the cancer stage. The Discrete Wavelet Transform (DWT) algorithm is used for the main feature extraction, leading to the best efficiency. The developed technology shows encouraging results for real-time information and online detection in future research.
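A minimal sketch of the texture and first-level classification steps is given below: a grey-level co-occurrence matrix with two Haralick-style features and a KNN normal/abnormal decision. The synthetic textures, feature choice and labels are illustrative stand-ins for segmented CT regions of interest, not the authors' pipeline.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(img, levels=8):
    """Contrast and homogeneity from a normalized GLCM with pixel offset (0, 1)."""
    q = np.clip((img * levels).astype(int), 0, levels - 1)      # quantize an image scaled to [0, 1)
    m = np.zeros((levels, levels))
    for i in range(q.shape[0]):
        for j in range(q.shape[1] - 1):
            m[q[i, j], q[i, j + 1]] += 1
    p = m / m.sum()
    ii, jj = np.indices(p.shape)
    contrast = float(np.sum(((ii - jj) ** 2) * p))
    homogeneity = float(np.sum(p / (1.0 + np.abs(ii - jj))))
    return [contrast, homogeneity]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    smooth = [rng.random((32, 32)) * 0.2 + 0.4 for _ in range(20)]   # low-contrast texture ("normal")
    coarse = [rng.random((32, 32)) for _ in range(20)]               # high-contrast texture ("abnormal")
    X = np.array([glcm_features(im) for im in smooth + coarse])
    y = np.array([0] * 20 + [1] * 20)
    knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
    print(knn.predict([glcm_features(rng.random((32, 32)))]))
```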
41
56360
Behavior of Common Philippine-Made Concrete Hollow Block Structures Subjected to Seismic Load Using Rigid Body Spring-Discrete Element Method
Abstract:
Concrete hollow blocks (CHB) are the most commonly used masonry blocks for walls in residential houses, school buildings and public buildings in the Philippines. During the recent 2013 Bohol earthquake (Mw 7.2), it was shown that CHB walls are very vulnerable to severe external actions such as strong ground motion. In this paper, a numerical model of CHB structures is proposed, and the seismic behavior of CHB houses is presented. In the modeling, the Rigid Body Spring-Discrete Element Method (RBS-DEM) is used, wherein masonry blocks are discretized into rigid elements and connected by nonlinear springs at preselected contact points. The shear and normal stiffnesses of the springs are derived from the material properties of the CHB unit, incorporating the grout and mortar fillings through the volumetric transformation of the dimensions using material ratios. Numerical models of reinforced and unreinforced walls are first subjected to linearly increasing in-plane loading to observe the different failure mechanisms. These wall models are then assembled to form typical model masonry houses and subjected to the El Centro and Pacoima earthquake records. Numerical simulations show that the elastic, failure and collapse behavior of the model houses agree well with shaking table test results. The effectiveness of the method in replicating failure patterns will serve as a basis for improving the design and provides a good basis for strengthening the structure.
40
32482
Performance Evaluation of Discrete Fourier Transform Algorithm Based PMU for Wide Area Measurement System
Abstract:
The implementation of advanced technologies requires sophisticated instruments that deal with the operation, control, restoration and protection of the rapidly growing power system network under normal and abnormal conditions. Presently, Phasor Measurement Units (PMUs) are widely applied in the real-time operation, monitoring, control and analysis of the power system network, as they eliminate various limitations of the Supervisory Control and Data Acquisition (SCADA) systems conventionally used in power systems. PMU data are growing rapidly in importance for online and offline analysis. The Wide Area Measurement System (WAMS) has been developed as a new technology through the use of multiple PMUs in the power system. The present paper proposes a MATLAB-based PMU model using the Discrete Fourier Transform (DFT) algorithm and evaluates its operation under different contingencies. A PMU-based two-bus system with a WAMS network is presented as a case study.
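As an illustration of the core of a DFT-algorithm-based PMU, the following minimal sketch estimates a one-cycle fundamental phasor (RMS magnitude and phase). The 50 Hz test signal, sampling rate and phase angle are assumptions, not the paper's MATLAB model.

```python
import numpy as np

def dft_phasor(window, n_samples_per_cycle):
    """Estimate the fundamental phasor (RMS magnitude, phase in rad) from one cycle of samples."""
    n = np.arange(n_samples_per_cycle)
    phasor = (np.sqrt(2.0) / n_samples_per_cycle) * np.sum(
        window * np.exp(-1j * 2.0 * np.pi * n / n_samples_per_cycle))
    return np.abs(phasor), np.angle(phasor)

if __name__ == "__main__":
    f0 = 50.0
    fs = f0 * 64                                        # 64 samples per nominal cycle
    t = np.arange(64) / fs
    v = 230.0 * np.sqrt(2.0) * np.cos(2.0 * np.pi * f0 * t + np.deg2rad(30.0))
    mag, ang = dft_phasor(v, 64)
    print(round(mag, 2), "V RMS at", round(np.degrees(ang), 2), "deg")
```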
39
110579
Utilizing Spatial Uncertainty of On-The-Go Measurements to Design Adaptive Sampling of Soil Electrical Conductivity in a Rice Field
Abstract:
The main reasons for site-specific management of agricultural inputs are to increase the profitability of crop production, to protect the environment and to improve product quality. Information about the variability of different soil attributes within a field is highly essential for the decision-making process. The lack of fast and accurate acquisition of soil characteristics remains one of the biggest limitations of precision agriculture, because such acquisition is expensive and time-consuming. Adaptive sampling has been proven to be an accurate and affordable sampling technique for within-field planning in the site-specific management of agricultural inputs. This study employed the spatial uncertainty of soil apparent electrical conductivity (ECa) estimates to identify adaptive re-survey areas in the field. The original dataset was grouped into validation and calibration groups, where the calibration group was sub-grouped into three sets with different measurement pass intervals. A conditional simulation was performed on the field ECa to evaluate the spatial uncertainty of the ECa estimates using a geostatistical technique. The grouping of high-uncertainty areas for each set was done using image segmentation in MATLAB, and high- and low-value areas were then separated. Finally, an adaptive re-survey was carried out in the areas of high uncertainty. Adding the adaptive re-survey significantly reduced the time required for resampling the whole field and resulted in ECa estimates with minimal error. For the most widely spaced transects, the root mean square error (RMSE) obtained from the initial crude sampling survey was reduced after the adaptive re-survey to a value close to that obtained with an all-field re-survey. The estimated sampling time for the adaptive re-survey was found to be 45% less than that of the all-field re-survey. The results indicate that designing adaptive sampling through spatial uncertainty models significantly mitigates the sampling cost while maintaining the accuracy of the observations.
38
64223
Computational Fluid Dynamics (CFD) Simulation of Transient Flow in a Rectangular Bubble Column Using a Coupled Discrete Phase Model (DPM) and Volume of Fluid (VOF) Model
Abstract:
In this work, we present a computational study for the characterization of the flow in a rectangular bubble column. To simulate the dynamic characteristics of the flow, three-dimensional transient numerical simulations based on a coupled discrete phase model (DPM) and volume of fluid (VOF) model are performed. Modeling of bubble column reactors is often carried out under the assumption of a flat liquid surface with a degassing boundary condition. However, the dynamic behavior of the top surface surmounting the liquid phase will to some extent influence the meandering oscillations of the bubble plume. Therefore, it is important to capture the surface behavior, and the assumption of a flat surface may not be applicable. The modeling approach thus needs to account for a dynamic liquid surface induced by the rising bubble plume. The volume of fluid (VOF) model was applied to the liquid and the top gas, which both interact with bubbles implemented through a discrete phase model. This model treats the bubbles as Lagrangian particles and the liquid and the top gas as Eulerian phases with a sharp interface. Two-way coupling between the Eulerian phases and the Lagrangian bubbles is accounted for in a single set of continuous-phase momentum equations for the mixture of the two Eulerian phases. The effect of gas flow rate on the dynamic and time-averaged flow properties was studied. The time-averaged liquid velocity field predicted from the simulations and from our previous PIV measurements shows that the liquid is entrained upward in the wake of the bubbles and flows downward near the walls. The simulated and measured vertical velocity profiles show reasonable agreement with respect to the minimum velocity values near the walls and the maximum values at the column center.
37
6132
The Effects of Seasonal Variation on the Microbial-N Flow to the Small Intestine and Prediction of Feed Intake in Grazing Karayaka Sheep
Abstract:
The objectives of the present study were to estimate the microbial-N flow to the small intestine and to predict the digestible organic matter intake (DOMI) in grazing Karayaka sheep based on the urinary excretion of purine derivatives (xanthine, hypoxanthine, uric acid, and allantoin) by the use of spot urine sampling under field conditions. In the trial, 10 Karayaka sheep from 2 to 3 years of age were used. The animals grazed on pasture for ten months and were fed concentrate and vetch plus oat hay indoors for the other two months (January and February). Highly significant linear and cubic relationships (P < 0.001) were found among months for the purine derivatives index, purine derivatives excretion, purine derivatives absorption, microbial-N and DOMI. Through urine sampling and the determination of the levels of excreted urinary PD and the purine derivatives/creatinine ratio (PDC index), microbial-N values were estimated, and they indicated that the protein nutrition of the sheep was insufficient. In conclusion, the prediction of the protein nutrition of sheep under field conditions may be possible with the use of spot urine sampling, urinary excreted PD and the PDC index. The mean purine derivative levels in spot urine samples from the sheep were highest in June, July and October. The protein nutrition of pastured sheep may be affected by weather changes, including rainfall. Spot urine sampling may be useful in modeling the feed consumption of grazing sheep. However, further studies are required under different field conditions with different breeds of sheep to develop spot urine sampling as a model.
36
117344
A Simulation Study on the Applicability of Overbooking Strategies in Inland Container Transport
Abstract:
The inland transportation of maritime containers entails the use of different modalities whose capacity is typically booked in advance. Containers may miss their scheduled departure time at a terminal for several reasons, such as delays, changes of transport mode, or multiple pending bookings. In those cases, it may be difficult for transport service providers to find last-minute containers to fill the vacant capacity. As in other industries, overbooking could potentially limit these drawbacks, at the cost of a lower service level when the containers that show up actually exceed the capacity of overbooked rides. However, the presence of multiple modalities may provide the required flexibility in rescheduling and limit the dissatisfaction of shippers whose containers are overbooked. This flexibility is known by the term 'synchromodality'. In this paper, we evaluate the application of overbooking via discrete event simulation. Results show that in certain conditions overbooking can significantly increase the profit and utilization of high-capacity means of transport, such as barges and trains. On the other hand, in the case of high penalty costs and limited no-shows, overbooking may lead to an excessive use of expensive trucks.
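A minimal Monte Carlo sketch of evaluating an overbooking level for a single high-capacity departure is shown below; the capacity, no-show probability, revenue and penalty values are illustrative assumptions, not parameters of the simulation study.

```python
import numpy as np

def expected_profit(bookings, capacity=90, p_no_show=0.12,
                    revenue=100.0, bump_penalty=250.0, n_runs=20000, seed=1):
    """Average profit of one barge departure when `bookings` containers are accepted."""
    rng = np.random.default_rng(seed)
    shows = rng.binomial(bookings, 1.0 - p_no_show, size=n_runs)   # containers that actually arrive
    carried = np.minimum(shows, capacity)
    bumped = shows - carried                                       # rerouted (e.g. by truck), penalized
    return float(np.mean(carried * revenue - bumped * bump_penalty))

if __name__ == "__main__":
    for b in range(90, 106, 5):            # accept between 90 and 105 bookings for 90 slots
        print(b, round(expected_profit(b), 1))
```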
35
44000
A Simple and Efficient Method for Accurate Measurement and Control of Power Frequency Deviation
Authors:
Abstract:
In the presented technique, a simple method is given for the accurate measurement and control of power frequency deviation. The sinusoidal signal for which the frequency deviation measurement is required is transformed to a low voltage level and passed through a zero crossing detector to convert it into a pulse train. Another stable square wave signal of 10 kHz is obtained using a crystal oscillator and decade dividing assemblies (DDA). These signals are combined digitally and then passed through decade counters to give a unique combination of pulses or levels, which are further encoded to make them equally suitable for both control applications and display units. The developed circuit using discrete components has a resolution of 0.5 Hz and completes a measurement within 20 ms. The realized circuit is simulated and synthesized using Verilog HDL and subsequently implemented on FPGA. The results of the measurement on FPGA are observed on a very high resolution logic analyzer. These results accurately match the simulation results as well as the results of the same circuit implemented with discrete components. The proposed system is suitable for accurate measurement and control of power frequency deviation.
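The following minimal sketch illustrates zero-crossing-based frequency deviation measurement on a sampled power-frequency waveform; the nominal frequency, sampling rate and noise level are assumptions about the measured signal, and the digital counting hardware of the described circuit is not modeled.

```python
import numpy as np

def frequency_from_zero_crossings(signal, fs):
    """Estimate frequency from rising zero crossings with linear interpolation between samples."""
    idx = np.where((signal[:-1] < 0.0) & (signal[1:] >= 0.0))[0]
    # Interpolate the exact crossing instants between samples idx and idx + 1
    t_cross = (idx - signal[idx] / (signal[idx + 1] - signal[idx])) / fs
    periods = np.diff(t_cross)
    return 1.0 / np.mean(periods)

if __name__ == "__main__":
    fs, f_actual, f_nominal = 10000.0, 49.8, 50.0
    t = np.arange(0, 0.2, 1.0 / fs)
    v = np.sin(2.0 * np.pi * f_actual * t) + 0.001 * np.random.randn(t.size)
    f_est = frequency_from_zero_crossings(v, fs)
    print(round(f_est, 3), "Hz, deviation", round(f_est - f_nominal, 3), "Hz")
```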
34
27000
Modeling and Implementation of a Hierarchical Safety Controller for Human Machine Collaboration
Abstract:
This paper primarily describes the concept of hierarchical safety control (HSC) in discrete manufacturing to uphold productivity in the presence of human intervention and machine failures, using a systematic approach that increases system availability and uses additional knowledge about the machines so as to improve human machine collaboration (HMC). It also highlights the implemented PLC safety algorithm used to apply this generic concept to a concrete production line on a lab demonstrator called FATIE (Factory Automation Test and Integration Environment). Furthermore, the paper describes a model and provides a systematic representation of human-machine collaboration in discrete manufacturing, and to this end the Hierarchical Safety Control concept is proposed. This offers a generic description of human-machine collaboration based on Finite State Machines (FSM) that can be applied to various discrete manufacturing lines instead of using ad-hoc solutions for each line. With its reusability, flexibility, and extendibility, the Hierarchical Safety Control scheme allows upholding productivity while maintaining safety with reduced engineering effort compared to existing solutions. The approach begins with a partitioning of different zones around the Integrated Manufacturing System (IMS), which are defined by the operator tasks and the risk assessment and are used to describe the location of the human operator, thus identifying the related potential hazards and triggering the corresponding safety functions to mitigate them. This includes selective reduced-speed zones and stop zones; in addition, within the hierarchical safety control scheme, advanced safety functions such as safe standstill and safe reduced speed are used to achieve the main goals of improving safe Human Machine Collaboration and increasing productivity. In a sample scenario, it is shown that an increase of productivity in the order of 2.5% is already possible with hierarchical safety control; consequently, under the given assumptions, a total sum of 213 € could be saved for each intervention compared to a protective stop reaction. The loss is thereby reduced by 22.8% if the occasional hazard can be refined in a hierarchical way. Furthermore, production downtime due to the temporary unavailability of safety devices can be avoided with a safety failover that can save millions per year. Moreover, the paper highlights the development, implementation and application of the concept on the lab demonstrator (FATIE), where it is realized on the new safety PLCs, drive units, HMI and safety devices, in addition to the main components of the IMS.
33
76068
Evaluation of Negative Air Ions in Bioaerosol Removal: Indoor Concentration of Airborne Bacterial and Fungal in Residential Building in Qom City, Iran
Abstract:
The present investigation was conducted to detect the types and concentrations of bacterial and fungal bioaerosols in one room (bedroom) of each selected residential building located in different regions of Qom from February 2015 (n=9) to July 2016 (n=11). Moreover, we evaluated the efficiency of negative air ions (NAIs) in reducing bioaerosols in the indoor air of residential buildings. In the first step, the mean concentrations of bacteria and fungi at the nine sampling sites evaluated in winter were 744 and 579 colony forming units (CFU)/m3, while these values were 1628.6 and 231 CFU/m3 at the 11 sampling sites evaluated in summer, respectively. The most predominant bacterial and fungal genera at all sampling sites were Micrococcus spp. and Staphylococcus spp., and Aspergillus spp. and Penicillium spp., respectively. Bacterial and fungal concentrations exceeded the recommended levels at 95% and 45% of the sampling sites, respectively. In the removal step, we achieved reductions ranging from 38% to 93% for bacterial genera and from 25% to 100% for fungal genera by using NAIs. The results suggest that NAI is a highly effective, simple and efficient technique for reducing the bacterial and fungal concentrations in the indoor air of residential buildings.
32
43320
Transfer Rate of Organic Water Contaminants through a Passive Sampler Membrane of Polyethersulfone (PES)
Abstract:
Accurate assessments of contaminant concentrations based on traditional grab sampling methods are not always possible. Passive samplers offer an attractive alternative to traditional sampling methods that overcomes these limitations. The POCIS approach has been used as a screening tool for determining the presence/absence, possible sources, and relative amounts of organic compounds at field sites. The objective of the present research is the mass transfer of five water contaminants (atrazine, caffeine, bentazon, ibuprofen, atenolol) through the water boundary layer (WBL) and the membrane. More specific objectives are to establish a relationship between the sampling rate and the water solubility of the compounds, and to compare the molecular weights of the compounds and their concentrations at the time of equilibrium. Determining whether the water boundary layer affects the transport rate through the membrane is another main objective of this paper. After GC-MS analysis of the compounds, and regarding the WBL effect in this experiment, a Sherwood number for the experimental tank was developed. A close relationship between the feed concentration of a compound and its sampling rate has been observed.
31
79265
An Evaluation of Discontinuities in Rock Mass Using Coupled Hydromechanical Finite Element and Discrete Element Analyses
Abstract:
The paper will present the design and construction of the underground excavations of a pump station forebay and its related components, including connector tunnels, an access shaft, a riser shaft and well shafts. The underground openings include an 8 m-diameter riser shaft, an 8 m-diameter access shaft, 34 well shafts of 2.4 m diameter, a 107 m-long forebay with a cross section 11 m high and 10 m wide, and a 6 m by 6 m stub connector tunnel between the access shaft and a future forebay extension. The riser shaft extends down from the existing forebay connector tunnel at elevation 247 m to the crown of the forebay at elevation 770.0 feet. The access shaft will extend from the platform at the surface down to El. 223.5 m. The pump station will have the capacity to deliver 600 million gallons per day. The project is located on an uplifted horst consisting of a mass of Precambrian metamorphic rock trending in a north-south direction. The eastern slope of the area is very steep and pronounced and is likely the result of high-angle normal faulting. Toward the west, the area is bordered by a high-angle normal fault and recent alluvial, lacustrine, and colluvial deposits. An evaluation of the rock mass properties, faults and discontinuities, foliation and joints, and in situ stresses was performed. The response of the rock mass was evaluated in 3DEC using the Discrete Element Method (DEM) by explicitly accounting for both major and minor discontinuities within the rock mass (i.e., joints, shear zones, faults). Moreover, the stability of the entire subsurface structure, including the forebay, access and riser shafts, future forebay, well shafts, and connecting tunnels, and their interactions with each other, was evaluated using a 3D coupled hydromechanical Finite Element Analysis (FEA).
30
95355
Improved Estimation Strategies of Sensitive Characteristics Using Scrambled Response Techniques in Successive Sampling
Abstract:
This research work analyses the consequences of the scrambled response technique for estimating the current population mean in two-occasion successive sampling when the characteristic of interest is sensitive in nature. Generalized estimation procedures have been proposed using sensitive auxiliary variables under additive and multiplicative scramble models. The properties of the resulting estimators have been examined in depth. Simulation as well as empirical studies are carried out to evaluate the performance of the proposed estimators with respect to other competing estimators. The results of our studies suggest that the proposed estimation procedures are highly effective in the presence of non-response. The results also suggest that the additive scrambled response model is a better choice from the perspective of survey cost and respondent privacy.
29
52388
Integration of Climatic Factors in the Meta-Population Modelling of the Dynamic of Malaria Transmission, Case of Douala and Yaoundé, Two Cities of Cameroon
Abstract:
The goal of our study is to analyse the impact of climatic factors on malaria transmission, taking into account migration between Douala and Yaoundé, two cities of Cameroon. We show how variations of climatic factors such as temperature and relative humidity affect the spread of malaria. We propose a meta-population model of the dynamics of malaria transmission that evolves in space and time and takes into account temperature, relative humidity, and the migration between Douala and Yaoundé. We also integrate the variation of environmental factors as events, also called mathematical impulses, that can disrupt the model evolution at any time. Our modelling has been done using the Discrete Event System Specification (DEVS) formalism. Our implementation has been done on the Virtual Laboratory Environment (VLE), which uses the DEVS formalism and abstract simulators for coupling models by integrating the concepts of DEVS.
28
90130
Iterative Solver for Solving Large-Scale Frictional Contact Problems
Abstract:
Since the precise formulation of the elastic part is irrelevant for the description of the algorithm, we shall consider a generic case. In practice, however, we will have to deal with a nonlinear material (for instance a Mooney-Rivlin model). We are interested in solving a finite element approximation of the problem, leading to large-scale nonlinear discrete problems and, after linearization, to large linear systems and ultimately to calculations needing iterative methods. This also implies that the penalty method, and therefore the augmented Lagrangian method, are to be banned because of their negative effect on the condition number of the underlying discrete systems and thus on the convergence of iterative methods. This is a break with the mainstream of methods for contact, in which the augmented Lagrangian is the principal tool. We shall first present the problem and its discretization; this will lead us to describe a general solution algorithm relying on a preconditioner for saddle-point problems, which we shall describe in some detail as it is not entirely standard. We will propose an iterative approach for solving three-dimensional frictional contact problems between elastic bodies, including contact with a rigid body, contact between two or more bodies, and also self-contact.
27
49161
Long-Term Monitoring and Seasonal Analysis of PM10-Bound Benzo(a)pyrene in the Ambient Air of Northwestern Hungary
Abstract:
Atmospheric aerosols have several important environmental impacts and health effects related to air quality. Monitoring PM10-bound polycyclic aromatic hydrocarbons (PAHs) is therefore of environmental significance and relevant to health protection. Benzo(a)pyrene (BaP) is the most relevant indicator of these PAH compounds. In Hungary, the Hungarian Air Quality Network provides air quality monitoring data for several air pollutants including BaP, but these data show only the annual mean concentrations and maximum values. The seasonal variation of BaP concentrations between the heating and non-heating periods can also show important differences. For this reason, the main objective of this study was to assess the annual concentration and seasonal variation of BaP associated with PM10 in the ambient air at seven different sampling sites (six urban and one rural) in Northwestern Hungary during the sampling period of 2008–2013. A total of 1475 PM10 aerosol samples were collected at the different sampling sites and analyzed for BaP by a gas chromatography method. The BaP concentrations ranged from undetectable to 8 ng/m3, with mean values in the range of 0.50-0.96 ng/m3 across all sampling sites. Relatively higher concentrations of BaP were detected in samples collected at each sampling site in the heating seasons compared with the non-heating periods. The annual mean BaP concentrations were comparable with the published data of the other Hungarian sites.
26
56783
Modelling a Hospital as a Queueing Network: Analysis for Improving Performance
Abstract:
In this paper, the flow of different classes of patients into a hospital is modelled and analyzed by using the queueing network analyzer (QNA) algorithm and discrete event simulation. Input data for the QNA are the rate and variability parameters of the arrival and service times, in addition to the number of servers in each facility. The patient flows closely match the real flows for a hospital in Egypt. Based on the analysis of the waiting times, two approaches are suggested for improving performance: separating patients into service groups, and adopting different service policies for sequencing patients through hospital units. Separating a specific group of patients with a higher performance target, to be served apart from the rest of the patients requiring a lower performance target, requires the same capacity while improving performance for the selected group with the higher target. Besides, it is shown that adopting the shortest processing time and shortest remaining processing time service policies, among other tested policies, would result in 11.47% and 13.75% reductions in average waiting time, respectively, relative to the first come first served policy.
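As a pointer to the kind of multi-server waiting-time approximation that a queueing network analyzer builds on, the following minimal sketch combines the Erlang-C formula with the Allen-Cunneen correction for non-exponential arrival and service variability. The rates, server count and variability coefficients are illustrative values, not the hospital data.

```python
import math

def erlang_c(c, a):
    """Probability of waiting in an M/M/c queue with offered load a = lambda/mu (requires a < c)."""
    rho = a / c
    summation = sum(a ** k / math.factorial(k) for k in range(c))
    top = a ** c / (math.factorial(c) * (1.0 - rho))
    return top / (summation + top)

def allen_cunneen_wq(lam, mu, c, ca2, cs2):
    """Approximate mean waiting time for squared coefficients of variation ca2 (arrivals), cs2 (service)."""
    a = lam / mu
    wq_mmc = erlang_c(c, a) / (c * mu - lam)
    return wq_mmc * (ca2 + cs2) / 2.0

if __name__ == "__main__":
    # Hypothetical unit: 10 patients/hour arriving, service rate 3/hour per server, 4 servers
    print(round(allen_cunneen_wq(lam=10.0, mu=3.0, c=4, ca2=1.2, cs2=0.8), 3), "hours average wait")
```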
25
19549
Modeling User Departure Time Choice for Trips in Urban Streets
Abstract:
Modeling users’ decisions on departure time choice is the main motivation for this research. In particular, it examines the impact of socio-demographic features, household and job characteristics, and trip attributes on individuals’ departure time choice. Departure time alternatives are presented as adjacent discrete time periods, and the choice between these alternatives is made using a discrete choice model. Since a great deal of early morning trips, and the traffic congestion at that time of the day, consist of work trips, the focus of this study is on the work trip over the entire day. Therefore, using a stated preference questionnaire, this study models users’ departure time choice as affected by the congestion pricing plan in downtown Tehran. Experimental results demonstrate a significant socio-demographic impact on the departure times of work trips. These findings have substantial implications for transportation planning analysis. In particular, the analysis shows that ignoring the effects of these variables could result in erroneous information, and consequently decisions in the fields of transportation planning and air quality would fail and cause a loss of financial resources.
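A minimal sketch of the multinomial logit choice probabilities underlying such a discrete choice model over adjacent departure-time periods is given below; the alternatives, attributes and coefficients are illustrative assumptions, not the estimated Tehran model.

```python
import numpy as np

def mnl_probabilities(utilities):
    """Multinomial logit (softmax) probabilities over the systematic utilities of the alternatives."""
    expv = np.exp(utilities - utilities.max())   # subtract the max for numerical stability
    return expv / expv.sum()

if __name__ == "__main__":
    # Hypothetical alternatives: departure in 6-7, 7-8, 8-9, 9-10 a.m.
    travel_time = np.array([30.0, 45.0, 55.0, 40.0])   # minutes
    toll        = np.array([0.0, 2.0, 2.0, 0.0])       # congestion charge inside pricing hours
    sched_pen   = np.array([1.5, 0.0, 0.5, 2.5])       # penalty for deviating from the preferred time
    V = -0.05 * travel_time - 0.4 * toll - 0.8 * sched_pen
    print(np.round(mnl_probabilities(V), 3))
```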
24
98991
Advanced Simulation and Enhancement for Distributed and Energy Efficient Scheduling for IEEE802.11s Wireless Enhanced Distributed Channel Access Networks
Abstract:
As technology advances and wireless applications become dependable sources, with the physical layer of these applications being embedded into ever smaller devices, the problems of energy efficiency and consumption grow. This paper reviews work done in recent years on wireless applications and distributed computing. We found that applications are becoming dependable and share resource allocation with other applications in distributed computing, and that applications embedded in distributed systems suffer from problems of power stability and efficiency. In this review, we also show that discrete event simulation has been left behind, untouched, and has not been adopted in distributed systems as a simulation technique for scheduling each event that takes place in the development of distributed computing applications. We shed more light on techniques and results proposed by other researchers in order to point out their unsatisfactory results and to show that more work still has to be done on the issues of energy efficiency in wireless applications and congestion in distributed computing.
23
2554
A New Approach of Preprocessing with SVM Optimization Based on PSO for Bearing Fault Diagnosis
Abstract:
Bearing fault diagnosis has attracted significant attention over the past few decades. It consists of two major parts: vibration signal feature extraction and condition classification based on the extracted features. In this paper, feature extraction from faulty bearing vibration signals is performed by combining the signal’s kurtosis with features obtained by preprocessing the vibration signal samples using the Db2 discrete wavelet transform at the fifth level of decomposition. In this way, a 7-dimensional feature vector of the vibration signal is obtained. After feature extraction from the vibration signal, a support vector machine (SVM) is applied to automate the fault diagnosis procedure. To improve the classification accuracy for bearing fault prediction, particle swarm optimization (PSO) is employed to simultaneously optimize the SVM kernel function parameter and the penalty parameter. The results show the feasibility and effectiveness of the proposed approach.
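The following minimal sketch reproduces the described feature pipeline: kurtosis plus the energies of the six Db2 wavelet sub-bands at decomposition level 5 (a 7-dimensional vector), fed to an SVM. The synthetic signals are stand-ins for real vibration data, and a small grid search stands in for the PSO tuning of the kernel and penalty parameters.

```python
import numpy as np
import pywt
from scipy.stats import kurtosis
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

def features(signal):
    coeffs = pywt.wavedec(signal, 'db2', level=5)            # [cA5, cD5, ..., cD1] -> 6 sub-bands
    energies = [float(np.sum(c ** 2)) for c in coeffs]
    return np.array(energies + [kurtosis(signal)])            # 7-dimensional feature vector

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    healthy = [rng.normal(0, 1, 1024) for _ in range(30)]
    # Faulty signals carry sparse high-amplitude impacts, raising kurtosis and band energies
    faulty = [rng.normal(0, 1, 1024) + (rng.random(1024) < 0.01) * 6.0 for _ in range(30)]
    X = np.array([features(s) for s in healthy + faulty])
    y = np.array([0] * 30 + [1] * 30)
    # Grid search used here as a simple stand-in for PSO over (C, gamma)
    clf = GridSearchCV(SVC(kernel='rbf'), {'C': [1, 10, 100], 'gamma': ['scale', 0.01, 0.1]}, cv=3)
    clf.fit(X, y)
    print(clf.best_params_, round(clf.best_score_, 3))
```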
22
13539
Developing Variable Repetitive Group Sampling Control Chart Using Regression Estimator
Abstract:
In this article, we propose a control chart based on a repetitive group sampling scheme for the location parameter. This charting scheme is based on the regression estimator, an estimator that capitalizes on the relationship between the variables of interest to provide more sensitive control than the commonly used individual variables. The control limit coefficients have been estimated for different sample sizes for less and highly correlated variables. The monitoring of the production process is constructed by adopting the procedure of the Shewhart x-bar control chart. Its performance is verified by average run length calculations when a shift occurs in the average value of the estimator. It has been observed that the less correlated variables give a more rapid false alarm rate.
21
78099
Design of a Low Cost Programmable LED Lighting System
Abstract:
Smart LED-based lighting systems have significant advantages over traditional lighting systems due to their capability of producing tunable light spectra on demand. The main challenge in the design of smart lighting systems is to produce sufficient luminous flux and a uniformly accurate output spectrum over a sufficiently broad area. This paper outlines the design principles of a programmable LED lighting system that achieves these two aims. A seven-channel design using low-cost discrete LEDs is presented. Optimization algorithms are used to calculate the number of required LEDs, the LED arrangements, and the optimum LED separation distance. The results show the illumination uniformity for each channel. The results also show that the maximum color error is below 0.0808 on the CIE1976 chromaticity scale. In conclusion, this paper considered the simulation and design of a seven-channel programmable lighting system using low-cost discrete LEDs to produce sufficient luminous flux and a uniformly accurate output spectrum over a sufficiently broad area.
20
32846
A Two-Stage Airport Ground Movement Speed Profile Design Methodology Using Particle Swarm Optimization
Abstract:
Automation of airport operations can greatly improve ground movement efficiency. In this paper, we study the speed profile design problem for advanced airport ground movement control and guidance. The problem is constrained by the surface four-dimensional trajectory generated in taxi planning. A decomposed approach of two stages is presented to solve this problem efficiently. In the first stage, speeds are allocated at control points which ensure smooth speed profiles can be found later. In the second stage, detailed speed profiles of each taxi interval are generated according to the allocated control point speeds with the objective of minimizing the overall fuel consumption. We present a swarm intelligence based algorithm for the first-stage problem and a discrete variable driven enumeration method for the second-stage problem since it only has a small set of discrete variables. Experimental results demonstrate the presented methodology performs well on real world speed profile design problems.
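A minimal sketch of particle swarm optimization applied to the first-stage task of allocating speeds at taxi-route control points is shown below; the fuel-proxy objective (penalizing high speeds and speed changes) and the bounds are illustrative assumptions, not the paper's cost model or trajectory constraints.

```python
import numpy as np

def fuel_proxy(speeds):
    """Illustrative stand-in for fuel consumption: penalize high speeds and accelerations."""
    return float(np.sum(speeds ** 2) + 5.0 * np.sum(np.diff(speeds) ** 2))

def pso(objective, dim, lower, upper, n_particles=30, n_iters=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lower, upper, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia and acceleration coefficients
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lower, upper)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(pbest_val.min())

if __name__ == "__main__":
    best, cost = pso(fuel_proxy, dim=6, lower=2.0, upper=15.0)   # speeds (m/s) at 6 control points
    print(np.round(best, 2), round(cost, 2))
```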
19
64247
Anomaly: A Case of Babri Masjid Dispute
Abstract:
Religion as a discrete system, through its lawful internal working, produces an output in the form of a realised spatial order with its social logic and a social order with its spatial logic; it thus exhibits a duality of the spatial and the trans-spatial. The components of this system share a relevance, forming a collective. This shared relevance creates meaning, forming a group in which all collectives share one identity. This group, with its new social order and its spatial logic, revives the already existing spatial order. Such religious groups tend to expand, resulting in the production of space in situations of encounter where they have found relevance. But an encounter without the lawful internal working of a discrete system results in anomaly, because the groups do not find relevance due to the absence of a collective identity. Events happen all around, and one of the main reasons something becomes an event is conflict: not conflict in its definitive sense, but any occurrence caused by an intervention that creates an event worth remembering. The unfolding of such events creates cities and urban spaces, which exhibit the same duality of the spatial and the trans-spatial by behaving as discrete systems with the same lawful internal working, collectives, and groups described above. This paper makes an effort to explore one such event, the case of the Babri Mosque and Ramjanmabhumi in Ayodhya, to explain anomaly as a transposition of the social and the spatial. Through the case studies, the paper attempts to generate an equation explaining the two different situations of religious encounter, the former reviving the social and spatial order and the latter resulting in anomaly.
18
67718
Online Battery Equivalent Circuit Model Estimation on Continuous-Time Domain Using Linear Integral Filter Method
Abstract:
Equivalent circuit models (ECMs) are widely used in battery management systems in electric vehicles and other battery energy storage systems. The battery dynamics and the model parameters vary under different working conditions, such as different temperature and state of charge (SOC) levels, and therefore online parameter identification can improve the modelling accuracy. This paper presents a way of online ECM parameter identification using a continuous time (CT) estimation method. The CT estimation method has several advantages over discrete time (DT) estimation methods for ECM parameter identification due to the widely separated battery dynamic modes and fast sampling. The presented method can be used for online SOC estimation. Test data are collected using a lithium ion cell, and the experimental results show that the presented CT method achieves better modelling accuracy compared with the conventional DT recursive least square method. The effectiveness of the presented method for online SOC estimation is also verified on test data.
17
117427
E-Consumers’ Attribute Non-Attendance Switching Behavior: Effect of Providing Information on Attributes
Abstract:
Discrete Choice Experiments (DCE) are used to investigate how product attributes affect decision-makers’ choices. In DCEs, choice situations consisting of several alternatives are presented from which choice-makers select the preferred alternative. Standard multinomial logit models based on random utility theory can be used to estimate the utilities for the attributes. The overarching principle in these models is that respondents understand and use all the attributes when making choices. However, studies suggest that respondents sometimes ignore some attributes (commonly referred to as Attribute Non-Attendance/ANA). The choice modeling literature presents ANA as a static process, i.e., respondents’ ANA behavior does not change throughout the experiment. However, respondents may ignore attributes due to changing factors like availability of information on attributes, learning/fatigue in experiments, etc. We develop a dynamic mixture latent Markov model to model changes in ANA when information on attributes is provided. The model is illustrated on e-consumers’ webshop choices. The results indicate that the dynamic ANA model describes the behavioral changes better than modeling the impact of information using changes in parameters. Further, we find that providing information on attributes leads to an increase in the attendance probabilities for the investigated attributes.
16
87469
An Efficient Motion Recognition System Based on LMA Technique and a Discrete Hidden Markov Model
Abstract:
Human motion recognition has received extensive attention in recent years due to its importance in a wide range of applications, such as human-computer interaction, intelligent surveillance, augmented reality, and content-based video compression and retrieval. However, it is still regarded as a challenging task, especially in realistic scenarios. It can be seen as a general machine learning problem which requires an effective human motion representation and an efficient learning method. In this work, we introduce a descriptor based on the Laban Movement Analysis (LMA) technique, a formal and universal language for human movement, to capture both quantitative and qualitative aspects of movement. We use a Discrete Hidden Markov Model (DHMM) for training and classifying motions. We improve the classification algorithm by proposing two DHMMs for each motion class to process the motion sequence in two different directions, forward and backward. Such a modification avoids the misclassification that can happen when recognizing similar motions. Two experiments are conducted. In the first one, we evaluate our method on a public dataset, the Microsoft Research Cambridge-12 Kinect gesture dataset (MSRC-12), which is widely used for evaluating action/gesture recognition methods. In the second experiment, we build a dataset composed of 10 gestures (introduce yourself, waving, dance, move, turn left, turn right, stop, sit down, increase velocity, decrease velocity) performed by 20 persons. The evaluation of the system includes testing the efficiency of our LMA-based descriptor vector with the basic DHMM method and comparing the recognition results of the modified DHMM with the original one. Experimental results demonstrate that our method outperforms most existing methods that used the MSRC-12 dataset and achieves a near-perfect classification rate on our dataset.
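The following minimal sketch classifies a discretized motion sequence by per-class log-likelihood computed with the forward algorithm of a discrete HMM, scoring the sequence in both directions. Reusing one toy model for both directions is a simplification of the paper's two per-class DHMMs, and the model parameters and observation alphabet are illustrative.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM (scaled forward pass)."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

def classify(obs, models):
    """Pick the class whose model best explains the sequence read forward and backward."""
    scores = {}
    for name, (pi, A, B) in models.items():
        scores[name] = forward_loglik(obs, pi, A, B) + forward_loglik(obs[::-1], pi, A, B)
    return max(scores, key=scores.get)

if __name__ == "__main__":
    pi = np.array([0.6, 0.4])
    A1 = np.array([[0.9, 0.1], [0.2, 0.8]]); B1 = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
    A2 = np.array([[0.5, 0.5], [0.5, 0.5]]); B2 = np.array([[0.1, 0.2, 0.7], [0.6, 0.3, 0.1]])
    models = {"wave": (pi, A1, B1), "stop": (pi, A2, B2)}   # hypothetical gesture classes
    print(classify([0, 0, 1, 0, 2, 0], models))
```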
15
43417
Atmospheric Polycyclic Aromatic Hydrocarbons (PAHs) in Rural and Urban of Central Taiwan
Abstract:
This study analyzed the 16 atmospheric PAH species regulated by the USEPA and IARC. To measure the concentration of PAHs, four rural sampling sites and two urban sampling sites were selected in Central Taiwan during spring and summer. The rural sampling stations were located downstream of the Da-An River, Da-Jang River, Wu River and Chuo-shui River, while the urban sampling sites were located in the Taichung district close to the roadside. Ambient air samples of both the vapor phase and the particle phase of PAH compounds were collected using high volume sampling trains (Analitica). The sampling media were polyurethane foam (PUF) with XAD2 and quartz fiber filters. Diagnostic ratios, principal component analysis (PCA) and Positive Matrix Factorization (PMF) models were used to evaluate the apportionment of PAHs in the atmosphere and to infer the relative contributions of various emission sources. Because of the high temperature and low wind speed, high PAH concentrations were observed in the atmosphere. The total PAH concentration, especially in the vapor phase, showed a significant change during summer. During the sampling periods, the total atmospheric PAH concentrations at the four rural and two urban sampling sites were 3.70±0.40, 3.40±0.63, 5.22±1.24, 7.23±0.37, 7.46±2.36 and 6.21±0.55 ng/m3 in spring, and 15.0±0.14, 18.8±8.05, 20.2±8.58, 16.1±3.75, 29.8±10.4 and 35.3±11.8 ng/m3 in summer, respectively. In order to identify the PAH sources, diagnostic ratios were used to classify the emission sources; the potential sources were diesel combustion in spring and gasoline combustion in summer. According to the principal component analysis (PCA), PC1 and PC2 explained 23.8% and 20.4% of the variance in spring and 21.3% and 17.1% in summer, respectively. High molecular weight PAHs (BaP, IND, BghiP, Flu, Phe, Flt, Pyr) were dominant in spring, while low molecular weight PAHs (AcPy, Ant, Acp, Flu) were dominant in summer because of the prevailing high temperatures. Analysis using the PMF model found that the sources of PAHs in spring were stationary sources (34%), vehicle emissions (24%), coal combustion (23%) and petrochemical fuel gas (19%), while in summer the emission sources were petrochemical fuel gas (34%), natural volatile organic compounds from the environment (29%), coal combustion (19%) and stationary sources (18%).
14
30894
Metabolic Profiling in Breast Cancer Applying Micro-Sampling of Biological Fluids and Analysis by Gas Chromatography – Mass Spectrometry
Abstract:
Recently, the collection of biological fluids on special filter papers has become a popular micro-sampling technique. In particular, the dried blood spot (DBS) micro-sampling technique has gained much attention and is currently applied in various life sciences research areas. As a result of this popularity, DBS is not only competing intensively with the venous blood sampling method but is also widely applied in numerous bioanalytical assays, in particular in the screening of inherited metabolic diseases, pharmacokinetic modeling and therapeutic drug monitoring. Recently, micro-sampling techniques have also been introduced in 'omics' areas, including metabolomics. For a metabolic profiling study, we applied micro-sampling of biological fluids (blood and plasma) from healthy controls and from women with breast cancer. From blood samples, dried blood and plasma samples were prepared by spotting 8 µL of sample onto pre-cut 5-mm paper disks, followed by drying of the disks for 100 minutes. The dried disks were then extracted with 100 µL of methanol. From liquid blood and plasma samples, 40 µL were deproteinized with methanol, followed by centrifugation and collection of the supernatants. The supernatants and extracts were evaporated to dryness under nitrogen gas and the residues derivatized with O-methoxyamine and MSTFA. C17:0 methyl ester in heptane (10 ppm) was used as the internal standard. Deconvolution and alignment of the full-scan (m/z 50-500) MS data were done with AMDIS and SpectConnect (http://spectconnect.mit.edu) software, respectively. Statistical data analysis was done by Principal Component Analysis (PCA) using R software. The results obtained from our preliminary study indicate that the use of dried blood/plasma on paper disks could be a powerful new tool in metabolic profiling. Many of the metabolites observed in plasma (liquid/dried) were also positively identified in whole blood samples (liquid/dried). Whole blood could be a potential substitute matrix for plasma in metabolic profiling studies, and micro-sampling techniques could likewise serve for the collection of samples in clinical studies. It was concluded that the separation of the different sample methodologies (liquid vs. dried) observed by PCA was due to the different sample treatment protocols applied. More experiments need to be done to confirm the obtained observations, and a more rigorous validation of these micro-sampling techniques is needed. The novelty of our approach lies in the application of different biological fluid micro-sampling techniques for metabolic profiling.
13
45398
Interaction of Non-Gray-Gas Radiation with Opposed Mixed Convection in a Lid-Driven Square Cavity
Abstract:
The present study was conducted to numerically investigate the interaction of non-gray-gas radiation with opposed mixed convection in a vertical two-sided lid-driven square cavity. The opposing flows are generated simultaneously by the vertical boundary walls, which slide at a constant speed, and by the natural convection due to the temperature gradient of the differentially heated cavity. The horizontal walls are thermally insulated and perfectly reflective. The enclosure is filled with an air-H2O-CO2 gas mixture, which is considered a non-gray, absorbing, emitting and non-scattering medium. The governing differential equations are solved by a finite-volume method, adopting the SIMPLER algorithm for pressure–velocity coupling. The radiative transfer equation (RTE) is solved by the discrete ordinates method (DOM). The spectral line weighted sum of gray gases (SLW) model is used to account for non-gray radiation properties. Three cases of the effects of radiation (transparent, gray and non-gray medium) are studied. Parametric studies of the effect of the mixed convection parameter, Ri (0.1, 1, 10), on the fluid flow and heat transfer have also been performed.
12
92611
Numerical Modeling to Validate Theoretical Models of Toppling Failure in Rock Slopes
Abstract:
Traditionally, rock slope stability against toppling failure is assessed using limit equilibrium analysis. In these equilibrium methods, the internal forces exerted between columns are not clearly defined and, to the authors’ best knowledge, there is no consensus in the literature with respect to the results of such analyses. A discrete element method-based numerical model was developed and applied to simulate the behavior of rock layers subjected to toppling failure. Based on this calibrated numerical model, a study of the location and distribution of the internal forces that result in equilibrium was carried out. To determine the inter-column force distribution, the sum of the side forces was applied at a point on a block that properly represents that force. The resulting side-force distribution coefficient was compared to those obtained from laboratory centrifuge tests. The results of the simulation indicate suitable criteria for selecting the correct position of the internal force exerted between rock layers. In addition, the numerical method demonstrates how a theoretical method can be made reliable by considering the interaction between the rock layers.
11
31121
Spatio-temporal Variations in Heavy Metal Concentrations in Sediment of Qua Iboe River Estuary, Nigeria
Abstract:
The concentrations of heavy metals in sediments of the Qua Iboe River Estuary (QIRE) were monitored at four sampling locations in the wet and dry seasons. A preliminary survey to determine the four sampling stations along the river continuum showed that salinity spanned from < 0.1% at the control station to 21.5‰ at the fourth station, with corresponding variations in other physicochemical parameters. The estuary was found to be polluted with heavy metals from point and nonpoint sources to varying degrees. Mean values of 7.80 mg/kg, 4.97 mg/kg and 2.80 mg/kg of nickel were obtained for sediment samples from the Douglas Creek, Qua Iboe and Atlantic sampling locations, respectively, in the dry season. The wet season nickel concentrations were, however, lower. The entire study area was grossly contaminated by iron. At Douglas Creek, the concentration of iron in sediment was 9274 ± 9.54 mg/kg, while copper, nickel, lead and vanadium were < 0.5 mg/kg each compared to iron. Bioaccumulation was therefore suspected within the study area, as zinc values of 31.00 ± 0.79, 36.00 ± 0.10 and 55.00 ± 0.05 mg/kg were recorded in sediment at the Douglas Creek, Atlantic and control sampling locations, respectively. The results of this study showed that the sources of these heavy metals were point sources such as the corrosion of steel pipes from old bridges as well as oily sludge wastes from the Qua Iboe Terminal / tank farm located within the vicinity of the study area.
10
98047
Estimation of Rare and Clustered Population Mean Using Two Auxiliary Variables in Adaptive Cluster Sampling
Abstract:
Adaptive cluster sampling (ACS) is specifically developed for the estimation of highly clumped populations and is applied to a wide range of situations, such as rare and endangered animal species, unevenly distributed minerals, HIV patients and drug users. In this paper, we propose a generalized semi-exponential estimator with two auxiliary variables under the framework of the ACS design. The expressions for the approximate bias and mean square error (MSE) of the proposed estimator are derived. Theoretical comparisons of the proposed estimator have been made with existing estimators. A numerical study is conducted on real and artificial populations to demonstrate and compare the efficiencies of the proposed estimator. The results indicate that the proposed generalized semi-exponential estimator performs considerably better than all the adaptive and non-adaptive estimators considered in this paper.
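The proposed generalized semi-exponential estimator itself is not reproduced in the abstract. As background, the sketch below (Python, with made-up data) illustrates the classical modified Hansen–Hurwitz network-mean estimator on which ratio- and exponential-type ACS estimators are commonly built.

```python
# Sketch of the modified Hansen-Hurwitz estimator used in adaptive cluster
# sampling (ACS): each initially selected unit is replaced by the mean of the
# network (cluster of neighbouring units above the threshold) it belongs to.
# The networks below are illustrative only.
import numpy as np

# y-values of the networks intersected by the initial sample (hypothetical).
networks = [
    np.array([0.0]),              # unit below the threshold: network of size 1
    np.array([12.0, 9.0, 15.0]),  # clustered rare population found by adaptation
    np.array([0.0]),
    np.array([7.0, 5.0]),
]

network_means = np.array([net.mean() for net in networks])
mu_hat = network_means.mean()     # modified Hansen-Hurwitz estimate of the mean
print(f"estimated population mean: {mu_hat:.2f}")
```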
9
93381
Modeling User Departure Time Choice for Work Trips in High Traffic Suburban Roads
Abstract:
Modeling users’ decisions on departure time choice is the main motivation for this research. In particular, it examines the impact of socio-demographic features, household and job characteristics, and trip qualities on individuals’ departure time choice. Departure time alternatives are presented as adjacent discrete time periods, and the choice between these alternatives is modeled with a discrete choice model. Since work trips account for a great share of early-morning trips and of the traffic congestion at that time of day, the focus of this study is on work trips over the entire day. Using users’ stated preferences collected by questionnaire, this study therefore models departure time choice under congestion pricing schemes on high-traffic suburban entrance roads of Tehran. The results demonstrate a significant socio-demographic impact on work-trip departure times. These findings have substantial implications for transportation planning analysis. In particular, the analysis shows that ignoring the effects of these variables could result in erroneous information; consequently, decisions in the fields of transportation planning and air quality would fail and cause a loss of financial resources.
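The abstract does not specify the exact discrete choice specification; a common choice for adjacent departure-time periods is a multinomial logit model, sketched below in Python with hypothetical coefficients and attributes.

```python
# Minimal multinomial logit sketch for departure-time choice between adjacent
# time periods; alternatives, utilities and coefficients are illustrative only.
import numpy as np

periods = ["6:00-6:30", "6:30-7:00", "7:00-7:30", "7:30-8:00"]

# Hypothetical linear utilities: V = asc + b_toll*toll + b_time*travel_time
asc   = np.array([0.0, 0.4, 0.6, 0.2])   # alternative-specific constants
toll  = np.array([0.0, 1.0, 2.0, 1.0])   # congestion charge (monetary units)
ttime = np.array([30., 35., 45., 40.])   # expected travel time (minutes)
b_toll, b_time = -0.5, -0.05

V = asc + b_toll * toll + b_time * ttime
P = np.exp(V) / np.exp(V).sum()          # logit choice probabilities

for period, prob in zip(periods, P):
    print(f"{period}: P = {prob:.2f}")
```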
8
43572
Modeling Bessel Beams and Their Discrete Superpositions from the Generalized Lorenz-Mie Theory to Calculate Optical Forces over Spherical Dielectric Particles
Abstract:
In this work, we propose an algorithm developed in the Python language for modeling ordinary scalar Bessel beams and their discrete superpositions, and for the subsequent calculation of the optical forces exerted on dielectric spherical particles. The mathematical formalism, based on the generalized Lorenz-Mie theory, is implemented in Python for its large number of free mathematical (such as SciPy and NumPy), data visualization (Matplotlib and PyJamas) and multiprocessing libraries. We also propose an approach, provided as synchronized Software as a Service (SaaS) in cloud computing, to develop a user interface embedded in a mobile application, thus providing users with the means to easily enter the desired unknowns and parameters and see the graphical outcomes of the simulations right on their mobile devices. Initially proposed as a free Android-based application, such an app enables data post-processing in cloud-based architectures and visualization of results, figures and numerical tables.
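As a rough indication of the scalar fields involved, the sketch below evaluates a zero-order Bessel beam and a discrete superposition of such beams with NumPy/SciPy. The wavelength, axicon angles and amplitudes are arbitrary, and the full generalized Lorenz-Mie force calculation of the paper is not reproduced.

```python
# Sketch: ordinary scalar Bessel beam and a discrete superposition of beams.
import numpy as np
from scipy.special import jv   # Bessel function of the first kind

wavelength = 1064e-9                       # m (illustrative)
k = 2 * np.pi / wavelength                 # wavenumber

def bessel_beam(rho, z, axicon_angle, amplitude=1.0):
    """Zero-order scalar Bessel beam: psi = A * J0(k_rho * rho) * exp(i k_z z)."""
    k_rho = k * np.sin(axicon_angle)
    k_z = k * np.cos(axicon_angle)
    return amplitude * jv(0, k_rho * rho) * np.exp(1j * k_z * z)

rho = np.linspace(0.0, 20e-6, 400)         # radial coordinate (m)
angles = np.deg2rad([0.5, 1.0, 1.5])       # axicon angles of the superposed beams
amplitudes = [1.0, 0.6, 0.3]

# Discrete superposition of Bessel beams evaluated on the plane z = 0.
field = sum(bessel_beam(rho, 0.0, a, A) for a, A in zip(angles, amplitudes))
intensity = np.abs(field) ** 2
print("on-axis intensity:", intensity[0])
```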
7
21817
Robust Medical Image Watermarking Based on Contourlet and Extraction Using ICA
Abstract:
In this paper, a medical image watermarking algorithm based on the contourlet transform is proposed. Medical image watermarking is a special subcategory of image watermarking in the sense that the images have special requirements: watermarked medical images should not differ perceptually from their original counterparts, because the clinical reading of the images must not be affected. Watermarking techniques based on the wavelet transform are reported in many studies, but the contourlet transform offers better robustness and security. The main challenge in exploring geometry in images comes from the discrete nature of the data. In this paper, the original image is decomposed to two levels using the contourlet transform, and the watermark is embedded in the resulting sub-bands. Sub-band selection is based on the value of the Peak Signal-to-Noise Ratio (PSNR) calculated between the watermarked and original images. To extract the watermark, kernel ICA is used; its novel characteristic is that it does not require the transformation process for extraction. Simulation results show that the proposed scheme is robust against attacks such as salt-and-pepper noise, median filtering and rotation. Performance measures such as PSNR and a similarity measure are evaluated and compared with the Discrete Wavelet Transform (DWT) to prove the robustness of the scheme. Simulations are carried out using MATLAB software.
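The sub-band selection step relies on the PSNR between the watermarked and original images; the short Python sketch below shows one common way to compute PSNR (the contourlet decomposition and kernel-ICA extraction themselves are not reproduced).

```python
# Sketch of the PSNR computation used to compare a watermarked image with the
# original; arrays are assumed to be 8-bit grayscale images of the same size.
import numpy as np

def psnr(original: np.ndarray, watermarked: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((original.astype(np.float64) - watermarked.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Illustrative usage with random data standing in for real medical images.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(256, 256))
marked = np.clip(img + rng.integers(-2, 3, size=img.shape), 0, 255)
print(f"PSNR = {psnr(img, marked):.2f} dB")
```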
6
19426
Analysis of Combined Heat Transfer through the Core Materials of VIPs with Various Scattering Properties
Abstract:
A vacuum insulation panel (VIP) can achieve very low thermal conductivity by evacuating its inner space. Heat transfer in the core materials of a highly evacuated VIP occurs by conduction through the solid structure and by radiation through the pores. The effect of various scattering modes on combined conduction-radiation in a VIP is investigated through numerical analysis. The discrete ordinates interpolation method (DOIM), incorporated into the commercial code FLUENT®, is employed. It is found that backward scattering is more effective in reducing the total heat transfer, while isotropic scattering is almost identical to the purely absorbing/emitting case of the same optical thickness. For a purely scattering medium, the results agree well with the additive solution based on the diffusion approximation, while a modified term for the effect of optical thickness is added when backward scattering is employed. For other scattering phase functions, it is also confirmed that a backward-scattering phase function gives a lower effective thermal conductivity. Thus, materials with backward-scattering properties, together with radiation shields, are desirable to lower the thermal conductivity of VIPs.
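For context on the additive solution mentioned above, the sketch below combines a solid-conduction term with the Rosseland (diffusion approximation) estimate of the radiative contribution; all property values are assumed, and the actual DOIM/FLUENT computation of the paper is not reproduced.

```python
# Sketch of the additive solution for combined conduction-radiation in a porous
# core: effective conductivity = solid conduction + Rosseland radiative term.
SIGMA = 5.670374419e-8        # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiative_conductivity(temperature_k: float, extinction_coeff: float) -> float:
    """Rosseland diffusion approximation: k_rad = 16*sigma*T^3 / (3*beta)."""
    return 16.0 * SIGMA * temperature_k ** 3 / (3.0 * extinction_coeff)

k_solid = 2.0e-3   # W/(m K), conduction through the evacuated core (assumed)
beta = 1.0e4       # 1/m, Rosseland-mean extinction coefficient (assumed)
T = 300.0          # K

k_eff = k_solid + radiative_conductivity(T, beta)
print(f"effective thermal conductivity ~ {k_eff * 1e3:.3f} mW/(m K)")
```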
5
41730
Pathways and Mechanisms of Lymphocytes Emigration from Newborn Thymus
Abstract:
Nowadays, the mechanisms of thymocyte emigration from the thymus to the periphery are being actively investigated. We have proposed a hypothesis of thymocyte migration from the thymus through lymphatic vessels during periodic short-term local edema. Using morphological and histochemical methods, we examined the number of lymphocytes, epithelioreticulocytes, mast cells, and blood and lymphatic vessels in the morpho-functional areas of rat thymuses during the first week after birth at 4-hour intervals. In newborns, and from 8 hours after birth onwards at 12-hour intervals, the specific density of the thymus, the absolute number of microcirculatory vessels (especially lymphatic ones), the lymphocyte-epithelial index, and the number of mast cells and their degranulated forms increase. The structure of the extracellular matrix, the intrathymic microenvironment and the adhesive properties of lymphocytes change. The absolute number of small lymphocytes in the thymic cortex changes in a wave-like manner. All these changes are most strongly expressed from 0 to 2, from 12 to 16, and from 108 to 120 hours of postnatal life. During these periods, the paravasal lymphatic vessels are filled with lymphocytes, i.e., discrete migration of lymphocytes from the thymus occurs. After rapid reduction of the edema, the number of lymphatic vessels decreases and they become empty. Therefore, periodic short-term local edema is observed in the thymus of the newborn, at the peak of which discrete migration of lymphocytes from the thymus occurs.
4
108076
River Stage-Discharge Forecasting Based on Multiple-Gauge Strategy Using EEMD-DWT-LSSVM Approach
Abstract:
This study presents a hybrid pre-processing approach, along with a conceptual model, to enhance the accuracy of river discharge prediction. To achieve this goal, the Ensemble Empirical Mode Decomposition (EEMD) algorithm, the Discrete Wavelet Transform (DWT) and Mutual Information (MI) were employed as a hybrid pre-processing approach coupled with a Least Squares Support Vector Machine (LSSVM). A conceptual strategy, namely a multi-station model, was developed to forecast the Souris River discharge more accurately. The strategy used herein is capable of covering the uncertainties and complexities of river discharge modeling. DWT and EEMD were coupled, and feature selection was performed on the decomposed sub-series using MI before they were employed in the multi-station model. In the proposed feature selection method, some uninformative sub-series were omitted to achieve better performance. The results confirmed the efficiency of the proposed DWT-EEMD-MI approach in improving the accuracy of multi-station modeling strategies.
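As a small illustration of the signal pre-processing step, the sketch below decomposes a discharge series with a discrete wavelet transform using the PyWavelets package. The synthetic series, wavelet family and decomposition level are assumptions, and the EEMD, mutual-information selection and LSSVM stages are not reproduced.

```python
# Sketch of DWT pre-processing of a river discharge series with PyWavelets.
import numpy as np
import pywt

rng = np.random.default_rng(1)
t = np.arange(730)                                  # two years of daily discharge
discharge = 50 + 30 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 5, t.size)

# Multi-level discrete wavelet decomposition into approximation + detail sub-series.
coeffs = pywt.wavedec(discharge, wavelet="db4", level=3)
approx, details = coeffs[0], coeffs[1:]

print("approximation length:", approx.size)
for i, d in enumerate(details, start=1):
    print(f"detail level {i}: {d.size} coefficients")
```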
3
50935
Simulation of Elastic Bodies through Discrete Element Method, Coupled with a Nested Overlapping Grid Fluid Flow Solver
Abstract:
In this work, a finite volume fluid flow solver is coupled with a discrete element method module for simulating the dynamics of free and elastic bodies in interaction with the fluid and with each other. The open-source fluid flow solver, caffa3d.MBRi, includes the capability to work with nested overlapping grids in order to easily refine the grid in the region where the bodies are moving. To do so, it is necessary to implement a recognition function able to identify the specific mesh block in which each body is moving. The set of overlapping finer grids can be displaced along with the set of bodies being simulated. The interaction between the bodies and the fluid is computed through a two-way coupling. The velocity field of the fluid is first interpolated to determine the drag force on each object. After solving for the objects' displacements, subject to the elastic bonding among them, the force is applied back onto the fluid through a Gaussian smoothing over the cells near the position of each object. The fishnet is represented as lumped masses connected by elastic lines. The internal forces are derived from the elasticity of these lines, and the external forces are due to drag, gravity, buoyancy and the load acting on each element of the system. When solving the system of ordinary differential equations that represents the motion of the elastic and flexible bodies, it was found that the fourth-order Runge-Kutta solver is the best tool in terms of performance, but it requires a finer grid than the fluid solver to make the system converge, which demands greater computing power. The coupled solver is demonstrated by simulating the interaction between the fluid, an elastic fishnet and a set of free bodies captured by the net as they are dragged by the fluid. The deformation of the net, as well as the wake produced in the fluid stream, is well captured by the method, without requiring the fluid solver mesh to adapt to the evolving geometry. Application of the same strategy to the simulation of elastic structures subject to the action of wind is also possible with the method presented, and one such application is currently under development.
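To illustrate the lumped-mass, elastic-line representation and the fourth-order Runge-Kutta integration mentioned above, the following sketch integrates two masses connected by a linear elastic line under a constant drag-like force. All parameters are illustrative, and the coupling with caffa3d.MBRi is not reproduced.

```python
# Sketch: two lumped masses connected by an elastic line, integrated with the
# classical fourth-order Runge-Kutta scheme. Parameters are illustrative only.
import numpy as np

m = 1.0            # mass of each lumped node (kg)
k = 50.0           # stiffness of the connecting elastic line (N/m)
l0 = 1.0           # rest length of the line (m)
f_drag = 0.5       # constant drag-like external force on the second node (N)

def rhs(state):
    """state = [x1, x2, v1, v2]; one-dimensional motion of two connected nodes."""
    x1, x2, v1, v2 = state
    f_elastic = k * (x2 - x1 - l0)            # tension in the connecting line
    a1 = f_elastic / m
    a2 = (-f_elastic + f_drag) / m
    return np.array([v1, v2, a1, a2])

def rk4_step(state, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * dt * k1)
    k3 = rhs(state + 0.5 * dt * k2)
    k4 = rhs(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([0.0, 1.0, 0.0, 0.0])        # start at rest length, at rest
dt = 1e-3
for _ in range(5000):                          # integrate 5 s of motion
    state = rk4_step(state, dt)
print("final positions:", state[:2])
```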
2
62644
An Improved Sub-Nyquist Sampling Jamming Method for Deceiving Inverse Synthetic Aperture Radar
Abstract:
The sub-Nyquist sampling jamming (SNSJ) method is a well-known deception jamming method for inverse synthetic aperture radar (ISAR). However, the SNSJ method is relatively easy to counter, since the amplitudes of the false-target images are weaker than that of the real-target image, the false-target images always lag behind the real-target image, and all targets are located in the same cross-range. To overcome these drawbacks, a simple modulation based on SNSJ (M-SNSJ) is presented in this paper. The method first uses an amplitude modulation factor to make the amplitude of the false-target images consistent with that of the real-target image, and then uses a down-range modulation factor and a cross-range modulation factor to make the false-target images move freely in down-range and cross-range, respectively, thus improving the capacity for deception. Finally, simulation results for the six available combinations of the three modulation factors are given to illustrate our conclusion.
1
76339
Investigation of Single Particle Breakage inside an Impact Mill
Abstract:
In the current work, a numerical model based on the discrete element method (DEM) was developed which provides information about particle dynamics and impact event conditions inside a laboratory-scale impact mill (Fritsch). It showed that each particle mostly experiences three impacts inside the mill. While the first impact frequently happens at the front surface of the rotor's rib, the second impact most frequently occurs at the side surfaces of the rotor's rib. It was also shown that while the first impact happens at a small impact angle, mostly around 35º, the second impact happens at around 70º, which is close to a normal impact condition. Analysis of the impact energy also revealed that, as the mill speed was varied from 6,000 to 14,000 rpm, the ratio of the first impact's average impact energy to the minimum energy required to break the particle (Wₘᵢₙ) increased from 0.30 to 0.85. Moreover, it was seen that the second impact imposes intense impact energy on the particle, which can be considered the main cause of particle splitting. Finally, the information obtained from the DEM simulation, along with data from the conducted experiments, was implemented in semi-empirical equations in order to find the selection and breakage functions. Then, using a back-calculation approach, those parameters were used to predict the particle size distributions (PSDs) of ground particles under different impact energies. The results were compared with the experimental results and showed reasonable accuracy and prediction ability.
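The selection and breakage functions used in the paper are not given in the abstract. As one common form of an impact-energy-based selection (breakage-probability) function, the sketch below uses a Weibull-type expression of the Vogel-Peukert kind, with purely illustrative parameters rather than fitted values.

```python
# Sketch of a Weibull-type breakage-probability (selection) function of the
# Vogel-Peukert kind: P = 1 - exp(-f_mat * x * n * (W - W_min)).
import math

def breakage_probability(w_kin, w_min, f_mat, size, n_impacts=1):
    """Probability that a particle breaks after n impacts of specific energy w_kin."""
    if w_kin <= w_min:
        return 0.0
    return 1.0 - math.exp(-f_mat * size * n_impacts * (w_kin - w_min))

w_min = 100.0        # minimum mass-specific energy to break the particle (J/kg), assumed
f_mat = 0.5          # material parameter (kg/(J m)), assumed
size = 2e-3          # particle size (m), assumed

for w in (120.0, 300.0, 800.0):
    p = breakage_probability(w, w_min, f_mat, size, n_impacts=3)
    print(f"W = {w:5.0f} J/kg -> breakage probability {p:.2f}")
```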