Bidirectional Discriminant Supervised Locality Preserving Projection for Face Recognition
Dimensionality reduction and feature extraction are of
crucial importance for achieving high efficiency in manipulating
high-dimensional data. Two-dimensional discriminant locality
preserving projection (2D-DLPP) and two-dimensional discriminant
supervised LPP (2D-DSLPP) are two effective two-dimensional
projection methods for dimensionality reduction and feature
extraction from face image matrices. Since 2D-DLPP and 2D-DSLPP
preserve the local structure of the original data and
exploit discriminant information, they usually achieve good
recognition performance. However, 2D-DLPP and 2D-DSLPP
employ only a single-sided projection, so the resulting
low-dimensional data matrices still contain many features. In this paper,
by combining the discriminant supervised LPP with the bidirectional
projection, we propose the bidirectional discriminant supervised LPP
(BDSLPP). The left and right projection matrices for BDSLPP can
be computed iteratively. Experimental results show that the proposed
BDSLPP achieves higher recognition accuracy than 2D-DLPP,
2D-DSLPP, and bidirectional discriminant LPP (BDLPP).
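The single-sided versus two-sided contrast can be made concrete with a small sketch. The projection matrices below are random orthonormal stand-ins, not the BDSLPP solutions (which the paper computes iteratively from eigenproblems not detailed in the abstract); only the reduction in feature count is illustrated.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((32, 32))   # a 32x32 face image matrix

# Stand-in left/right projections: random orthonormal bases.
# In BDSLPP, L and R would be computed alternately from eigenproblems.
L, _ = np.linalg.qr(rng.standard_normal((32, 8)))   # 32x8
R, _ = np.linalg.qr(rng.standard_normal((32, 8)))   # 32x8

Y_two_sided = L.T @ X @ R   # bidirectional projection: 8x8 features
Y_one_sided = X @ R         # single-sided (2D-DLPP style): 32x8 features
print(Y_two_sided.shape, Y_one_sided.shape)
```

The two-sided result has 64 features versus 256 for the single-sided one, which is the motivation stated in the abstract.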
Metric Dimension on Line Graph of Honeycomb Networks
Let G = (V, E) be a connected graph. The distance
between two vertices a and b in G, denoted d(a, b), is the length of
a shortest a-b path (an a-b geodesic). A set of vertices W resolves
G if each vertex is uniquely determined by its vector of distances to
the vertices in W. The metric dimension of G is the minimum
cardinality of a resolving set of G. In this paper, the line graph of
the honeycomb network is derived, and its metric dimension is
computed.
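For intuition, the definitions above can be checked by brute force on a small graph. The sketch below is exponential in the number of vertices, so it is purely illustrative; the paper derives the honeycomb line-graph result analytically.

```python
from itertools import combinations

def bfs_dist(adj, s):
    """All shortest-path distances from s in an unweighted graph."""
    dist, frontier = {s: 0}, [s]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        frontier = nxt
    return dist

def metric_dimension(adj):
    """Smallest |W| such that distance vectors to W distinguish all vertices."""
    V = sorted(adj)
    D = {v: bfs_dist(adj, v) for v in V}
    for k in range(1, len(V)):
        for W in combinations(V, k):
            codes = {tuple(D[w][v] for w in W) for v in V}
            if len(codes) == len(V):   # all distance vectors distinct
                return k
    return len(V)

# The 6-cycle C6: the metric dimension of any cycle C_n (n >= 3) is 2
C6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(metric_dimension(C6))  # → 2
```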
The Non-Stationary BINARMA(1,1) Process with Poisson Innovations: An Application on Accident Data
This paper considers the modelling of a non-stationary
bivariate integer-valued autoregressive moving average process of order
one (BINARMA(1,1)) with correlated Poisson innovations. The
BINARMA(1,1) model is specified using the binomial thinning
operator and by assuming that the cross-correlation between the
two series is induced by the innovation terms only. Under
these assumptions, the non-stationary marginal and joint moments
of the BINARMA(1,1) process are derived iteratively from some initial
stationary moments. As regards parameter estimation for
the proposed model, the conditional maximum likelihood (CML)
method is derived based on thinning and convolution
properties, and the forecasting equations of the BINARMA(1,1) model
are also derived. In a simulation study,
BINARMA(1,1) count data are generated using a multivariate
Poisson R code for the innovation terms; the mean estimates of the
model parameters are all efficient, based on their standard errors.
The proposed model is then used to analyse real-life accident data
from the motorway in Mauritius, with covariates including policemen,
daily patrols, speed cameras, traffic lights and roundabouts. The
CML estimates clearly indicate a significant impact of the covariates
on the number of accidents on the motorway in Mauritius, and the
forecasting equations provide reliable one-step-ahead forecasts.
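A minimal simulation sketch of the ingredients named in the abstract: binomial thinning and a common Poisson shock inducing cross-correlation. The recursion and all parameter values are assumptions taken from the standard INARMA(1,1) form, not the paper's exact specification.

```python
import math, random

random.seed(1)

def thin(x, a):
    """Binomial thinning a∘x: each of the x counts survives with prob. a."""
    return sum(random.random() < a for _ in range(x))

def pois(lam):
    """Knuth's Poisson sampler (adequate for small means)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def simulate_binarma(n, a=(0.4, 0.3), b=(0.3, 0.2), lam=(2.0, 3.0), lam0=1.0):
    """X_t = a∘X_{t-1} + R_t + b∘R_{t-1} per series, with (R1, R2)
    bivariate Poisson via a common shock Z0, the sole cross-correlation."""
    x, r_prev, out = [0, 0], [0, 0], []
    for _ in range(n):
        z0 = pois(lam0)
        r = [pois(lam[0]) + z0, pois(lam[1]) + z0]
        x = [thin(x[j], a[j]) + r[j] + thin(r_prev[j], b[j]) for j in range(2)]
        r_prev = r
        out.append(tuple(x))
    return out

series = simulate_binarma(200)
print(series[:3])
```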
An Automated Stock Investment System Using Machine Learning Techniques: An Application in Australia
A key issue in stock investment is how to select representative features for stock selection. The first objective of this paper is to determine whether an automated stock investment system, using machine learning techniques, can identify a portfolio of growth stocks that are highly likely to provide returns better than the stock market index. The second objective is to identify the technical features that best characterize whether a stock's price is likely to go up, and to identify the most important factors and their contribution to predicting the likelihood of the stock price going up. Unsupervised machine learning techniques, such as cluster analysis, were applied to the stock data to identify a cluster of stocks likely to go up in price (portfolio 1). Next, the principal component analysis technique was used to select stocks rated high on component one and component two (portfolio 2). Thirdly, a supervised machine learning technique, the logistic regression method, was used to select stocks with a high probability of their price going up (portfolio 3). The predictive models were validated with metrics such as sensitivity (recall), specificity and overall accuracy; all accuracy measures were above 70%. The top three stocks were selected for each of the three portfolios and traded in the market for one month, after which the return for each portfolio was computed and compared with the stock market index return. The returns were 23.87% for the principal component analysis portfolio, 11.65% for the logistic regression portfolio and 8.88% for the K-means cluster portfolio, while the stock market returned 0.38%; all portfolios outperformed the market by more than eight times. This study confirms that an automated stock investment system using machine learning techniques can identify top-performing stock portfolios that outperform the stock market.
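The "portfolio 2" rule (rank stocks on the two leading principal components) can be sketched with plain numpy. The feature matrix below is synthetic and the feature names are placeholders; the paper's actual features and data are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical data: 50 stocks x 6 correlated technical features
# (momentum, volume change, etc. are placeholders, not the paper's set).
X = rng.standard_normal((50, 6)) @ rng.standard_normal((6, 6))
Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize before PCA

# PCA via SVD of the standardized feature matrix
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
scores = Z @ Vt[:2].T              # component-1 and component-2 scores

# Pick the stocks rated high on both leading components
rank = scores[:, 0] + scores[:, 1]
portfolio = np.argsort(rank)[-3:]  # indices of the top three stocks
print(portfolio)
```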
Analysis of One Dimensional Advection Diffusion Model Using Finite Difference Method
In this paper, a one-dimensional advection diffusion
model is analyzed using a finite difference method based on the
Crank-Nicolson scheme. A practical chemical-engineering problem,
filter cake washing, is analyzed. The model is converted into
dimensionless form. On the grid Ω × ω = [0, 1] × [0, T], the
Crank-Nicolson scheme is used for the spatial derivatives
and a forward difference scheme in the time domain. The scheme is
found to be unconditionally convergent, stable, first-order accurate in
time and second-order accurate in space. For a test problem,
numerical results are compared with analytical ones for different
values of the parameter.
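A minimal Crank-Nicolson sketch for the dimensionless advection-diffusion equation u_t + v u_x = D u_xx. The boundary conditions (unit inflow concentration, zero outflow, suggestive of a washing problem) and all parameter values are assumptions for illustration, not the paper's setup.

```python
import numpy as np

def advection_diffusion_cn(nx=51, nt=200, L=1.0, T=0.5, v=1.0, D=0.01):
    """Crank-Nicolson time stepping for u_t + v u_x = D u_xx with
    Dirichlet BCs u(0)=1, u(L)=0 (illustrative washing-type conditions)."""
    dx, dt = L / (nx - 1), T / nt
    x = np.linspace(0, L, nx)
    # Discrete operator A u ≈ -v u_x + D u_xx (central differences, interior)
    A = np.zeros((nx, nx))
    for i in range(1, nx - 1):
        A[i, i - 1] = D / dx**2 + v / (2 * dx)
        A[i, i]     = -2 * D / dx**2
        A[i, i + 1] = D / dx**2 - v / (2 * dx)
    I = np.eye(nx)
    M1, M2 = I - dt / 2 * A, I + dt / 2 * A
    for M in (M1, M2):            # enforce Dirichlet rows
        M[0, :], M[-1, :] = 0.0, 0.0
        M[0, 0] = M[-1, -1] = 1.0
    u = np.zeros(nx)
    u[0] = 1.0
    for _ in range(nt):
        u = np.linalg.solve(M1, M2 @ u)
        u[0], u[-1] = 1.0, 0.0
    return x, u

x, u = advection_diffusion_cn()
print(float(u.min()), float(u.max()))
```

The averaging of the operator between time levels is what gives the scheme its unconditional stability.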
Relation between Roots and Tangent Lines of Function in Fractional Dimensions: A Method for Optimization Problems
In this paper, a basic schematic of the fractional-dimensional optimization problem is presented. A method is developed based on a relation between the roots and tangent lines of a function in fractional dimensions, starting from an arbitrary initial point. It is shown that for each polynomial function of order N, at least N tangent lines must exist in fractional dimensions 0 < α < N+1 that pass exactly through all the roots of the function. A geometrical analysis of tangent lines in fractional dimensions is also presented to clarify the proposed method more intuitively. Results show that, with an appropriate selection of fractional dimensions, the roots can be found directly. The method points to a different direction for optimization problems through the use of fractional dimensions.
Multi-Objective Optimization of Combined System Reliability and Redundancy Allocation Problem
This paper presents an established 3^n enumeration procedure for mixed-integer optimization problems, applied to solving a multi-objective reliability and redundancy allocation problem subject to design constraints. The formulated problem is to find the optimum level of unit reliability and the number of units for each subsystem. A number of illustrative examples are provided and compared to demonstrate the application and superiority of the proposed method.
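A toy version of the 3^n enumeration can be written in a few lines: each of n subsystems may take one of three redundancy levels, and every combination within budget is scored. The reliability and cost figures below are hypothetical, and the series-parallel structure is an assumption for illustration.

```python
from itertools import product

def system_reliability(r, n):
    """Series system of parallel subsystems: R = Π(1 - (1 - r_i)^{n_i})."""
    R = 1.0
    for ri, ni in zip(r, n):
        R *= 1 - (1 - ri) ** ni
    return R

def enumerate_allocation(r, cost, budget, levels=(1, 2, 3)):
    """3^n-style enumeration over redundancy levels (hypothetical data)."""
    best = None
    for n in product(levels, repeat=len(r)):
        c = sum(ci * ni for ci, ni in zip(cost, n))
        if c <= budget:
            R = system_reliability(r, n)
            if best is None or R > best[0]:
                best = (R, n, c)
    return best

r = [0.80, 0.85, 0.90]     # unit reliabilities (illustrative)
cost = [2.0, 3.0, 1.5]     # unit costs (illustrative)
R, n, c = enumerate_allocation(r, cost, budget=12.0)
print(round(R, 4), n, c)   # best allocation within budget
```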
Monte Carlo Estimation of Heteroscedasticity and Periodicity Effects in a Panel Data Regression Model
This research investigates the effects of heteroscedasticity and periodicity in a Panel Data Regression Model (PDRM) by extending previous works on balanced panel data estimation in the context of fitting a PDRM for banks' audit fees. The estimation of the model was achieved through the derivation of a joint Lagrange Multiplier (LM) test for homoscedasticity and zero serial correlation, a conditional LM test for zero serial correlation given heteroscedasticity of varying degrees, and a conditional LM test for homoscedasticity given first-order positive serial correlation, via a two-way error component model. Monte Carlo simulations were carried out for 81 different variations, whose design assumed a uniform distribution under a linear heteroscedasticity function. Each variation was iterated 1000 times, and the assessment of the three estimators considered is based on the variance, absolute bias (ABIAS), mean square error (MSE) and root mean square error (RMSE) of the parameter estimates. Eighteen different models were fitted under different specified conditions; the best-fitted model is that of the within estimator when heteroscedasticity is severe at either zero or positive serial correlation. The LM test results showed that the tests have good size and power, as all three tests are significant at 5% for the specified linear form of the heteroscedasticity function, establishing that bank operations are severely heteroscedastic in nature with little or no periodicity effects.
Characterizations of Γ-Semirings by Their Cubic Ideals
Cubic ideals, cubic bi-ideals and cubic quasi-ideals of
a Γ-semiring are introduced, and various properties of these ideals
are investigated. Among other results, some characterizations of
regular Γ-semirings are obtained.
On the Efficiency of Five Step Approximation Method for the Solution of General Third Order Ordinary Differential Equations
In this work, a five-step continuous method for the solution of third-order ordinary differential equations was developed in block form, using collocation and interpolation techniques with the shifted Legendre polynomial as basis function. The method was found to be zero-stable, consistent and convergent. Applying the method to third-order initial value problems of ordinary differential equations showed that it compares favorably with existing methods.
Comparing the Efficiency of Simpson’s 1/3 and 3/8 Rules for the Numerical Solution of First Order Volterra Integro-Differential Equations
This paper compares the efficiency of Simpson's 1/3 and 3/8 rules for the numerical solution of first-order Volterra integro-differential equations. In developing the solution, a collocation approximation method was adopted using the shifted Legendre polynomial as basis function. A block method approach is preferred to the predictor-corrector method because it is self-starting. Experimental results confirm that Simpson's 3/8 rule is more efficient than Simpson's 1/3 rule.
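As a quadrature-level sanity check (not the paper's block method), the two composite rules can be compared directly. Note that for plain integration with the same step, classical theory gives the 1/3 rule the smaller error constant ((b-a)h^4/180 versus (b-a)h^4/80), so the paper's efficiency claim concerns the derived block methods for Volterra integro-differential equations rather than raw quadrature.

```python
import math

def simpson13(f, a, b, n):   # n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return h / 3 * s

def simpson38(f, a, b, n):   # n must be a multiple of 3
    h = (b - a) / n
    s = f(a) + f(b) + sum((2 if i % 3 == 0 else 3) * f(a + i * h) for i in range(1, n))
    return 3 * h / 8 * s

exact = math.e - 1                      # ∫_0^1 e^x dx
e13 = abs(simpson13(math.exp, 0, 1, 6) - exact)
e38 = abs(simpson38(math.exp, 0, 1, 6) - exact)
print(e13, e38)
```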
Jeffreys Prior for Unknown Sinusoidal Noise Model via Cramer-Rao Lower Bound
This paper employs the Jeffreys prior technique to
estimate the periodogram and frequency of a sinusoidal
model for unknown noisy time-variant or oscillating events (data) in
a Bayesian setting. The non-informative Jeffreys prior was adopted
for the posterior trigonometric function of the sinusoidal model,
and Cramer-Rao Lower Bound (CRLB) inference was used
to carve out the minimum variance needed to curb the invariance
structure effect for unknown noisy observational and repeated
circular patterns. A monthly average oscillating temperature series
measured in degrees Celsius (°C) from 1901 to 2014 was subjected to
the posterior solution of the unknown noisy events of the sinusoidal
model via Markov Chain Monte Carlo (MCMC). It was deduced
not only that a two-minute period is required to complete a cycle
of changing temperature from one particular degree Celsius to another,
but also that the sinusoidal model via the CRLB-Jeffreys prior for
unknown noisy events produced a smaller posterior Maximum A
Posteriori (MAP) estimate than for known noisy events.
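The paper's estimation is Bayesian via MCMC; as a hedged classical counterpart, the frequency of a noisy sinusoid can be read off the periodogram peak. The monthly temperature series below is synthetic (an assumed annual cycle plus noise), sized to match the 1901-2014 span mentioned in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n, fs = 1368, 12.0            # 114 years of monthly data (1901-2014)
t = np.arange(n) / fs         # time in years
f_true = 1.0                  # assumed: one cycle per year
x = 10 + 5 * np.sin(2 * np.pi * f_true * t) + rng.normal(0, 1, n)

X = np.fft.rfft(x - x.mean())           # periodogram via the FFT
freqs = np.fft.rfftfreq(n, d=1 / fs)    # frequencies in cycles per year
f_hat = freqs[np.abs(X).argmax()]       # peak frequency estimate
print(f_hat)
```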
Three-Dimensional Generalized Thermoelasticity with Variable Thermal Conductivity
In this paper, a three-dimensional model of generalized thermoelasticity with one relaxation time and variable thermal conductivity has been constructed. The resulting non-dimensional governing equations, together with Laplace and double Fourier transform techniques, have been applied to a three-dimensional half-space subjected to thermal loading with a rectangular pulse and traction free in the directions of the principal coordinates. The inverses of the double Fourier and Laplace transforms have been obtained numerically. Numerical results for the temperature increment, the invariant stress, the invariant strain, and the displacement are represented graphically. The variability of the thermal conductivity has significant effects on the thermal and mechanical waves.
Autonomous Vehicle Navigation Using Harmonic Functions via Modified Arithmetic Mean Iterative Method
Harmonic functions are solutions to Laplace's equation
that are known to have an advantage as a global approach in providing
the potential values for autonomous vehicle navigation. However,
the computation for obtaining harmonic functions is often too slow,
particularly when it involves a very large environment. This paper
presents a two-stage iterative method, the Modified Arithmetic
Mean (MAM) method, for solving the 2D Laplace equation. Once
the harmonic functions are obtained, standard Gradient Descent
Search (GDS) is performed to find a path for an autonomous vehicle
from an arbitrary initial position to the specified goal position. Details
of the MAM method are discussed. Several simulations of vehicle
navigation with path planning in a static, known indoor environment
were conducted to verify the efficiency of the MAM method. The
generated paths obtained from the simulations are presented, and the
performance of the MAM method in computing harmonic functions
in a 2D environment to solve the path planning problem for
autonomous vehicle navigation is also evaluated.
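The pipeline in the abstract (harmonic potential, then gradient descent to the goal) can be sketched as follows. Plain Jacobi iteration stands in for the MAM method, whose two-stage formulation is not given in the abstract; the grid size, obstacle layout, and boundary values are assumptions. Because a converged harmonic function has no interior local minima other than the goal, the descent cannot get trapped.

```python
import numpy as np

def harmonic_potential(n=20, iters=3000):
    """Solve Laplace's equation by Jacobi iteration (a stand-in for the
    faster MAM method): walls fixed at 1.0, the goal fixed at 0.0."""
    u = np.ones((n, n))
    goal = (n - 2, n - 2)
    obstacle = [(i, n // 2) for i in range(3, n - 5)]   # a wall with gaps
    for _ in range(iters):
        v = u.copy()
        v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:])
        v[0, :] = v[-1, :] = v[:, 0] = v[:, -1] = 1.0   # boundary walls
        for p in obstacle:
            v[p] = 1.0
        v[goal] = 0.0
        u = v
    return u, goal

def descend(u, start, goal, max_steps=500):
    """Gradient-descent search: step to the lowest-potential neighbour."""
    path, cur = [start], start
    for _ in range(max_steps):
        if cur == goal:
            break
        i, j = cur
        nbrs = [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0)]
        cur = min(nbrs, key=lambda p: u[p])
        path.append(cur)
    return path

u, goal = harmonic_potential()
path = descend(u, (1, 1), goal)
print(len(path), path[-1] == goal)
```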
Cryptographic Attack on Lucas Based Cryptosystems Using Chinese Remainder Theorem
Lenstra's attack uses the Chinese remainder theorem as a tool and requires a faulty signature to be successful. This paper reports on the security response of the fourth- and sixth-order Lucas based (LUC4,6) cryptosystem under Lenstra's attack, as compared with two other Lucas based cryptosystems, LUC and LUC3. All the Lucas based cryptosystems were exposed mathematically to Lenstra's attack using the Chinese Remainder Theorem and Dickson polynomials. Results show that the possibility of a successful Lenstra's attack is lower against the LUC4,6 cryptosystem than against the LUC3 and LUC cryptosystems. The study concludes that the LUC4,6 cryptosystem is more secure than LUC and LUC3 in withstanding Lenstra's attack.
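Lenstra's CRT fault attack is best known in its RSA-CRT form, and the Lucas-based analogue studied in the paper follows the same pattern (with Lucas sequences and Dickson polynomials, not reproduced here). In the RSA-CRT sketch below, a single faulty half-signature exposes a factor of the modulus via a gcd; the primes are deliberately tiny.

```python
import math

# Toy RSA parameters (far too small for real use)
p, q = 1009, 1013
N = p * q
e = 17
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)
m = 424242 % N

# CRT signing: compute the signature mod p and mod q, then recombine
dp, dq = d % (p - 1), d % (q - 1)
sp, sq = pow(m, dp, p), pow(m, dq, q)
qinv = pow(q, -1, p)

def crt(sp, sq):
    h = (qinv * (sp - sq)) % p
    return (sq + h * q) % N

s = crt(sp, sq)
assert pow(s, e, N) == m          # the correct signature verifies

# Fault: the mod-p half is computed wrongly, the mod-q half is correct.
s_faulty = crt((sp + 1) % p, sq)
# Lenstra: s_faulty^e ≡ m (mod q) but not (mod p), so the gcd reveals q.
factor = math.gcd((pow(s_faulty, e, N) - m) % N, N)
print(factor)
```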
An Attack on the Lucas Based El-Gamal Cryptosystem in the Elliptic Curve Group Over Finite Field Using Greatest Common Divisor
The greatest common divisor (GCD) attack is an attack that relies on the polynomial structure of the cryptosystem. It requires two plaintexts that differ by a fixed known value and are encrypted under the same modulus. This paper reports on the security of the Lucas based El-Gamal cryptosystem in the elliptic curve group over a finite field under the GCD attack. The cryptosystem was exposed mathematically to the GCD attack using the GCD and Dickson polynomials. The result shows that the cryptanalyst is able to recover the plaintext without decryption by using the GCD attack. Thus, the study concludes that it is highly perilous when two plaintexts differ from each other by a fixed number in the same elliptic curve group over a finite field.
A Study of Two Disease Models: With and Without Incubation Period
The incubation period is defined as the time from infection with a microorganism to the development of symptoms. In this research, two disease models, one with an incubation period and one without, were studied. The study uses a mathematical model with a single incubation period. Tests for the existence and stability of the disease-free and endemic equilibrium states of both models were carried out. The fourth-order Runge-Kutta method was used to solve both models numerically, and a computer program was developed in MATLAB to run the numerical experiments. From the results, we show that the endemic equilibrium state of the model with an incubation period is locally asymptotically stable, whereas the endemic equilibrium state of the model without an incubation period is unstable under certain conditions on the given model parameters. It was also established that the disease-free equilibrium states of both models are locally asymptotically stable. Furthermore, numerical experiments using empirical data obtained from the Nigeria Centre for Disease Control (NCDC) showed that the overall infected population for the model with an incubation period is higher than for the model without. We also established that, as the transmission rate from the susceptible to the infected population increases, the peak values of the infected population for the model with an incubation period decrease and are always less than those for the model without an incubation period.
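The two model classes can be sketched with standard SIR (no incubation) and SEIR (incubation via an exposed class) systems integrated by fourth-order Runge-Kutta, as in the paper. The rate constants below are illustrative assumptions, not the NCDC-calibrated values.

```python
def rk4(f, y, t, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

beta, gamma, sigma = 0.5, 0.1, 0.2   # transmission, recovery, incubation rates

def sir(t, y):                        # without incubation period
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

def seir(t, y):                       # with incubation (exposed class E)
    S, E, I, R = y
    return [-beta * S * I, beta * S * I - sigma * E,
            sigma * E - gamma * I, gamma * I]

h, T = 0.1, 200
y1, y2 = [0.99, 0.01, 0.0], [0.99, 0.0, 0.01, 0.0]
for n in range(int(T / h)):
    y1 = rk4(sir, y1, n * h, h)
    y2 = rk4(seir, y2, n * h, h)
print(y1[2], y2[3])   # final recovered fractions
```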
Comparison of Methods of Estimation for Use in Goodness of Fit Tests for Binary Multilevel Models
Data arising in our environment frequently have a hierarchical or nested structure. Multilevel modelling is a modern approach to handling this kind of data. When multilevel modelling is combined with a binary response, the estimation methods become complex, and the usual techniques are derived from the quasi-likelihood method. The estimation methods compared in this study are marginal quasi-likelihood of orders 1 and 2 (MQL1, MQL2) and penalized quasi-likelihood of orders 1 and 2 (PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset; therefore, checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. Prior to usage, however, it is equally important to confirm that the GOF test performs well and is suitable for the given model. This study assesses the suitability of the GOF test developed for binary response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted using MLwiN (v2.19) with varying numbers of clusters, cluster sizes and intra-cluster correlations. The test maintained the desirable Type-I error for models estimated using PQL2 and failed for almost all the combinations of MQL. The power of the test was adequate for most of the combinations in all estimation methods except MQL1. Moreover, models were fitted using the four methods to a real-life dataset, and the performance of the test was compared for each model.
The Influence of Beta Shape Parameters in Project Planning
Networks can be utilized to represent project planning problems, using nodes for activities and arcs to indicate precedence relationships between them. For fixed activity durations, a simple algorithm calculates the time required to complete a project and the activities that comprise the critical path. The Program Evaluation and Review Technique (PERT) generalizes this model by incorporating uncertainty, allowing activity durations to be random variables, but nevertheless produces a relatively crude solution to planning problems. In this paper, based on the findings of the relevant literature, which strongly suggest that a Beta distribution can be employed to model earthmoving activities, we use Monte Carlo simulation to estimate the project completion time distribution and to measure the influence of skewness, an element inherent in the activities of modern technical projects. We also extract the activity criticality index, with the ultimate goal of producing more accurate planning estimates.
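A minimal Monte Carlo PERT sketch with Beta-distributed durations: sample activity times, take the longest path through the precedence network, and record a criticality index. The four-activity network and all (a, m, b) triples are hypothetical, and the classical PERT mapping to Beta shape parameters is an assumption.

```python
import random

random.seed(7)

# Activities with (optimistic, most-likely, pessimistic) durations in days,
# for the toy network A→C, B→C, C→D (hypothetical earthmoving activities).
acts = {"A": (2, 4, 9), "B": (3, 5, 7), "C": (1, 2, 6), "D": (2, 3, 4)}
preds = {"A": [], "B": [], "C": ["A", "B"], "D": ["C"]}

def beta_pert(a, m, b, lam=4.0):
    """Sample a PERT-Beta duration via the classical shape-parameter mapping."""
    alpha = 1 + lam * (m - a) / (b - a)
    beta_ = 1 + lam * (b - m) / (b - a)
    return a + (b - a) * random.betavariate(alpha, beta_)

def project_duration(durs):
    """Longest-path finish time, visiting activities in topological order."""
    finish = {}
    for v in ("A", "B", "C", "D"):
        finish[v] = max((finish[p] for p in preds[v]), default=0) + durs[v]
    return finish["D"]

N = 20000
total, critA = 0.0, 0
for _ in range(N):
    d = {v: beta_pert(*acts[v]) for v in acts}
    total += project_duration(d)
    critA += d["A"] > d["B"]          # is A on the critical path this run?
mean = total / N
print(round(mean, 2), critA / N)      # completion mean, criticality index of A
```

Skewing an activity's Beta (here A is right-skewed) shifts both the completion time distribution and the criticality indices, which is the effect the paper measures.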
An IM-COH Algorithm Neural Network Optimization with Cuckoo Search Algorithm for Time Series Samples
The back propagation algorithm (BP) is a widely used
technique in artificial neural networks and has served as a tool
for solving time series problems; its known challenges include
long training time, a tendency to fall into local minima, and
sensitivity to the initial weights and biases. This paper proposes an
improvement of the BP technique called the IM-COH algorithm.
Combining the IM-COH algorithm with the cuckoo search
algorithm (CS) yields the cuckoo search improved control output
hidden layer algorithm (CS-IM-COH). This new algorithm is
less sensitive to the initial weights and biases
than the original BP algorithm. In this research,
CS-IM-COH is compared with the original BP, IM-COH, and the
original BP with CS (CS-BP) on selected benchmarks,
four time series samples, which are presented for illustration.
The research shows that the CS-IM-COH algorithm gives the best
forecasting results on the selected samples.
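For readers unfamiliar with the cuckoo search component, a generic CS minimizer is sketched below on a simple sphere function. This is the textbook algorithm (Lévy flights via Mantegna's method plus nest abandonment), not the paper's IM-COH coupling, and all control parameters are illustrative.

```python
import math, random

random.seed(3)

def levy_step(beta=1.5):
    """Mantegna's algorithm for a heavy-tailed Lévy flight step."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u, v = random.gauss(0, sigma), random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, dim=4, n_nests=15, iters=300, pa=0.25, alpha=0.1):
    nests = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_nests)]
    fit = [f(x) for x in nests]
    f_init = min(fit)
    for _ in range(iters):
        best = nests[fit.index(min(fit))]
        for i in range(n_nests):
            # New cuckoo egg: Lévy flight biased toward the current best nest
            cand = [xi + alpha * levy_step() * (xi - bi)
                    for xi, bi in zip(nests[i], best)]
            fc = f(cand)
            if fc < fit[i]:
                nests[i], fit[i] = cand, fc
        for i in range(n_nests):
            # Abandon some nests (prob. pa) and build new ones nearby
            if random.random() < pa:
                j, k = random.randrange(n_nests), random.randrange(n_nests)
                cand = [xi + random.random() * (aj - ak)
                        for xi, aj, ak in zip(nests[i], nests[j], nests[k])]
                fc = f(cand)
                if fc < fit[i]:
                    nests[i], fit[i] = cand, fc
    return f_init, min(fit)

sphere = lambda x: sum(v * v for v in x)
f_init, f_best = cuckoo_search(sphere)
print(f_init, f_best)
```

In the paper's setting, the decision variables would be the network's initial weights and biases rather than a test function's coordinates.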
Box Counting Dimension of the Union L of Trinomial Curves When α ≥ 1
In the present work, we consider one category of curves,
denoted by L(p, k, r, n). These curves are continuous arcs which are
trajectories of roots of the trinomial equation z^n = αz^k + (1 − α),
where z is a complex number, n and k are two integers such that
1 ≤ k ≤ n − 1, and α is a real parameter greater than 1. Denoting
by L the union of all trinomial curves L(p, k, r, n) and using the
box counting dimension as fractal dimension, we prove that the
dimension of L is equal to 3/2.
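The box-counting dimension used in the paper is the slope of log N(ε) against log(1/ε), where N(ε) counts occupied ε-boxes. The sketch below estimates it numerically for a known smooth curve (the unit circle, dimension 1) as a sanity check; tracing the trinomial curves L(p, k, r, n) themselves would require root-following code not reproduced here.

```python
import math

def box_count(points, eps):
    """Number of eps-boxes needed to cover the point set."""
    return len({(math.floor(x / eps), math.floor(y / eps)) for x, y in points})

# Dense sampling of a smooth curve (unit circle): true box dimension is 1
M = 200000
pts = [(math.cos(2 * math.pi * k / M), math.sin(2 * math.pi * k / M))
       for k in range(M)]

# Least-squares slope of log N(eps) versus log(1/eps)
es = [2.0 ** -k for k in range(3, 8)]
logs = [(math.log(1 / e), math.log(box_count(pts, e))) for e in es]
n = len(logs)
sx = sum(x for x, _ in logs); sy = sum(y for _, y in logs)
sxx = sum(x * x for x, _ in logs); sxy = sum(x * y for x, y in logs)
dim = (n * sxy - sx * sy) / (n * sxx - sx * sx)
print(round(dim, 2))
```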
Optimal Location of the I/O Point in the Parking System
In this paper, we deal with the optimal I/O point location in an automated parking system in which the S/R machine (storage and retrieval machine) travels independently in the vertical and horizontal directions. Based on the characteristics of the parking system and the basic principles of the AS/RS (Automated Storage and Retrieval System), we obtain a continuous model in units of time. For the single command cycle under the randomized storage policy, we calculate the probability density function of the system travel time and thus develop the travel time model, and we confirm that the model performs well by comparison with the discrete case. Finally, we establish the optimal model by minimizing the expected travel time, and it is shown that the optimal I/O point is located at the middle of the upper left-hand corner.
Flood Modeling in Urban Area Using a Well-Balanced Discontinuous Galerkin Scheme on Unstructured Triangular Grids
Urban flooding resulting from a sudden release of
water due to dam-break or excessive rainfall is a serious
environmental hazard that causes loss of human life and large
economic losses. Anticipating floods before they occur can
minimize human and economic losses through the implementation
of appropriate protection, provision, and rescue plans. This work
reports on the numerical modelling of flash flood propagation
in urban areas after an excessive rainfall event or dam-break.
A two-dimensional (2D) depth-averaged shallow water model is
used with a refined unstructured grid of triangles to represent
the urban topography. The 2D shallow water equations are
solved using a second-order well-balanced discontinuous Galerkin
scheme. A theoretical test case and three flood events are described
to demonstrate the potential benefits of the scheme: (i) wetting and
drying in a parabolic basin; (ii) flash flood over a physical model of
the urbanized Toce River valley in Italy; (iii) wave propagation in
the Reyran river valley following the Malpasset dam-break
of 1959 (France); and (iv) the dam-break flood of October 1982 at the
town of Sumacarcel (Spain). The capability of the scheme is also
verified against alternative models. Computational results compare
well with recorded data and show that the scheme is at least as
efficient as comparable second-order finite volume schemes, with
notable efficiency speedup due to parallelization.
Natural Emergence of a Core Structure in Networks via Clique Percolation
Networks are often presented as containing a “core”
and a “periphery.” The existence of a core suggests that some
vertices are central and form the skeleton of the network, to which
all other vertices are connected. An alternative view of graphs is
through communities. Multiple measures have been proposed for
dense communities in graphs, the most classical being k-cliques,
k-cores, and k-plexes, all presenting groups of tightly connected
vertices. We here show that the edge number thresholds for such
communities to emerge and for their percolation into a single dense
connectivity component are very close, in all networks studied. These
percolating cliques produce a natural core and periphery structure.
This result is generic and is tested in configuration models and in
real-world networks. This is also true for k-cores and k-plexes. Thus,
the emergence of this connectedness among communities leading to
a core is not dependent on some specific mechanism but a direct
result of the natural percolation of dense communities.
Forecasting Issues in Energy Markets within a Reg-ARIMA Framework
Electricity markets throughout the world have
undergone substantial changes. Accurate, reliable, clear and
comprehensible modeling and forecasting of different variables
(loads and prices in the first instance) have achieved increasing
importance. In this paper, we describe the current state of the
art, focusing on reg-SARMA methods, which have proven to be
flexible enough to accommodate electricity price/load behavior
satisfactorily. More specifically, we discuss: 1) the dichotomy
between point and interval forecasts; 2) the difficult choice between
stochastic predictors (e.g., climatic variation) and deterministic
predictors (e.g., calendar variables); 3) the confrontation between
modelling a single aggregate time series and creating separate,
potentially different models for sub-series. The noteworthy point
that we wish to bring out is that prices and loads require different
approaches that appear irreconcilable, even though they must be made
reconcilable for the interests and activities of energy companies.
Adomian’s Decomposition Method to Generalized Magneto-Thermoelasticity
Due to many applications and problems in the fields of plasma physics, geophysics, and many other topics, the interaction between the strain field and the magnetic field has to be considered. Adomian introduced the decomposition method for solving linear and nonlinear functional equations. This method leads to accurate, computable, approximately convergent solutions of linear and nonlinear partial and ordinary differential equations, even for equations with variable coefficients. This paper deals with a mathematical model of generalized thermoelasticity for a half-space conducting medium. A magnetic field of constant intensity acting normal to the bounding plane is assumed. Adomian's decomposition method has been used to solve the model when the bounding plane is taken to be traction free and thermally loaded by harmonic heating. The numerical results for the temperature increment, the stress, the strain, the displacement, and the induced magnetic and electric fields are represented in figures. The magnetic field, the relaxation time, and the angular thermal load have significant effects on all the studied fields.
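The decomposition method's recursion is easy to see on a toy problem far simpler than the paper's magneto-thermoelastic system. For y' = y with y(0) = 1, the Adomian components are y_0 = 1 and y_{k+1}(t) = ∫_0^t y_k(s) ds, whose partial sums reproduce the Taylor series of e^t. The exact-arithmetic sketch below illustrates only this recursion.

```python
import math
from fractions import Fraction

def integrate_poly(c):
    """∫_0^t of a polynomial given as a coefficient list c[k] * t^k."""
    return [Fraction(0)] + [ck / Fraction(k + 1) for k, ck in enumerate(c)]

def adm_exponential(n_terms):
    """Adomian decomposition for y' = y, y(0) = 1:
    y_0 = 1,  y_{k+1}(t) = ∫_0^t y_k(s) ds,  y = Σ y_k."""
    y = [Fraction(1)]            # y_0 = 1
    total = [Fraction(1)]
    for _ in range(n_terms - 1):
        y = integrate_poly(y)            # next component y_{k+1}
        total.append(Fraction(0))        # grow the partial-sum polynomial
        total = [a + b for a, b in zip(total, y)]
    return total

def poly_eval(c, t):
    return sum(float(ck) * t ** k for k, ck in enumerate(c))

coeffs = adm_exponential(15)
approx = poly_eval(coeffs, 1.0)
print(approx, math.e)
```

Each component here is t^k/k!, so fifteen terms already match e to about twelve digits, illustrating the "approximately convergent" series the abstract refers to.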
An Efficient Collocation Method for Solving the Variable-Order Time-Fractional Partial Differential Equations Arising from the Physical Phenomenon
In this work, we present an efficient approach for
solving variable-order time-fractional partial differential equations,
based on Legendre and Laguerre polynomials. First, we
introduce the pseudo-operational matrices of integer-order and
variable fractional-order integration by using some properties of the
Riemann-Liouville fractional integral. These are then applied,
together with the collocation method and Legendre-Laguerre
functions, to solve variable-order time-fractional partial differential
equations. An estimate of the error is also presented. Finally, we
investigate numerical examples arising in physics to demonstrate the
accuracy of the present method. Comparison of the results obtained
by the present method with the exact solution and with other methods
reveals that the method is very effective.
Bayesian Meta-Analysis to Account for Heterogeneity in Studies Relating Life Events to Disease
Associations between life events and various forms of cancer have been identified. The purpose of a recent random-effects meta-analysis was to identify studies that examined the association between adverse events related to changes in financial status, including decreased income, and breast cancer risk. The same association was studied in four separate studies whose characteristics, such as study design, location, and time frame, were not consistent. It was of interest to pool information from the various studies to help identify characteristics that differentiated study results. Two random-effects Bayesian meta-analysis models are proposed to combine the reported estimates of the described studies. The proposed models allow major sources of variation to be taken into account, including study-level characteristics, between-study variance and within-study variance, and illustrate the ease with which uncertainty can be incorporated using a hierarchical Bayesian modelling approach.
On the Efficiency and Robustness of Commingled Wiener and Lévy Driven Processes for the Vasicek Model
The Wiener and Lévy driven processes are known
self-standing Gaussian-Markov processes for fitting the non-linear
dynamical Vasicek model. In this paper, a coincidental Gaussian
density stationarity condition and the autocorrelation function of the
two driven processes were established. This led to the conflation
of the Wiener and Lévy processes in order to investigate the efficiency
of estimates incorporated into the one-dimensional Vasicek model,
estimated via the Maximum Likelihood (ML) technique.
The conditional laws of the drift, diffusion and stationarity processes
were ascertained for the individual Wiener and Lévy processes, as
well as for the commingled process, for a fixed-effect
and autoregressive-like Vasicek model applied to a financial
series: the Naira-CFA Franc exchange rate. In addition, the model
performance error of the commingled driven process was small
compared with the self-standing Wiener and Lévy driven processes.
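The Wiener-driven half of the story can be sketched end to end: simulate the Vasicek SDE dr = a(b − r)dt + σ dW by Euler-Maruyama, then recover (a, b, σ) by conditional ML through the discretized AR(1) form r_{t+1} = c + φ r_t + ε. The parameter values and data length are assumptions; the Lévy-driven and commingled variants would replace the Gaussian increments.

```python
import math, random

random.seed(11)

def simulate_vasicek(n, a=0.5, b=0.05, sigma=0.02, r0=0.03, dt=1 / 252):
    """Euler-Maruyama for dr = a(b - r)dt + sigma dW (Wiener-driven case)."""
    r = [r0]
    for _ in range(n - 1):
        dr = a * (b - r[-1]) * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
        r.append(r[-1] + dr)
    return r

def fit_ar1(r, dt=1 / 252):
    """Conditional ML for r_{t+1} = c + phi * r_t + eps,
    mapped back to (a, b, sigma) via phi = 1 - a*dt, c = a*b*dt."""
    x, y = r[:-1], r[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    phi = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
           / sum((xi - mx) ** 2 for xi in x))
    c = my - phi * mx
    resid_var = sum((yi - c - phi * xi) ** 2 for xi, yi in zip(x, y)) / n
    return (1 - phi) / dt, c / (1 - phi), math.sqrt(resid_var / dt)

r = simulate_vasicek(20000)
a_hat, b_hat, s_hat = fit_ar1(r)
print(round(a_hat, 2), round(b_hat, 3), round(s_hat, 3))
```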
Multilevel Arnoldi-Tikhonov Regularization Methods for Large-Scale Linear Ill-Posed Systems
This paper is devoted to the numerical solution of
large-scale linear ill-posed systems. A multilevel regularization
method is proposed. This method is based on a synthesis of
the Arnoldi-Tikhonov regularization technique and the multilevel
technique. We show that if the Arnoldi-Tikhonov method is
a regularization method, then the multilevel method is also a
regularization one. Numerical experiments presented in this paper
illustrate the effectiveness of the proposed method.
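A single-level Arnoldi-Tikhonov step, the building block the multilevel method refines, can be sketched in numpy: project the Tikhonov problem onto a small Krylov subspace and solve the resulting least-squares system. The Gaussian-blur test problem, subspace size, and regularization parameter are illustrative assumptions.

```python
import numpy as np

def arnoldi(A, b, k):
    """k-step Arnoldi: A V_k = V_{k+1} H, with H of size (k+1) x k."""
    n = len(b)
    V = np.zeros((n, k + 1)); H = np.zeros((k + 1, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def arnoldi_tikhonov(A, b, k=8, lam=1e-3):
    """Tikhonov on the Krylov subspace: x = V_k y, where y solves
    min ||H y - beta e_1||^2 + lam^2 ||y||^2 (a small LS problem)."""
    V, H = arnoldi(A, b, k)
    beta = np.linalg.norm(b)
    M = np.vstack([H, lam * np.eye(k)])
    rhs = np.concatenate([beta * np.eye(k + 1)[:, 0], np.zeros(k)])
    y, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return V[:, :k] @ y

# Discrete ill-posed toy problem: Gaussian blurring of a smooth signal
n = 200
t = np.linspace(0, 1, n)
A = np.exp(-50 * (t[:, None] - t[None, :]) ** 2) / n   # smoothing kernel
x_true = np.sin(np.pi * t)
b = A @ x_true + 1e-4 * np.random.default_rng(5).standard_normal(n)

x_reg = arnoldi_tikhonov(A, b)
err = np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true)
print(err)
```

Because the noise is damped by the regularization term rather than amplified by the tiny singular values, a small Krylov subspace already yields a stable reconstruction.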