181
7162
Vulnerabilities of IEEE 802.11i Wireless LAN CCMP Protocol
Abstract: IEEE has recently incorporated the CCMP protocol to provide robust security to IEEE 802.11 wireless LANs. It is found that CCMP has been designed with a weak nonce construction and transmission mechanism, which leads to the exposure of the initial counter value. This weak construction of the nonce renders the protocol vulnerable to attacks by intruders. This paper presents how the initial counter can be pre-computed by an intruder. This vulnerability of the counter block value leads to a pre-computation attack on the counter mode encryption of CCMP. The failure of the counter mode will result in the collapse of the whole security mechanism of the 802.11 WLAN.
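To make the exposure concrete, here is a minimal Python sketch of how the initial CCM counter block could be assembled from fields that travel in the clear, assuming the standard 802.11i layout (flags byte, 13-byte nonce of priority, transmitter address A2 and packet number PN, and a 2-byte counter); the addresses and packet number below are invented.

# Illustrative sketch of pre-computing the initial CCM counter block (Ctr0)
# of a CCMP-protected frame, assuming the standard 802.11i layout:
# Flags | Nonce(priority || A2 || PN) | 2-byte counter.
# All field values below are made up for illustration.

def ccmp_counter_block(priority: int, a2: bytes, pn: bytes, counter: int) -> bytes:
    """Build a 16-byte CCM counter block from fields visible in the frame."""
    assert len(a2) == 6 and len(pn) == 6
    flags = 0x01                         # CCM flags byte for counter blocks (L = 2)
    nonce = bytes([priority]) + a2 + pn  # 13-byte nonce, all sent in the clear
    return bytes([flags]) + nonce + counter.to_bytes(2, "big")

# The transmitter address and packet number travel unencrypted in the MAC
# header, so an intruder observing the frame can reproduce Ctr0 directly.
a2 = bytes.fromhex("00a1b2c3d4e5")       # hypothetical transmitter MAC address
pn = bytes.fromhex("000000000001")       # hypothetical 48-bit packet number
print(ccmp_counter_block(priority=0, a2=a2, pn=pn, counter=0).hex())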
180
12207
A New Construction of 16-QAM Codewords with Low Peak Power
Abstract: We present a novel construction of 16-QAM codewords of length n = 2^k. The number of constructed codewords is 16^2 × [4^(k-1) × k - k + 1]. When these constructed codewords are utilized as a code in OFDM systems, their peak-to-mean envelope power ratios (PMEPR) are bounded above by 3.6. The principle of our scheme is illustrated with a four-subcarrier example.
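As a numeric illustration only (the codeword below is arbitrary, not one produced by the proposed construction), the PMEPR of an OFDM codeword can be checked with an oversampled IFFT:

import numpy as np

# Sketch: estimate the PMEPR of an arbitrary 16-QAM codeword over a few
# subcarriers by oversampling the OFDM envelope. This only illustrates how
# a bound such as 3.6 could be checked numerically.

def pmepr(codeword: np.ndarray, oversample: int = 16) -> float:
    n = len(codeword)
    spectrum = np.zeros(n * oversample, dtype=complex)
    spectrum[:n] = codeword                      # zero-pad for oversampling
    envelope = np.fft.ifft(spectrum) * n * oversample
    peak = np.max(np.abs(envelope) ** 2)
    mean = np.sum(np.abs(codeword) ** 2)         # mean envelope power
    return peak / mean

levels = np.array([-3, -1, 1, 3])                # 16-QAM component levels
rng = np.random.default_rng(0)
cw = rng.choice(levels, 4) + 1j * rng.choice(levels, 4)   # four subcarriers
print("PMEPR:", pmepr(cw))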
179
1419
Dynamic Inverted Index Maintenance
Abstract: The majority of today's IR systems base the IR task on two main processes: indexing and searching. There exists a special group of dynamic IR systems where both processes (indexing and searching) happen simultaneously; such a system discards obsolete information and simultaneously deals with the insertion of new information, while still answering user queries. In these dynamic, time-critical text document databases, it is often important to modify index structures quickly, as documents arrive. This paper presents a method for dynamization which may be used for this task. Experimental results show that the dynamization process is possible and that it guarantees the response time for the query operation and index actualization.
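For illustration, a toy dynamic inverted index supporting insertion, deletion and querying might look as follows; it does not model the dynamization strategy evaluated in the paper, and all names are invented:

from collections import defaultdict

# Minimal dynamic inverted index: postings can be added and removed while
# queries are still answered. The merging of partial indexes studied in the
# paper is not modelled here.

class DynamicInvertedIndex:
    def __init__(self):
        self.postings = defaultdict(set)   # term -> set of document ids

    def add_document(self, doc_id, text):
        for term in set(text.lower().split()):
            self.postings[term].add(doc_id)

    def remove_document(self, doc_id, text):
        for term in set(text.lower().split()):
            self.postings[term].discard(doc_id)
            if not self.postings[term]:
                del self.postings[term]

    def query(self, *terms):
        sets = [self.postings.get(t.lower(), set()) for t in terms]
        return set.intersection(*sets) if sets else set()

idx = DynamicInvertedIndex()
idx.add_document(1, "dynamic inverted index maintenance")
idx.add_document(2, "static index construction")
idx.remove_document(2, "static index construction")
print(idx.query("index", "maintenance"))   # {1}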
178
239
Complex Condition Monitoring System of Aircraft Gas Turbine Engine
Abstract: Research shows that the application of probability-statistical methods, especially at the early stages of diagnosing the technical condition of an aviation Gas Turbine Engine (GTE), when the flight information is fuzzy, limited and uncertain, is unfounded. Hence the efficiency of applying the new Soft Computing technology at these diagnosing stages, using Fuzzy Logic and Neural Network methods, is considered. For this purpose, fuzzy multiple linear and non-linear models (fuzzy regression equations), obtained on the basis of statistical fuzzy data, are trained with high accuracy. To make a more adequate model of the GTE technical condition, the dynamics of changes in the skewness and kurtosis coefficients are analysed. Studies of the changes in the skewness and kurtosis coefficient values characterise the distributions of the GTE work and output parameters of the multiple linear and non-linear generalised models in the presence of measurement noise (estimated with a new recursive Least Squares Method (LSM)). The developed GTE condition monitoring system provides stage-by-stage estimation of the engine technical condition. As an application of the given technique, the technical condition of a new operating aviation engine was estimated.
177
6515
Performance Improvement in the Bivariate Models by using Modified Marginal Variance of Noisy Observations for Image-Denoising Applications
Abstract: Most simple nonlinear thresholding rules for wavelet-based denoising assume that the wavelet coefficients are independent. However, wavelet coefficients of natural images have significant dependencies. This paper attempts to give a recipe for selecting one of the popular image-denoising algorithms based on VisuShrink, SureShrink, OracleShrink, BayesShrink and BiShrink, and it also compares different bivariate models used for image-denoising applications. The first part of the paper compares different shrinkage functions used for image denoising. The second part compares different bivariate models, and the third part uses the bivariate model with modified marginal variance, which is based on a Laplacian assumption. The paper gives an experimental comparison on six 512x512 commonly used images: Lenna, Barbara, Goldhill, Clown, Boat and Stonehenge. Noise powers of 25 dB, 26 dB, 27 dB, 28 dB and 29 dB are added to the six standard images and the corresponding Peak Signal to Noise Ratio (PSNR) values are calculated for each noise level.
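For readers unfamiliar with the bivariate models being compared, the following sketch applies the bivariate shrinkage rule of Sendur and Selesnick (which BiShrink refers to) to synthetic child/parent coefficients; the noise and signal variances are toy values and the paper's modified marginal-variance estimate is not reproduced:

import numpy as np

# Sketch of bivariate shrinkage: a wavelet coefficient y1 is shrunk jointly
# with its parent y2. sigma_n (noise std) and sigma (signal std) are toy
# values here.

def bivariate_shrink(y1, y2, sigma_n, sigma):
    r = np.sqrt(y1 ** 2 + y2 ** 2)
    gain = np.maximum(r - np.sqrt(3) * sigma_n ** 2 / sigma, 0.0)
    return np.where(r > 0, gain / np.maximum(r, 1e-12) * y1, 0.0)

rng = np.random.default_rng(1)
child = rng.normal(0, 1.0, 8)          # noisy child coefficients
parent = rng.normal(0, 1.0, 8)         # corresponding parent coefficients
print(bivariate_shrink(child, parent, sigma_n=0.5, sigma=1.0))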
176
4201
Speech Activated Automation
Abstract: This article presents a simple way to perform programmed voice commands for interfacing with commercial Digital and Analogue Input/Output PCI cards used in Robotics and Automation applications. Robots and Automation equipment can "listen" to voice commands and perform several different tasks, approaching human behavior and improving human-machine interfaces for the Automation Industry. Since most PCI Digital and Analogue Input/Output cards are sold with several DLLs included (for use with different programming languages), it is possible to add speech recognition capability using a standard speech recognition engine compatible with the programming languages used. In this work, a Visual Basic 6 (the world's most popular language) application was created that listens to several voice commands and is capable of communicating directly with several standard 128 Digital I/O PCI cards, used to control complete Automation Systems with up to (number of boards used) x 128 sensors and/or actuators.
175
12830
A High Accuracy Measurement Circuit for Soil Moisture Detection
Abstract: The study of soil for agriculture purposes has remained the main focus of research since the beginning of civilization, as humans' food-related requirements have remained closely linked with the soil. The study of soil has also generated interest among researchers for very similar other reasons, including the transmission, reflection and refraction of signals for deploying wireless underground sensor networks, or for the monitoring of objects on (or in) soil, through a better understanding of soil electromagnetic properties. The moisture content has been very instrumental in such studies, as it decides the resistance of the soil, and hence the attenuation of signals traveling through soil or the attenuation the signals may suffer upon their impact on soil. This work is related to testing and characterizing a measurement circuit meant for the detection of the moisture content in soil.
174
13006
Connected Vertex Cover in 2-Connected Planar Graph with Maximum Degree 4 is NP-complete
Abstract: This paper proves that the problem of finding a connected vertex cover in a 2-connected planar graph (CVC-2) with maximum degree 4 is NP-complete. The motivation for proving this result is to give a shorter and simpler proof of NP-completeness of TRA-MLC (the Top Right Access point Minimum-Length Corridor) problem [1], by finding the reduction from CVC-2. TRA-MLC has many applications in laying optical fibre cables for data communication and electrical wiring in floor plans. The problem of finding a connected vertex cover in any planar graph (CVC) with maximum degree 4 is NP-complete [2]. We first show that CVC-2 belongs to NP and then we find a polynomial reduction from CVC to CVC-2. Let a graph G0 and an integer K form an instance of CVC, where G0 is a planar graph and K is an upper bound on the size of the connected vertex cover in G0. We construct a 2-connected planar graph, say G, by identifying the blocks and cut vertices of G0, and then finding the planar representation of all the blocks of G0, leading to a plane graph G1. We replace the cut vertices with cycles in such a way that the resultant graph G is a 2-connected planar graph with maximum degree 4. We consider L = K - 2t + Σ_{i=1}^{t} d_i, where t is the number of cut vertices in G1 and d_i is the number of blocks to which the i-th cut vertex is common. We prove that G has a connected vertex cover of size less than or equal to L if and only if G0 has a connected vertex cover of size less than or equal to K.
173
2734
A New Method for Computing the Inverse Ideal in a Coordinate Ring
Abstract: In this paper we present an efficient method for inverting an ideal in the ideal class group of a Cab curve by extending the method which is presented in [3]. More precisely we introduce a useful generator for the inverse ideal as a K[X]-module.
172
3840
An Analysis of Activity-Based Costing in a Manufacturing System
Abstract: Activity-Based Costing (ABC) represents an alternative paradigm to the traditional cost accounting system and often provides more accurate cost information for decision making, such as product pricing, product mix, and make-or-buy decisions. ABC models the causal relationships between products and the resources used in their production and traces the cost of products according to the activities through the use of appropriate cost drivers. In this paper, the implementation of ABC in a manufacturing system is analyzed, and a comparison with the traditional cost-based system in terms of the effects on the product costs is carried out to highlight the difference between the two costing methodologies. By using this methodology, a valuable insight into the factors that cause the cost is provided, helping to better manage the activities of the company.
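A toy numeric sketch of the comparison (products, activities, cost pools and driver counts are invented) illustrates how ABC and a volume-based allocation can assign different overheads to the same products:

# Toy comparison of traditional volume-based costing vs. Activity-Based
# Costing. Products, activities, driver counts and overhead figures are
# invented for illustration only.

overhead = {"setups": 20000.0, "inspections": 10000.0}        # activity pools
drivers = {  # cost-driver consumption per product
    "A": {"setups": 10, "inspections": 40, "machine_hours": 100},
    "B": {"setups": 40, "inspections": 10, "machine_hours": 100},
}

# Traditional: allocate the whole overhead in proportion to machine hours.
total_overhead = sum(overhead.values())
total_hours = sum(p["machine_hours"] for p in drivers.values())
traditional = {name: total_overhead * p["machine_hours"] / total_hours
               for name, p in drivers.items()}

# ABC: allocate each activity pool by its own cost driver.
abc = {name: 0.0 for name in drivers}
for activity, pool in overhead.items():
    total_driver = sum(p[activity] for p in drivers.values())
    for name, p in drivers.items():
        abc[name] += pool * p[activity] / total_driver

print("traditional:", traditional)   # identical overhead per product
print("ABC:        ", abc)           # B bears more of the setup-driven cost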
171
15503
An Experimental Study of a Self-Supervised Classifier Ensemble
Abstract: Learning using labeled and unlabelled data has received a considerable amount of attention in the machine learning community due to its potential in reducing the need for expensive labeled data. In this work we present a new method for combining labeled and unlabeled data based on classifier ensembles. The model we propose assumes each classifier in the ensemble observes the input using a different set of features. Classifiers are initially trained using some labeled samples. The trained classifiers learn further by labeling the unknown patterns using a teaching signal that is generated from the decision of the classifier ensemble, i.e. the classifiers self-supervise each other. Experiments on a set of object images are presented. Our experiments investigate different classifier models, different fusing techniques, different training sizes and different input features. Experimental results reveal that the proposed self-supervised ensemble learning approach reduces classification error over the single classifier and the traditional ensemble classifier approaches.
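A compact sketch of the self-labeling idea, with two logistic-regression classifiers that each see a different feature subset and teach each other where they agree; the data, feature split and agreement-based fusion are simplifications, not the paper's exact procedure:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Two classifiers observe different feature subsets, are trained on a few
# labeled samples, and then teach each other on unlabeled samples by using
# the ensemble decision (here: simple agreement) as the teaching signal.

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + X[:, 2] > 0).astype(int)
labeled, unlabeled = np.arange(30), np.arange(30, 300)

views = [[0, 1], [2, 3]]                   # feature subset seen by each classifier
clfs = [LogisticRegression().fit(X[labeled][:, v], y[labeled]) for v in views]

for _ in range(3):                         # a few self-labelling rounds
    preds = [c.predict(X[unlabeled][:, v]) for c, v in zip(clfs, views)]
    agree = preds[0] == preds[1]           # teaching signal: ensemble agreement
    if not agree.any():
        break
    X_new = np.vstack([X[labeled], X[unlabeled][agree]])
    y_new = np.concatenate([y[labeled], preds[0][agree]])
    clfs = [LogisticRegression().fit(X_new[:, v], y_new) for v in views]

for c, v in zip(clfs, views):
    print("view", v, "accuracy:", round(float(np.mean(c.predict(X[:, v]) == y)), 3))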
170
9795
Image Modeling Using Gibbs-Markov Random Field and Support Vector Machines Algorithm
Abstract: This paper introduces a novel approach to estimate the
clique potentials of Gibbs Markov random field (GMRF) models
using the Support Vector Machines (SVM) algorithm and the Mean
Field (MF) theory. The proposed approach is based on modeling the
potential function associated with each clique shape of the GMRF
model as a Gaussian-shaped kernel. In turn, the energy function of
the GMRF will be in the form of a weighted sum of Gaussian
kernels. This formulation of the GMRF model motivates the use of the SVM, with Mean Field theory applied for its learning, to estimate the energy function. The approach has been tested on
synthetic texture images and is shown to provide satisfactory results
in retrieving the synthesizing parameters.
169
10008324
Modeling of Gas Turbine Cooled Blades
Abstract: In contrast to existing methods, which do not take into account multiconnectivity in a broad sense of this term, we develop mathematical models and highly effective combined (BIEM and FDM) numerical methods for calculating the stationary and quasi-stationary temperature field of the profile part of a blade with convective cooling (from the point of view of realization on a PC). The theoretical substantiation of these methods is proved by appropriate theorems. For this, converging quadrature processes have been developed and error estimates have been obtained in terms of A. Zygmund continuity moduli. For visualization of the profiles, the least-squares method with automatic conjugation, spline devices, smooth replenishment and neural nets are used. Boundary conditions of heat exchange are determined from the solution of the corresponding integral equations and from empirical relationships. The reliability of the designed methods is proved by computational and experimental investigations of the heat and hydraulic characteristics of the first-stage nozzle blade of a gas turbine.
168
3614
A Reconfigurable Processing Element for Cholesky Decomposition and Matrix Inversion
Abstract: Fixed-point simulation results are used for the
performance measure of inverting matrices by Cholesky
decomposition. The fixed-point Cholesky decomposition algorithm
is implemented using a fixed-point reconfigurable processing
element. The reconfigurable processing element provides all
mathematical operations required by Cholesky decomposition. The
fixed-point word length analysis is based on simulations using
different condition numbers and different matrix sizes. Simulation
results show that a 16-bit word length gives sufficient performance for small matrices with a low condition number. Larger matrices and higher condition numbers require more dynamic range for a fixed-point implementation.
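For reference, a plain floating-point sketch of the Cholesky factorization below highlights the primitive operations (square root, division, multiply-accumulate) that the processing element must provide; fixed-point word-length effects and the reconfigurable element itself are not modelled:

import math

# Floating-point sketch of Cholesky decomposition A = L L^T, with comments
# marking the primitive operations needed by the hardware.

def cholesky(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))   # multiply-accumulate
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)            # square root
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]           # division
    return L

A = [[4.0, 2.0, 0.6],
     [2.0, 2.0, 0.4],
     [0.6, 0.4, 1.0]]
for row in cholesky(A):
    print([round(v, 4) for v in row])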
167
9137
Formal Verification of a Multicast Protocol in Mobile Networks
Abstract: As computer network technology becomes
increasingly complex, it becomes necessary to place greater
requirements on the validity of developing standards and the
resulting technology. Communication networks are based on a large number of protocols. The validity of these protocols has to be
proved either individually or in an integral fashion. One strategy for
achieving this is to apply the growing field of formal methods.
Formal methods research defines systems in high order logic so that
automated reasoning can be applied for verification. In this research
we represent and implement a formerly announced multicast protocol
in Prolog language so that certain properties of the protocol can be
verified. It is shown that by using this approach some minor faults in
the protocol were found and repaired. Describing the protocol as facts and rules also has other benefits, i.e. it leads to processable knowledge. This knowledge can be transferred as an ontology between systems in KQML format. Since the Prolog language can extend its knowledge base at any time, this method can also be used to build a learning, intelligent network.
166
2525
A Modified Cross Correlation in the Frequency Domain for Fast Pattern Detection Using Neural Networks
Abstract: Recently, neural networks have shown good
results for detection of a certain pattern in a given image. In
our previous papers [1-5], a fast algorithm for pattern
detection using neural networks was presented. Such an algorithm was designed based on cross correlation in the
frequency domain between the input image and the weights
of neural networks. Image conversion into symmetric shape
was established so that fast neural networks can give the
same results as conventional neural networks. Another
configuration of symmetry was suggested in [3,4] to improve
the speed up ratio. In this paper, our previous algorithm for
fast neural networks is developed. The frequency domain
cross correlation is modified in order to compensate for the
symmetric condition which is required by the input image.
Two new ideas are introduced to modify the cross correlation
algorithm. Both methods accelerate the fast neural networks, as there is no need to convert the input image into a symmetric one as before. Theoretical and practical results show that both approaches provide a higher speed-up ratio than the previous algorithm.
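The underlying speed-up principle, cross correlation computed in the frequency domain, can be sketched as follows; the paper's symmetry-compensation modifications are not reproduced:

import numpy as np

# Circular cross-correlation of an image with the weights of a hidden neuron,
# computed in the frequency domain and verified against a direct computation.

rng = np.random.default_rng(0)
image = rng.normal(size=(32, 32))
weights = rng.normal(size=(32, 32))     # weights padded to the image size

# Frequency domain: correlation = IFFT( FFT(image) * conj(FFT(weights)) ).
freq = np.real(np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(weights))))

# Direct circular correlation for verification (much slower).
direct = np.zeros_like(image)
for dr in range(32):
    for dc in range(32):
        shifted = np.roll(np.roll(weights, dr, axis=0), dc, axis=1)
        direct[dr, dc] = np.sum(image * shifted)

print(np.allclose(freq, direct))        # True: both give the same values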
165
3384
Fast Object/Face Detection Using Neural Networks and Fast Fourier Transform
Abstract: Recently, fast neural networks for object/face
detection were presented in [1-3]. The speed up factor of these
networks relies on performing cross correlation in the frequency
domain between the input image and the weights of the hidden
layer. But, these equations given in [1-3] for conventional and fast
neural networks are not valid for many reasons presented here. In
this paper, correct equations for cross correlation in the spatial and
frequency domains are presented. Furthermore, correct formulas for
the number of computation steps required by conventional and fast
neural networks given in [1-3] are introduced. A new formula for
the speed up ratio is established. Also, corrections for the equations
of fast multi scale object/face detection are given. Moreover,
commutative cross correlation is achieved. Simulation results show
that sub-image detection based on cross correlation in the frequency
domain is faster than classical neural networks.
164
7265
Change Detector Combination in Remotely Sensed Images Using Fuzzy Integral
Abstract: Decision fusion is one of the hot research topics in the classification area, which aims to achieve the best possible performance for the task at hand. In this paper, we
investigate the usefulness of this concept to improve change
detection accuracy in remote sensing. Thereby, outputs of
two fuzzy change detectors based respectively on
simultaneous and comparative analysis of multitemporal
data are fused by using fuzzy integral operators. This
method fuses the objective evidences produced by the
change detectors with respect to fuzzy measures that express
the difference of performance between them. The proposed
fusion framework is evaluated in comparison with some
ordinary fuzzy aggregation operators. Experiments carried
out on two SPOT images showed that the fuzzy integral performed best. It improves the change detection accuracy while attempting to equalize the accuracy rate in both the change and no-change classes.
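A minimal sketch of fusing two detector outputs with a Sugeno fuzzy integral; the fuzzy densities and detector outputs below are invented, and the paper's way of deriving the measures from detector performance is not shown:

# Fuse two per-pixel "change" memberships with a Sugeno fuzzy integral over
# a fuzzy measure given by the densities g1, g2 (trust in each detector).

def sugeno_integral_two(h1, h2, g1, g2):
    """Sugeno integral of (h1, h2) w.r.t. a fuzzy measure with densities (g1, g2)."""
    # Sort detector outputs in decreasing order, carrying their densities.
    (ha, ga), (hb, _) = sorted([(h1, g1), (h2, g2)], key=lambda t: -t[0])
    # g({best detector}) = ga ; g({both detectors}) = 1 by construction.
    return max(min(ha, ga), min(hb, 1.0))

h_simultaneous, h_comparative = 0.8, 0.4   # outputs of the two fuzzy detectors
g_simultaneous, g_comparative = 0.6, 0.3   # fuzzy densities (trust levels)

fused = sugeno_integral_two(h_simultaneous, h_comparative,
                            g_simultaneous, g_comparative)
print("fused change membership:", fused)   # lies between the two detector outputs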
163
703
Shape-Based Image Retrieval Using Shape Matrix
Abstract: Retrieving images by shape similarity, given a template shape, is particularly challenging, owing to the difficulty of deriving a similarity measurement that closely conforms to the common perception of similarity by humans. In this paper, a new method for the representation and comparison of shapes is presented which is based on the shape matrix and the snake model. It is scaling-, rotation- and translation-invariant, and it can retrieve shape images with some missing or occluded parts. In the method, the deformation spent by the template to match the shape images and the matching degree are used to evaluate the similarity between them.
162
8581
Direction of Arrival Estimation Based on a Single Port Smart Antenna Using MUSIC Algorithm with Periodic Signals
Abstract: A novel direction-of-arrival (DOA) estimation technique, which uses a conventional multiple signal classification (MUSIC) algorithm with periodic signals, is applied to a single RF-port parasitic array antenna for direction finding. Simulation results show that the proposed method gives high resolution (1 degree) DOA estimation in an uncorrelated signal environment. The novelty lies in that the MUSIC algorithm is applied to a simplified antenna configuration. Only one RF port and one analogue-to-digital converter (ADC) are used in this antenna, which features low DC power consumption, low cost, and ease of fabrication. Modifications to the conventional MUSIC algorithm do not bring much additional complexity. The proposed technique is also free from the negative influence by the mutual coupling between elements. Therefore, the technique has great potential to be implemented into the existing wireless mobile communications systems, especially at the power consumption limited mobile terminals, to provide additional position location (PL) services.
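For orientation, a generic MUSIC pseudospectrum for an ordinary uniform linear array is sketched below; the single-RF-port parasitic antenna and the periodic-signal processing of the paper are not modelled, and all scenario values are invented:

import numpy as np

# Generic MUSIC DOA estimation for an 8-element half-wavelength ULA.

M, d, snapshots, K = 8, 0.5, 200, 2          # sensors, spacing, samples, sources
true_angles = np.deg2rad([-20.0, 35.0])      # hypothetical source directions

def steer(theta):
    """Steering vectors of the ULA for angles theta (radians)."""
    theta = np.atleast_1d(theta)
    return np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

rng = np.random.default_rng(0)
S = rng.normal(size=(K, snapshots)) + 1j * rng.normal(size=(K, snapshots))
N = 0.1 * (rng.normal(size=(M, snapshots)) + 1j * rng.normal(size=(M, snapshots)))
X = steer(true_angles) @ S + N               # noisy array snapshots

R = X @ X.conj().T / snapshots               # sample covariance matrix
_, eigvecs = np.linalg.eigh(R)               # eigenvalues in ascending order
En = eigvecs[:, :M - K]                      # noise subspace

grid = np.deg2rad(np.arange(-90.0, 90.0, 0.25))
p = 1.0 / np.sum(np.abs(En.conj().T @ steer(grid)) ** 2, axis=0)

# Report the two highest local maxima of the pseudospectrum as DOA estimates.
peaks = np.where((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]))[0] + 1
best = peaks[np.argsort(p[peaks])[-K:]]
print("estimated DOAs (deg):", np.sort(np.rad2deg(grid[best])).round(2))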
161
13578
Incorporation of Long-Term Redundancy in ECG Time Domain Compression Methods through Curve Simplification and Block-Sorting
Abstract: We suggest a novel method to incorporate long-term redundancy (LTR) in signal time domain compression
methods. The proposition is based on block-sorting and curve
simplification. The proposition is illustrated on the ECG
signal as a post-processor for the FAN method. Test
applications on the new so-obtained FAN+ method using the
MIT-BIH database show substantial improvement of the
compression ratio-distortion behavior for a higher quality
reconstructed signal.
160
6925
Left Ventricular Model Using Second Order Electromechanical Coupling: Effects of Viscoelastic Damping
Abstract: It is known that the heart interacts with and adapts to
its venous and arterial loading conditions. Various experimental
studies and modeling approaches have been developed to investigate
the underlying mechanisms. This paper presents a model of the left
ventricle derived based on nonlinear stress-length myocardial
characteristics integrated over truncated ellipsoidal geometry, and
second-order dynamic mechanism for the excitation-contraction
coupling system. The results of the model presented here describe the
effects of the viscoelastic damping element of the electromechanical
coupling system on the hemodynamic response. Different heart rates
are considered to study the pacing effects on the performance of the
left-ventricle against constant preload and afterload conditions under
various damping conditions. The results indicate that the pacing
process of the left ventricle has to take into account, among other
things, the viscoelastic damping conditions of the myofilament
excitation-contraction process.
159
4505
Heat Treatment and Rest-Inserted Exercise Enhances EMG Activity of the Lower Limb
Abstract: Prolonged immobilization leads to significant
weakness and atrophy of the skeletal muscle and can also impair the
recovery of muscle strength following injury. Therefore, it is
important to minimize the period under immobilization and accelerate
the return to normal activity. This study examined the effects of heat
treatment and rest-inserted exercise on the muscle activity of the lower
limb during knee flexion/extension. Twelve healthy subjects were
assigned to 4 groups that included: (1) heat treatment + rest-inserted
exercise; (2) heat + continuous exercise; (3) no heat + rest-inserted
exercise; and (4) no heat + continuous exercise. Heat treatment was
applied for 15 mins prior to exercise. Continuous exercise groups
performed knee flexion/extension at 0.5 Hz for 300 cycles without rest
whereas rest-inserted exercise groups performed the same exercise but
with 2 mins rest inserted every 60 cycles of continuous exercise.
Changes in the rectus femoris and hamstring muscle activities were
assessed at 0, 1, and 2 weeks of treatment by measuring the
electromyography signals of isokinetic maximum voluntary
contraction. Significant increases in both the rectus femoris and
hamstring muscles were observed after 2 weeks of treatment only
when both heat treatment and rest-inserted exercise were performed.
These results suggest that combination of various treatment techniques,
such as heat treatment and rest-inserted exercise, may expedite the
recovery of muscle strength following immobilization.
158
13020
Person Identification by Using AR Model for EEG Signals
Abstract: A direct connection between ElectroEncephaloGram
(EEG) and the genetic information of individuals has been
investigated by neurophysiologists and psychiatrists since the 1960s, and it opens a new research area in science. This paper focuses on person identification based on features extracted from the EEG,
which can show a direct connection between EEG and the genetic
information of subjects. In this work the full EO EEG signals of healthy individuals are estimated by an autoregressive (AR) model
and the AR parameters are extracted as features. Here for feature
vector constitution, two methods have been proposed; in the first
method the extracted parameters of each channel are used as a
feature vector in the classification step which employs a competitive
neural network and in the second method a combination of different
channel parameters are used as a feature vector. Correct classification scores in the range of 80% to 100% reveal the potential of our approach for person classification/identification and are in agreement with previous research showing evidence that the EEG signal carries genetic information. The novelty of this work is in the combination of AR parameters and the network type (competitive network) that we have used. A comparison between the first and the second approach implies a preference for the second one.
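A small sketch of the feature-extraction step, fitting AR coefficients by the Yule-Walker equations to a synthetic signal; the competitive-network classifier and the real EO EEG recordings are not reproduced:

import numpy as np

# Estimate AR coefficients of a 1-D signal via the Yule-Walker equations and
# use them as a feature vector. The test signal is a synthetic AR(2) process.

def ar_features(x, order=6):
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    # Biased autocorrelation estimates r[0..order].
    r = np.array([np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])          # AR coefficients a_1..a_p

rng = np.random.default_rng(0)
e = rng.normal(size=2000)
x = np.zeros(2000)
for n in range(2, 2000):
    x[n] = 1.2 * x[n - 1] - 0.5 * x[n - 2] + e[n]   # synthetic stand-in channel

print("feature vector:", np.round(ar_features(x, order=2), 3))  # ~ [1.2, -0.5]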
157
12538
Removing Ocular Artifacts from EEG Signals using Adaptive Filtering and ARMAX Modeling
Abstract: EEG signal is one of the oldest measures of brain
activity that has been used widely for clinical diagnoses and biomedical research. However, EEG signals are highly
contaminated with various artifacts, both from the subject and from
equipment interferences. Among these various kinds of artifacts,
ocular noise is the most important one. Since many applications such
as BCI require online and real-time processing of EEG signal, it is
ideal if the removal of artifacts is performed in an online fashion.
Recently, some methods for online ocular artifact removal have been proposed. One of these methods is ARMAX modeling of the EEG
signal. This method assumes that the recorded EEG signal is a
combination of EOG artifacts and the background EEG. Then the
background EEG is estimated via estimation of ARMAX parameters.
The other recently proposed method is based on adaptive filtering.
This method uses EOG signal as the reference input and subtracts
EOG artifacts from recorded EEG signals. In this paper we
investigate the efficiency of each method in removing EOG artifacts. A comparison is made between these two methods. Our conclusion from this comparison is that the adaptive filtering method gives better results than those achieved by ARMAX modeling.
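A minimal sketch of the adaptive-filtering approach, using an LMS filter with a synthetic EOG reference; filter length and step size are illustrative choices, not the paper's settings:

import numpy as np

# LMS adaptive filter: the EOG reference is filtered and subtracted from the
# contaminated EEG; the error signal is the cleaned EEG. All signals are synthetic.

rng = np.random.default_rng(0)
n = 5000
eeg_clean = np.sin(2 * np.pi * 0.01 * np.arange(n)) + 0.2 * rng.normal(size=n)
eog = np.cumsum(rng.normal(size=n)) / 50.0       # slowly drifting EOG reference
contaminated = eeg_clean + 0.8 * eog             # EOG leaking into the EEG channel

taps, mu = 5, 0.01
w = np.zeros(taps)
cleaned = np.zeros(n)
for i in range(taps - 1, n):
    x = eog[i - taps + 1:i + 1][::-1]            # current reference vector
    e = contaminated[i] - w @ x                  # error = cleaned EEG sample
    w += 2 * mu * e * x                          # LMS weight update
    cleaned[i] = e

print("MSE before:", round(float(np.mean((contaminated - eeg_clean) ** 2)), 3))
print("MSE after: ", round(float(np.mean((cleaned[taps:] - eeg_clean[taps:]) ** 2)), 3))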
156
338
The Role of Velocity Map Quality in Estimation of Intravascular Pressure Distribution
Abstract: Phase-Contrast MR imaging methods are widely used
for measurement of blood flow velocity components. Also there are
some other tools such as CT and Ultrasound for velocity map
detection in intravascular studies. These data are used in deriving
flow characteristics. Some clinical applications are investigated
which use pressure distribution in diagnosis of intravascular disorders
such as vascular stenosis. In this paper an approach to the problem of
measurement of intravascular pressure field by using velocity field
obtained from flow images is proposed. The method presented in this paper uses an algorithm to solve the nonlinear Navier-Stokes equations, assuming blood to be an incompressible Newtonian fluid. Flow images usually suffer from a lack of spatial resolution. Our
attempt is to consider the effect of spatial resolution on the pressure
distribution estimated from this method. In order to achieve this aim,
velocity map of a numerical phantom is derived at six different
spatial resolutions. To determine the effects of vascular stenoses on
pressure distribution, a stenotic phantom geometry is considered. A
comparison between the pressure distribution obtained from the
phantom and the pressure resulted from the algorithm is presented. In
this regard we also compared the effects of collocated and staggered
computational grids on the pressure distribution resulted from this
algorithm.
155
6633
Respirator System For Total Liquid Ventilation
Abstract: Total liquid ventilation can support gas exchange in animal models of lung injury. Clinical application awaits further technical improvements and performance verification. Our aim was to develop a liquid ventilator, able to deliver accurate tidal volumes, and a computerized system for measuring lung mechanics. The computer-assisted, piston-driven respirator controlled ventilatory parameters that were displayed and modified on a real-time basis. Pressure and temperature transducers along with a lineal displacement controller provided the necessary signals to calculate lung mechanics. Ten newborn lambs (<6 days old) with respiratory failure induced by lung lavage, were monitored using the system. Electromechanical, hydraulic and data acquisition/analysis components of the ventilator were developed and tested in animals with respiratory failure. All pulmonary signals were collected synchronized in time, displayed in real-time, and archived on digital media. The total mean error (due to transducers, A/D conversion, amplifiers, etc.) was less than 5% compared to calibrated signals. Improvements in gas exchange and lung mechanics were observed during liquid ventilation, without impairment of cardiovascular profiles. The total liquid ventilator maintained accurate control of tidal volumes and the sequencing of inspiration/expiration. The computerized system demonstrated its ability to monitor in vivo lung mechanics, providing valuable data for early decision-making.
154
8387
Computer Software Applicable in Rehabilitation, Cardiology and Molecular Biology
Abstract: We have developed a computer program consisting of 6 subtests assessing children's hand dexterity, applicable in
rehabilitation medicine. We have carried out a normative study on a
representative sample of 285 children aged from 7 to 15 (mean age
11.3) and we have proposed clinical standards for three age groups
(7-9, 9-11, 12-15 years). We have shown statistical significance of
differences among the corresponding mean values of the task time
completion. We have also found a strong correlation between the task
time completion and the age of the subjects, and we have performed test-retest reliability checks in the sample of 84 children, giving high values of the Pearson coefficients for the dominant and non-dominant hand, in the ranges 0.74-0.97 and 0.62-0.93, respectively.
A new MATLAB-based programming tool aiming at the analysis of cardiologic RR intervals and blood pressure descriptors has been worked out, too. For each set of data, ten different parameters are extracted: 2
in time domain, 4 in frequency domain and 4 in Poincaré plot
analysis. In addition twelve different parameters of baroreflex
sensitivity are calculated. All these data sets can be visualized in time
domain together with their power spectra and Poincaré plots. If
available, the respiratory oscillation curves can be also plotted for
comparison. Another application processes biological data obtained
from BLAST analysis.
153
8897
Fractal Analysis on Human Colonic Pressure Activities based on the Box-counting Method
Abstract: The colonic tissue is a complicated dynamic system
and the colonic activities it generates are composed of irregular
segmental waves, which are referred to as erratic fluctuations or spikes.
They are also highly irregular with subunit fractal structure. The
traditional time-frequency domain statistics like the averaged
amplitude, the motility index and the power spectrum, etc. are
insufficient to describe such fluctuations. Thus the fractal
box-counting dimension is proposed and the fractal scaling behaviors
of the human colonic pressure activities under the physiological
conditions are studied. It is shown that the dimension of the resting
activity is smaller than that of the normal one, whereas the clipped version, which corresponds to the activity of a constipation patient, shows a higher fractal dimension. This may indicate a practical
application to assess the colonic motility, which is often indicated by
the colonic pressure activity.
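A short sketch of the box-counting dimension of a signal graph, applied here to a synthetic irregular signal rather than a real colonic pressure recording:

import numpy as np

# Box-counting: rasterise the signal graph into the unit square, count the
# occupied boxes N(s) at several box sizes s, and take the slope of
# log N(s) versus log(1/s) as the fractal dimension.

def box_counting_dimension(t, x, sizes=(1/4, 1/8, 1/16, 1/32, 1/64)):
    t = (t - t.min()) / (t.max() - t.min())          # normalise to [0, 1]
    x = (x - x.min()) / (x.max() - x.min())
    counts = []
    for s in sizes:
        boxes = set(zip((t / s).astype(int), (x / s).astype(int)))
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 20000)
signal = np.cumsum(rng.normal(size=t.size))          # irregular test signal
print("box-counting dimension:", round(box_counting_dimension(t, signal), 2))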
152
15169
In situ Observation of the State and Stability of Hemoglobin Adsorbed onto Glass Surface by Slab Optical Waveguide (SOWG) Spectroscopy
Abstract: The state and stability of hemoglobin adsorbed on the
glass surface was investigated using slab optical waveguide (SOWG)
spectroscopy. The peak position of the absorption band of hemoglobin adsorbed on the glass surface was the same as that of hemoglobin in solution. This result suggests that no significant denaturation occurred upon adsorption. The adsorption of hemoglobin is strong enough that the hemoglobin molecules remained adsorbed even after rinsing the
cell with buffer solution. The peak shift caused by the reduction of
adsorbed hemoglobin was also observed.
151
12706
The Concentration Effects for the Adsorption Behavior of Heptyl Viologen Cation Radicals on Indium-Tin-Oxide Electrode Surfaces
Abstract: In situ observation of the absorption spectral change of the heptyl viologen cation radical (HV+.) was performed by slab optical waveguide (SOWG) spectroscopy utilizing indium-tin-oxide (ITO) electrodes. Synchronizing with electrochemical techniques, we observed the adsorption process of HV+. on the ITO electrode. In this study, we carried out the ITO-SOWG observations using KBr aqueous solutions containing different concentrations of HV to investigate the concentration-dependent spectral change. A few specific absorption bands, which indicated that HV+. existed as both monomer and dimer on the ITO electrode surface with a monolayer or a few layers of deposition, were observed in the UV-visible region. The change in the peak position of the absorption spectra of the adsorbed HV+. species was correlated with the concentration of HV as well as the electrode potential.
150
1725
Magnetization of Thin-Film Permalloy Ellipses used for Programmable Motion of Magnetic Particles
Abstract: Simulations of magnetic microstructure in elliptical
Permalloy elements used for controlled motion of magnetic particles
are discussed. The saturating field of the elliptical elements was
studied with respect to lateral dimensions for one-vortex, cross-tie,
diamond and double-diamond states as initial zero-field domain
configurations. With aspect ratio of 1:3 the short axis was varied
from 125 nm to 1000 nm, whereas the thickness was kept constant at
50 nm.
149
3680
Long-Term Study for the Effect of Ovariectomy on Rat Bone - Use of In-Vivo Micro-CT -
Abstract: In the present study, changes of morphology and
mechanical characteristics in the lumbar vertebrae of the
ovariectomised (OVX) rat were investigated. In previous research, there have been many studies of morphology, such as volume fraction and trabecular thickness, based on Micro-Computed Tomography (Micro-CT). However, studies detecting and tracking long-term changes in the trabecular bone of the lumbar vertebrae of the OVX rat have been few. For this study, one female Sprague-Dawley rat was used: an OVX rat. The
4th Lumbar of the OVX rat was subjected to in-vivo micro-CT.
Detecting and tracking long-term changes could be investigated in the
trabecular bone of the lumbar vertebrae for an OVX rat using in-vivo
micro-CT. An OVX rat was scanned at week 0 (just before surgery), at
week 4, at week 8, week 16, week 22 and week 56 after surgery. Finite
element (FE) analysis was used to investigate mechanical
characteristics of the lumbar vertebrae for an OVX rat. When the OVX
rat (at week 56) was compared with the OVX rat (at week 0), volume
fraction was decreased by 80% and effective modulus was decreased
by 75%.
148
3738
Intrusion Detection System Based On The Integrity of TCP Packet
Abstract: A common way to elude a signature-based Network Intrusion Detection System is based upon changing a recognizable attack into one that is unrecognizable by the IDS. For example, in order to evade signature matching by the intrusion detection system, a hacker splits the packet payload into many small pieces or hides them within messages. In this paper we try to model the main fragmentation attacks and create a new module in the intrusion detection system architecture which recognizes the main fragmentation attacks through verification of the integrity of TCP packets, in order to prevent elusion of the system and also to announce the necessary alert to the system administrator.
147
9385
NEAR: Visualizing Information Relations in Multimedia Repository A•VI•RE
Abstract: This paper describes the NEAR (Navigating Exhibitions, Annotations and Resources) panel, a novel interactive visualization technique designed to help people navigate and interpret groups of resources, exhibitions and annotations by revealing hidden relations such as similarities and references. NEAR is implemented on A•VI•RE, an extended online information repository. A•VI•RE supports a semi-structured collection of exhibitions containing various resources and annotations. Users are encouraged to contribute, share, annotate and interpret resources in the system by building their own exhibitions and annotations. However, it is hard to navigate smoothly and efficiently in A•VI•RE because of its high capacity and complexity. We present a visual panel that implements new navigation and communication approaches that support discovery of implied relations. By quickly scanning and interacting with NEAR, users can see not only implied relations but also potential connections among different data elements. NEAR was tested by several users in the A•VI•RE system and shown to be a supportive navigation tool. In the paper, we further analyze the design, report the evaluation and consider its usage in other applications.
146
8725
SySRA: A System of a Continuous Speech Recognition in Arab Language
Abstract: We report in this paper the model adopted by our system of continuous speech recognition for the Arabic language, SySRA, and the results obtained so far. This system uses the database Arabdic-10, a word corpus for the Arabic language which was manually segmented. Phonetic decoding is represented by an expert system whose knowledge base is translated into the form of production rules. This expert system transforms a vocal signal into a phonetic lattice. The higher level of the system takes care of the recognition of the lattice thus obtained by rendering it in the form of written sentences (orthographic form). This level initially contains the lexical analyzer, which is none other than the recognition module. We subjected this analyzer to a set of spectrograms obtained by dictating a score of sentences in Arabic. The recognition rate for these sentences is about 70%, which is, to our knowledge, the best result for the recognition of Arabic. The test set consists of twenty sentences from four speakers who did not take part in the training.
145
10390
VoIP Source Model based on the Hyperexponential Distribution
Abstract: In this paper we present a statistical analysis of Voice
over IP (VoIP) packet streams produced by the G.711 voice coder
with voice activity detection (VAD). During a telephone conversation, depending on whether the interlocutor speaks (ON) or remains silent (OFF), packets are or are not produced by the voice coder. As the index of dispersion for both the ON and OFF time distributions was greater than one, we used the hyperexponential distribution to approximate the stream durations. For each stage of the hyperexponential distribution, we tested the goodness of our fits using graphical methods, calculated estimation errors, and performed the Kolmogorov-Smirnov test. The obtained results showed that a precise VoIP source model can be based on a five-state Markov process.
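A small sketch of such an ON/OFF source with two-phase hyperexponential (H2) durations; the branch probabilities and means are illustrative, not the values fitted in the paper:

import numpy as np

# ON (talk-spurt) and OFF (silence) durations drawn from H2 distributions.

def sample_h2(p1, mean1, mean2, size, rng):
    """Draw H2 samples: Exp(mean1) with probability p1, else Exp(mean2)."""
    branch = rng.random(size) < p1
    return np.where(branch,
                    rng.exponential(mean1, size),
                    rng.exponential(mean2, size))

rng = np.random.default_rng(0)
on = sample_h2(0.7, 0.3, 3.0, 100_000, rng)    # ON durations in seconds
off = sample_h2(0.5, 0.5, 5.0, 100_000, rng)   # OFF durations in seconds

# A squared coefficient of variation above 1 signals more variability than a
# plain exponential, which is what motivates the hyperexponential fit.
for name, dur in (("ON", on), ("OFF", off)):
    scv = np.var(dur) / np.mean(dur) ** 2
    print(f"{name}: mean = {np.mean(dur):.2f} s, squared CoV = {scv:.2f}")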
144
8899
Moment Invariants in Image Analysis
Abstract: This paper aims to present a survey of object
recognition/classification methods based on image moments. We
review various types of moments (geometric moments, complex
moments) and moment-based invariants with respect to various
image degradations and distortions (rotation, scaling, affine
transform, image blurring, etc.) which can be used as shape
descriptors for classification. We explain a general theory of how to construct these invariants and also show a few of them in explicit
forms. We review efficient numerical algorithms that can be used
for moment computation and demonstrate practical examples of
using moment invariants in real applications.
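As a concrete illustration, the first two Hu invariants can be computed directly from normalised central moments; only phi_1 and phi_2 are shown, while the survey covers many more invariants:

import numpy as np

# Geometric moments, normalised central moments and the first two Hu
# invariants of an image, computed directly from their definitions.

def hu_first_two(img):
    img = np.asarray(img, dtype=float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00

    def mu(p, q):                                   # central moment mu_pq
        return ((x - xc) ** p * (y - yc) ** q * img).sum()

    def eta(p, q):                                  # scale-normalised moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2.0)

    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2

# A filled rectangle and its rotated (transposed) copy give the same invariants.
img = np.zeros((32, 32))
img[8:20, 10:26] = 1.0
print(np.round(hu_first_two(img), 5), np.round(hu_first_two(img.T), 5))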
143
5105
Ratio-Dependent Food Chain Models with Three Trophic Levels
Abstract: In this paper we study a food chain model with three trophic levels and a Michaelis-Menten type ratio-dependent functional response. A distinctive feature of this model is the sensitive dependence of the dynamical behavior on the initial populations and parameters of the real world. The stability of the equilibrium points is also investigated.
142
13784
Evaluation of New Product Development Projects using Artificial Intelligence and Fuzzy Logic
Abstract: As a vital activity for companies, new product
development (NPD) is also a very risky process due to the high degree of uncertainty encountered at every development stage and the
inevitable dependence on how previous steps are successfully
accomplished. Hence, there is an apparent need to evaluate new
product initiatives systematically and make accurate decisions under
uncertainty. Another major concern is the time pressure to launch a
significant number of new products to preserve and increase the
competitive power of the company. In this work, we propose an
integrated decision-making framework based on neural networks and
fuzzy logic to make appropriate decisions and accelerate the
evaluation process. We are especially interested in the two initial
stages, where new product ideas are selected (go/no-go decision) and the implementation order of the corresponding projects is determined. We show that this two-stage intelligent approach allows practitioners to quickly and roughly separate good and bad product ideas by making use of previous experience, and then rigorously analyze a shortened list.
141
11503
Advanced Robust PDC Fuzzy Control of Nonlinear Systems
Abstract: This paper introduces a new method called ARPDC (Advanced Robust Parallel Distributed Compensation) for automatic control of nonlinear systems. This method improves the quality of robust control by interpolating between a robust and an optimal controller. The weight of each controller is determined by an original criterion function for model validity and disturbance appreciation. The ARPDC method is based on nonlinear Takagi-Sugeno (T-S) fuzzy systems and the Parallel Distributed Compensation (PDC) control scheme. The relaxed stability conditions of ARPDC control of the nominal system have been derived. The advantages of the presented method are demonstrated on the inverted pendulum benchmark problem. A comparison between three different controllers (robust, optimal and ARPDC) shows that ARPDC control is almost optimal, with robustness close to that of the robust controller. The results indicate that the ARPDC algorithm can be a good alternative not only for robust control, but in some cases also for adaptive control of nonlinear systems.
140
817
Bayesian Belief Networks for Test Driven Development
Abstract: Testing accounts for the major percentage of technical
contribution in the software development process. Typically, it
consumes more than 50 percent of the total cost of developing a
piece of software. The selection of software tests is a very important
activity within this process to ensure the software reliability
requirements are met. Generally tests are run to achieve maximum
coverage of the software code and very little attention is given to the
achieved reliability of the software. Using an existing methodology,
this paper describes how to use Bayesian Belief Networks (BBNs) to
select unit tests based on their contribution to the reliability of the
module under consideration. In particular the work examines how the
approach can enhance test-first development by assessing the quality
of test suites resulting from this development methodology and
providing insight into additional tests that can significantly reduce
the achieved reliability. In this way the method can produce an
optimal selection of inputs and the order in which the tests are
executed to maximize the software reliability. To illustrate this
approach, a belief network is constructed for a modern software
system incorporating the expert opinion, expressed through
probabilities of the relative quality of the elements of the software,
and the potential effectiveness of the software tests. The steps
involved in constructing the Bayesian Network are explained as is a
method to allow for the test suite resulting from test-driven
development.
139
1847
Modeling and Simulations of Complex Low-Dimensional Systems: Testing the Efficiency of Parallelization
Abstract: The deterministic quantum transfer-matrix (QTM)
technique and its mathematical background are presented. This
important tool in computational physics can be applied to a class of
the real physical low-dimensional magnetic systems described by the
Heisenberg hamiltonian, which includes the macroscopic molecular-based spin chains, small-size magnetic clusters embedded in some
supramolecules and other interesting compounds. Using QTM, the
spin degrees of freedom are accurately taken into account, yielding
the thermodynamical functions at finite temperatures.
In order to test the application for the susceptibility calculations to
run in the parallel environment, the speed-up and efficiency of
parallelization are analyzed on our platform SGI Origin 3800 with
p = 128 processor units. Using Message Passing Interface (MPI) system libraries, we find a code efficiency of 94% for p = 128, which makes our application highly scalable.
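As a worked check of the quoted figure (a sketch, with T_1 and T_p denoting the serial and p-processor wall-clock times):

\[
S(p)=\frac{T_1}{T_p},\qquad E(p)=\frac{S(p)}{p},\qquad
E(128)=0.94 \;\Rightarrow\; S(128)=0.94\times 128\approx 120 .
\]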
138
4519
Training Radial Basis Function Networks with Differential Evolution
Abstract: In this paper, the Differential Evolution (DE) algorithm, a new promising evolutionary algorithm, is proposed to train Radial Basis Function (RBF) networks, including the automatic configuration of the network architecture. Classification tasks on the data sets Iris, Wine, New-thyroid, and Glass are conducted to measure the performance of the neural networks. Compared with a standard RBF training algorithm in the Matlab neural network toolbox, DE achieves a more rational architecture for RBF networks. The resulting networks hence obtain strong generalization abilities.
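A compact sketch of the idea, DE/rand/1/bin searching over RBF centers and a common width on a toy regression problem, with output weights fitted by least squares; the data set, network size and DE settings are illustrative, not the paper's:

import numpy as np

# Differential Evolution optimising RBF hidden-unit centers and a shared
# width; output weights are obtained by least squares for each candidate.

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 100)
y = np.sin(2 * X) + 0.1 * rng.normal(size=X.size)
n_hidden = 5

def rbf_design(x, centers, width):
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

def fitness(params):
    centers, width = params[:n_hidden], abs(params[n_hidden]) + 1e-3
    H = rbf_design(X, centers, width)
    w, *_ = np.linalg.lstsq(H, y, rcond=None)      # output weights
    return np.mean((H @ w - y) ** 2)

dim, pop_size, F, CR = n_hidden + 1, 20, 0.7, 0.9
pop = rng.uniform(-3, 3, size=(pop_size, dim))
cost = np.array([fitness(p) for p in pop])

for _ in range(200):                               # DE generations
    for i in range(pop_size):
        idx = rng.choice(np.delete(np.arange(pop_size), i), 3, replace=False)
        a, b, c = pop[idx]
        mutant = a + F * (b - c)                   # mutation
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True            # ensure at least one gene
        trial = np.where(cross, mutant, pop[i])    # binomial crossover
        f = fitness(trial)
        if f <= cost[i]:                           # greedy selection
            pop[i], cost[i] = trial, f

print("best MSE:", round(float(cost.min()), 4))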
137
11472
Automating the Testing of Object Behaviour: A Statechart-Driven Approach
Abstract: The evolution of current modeling specifications gives rise to the problem of generating automated test cases from a variety of application tools. Past endeavours on behavioural testing of UML statecharts have not systematically leveraged the potential of existing graph theory for testing of objects. Therefore there exists a need for a simple, tool-independent, and effective method for automatic test generation. An architecture, codenamed ACUTE-J (Automated stateChart Unit Testing Engine for Java), for automating the unit test generation process is presented. A sequential approach for converting UML statechart diagrams to JUnit test classes is described, with the application of existing graph theory. Research byproducts such as a universal XML Schema and API for statechart-driven testing are also proposed. The result from a Java implementation of ACUTE-J is discussed in brief. The Chinese Postman algorithm is utilised as an illustration for a run-through of the ACUTE-J architecture.
136
6
Hybrid Modeling and Optimal Control of a Two-Tank System as a Switched System
Abstract: In the past decade, because of wide applications of
hybrid systems, many researchers have considered modeling and
control of these systems. Since switching systems constitute an
important class of hybrid systems, in this paper a method for optimal
control of linear switching systems is described. The method is also applied to the two-tank system, which is a very appropriate system for analyzing different modeling and control techniques for hybrid systems. Simulation results show that, in this method, the goals of
control and also problem constraints can be satisfied by an
appropriate selection of cost function.
135
748
Modeling Hybrid Systems with MLD Approach and Analysis of the Model Size and Complexity
Abstract: Recently, a great amount of interest has been shown
in the field of modeling and controlling hybrid systems. One of the
efficient and common methods in this area utilizes mixed logical-dynamical (MLD) systems in the modeling. In this method, the
system constraints are transformed into mixed-integer inequalities by
defining some logic statements. In this paper, a system containing
three tanks is modeled as a nonlinear switched system by using the
MLD framework. Comparing the model size of the three-tank system
with that of a two-tank system, it is deduced that the number of binary variables, the size of the system and its complexity increase tremendously with the number of tanks, which makes the
control of the system more difficult. Therefore, methods should be
found which result in fewer mixed-integer inequalities.
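For reference, a sketch of the standard translation used in MLD modeling to tie a binary variable delta to a linear condition; M and m are upper and lower bounds of a^T x - b over the operating range, epsilon a small positive tolerance, and the symbols are generic, not those of the three-tank model:

\[
[\,a^{\mathsf T}x \le b\,] \leftrightarrow [\,\delta = 1\,]
\quad\Longleftrightarrow\quad
\left\{
\begin{aligned}
a^{\mathsf T}x - b &\le M\,(1-\delta),\\
a^{\mathsf T}x - b &\ge \varepsilon + (m-\varepsilon)\,\delta .
\end{aligned}
\right.
\]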
134
4603
Integrating Low and High Level Object Recognition Steps
Abstract: In pattern recognition applications the low level
segmentation and the high level object recognition are generally
considered as two separate steps. The paper presents a method that
bridges the gap between the low and the high level object
recognition. It is based on a Bayesian network representation and
network propagation algorithm. At the low level it uses hierarchical
structure of quadratic spline wavelet image bases. The method is
demonstrated for a simple circuit diagram component identification
problem.
133
10718
Categorical Clustering By Converting Associated Information
Abstract: The lack of an inherent "natural" dissimilarity measure between objects in a categorical dataset presents special difficulties in clustering analysis. However, each categorical attribute from a given dataset provides natural probability and information in the sense of Shannon. In this paper, we propose a novel method which heuristically converts categorical attributes to numerical values by exploiting such associated information. We conduct an experimental study with a real-life categorical dataset. The experiment demonstrates the effectiveness of our approach.
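One plausible reading of the conversion, sketched below, replaces each categorical value with its Shannon self-information estimated from the attribute's empirical distribution; the paper's exact rule may differ:

import math
from collections import Counter

# Convert a categorical attribute to numbers via self-information
# -log2 p(value), where p is the value's empirical frequency.

def information_encode(column):
    counts = Counter(column)
    n = len(column)
    return [-math.log2(counts[v] / n) for v in column]

colors = ["red", "red", "red", "blue", "green", "blue", "red", "red"]
print(dict(zip(colors, information_encode(colors))))
# Rare values ("green") map to large information values, frequent ones to small.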
132
2779
A Software Framework for Predicting Oil-Palm Yield from Climate Data
Abstract: Intelligent systems based on machine learning techniques, such as classification and clustering, are gaining widespread popularity in real-world applications. This paper presents work on
developing a software system for predicting crop yield, for example
oil-palm yield, from climate and plantation data. At the core of our
system is a method for unsupervised partitioning of data for finding
spatio-temporal patterns in climate data using kernel methods which
offer strength to deal with complex data. This work gets inspiration
from the notion that a non-linear data transformation into some high
dimensional feature space increases the possibility of linear
separability of the patterns in the transformed space. Therefore, it
simplifies exploration of the associated structure in the data. Kernel
methods implicitly perform a non-linear mapping of the input data
into a high dimensional feature space by replacing the inner products
with an appropriate positive definite function. In this paper we
present a robust weighted kernel k-means algorithm incorporating
spatial constraints for clustering the data. The proposed algorithm
can effectively handle noise, outliers and auto-correlation in the
spatial data, for effective and efficient data analysis by exploring
patterns and structures in the data, and thus can be used for
predicting oil-palm yield by analyzing various factors affecting the
yield.
131
477
A Dynamic Composition of an Adaptive Course
Abstract: The number of frameworks conceived for e-learning constantly increases. Unfortunately, the creators of learning materials and the educational institutions engaged in e-learning adopt a "proprietary" approach, where the developed products (courses, activities, exercises, etc.) can be exploited only in the framework where they were conceived; their use in other learning environments requires a costly adaptation in terms of time and effort. Each one proposes courses whose organization, contents, modes of interaction and presentations are the same for all learners; unfortunately, learners are heterogeneous and are not interested in the same information, but only in services or documents adapted to their needs. Currently the new tendency for frameworks conceived for e-learning is the interoperability of learning materials. Several standards exist (DCMI (Dublin Core Metadata Initiative) [2], LOM (Learning Objects Metadata) [1], SCORM (Shareable Content Object Reference Model) [6][7][8], ARIADNE (Alliance of Remote Instructional Authoring and Distribution Networks for Europe) [9], CANCORE (Canadian Core Learning Resource Metadata Application Profiles) [3]); they all converge on the idea of learning objects. They are also interested in the adaptation of learning materials according to the learners' profile. This article proposes an approach for the composition of courses adapted to the various profiles (knowledge, preferences, objectives) of learners, based on two ontologies (the domain to teach and an educational ontology) and on learning objects.
130
11193
Multi-Agent Systems for Intelligent Clustering
Abstract: Intelligent systems are required in order to quickly and accurately analyze enormous quantities of data in the Internet environment. In intelligent systems, information extracting processes can be divided into supervised learning and unsupervised learning. This paper investigates intelligent clustering by unsupervised learning. Intelligent clustering is the clustering system which determines the clustering model for data analysis and evaluates results by itself. This system can make a clustering model more rapidly, objectively and accurately than an analyzer. The methodology for the automatic clustering intelligent system is a multi-agent system that comprises a clustering agent and a cluster performance evaluation agent. An agent exchanges information about clusters with another agent and the system determines the optimal cluster number through this information. Experiments using data sets in the UCI Machine Learning Repository are performed in order to prove the validity of the system.
129
12747
A Multiagent System for Distributed Systems Management
Abstract: The demand for autonomous resource
management for distributed systems has increased in recent
years. Distributed systems require an efficient and powerful
communication mechanism between applications running on
different hosts and networks. The use of mobile agent
technology to distribute and delegate management tasks
promises to overcome the scalability and flexibility limitations
of the currently used centralized management approach. This
work proposes a multiagent system that adopts mobile agents
as a technology for tasks distribution, results collection, and
management of resources in large-scale distributed systems. A
new mobile agent-based approach for collecting results from
distributed system elements is presented. The technique of artificial intelligence based on intelligent agents gives the system a proactive behavior. The presented results are based
on a design example of an application operating in a mobile
environment.
Digital Article Identifier (DOI):
128
5416
XML based Safe and Scalable Multi-Agent Development Framework
Abstract: In this paper we describe our efforts to design and
implement an agent development framework that has the potential to
scale to the size of any underlying network suitable for various
e-commerce activities. The main novelty of our framework is its
capability to allow the development of sophisticated, secure agents
which are simple enough to be practical.
We have adopted the FIPA agent platform reference model as the
backbone for the implementation, along with XML for agent
communication and the Java Cryptography Extension architecture
to secure the information communicated between agents.
The advantage of our architecture is its support for developing
agents in different languages that communicate with each
other using a more open standard, i.e. XML.
Digital Article Identifier (DOI):
127
14884
SUPAR: System for User-Centric Profiling of Association Rules in Streaming Data
Abstract: With the surge of stream processing applications, novel
techniques are required for the generation and analysis of association
rules in streams. Traditional rule mining solutions cannot handle
streams because they generally require multiple passes over the data
and do not guarantee results within a predictable, small time. Though
researchers have been proposing algorithms for the generation of rules
from streams, there has not been much focus on their analysis.
We propose association rule profiling, a user-centric process for
analyzing association rules and attaching suitable profiles to them
depending on their changing frequency behavior over a previous
snapshot of time in a data stream.
Association rule profiles provide insights into the changing nature
of associations and can be used to characterize the associations. We
discuss the importance of characteristics such as the predictability of
linkages present in the data and propose a metric to quantify it. We
also show how association rule profiles can aid in the generation of
user-specific, more understandable and actionable rules.
The framework is implemented as SUPAR: System for User-centric
Profiling of Association Rules in streaming data. The
proposed system offers the following capabilities:
i) Continuous monitoring of frequency of streaming item-sets
and detection of significant changes therein for association rule
profiling.
ii) Computation of metrics for quantifying predictability of
associations present in the data.
iii) User-centric control of the characterization process: user
can control the framework through a) constraint specification and b)
non-interesting rule elimination.
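A minimal sketch of capability (i) above, under the assumption that the stream is processed in fixed-size windows and that a simple support-difference threshold marks a significant change (the actual SUPAR change-detection and profiling logic is not reproduced here):

# Illustrative only: track item-set supports over consecutive windows of a
# stream and flag item-sets whose support moves by more than a threshold.
from collections import Counter
from itertools import combinations


def window_supports(window, max_size=2):
    """Relative support of all item-sets up to max_size in one window."""
    counts = Counter()
    for transaction in window:
        items = sorted(set(transaction))
        for size in range(1, max_size + 1):
            counts.update(combinations(items, size))
    n = len(window)
    return {itemset: c / n for itemset, c in counts.items()}


def significant_changes(prev_window, curr_window, threshold=0.3):
    """Item-sets whose support changed by more than `threshold`."""
    prev, curr = window_supports(prev_window), window_supports(curr_window)
    keys = set(prev) | set(curr)
    return {k: (prev.get(k, 0.0), curr.get(k, 0.0))
            for k in keys
            if abs(curr.get(k, 0.0) - prev.get(k, 0.0)) > threshold}


if __name__ == "__main__":
    w1 = [{"bread", "milk"}, {"bread", "butter"}, {"milk"}]
    w2 = [{"beer", "chips"}, {"beer"}, {"bread", "milk"}]
    print(significant_changes(w1, w2))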
Digital Article Identifier (DOI):
126
7794
Pronominal Anaphora Processing
Abstract: Discourse pronominal anaphora resolution must be part of any efficient information processing system, since the reference of a pronoun depends on an antecedent located in the discourse. Contrary to knowledge-poor approaches, this paper shows that syntax-semantic relations are basic in pronominal anaphora resolution. The identification of quantified expressions to which pronouns can be anaphorically related provides further evidence that pronominal anaphora is based on domains of interpretation where asymmetric agreement holds.
Digital Article Identifier (DOI):
125
12283
Object-Oriented Simulation of Simulating Anticipatory Systems
Abstract: The present paper is oriented to problems of the simulation of anticipatory systems, namely those that use simulation models for the aid of anticipation. A certain analogy between the use of simulation and imagining will be applied to make the explication more comprehensible. The paper is completed by notes on problems and on some existing applications. The problem consists in the fact that simulation of the mentioned anticipatory systems is in fact simulation of simulating systems, i.e. computer models handling two or more modeled time axes that should be mapped to the real time flow in a non-descending manner. Languages oriented to objects, processes and blocks can be used to surmount the problems.
Digital Article Identifier (DOI):
124
8959
Geometric Data Structures and Their Selected Applications
Abstract: Finding the shortest path between two positions is a
fundamental problem in transportation, routing, and communications
applications. In robot motion planning, the robot should pass around
the obstacles touching none of them, i.e. the goal is to find a
collision-free path from a starting to a target position. This task has
many specific formulations depending on the shape of obstacles,
allowable directions of movements, knowledge of the scene, etc.
Research on path planning has yielded many fundamentally different
approaches to its solution, mainly based on various decomposition
and roadmap methods. In this paper, we show a possible use of
visibility graphs in point-to-point motion planning in the Euclidean
plane and an alternative approach using Voronoi diagrams that
decreases the probability of collisions with obstacles. The second
application area, investigated here, is focused on problems of finding
minimal networks connecting a set of given points in the plane using
either only straight connections between pairs of points (minimum
spanning tree) or allowing the addition of auxiliary points to the set
to obtain shorter spanning networks (minimum Steiner tree).
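As a small, self-contained illustration of the second application area, the sketch below computes a Euclidean minimum spanning tree with Kruskal's algorithm; it is a generic textbook construction, not the authors' code, and the Steiner-tree variant (which may add auxiliary points to shorten the network) is not shown:

# Euclidean minimum spanning tree via Kruskal's algorithm with union-find.
from math import dist
from itertools import combinations


def minimum_spanning_tree(points):
    """Return MST edges (i, j) over the complete Euclidean graph."""
    parent = list(range(len(points)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    edges = sorted(combinations(range(len(points)), 2),
                   key=lambda e: dist(points[e[0]], points[e[1]]))
    tree = []
    for i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                        # edge joins two components
            parent[ri] = rj
            tree.append((i, j))
    return tree


if __name__ == "__main__":
    pts = [(0, 0), (1, 0), (0, 1), (2, 2)]
    print(minimum_spanning_tree(pts))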
Digital Article Identifier (DOI):
123
4892
A Hybrid Ontology Based Approach for Ranking Documents
Abstract: The increasing volume of information on the
Internet creates a growing need to develop new (semi-)automatic
methods for the retrieval of documents and for ranking them according
to their relevance to the user query. In this paper, after a brief review
of ranking models, a new ontology-based approach for ranking
HTML documents is proposed and evaluated in various
circumstances. Our approach is a combination of conceptual,
statistical and linguistic methods. This combination preserves the
precision of ranking without losing speed. Our approach
exploits natural language processing techniques to extract phrases
from documents and the query and to stem words. Then
an ontology-based conceptual method is used to annotate
documents and expand the query. To expand a query, the spread
activation algorithm is improved so that the expansion can be done
flexibly and in various aspects. The annotated documents and the
expanded query will be processed to compute the relevance degree
exploiting statistical methods. The outstanding features of our
approach are (1) combining conceptual, statistical and linguistic
features of documents, (2) expanding the query with its related
concepts before comparing to documents, (3) extracting and using
both words and phrases to compute relevance degree, (4) improving
the spread activation algorithm to do the expansion based on
weighted combination of different conceptual relationships and (5)
allowing variable document vector dimensions. A ranking system
called ORank is developed to implement and test the proposed
model. The test results will be included at the end of the paper.
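The following is a hedged sketch of spreading activation over a concept graph for query expansion; the graph, relation weights, decay and threshold below are illustrative placeholders and do not reproduce the paper's improved, weighted-relationship variant:

# Constrained spreading activation from seed (query) concepts.
def spread_activation(graph, seed_concepts, decay=0.5, threshold=0.1, iterations=2):
    """graph: {concept: [(neighbor, relation_weight), ...]}.
    Returns concepts whose activation stays above `threshold`."""
    activation = {c: 1.0 for c in seed_concepts}
    for _ in range(iterations):
        new_activation = dict(activation)
        for concept, level in activation.items():
            for neighbor, weight in graph.get(concept, []):
                pushed = level * weight * decay
                new_activation[neighbor] = max(new_activation.get(neighbor, 0.0), pushed)
        activation = new_activation
    return {c: a for c, a in activation.items() if a >= threshold}


if __name__ == "__main__":
    ontology = {
        "car": [("vehicle", 0.9), ("engine", 0.7)],
        "vehicle": [("transport", 0.8)],
    }
    print(spread_activation(ontology, ["car"]))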
Digital Article Identifier (DOI):
122
6348
Bottom Up Text Mining through Hierarchical Document Representation
Abstract: Most of the existing text mining approaches were
proposed with the transaction database model in mind. Thus, the
mined dataset is structured using just one concept, the "transaction",
whereas the whole dataset is modeled using the "set" abstract type. In
such cases, the structure of the whole dataset and the relationships
among the transactions themselves are not modeled and,
consequently, not considered in the mining process.
We believe that taking into account the structural properties of
hierarchically structured information (e.g. textual documents)
in the mining process can lead to better results. For this purpose, a
hierarchical association rule mining approach for textual documents
is proposed in this paper, and the classical set-oriented mining
approach is reconsidered in favor of a Directed Acyclic Graph (DAG)
oriented approach. Natural language processing techniques are used
in order to obtain the DAG structure. Based on this graph model, a
hierarchical bottom-up algorithm is proposed. The main idea is that
each node is mined with its parent node.
Digital Article Identifier (DOI):
121
1376
A Framework for Query Results Refinement in Multimedia Databases
Abstract: In the current age, retrieval of relevant information
from massive amounts of data is a challenging task. Over the years,
precise and relevant retrieval of information has attained high
significance. There is a growing need in the market to build systems,
which can retrieve multimedia information that precisely meets the
user's current needs. In this paper, we have introduced a framework
for refining query results before showing them to the user, using ambient
intelligence, user profile, group profile, user location, time, day, user
device type and extracted features. A prototypic tool was also
developed to demonstrate the efficiency of the proposed approach.
Digital Article Identifier (DOI):
120
15325
A Proposed Trust Model for the Semantic Web
Abstract: A serious problem on the WWW is finding reliable
information. Not everything found on the Web is true and the
Semantic Web does not change that in any way. The problem will be
even more crucial for the Semantic Web, where agents will be
integrating and using information from multiple sources. Thus, if an
incorrect premise is used due to a single faulty source, then any
conclusions drawn may be in error. Thus, statements published on
the Semantic Web have to be seen as claims rather than as facts, and
there should be a way to decide which among many possibly
inconsistent sources is most reliable. In this work, we propose a trust
model for the Semantic Web. The proposed model is inspired by the
use of trust in human society. Trust is a type of social knowledge and
encodes evaluations about which agents can be taken as reliable
sources of information or services. Our proposed model allows
agents to decide which among different sources of information to
trust and thus act rationally on the semantic web.
Digital Article Identifier (DOI):
119
697
Semantic Mobility Channel (SMC): Ubiquitous and Mobile Computing Meets the Semantic Web
Abstract: With the advent of emerging personal computing paradigms such as ubiquitous and mobile computing, Web contents are becoming accessible from a wide range of mobile devices. Since these devices do not have the same rendering capabilities, Web contents need to be adapted for transparent access from a variety of client agents. Such content adaptation is exploited for either an individual element or a set of consecutive elements in a Web document and results in better rendering and faster delivery to the client device. Nevertheless, Web content adaptation sets new challenges for semantic markup. This paper presents an advanced components platform, called SMC, enabling the development of mobility applications and services according to a channel model based on the principles of Services Oriented Architecture (SOA). It then goes on to describe the potential for integration with the Semantic Web through a novel framework of external semantic annotation that prescribes a scheme for representing semantic markup files and a way of associating Web documents with these external annotations. The role of semantic annotation in this framework is to describe the contents of individual documents themselves, assuring the preservation of the semantics during the process of adapting content rendering. Semantic Web content adaptation is a way of adding value to Web contents and facilitates repurposing of Web contents (enhanced browsing, Web Services location and access, etc).
Digital Article Identifier (DOI):
118
5774
Semantic Modeling of Management Information: Enabling Automatic Reasoning on DMTF-CIM
Abstract: CIM is the standard formalism for modeling management
information developed by the Distributed Management Task
Force (DMTF) in the context of its WBEM proposal, designed to
provide a conceptual view of the managed environment. In this
paper, we propose the inclusion of formal knowledge representation
techniques, based on Description Logics (DLs) and the Web Ontology
Language (OWL), in CIM-based conceptual modeling, and then we
examine the benefits of such a decision. The proposal is specified as a
CIM metamodel level mapping to a highly expressive subset of DLs
capable of capturing all the semantics of the models. The paper shows
how the proposed mapping can be used for automatic reasoning
about the management information models, as a design aid, by means
of new-generation CASE tools, thanks to the use of state-of-the-art
automatic reasoning systems that support the proposed logic and use
algorithms that are sound and complete with respect to the semantics.
Such a CASE tool framework has been developed by the authors and
its architecture is also introduced. The proposed formalization is not
only useful at design time, but also at run time through the use of
rational autonomous agents, in response to a need recently recognized
by the DMTF.
Digital Article Identifier (DOI):
117
15025
Prospects, Problems of Marketing Research and Data Mining in Turkey
Abstract: The objective of this paper is to review and assess the
methodological issues and problems in marketing research, data and
knowledge mining in Turkey. As a summary, academic marketing
research publications in Turkey have significant problems. The most
vital problem seems to be related with modeling. Most of the
publications had major weaknesses in modeling. There were also
serious problems regarding measurement and scaling, sampling and
analyses. Analysis myopia seems to be the most important problem
for young academia in Turkey. Another very important finding is the
lack of publications on data and knowledge mining in the academic
world.
Digital Article Identifier (DOI):
116
12744
The Relevance of Data Warehousing and Data Mining in the Field of Evidence-based Medicine to Support Healthcare Decision Making
Abstract: Evidence-based medicine is a new direction in modern healthcare. Its task is to prevent, diagnose and medicate diseases using medical evidence. Medical data about a large patient population is analyzed to perform healthcare management and medical research. In order to obtain the best evidence for a given disease, external clinical expertise as well as internal clinical experience must be available to the healthcare practitioners at the right time and in the right manner. External evidence-based knowledge cannot be applied directly to the patient without adjusting it to the patient's health condition. We propose a data warehouse based approach as a suitable solution for the integration of external evidence-based data sources into the existing clinical information system, and data mining techniques for finding the appropriate therapy for a given patient and a given disease. Through the integration of data warehousing, OLAP and data mining techniques in the healthcare area, an easy-to-use decision support platform, which supports the decision-making process of caregivers and clinical managers, is built. We present three case studies, which show that a clinical data warehouse that facilitates evidence-based medicine is a reliable, powerful and user-friendly platform for strategic decision making, and that it has great relevance for the practice and acceptance of evidence-based medicine.
Digital Article Identifier (DOI):
115
13987
Efficient Implementation of Serial and Parallel Support Vector Machine Training with a Multi-Parameter Kernel for Large-Scale Data Mining
Abstract: This work deals with aspects of support vector learning for large-scale data mining tasks. Based on a decomposition algorithm that can be run in serial and parallel mode we introduce a data transformation that allows for the usage of an expensive generalized kernel without additional costs. In order to speed up the decomposition algorithm we analyze the problem of working set selection for large data sets and analyze the influence of the working set sizes onto the scalability of the parallel decomposition scheme. Our modifications and settings lead to improvement of support vector learning performance and thus allow using extensive parameter search methods to optimize classification accuracy.
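One way to read the "data transformation" idea is that a Gaussian kernel with one width parameter per feature reduces to a standard RBF kernel on suitably rescaled inputs; the sketch below (using scikit-learn and synthetic data, not the paper's decomposition solver or experiments) shows that equivalence:

# A multi-parameter Gaussian kernel with one width per feature is
# equivalent to a plain RBF kernel (gamma=1) on inputs scaled by 1/width,
# so the generalized kernel costs nothing extra at training time.
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=4, random_state=0)
widths = np.array([0.5, 1.0, 2.0, 4.0])      # one kernel parameter per feature

# Transform once: k(x, z) = exp(-sum_i ((x_i - z_i)/w_i)^2).
X_scaled = X / widths

clf = SVC(kernel="rbf", gamma=1.0, C=10.0).fit(X_scaled, y)
print("training accuracy:", clf.score(X_scaled, y))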
Digital Article Identifier (DOI):
114
13096
Model Discovery and Validation for the QSAR Problem using Association Rule Mining
Abstract: There are several approaches in trying to solve the
Quantitative Structure-Activity Relationship (QSAR) problem.
These approaches are based either on statistical methods or on
predictive data mining. Among the statistical methods, one should
consider regression analysis, pattern recognition (such as cluster
analysis, factor analysis and principal components analysis) or partial
least squares. Predictive data mining techniques use either neural
networks, or genetic programming, or neuro-fuzzy knowledge. These
approaches have low explanatory capability or none at all. This
paper attempts to establish a new approach in solving QSAR
problems using descriptive data mining. This way, the relationship
between the chemical properties and the activity of a substance
would be comprehensibly modeled.
Digital Article Identifier (DOI):
113
11104
Optimized Data Fusion in an Intelligent Integrated GPS/INS System Using Genetic Algorithm
Abstract: Most integrated inertial navigation systems (INS) and
global positioning systems (GPS) have been implemented using the
Kalman filtering technique with its drawbacks related to the need for
a predefined INS error model and the observability of at least four
satellites. Most recently, a method using a hybrid-adaptive network
based fuzzy inference system (ANFIS) has been proposed which is
trained during the availability of GPS signal to map the error
between the GPS and the INS. Then it will be used to predict the
error of the INS position components during GPS signal blockage.
This paper introduces a genetic optimization algorithm that is used to
update the ANFIS parameters with respect to the INS/GPS error
function used as the objective function to be minimized. The results
demonstrate the advantages of the genetically optimized ANFIS for
INS/GPS integration in comparison with conventional ANFIS
especially in the case of satellite outages. Coping with this problem
plays an important role in assessment of the fusion approach in land
navigation.
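A minimal real-coded genetic algorithm in the spirit of the one described, assuming a stand-in objective function; in the paper the objective is the INS/GPS position error of the ANFIS, which is not reproduced here:

import random


def objective(params):
    """Hypothetical surrogate for the INS/GPS error to be minimized."""
    return sum((p - 0.3) ** 2 for p in params)


def genetic_optimize(n_params=6, pop_size=30, generations=50,
                     mutation_rate=0.1, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)
        elite = pop[: pop_size // 2]                  # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_params)          # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g + rng.gauss(0, 0.1) if rng.random() < mutation_rate else g
                     for g in child]                  # Gaussian mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=objective)


if __name__ == "__main__":
    best = genetic_optimize()
    print(best, objective(best))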
Digital Article Identifier (DOI):
112
7572
Application of Neural Network for Contingency Ranking Based on Combination of Severity Indices
Abstract: In this paper, an improved technique for contingency
ranking using artificial neural network (ANN) is presented. The
proposed approach is based on multi-layer perceptrons trained by
backpropagation for contingency analysis. Severity indices in dynamic
stability assessment are presented. These indices are based on the
concept of coherency and three dot products of the system variables.
It is well known that some indices work better than others for a
particular power system. This paper, along with test results using
several different systems, demonstrates that a combination of indices
with an ANN provides better ranking than a single index. The presented
results are obtained through the use of power system simulation
(PSS/E) and MATLAB 6.5 software.
Digital Article Identifier (DOI):
111
11035
Improved Weighted Matching for Speaker Recognition
Abstract: Matching algorithms have significant importance in
speaker recognition. Feature vectors of the unknown utterance are
compared to feature vectors of the modeled speakers as a last step in
speaker recognition. A similarity score is found for every model in
the speaker database. Depending on the type of speaker recognition,
these scores are used to determine the author of unknown speech
samples. For speaker verification, similarity score is tested against a
predefined threshold and either acceptance or rejection result is
obtained. In the case of speaker identification, the result depends on
whether the identification is open set or closed set. In closed set
identification, the model that yields the best similarity score is
accepted. In open set identification, the best score is tested against a
threshold, so there is one more possible output satisfying the
condition that the speaker is not one of the registered speakers in
existing database. This paper focuses on closed set speaker
identification using a modified version of a well known matching
algorithm. The results of the new matching algorithm indicated better
performance on the YOHO international speaker recognition database.
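For illustration only, a generic closed-set identification loop with a weighted distance score (the paper's specific modification of the matching algorithm and the YOHO data are not reproduced; models, weights and features below are synthetic):

# Each speaker model is a codebook of feature vectors; the test utterance is
# scored against every model and the lowest-scoring model is accepted.
import numpy as np


def weighted_match_score(test_frames, codebook, weights):
    """Average weighted distance from each test frame to its closest codeword."""
    diffs = test_frames[:, None, :] - codebook[None, :, :]        # (T, C, D)
    dists = np.sqrt(((diffs ** 2) * weights).sum(axis=2))          # weighted L2
    return dists.min(axis=1).mean()


def identify(test_frames, models, weights):
    scores = {name: weighted_match_score(test_frames, cb, weights)
              for name, cb in models.items()}
    return min(scores, key=scores.get), scores


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    models = {"spk_a": rng.normal(0, 1, (16, 12)),
              "spk_b": rng.normal(2, 1, (16, 12))}
    test = rng.normal(0, 1, (40, 12))      # synthetic utterance, closer to spk_a
    print(identify(test, models, np.ones(12))[0])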
Digital Article Identifier (DOI):
110
1297
A Case Study: Experiences with Building an Online Exhibition System using Web Services
Abstract: We present an implementation of an Online Exhibition System (OES) web service(s) that reflects our experiences with using web service development packages and software process models. The system provides major functionality that exists in similar packages. While developing such a complex web service, we gained insightful experience (i) in the traditional software development processes: waterfall model and evolutionary development and their fitness to web services development, (ii) in the fitness and effectiveness of a major web services development kit.
Digital Article Identifier (DOI):
109
205
Routing in Mobile Wireless Networks for Realtime Multimedia Applications- Reuse of Virtual Circuits
Abstract: Routing plays an important role in determining the
quality of service in wireless networks. The routing methods adopted
in wireless networks have many drawbacks. This paper aims to
review the current routing methods used in wireless networks. This
paper proposes an innovative solution to overcome the problems in
routing. This solution is aimed at improving the Quality of Service.
This solution is different from others as it involves the reuse of
part of the virtual circuits. This improvement in quality of service is
important especially in the propagation of multimedia applications such
as video, animations, etc. There is thus a dire need for a new solution
to improve the quality of service in ATM wireless networks for
multimedia applications especially during this era of multimedia
based applications.
Digital Article Identifier (DOI):
108
5449
Entropy Based Data Hiding for Document Images
Abstract: In this paper we present a novel technique for data
hiding in binary document images. We use the concept of entropy in
order to identify document-specific, least-distortive areas throughout
the binary document image. The document image is treated as any
other image and the proposed method utilizes the standard document
characteristics for the embedding process. Proposed method
minimizes perceptual distortion due to embedding and allows
watermark extraction without the requirement of any side information
at the decoder end.
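A sketch of the block-selection step, under the assumption that higher local entropy marks busier, less distortion-sensitive regions of the binary image; the embedding and extraction steps themselves are omitted:

import numpy as np


def block_entropy(block):
    """Shannon entropy of a binary block (0 = uniform, 1 = maximally mixed)."""
    p = block.mean()
    if p in (0.0, 1.0):
        return 0.0
    return float(-(p * np.log2(p) + (1 - p) * np.log2(1 - p)))


def candidate_blocks(image, block_size=8, top_k=16):
    """Return coordinates of the top_k highest-entropy blocks."""
    h, w = image.shape
    scored = []
    for y in range(0, h - block_size + 1, block_size):
        for x in range(0, w - block_size + 1, block_size):
            blk = image[y:y + block_size, x:x + block_size]
            scored.append((block_entropy(blk), (y, x)))
    scored.sort(reverse=True)
    return [pos for _, pos in scored[:top_k]]


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    doc = (rng.random((64, 64)) > 0.8).astype(np.uint8)   # toy binary "document"
    print(candidate_blocks(doc)[:4])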
Digital Article Identifier (DOI):
107
6905
STRPRO Tool for Manipulation of Stratified Programs Based on SEPN
Abstract: Negation is useful in the majority of real-world applications. However, its introduction leads to semantic and canonical problems. SEPN nets are a well-adapted extension of predicate nets for the definition and manipulation of stratified programs. This formalism is characterized by two main contributions. The first concerns the management of the whole class of stratified programs. The second contribution is related to the optimization of usual operations (maximal stratification, incremental updates, ...). We propose, in this paper, useful algorithms for manipulating stratified programs using SEPN. These algorithms were implemented and validated with the STRPRO tool.
Digital Article Identifier (DOI):
106
9949
New Approach for Manipulation of Stratified Programs
Abstract: Negation is useful in the majority of real-world applications. However, its introduction leads to semantic and canonical problems. We propose in this paper an approach based on stratification to deal with negation problems. This approach is based on an extension of predicate nets. It is characterized by two main contributions. The first concerns the management of the whole class of stratified programs. The second contribution is related to the optimization of usual operations on stratified programs (maximal stratification, incremental updates, ...).
Digital Article Identifier (DOI):
105
6913
Diffusion of Mobile Entertainment in Malaysia: Drivers and Barriers
Abstract: This research aims to examine the key success factors
for the diffusion of mobile entertainment services in Malaysia. The
drivers and barriers observed in this research include perceived
benefit; concerns pertaining to pricing, product and technological
standardization, privacy and security; as well as influences from
peers and community. An analysis of a Malaysian survey of 384
respondents aged 18 to 25 shows that subscribers placed
greater importance on perceived benefit of mobile entertainment
services compared to other factors. Results of the survey also show
that there are strong positive correlations between all the factors,
with pricing issue–perceived benefit showing the strongest
relationship. This paper aims to provide an extensive study on the
drivers and barriers that could be used to derive architecture for
entertainment service provision to serve as a guide for telcos to
outline suitable approaches in order to encourage mass market
adoption of mobile entertainment services in Malaysia.
Digital Article Identifier (DOI):
104
13778
FPGA Implementation of the “PYRAMIDS“ Block Cipher
Abstract: The "PYRAMIDS" block cipher is a symmetric encryption algorithm with a length of 64, 128, or 256 bits that accepts a variable key length of 128, 192, or 256 bits. The algorithm is an iterated cipher consisting of repeated applications of a simple round transformation with different operations and a different sequence in each round. The algorithm was previously implemented in software in Cµ code. In this paper, a hardware implementation of the algorithm, using Field Programmable Gate Arrays (FPGA), is presented. In this work, we discuss the algorithm, the implemented micro-architecture, and the simulation and implementation results. Moreover, we present a detailed comparison with other implemented standard algorithms. In addition, we include the floor plan as well as the circuit diagrams of the various micro-architecture modules.
Digital Article Identifier (DOI):
103
5497
Design of a Neural Networks Classifier for Face Detection
Abstract: Face detection and recognition has many applications
in a variety of fields such as security system, videoconferencing and
identification. Face classification is currently implemented in
software. A hardware implementation allows real-time processing,
but has higher cost and time-to-market.
The objective of this work is to implement a classifier based on
neural networks MLP (Multi-layer Perceptron) for face detection.
The MLP is used to classify face and non-face patterns. The system is
described using the C language on a P4 (2.4 GHz) to extract the weight
values. A hardware implementation is then achieved using a VHDL-based
methodology. We target a Xilinx FPGA as the implementation
platform.
Digital Article Identifier (DOI):
102
12008
Enhancing capabilities of Texture Extraction for Color Image Retrieval
Abstract: Content-Based Image Retrieval has been a major area
of research in recent years. Efficient image retrieval with high
precision would require an approach which combines usage of both
the color and texture features of the image. In this paper we propose
a method for enhancing the capabilities of texture based feature
extraction and further demonstrate the use of these enhanced texture
features in Texture-Based Color Image Retrieval.
Digital Article Identifier (DOI):
101
1597
Techniques for Video Mosaicing
Abstract: Video Mosaicing is the stitching of selected frames of
a video by estimating the camera motion between the frames and
thereby registering successive frames of the video to arrive at the
mosaic. Different techniques have been proposed in the literature for
video mosaicing. Despite the large number of papers dealing with
techniques to generate mosaics, only a few authors have investigated
the conditions under which these techniques generate good estimates of
the motion parameters. In this paper, these techniques are studied on
different videos, and the reasons for failures are identified. We propose
algorithms that incorporate outlier removal for better
estimation of the motion parameters.
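As one possible instantiation of frame-to-frame motion estimation with outlier removal, the sketch below uses ORB features and RANSAC-based homography fitting from OpenCV; the frame paths are placeholders and this is not the specific algorithm proposed in the paper:

import cv2
import numpy as np


def estimate_homography(frame1, frame2):
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(frame1, None)
    kp2, des2 = orb.detectAndCompute(frame2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    matches = sorted(matches, key=lambda m: m.distance)[:200]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC discards outlier correspondences before fitting the homography.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H, int(inlier_mask.sum())


if __name__ == "__main__":
    f1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
    f2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)
    H, inliers = estimate_homography(f1, f2)
    print(H, inliers)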
Digital Article Identifier (DOI):
100
2459
VISUAL JESS: An Expandable Visual Generator of Object-Oriented Expert Systems
Abstract: The utility of expert system generators has been
widely recognized in many applications. Several generators based on
the object paradigm have recently been proposed. Generators of
object-oriented expert systems (GSEOO) offer
languages that are often complex and difficult to use. We propose in
this paper an extension of the expert system generator JESS which
permits a friendlier use of this expert system. The new tool, called
VISUAL JESS, brings two main improvements to JESS. The first
improvement concerns ease of use while keeping the syntax and
semantics of the JESS programming language transparent. The second
improvement permits easy access to and
modification of the JESS knowledge base. The
implementation of VISUAL JESS is made so that it is extensible and
portable.
Digital Article Identifier (DOI):
99
4391
Unsupervised Texture Classification and Segmentation
Abstract: An unsupervised classification algorithm is derived
by modeling observed data as a mixture of several mutually
exclusive classes that are each described by linear combinations of
independent non-Gaussian densities. The algorithm estimates the
data density in each class by using parametric nonlinear functions
that fit to the non-Gaussian structure of the data. This improves
classification accuracy compared with standard Gaussian mixture
models. When applied to textures, the algorithm can learn basis
functions for images that capture the statistically significant structure
intrinsic in the images. We apply this technique to the problem of
unsupervised texture classification and segmentation.
Digital Article Identifier (DOI):
98
3651
Using Fuzzy Controller in Induction Motor Speed Control with Constant Flux
Abstract: Variable speed drives are growing in number and variety. Their expansion depends on progress in different areas of science such as power systems, microelectronics, and control methods. Artificial intelligence encompasses hard computation and soft computation, and it has found wide application in nonlinear systems such as motor drives, because it offers human-like intelligence without human sentiments such as anger. Artificial intelligence is used for various purposes such as approximation, control, and monitoring. Because artificial intelligence techniques can be used as controllers for any system without requiring a mathematical model of the system, they have been used in electrical drive control. In this manner, the efficiency and reliability of drives increase, while their volume, weight and cost decrease.
Digital Article Identifier (DOI):
97
2689
Energy Consumption Analysis of Design Patterns
Abstract: The importance of low power consumption is widely
acknowledged due to the increasing use of portable devices, which
require minimizing the consumption of energy. Energy dissipation is
heavily dependent on the software used in the system. Applying
design patterns in object-oriented designs is a common practice
nowadays. In this paper we analyze six design patterns and explore
the effect of them on energy consumption and performance.
Digital Article Identifier (DOI):
96
13915
A Novel Approach to Persian Online Hand Writing Recognition
Abstract: Persian (Farsi) script is totally cursive and each character is written in several different forms depending on its preceding and following characters in the word. These complexities make automatic handwriting recognition of Persian a very hard problem, and there are few contributions trying to work it out. This paper presents a novel practical approach to online recognition of Persian handwriting which is based on the representation of inputs and patterns with very simple visual features and the comparison of these simple terms. This recognition approach was tested over a set of Persian words, and the results were quite acceptable when the possible words were unknown, and almost all correct in cases where the words were chosen from a prespecified list.
Digital Article Identifier (DOI):
95
12030
Optimizing Allocation of Two Dimensional Irregular Shapes using an Agent Based Approach
Abstract: Packing problems arise in a wide variety of application
areas. The basic problem is that of determining an efficient arrangement
of different objects in a region without any overlap and
with minimal wasted gap between shapes. This paper presents a
novel population based approach for optimizing arrangement of irregular
shapes. In this approach, each shape is coded as an agent and
the agents' reproduction and grouping policies result in arrangements
of the objects in positions with the least wasted area between
them. The approach is implemented in an application for cutting
sheets and test results on several problems from literature are presented.
Digital Article Identifier (DOI):
94
14005
Molecular Evolutionary Analysis of Yeast Protein Interaction Network
Abstract: To understand life as a biological system, evolutionary
understanding is indispensable. Protein interactions data are rapidly
accumulating and are suitable for system-level evolutionary analysis.
We have analyzed yeast protein interaction network by both
mathematical and biological approaches. In this poster presentation,
we inferred the evolutionary birth periods of yeast proteins by
reconstructing phylogenetic profiles. It has been thought that hub
proteins that have a high connection degree are evolutionarily old. But
our analysis showed that hub proteins are entirely evolutionarily new.
We also examined the evolutionary processes of protein complexes. It
showed that member proteins of complexes tended to have
appeared in the same evolutionary period. Our results suggested that
protein interaction network evolved by modules that form the
functional unit. We also reconstructed standardized phylogenetic trees
and calculated evolutionary rates of yeast proteins. It showed that
there is no obvious correlation between evolutionary rates and
connection degrees of yeast proteins.
Digital Article Identifier (DOI):
93
9755
Analysis of Medical Data using Data Mining and Formal Concept Analysis
Abstract: This paper focuses on analyzing medical diagnostic data using classification rules in data mining and context reduction in formal concept analysis. It helps in finding redundancies among the various medical examination tests used in diagnosis of a disease. Classification rules have been derived from positive and negative association rules using the Concept lattice structure of the Formal Concept Analysis. Context reduction technique given in Formal Concept Analysis along with classification rules has been used to find redundancies among the various medical examination tests. Also it finds out whether expensive medical tests can be replaced by some cheaper tests.
Digital Article Identifier (DOI):
92
10322
Performance Evaluation of Single-mode and Multimode Fiber in LAN Environment
Abstract: Optical networks are high capacity networks that meet
the rapidly growing demand for bandwidth in the terrestrial
telecommunications industry. This paper studies and evaluates single-mode
and multimode fiber transmission by varying the distance. It
focuses on their performance in LAN environment. This is achieved
by observing the pulse spreading and attenuation in optical spectrum
and eye-diagram that are obtained using OptSim simulator. The
behaviors of two modes with different distance of data transmission
are studied, evaluated and compared.
Digital Article Identifier (DOI):
91
5008
Evaluation of Handover Latency in Intra-Domain Mobility
Abstract: Mobile IPv6 (MIPv6) describes how a mobile node can change its point of attachment from one access router to another. As the demand for wireless mobile devices increases, many enhancements to macro-mobility (inter-domain) protocols have been proposed, designed and implemented in Mobile IPv6. Hierarchical Mobile IPv6 (HMIPv6) is one of them; it is designed to reduce the amount of signaling required and to improve handover speed for mobile connections. This is achieved by introducing a new network entity called the Mobility Anchor Point (MAP). This report presents a comparative study of the Hierarchical Mobile IPv6 and Mobile IPv6 protocols, with the scope narrowed down to micro-mobility (intra-domain). The architecture and operation of each protocol are studied, and they are evaluated based on the Quality of Service (QoS) parameter of handover latency. The simulation was carried out using the Network Simulator-2, and the outcome is discussed. The results show that HMIPv6 performs best under intra-domain mobility compared to MIPv6, while MIPv6 suffers large handover latency. As an enhancement, we propose for HMIPv6 to locate the MAP in the middle of the domain with respect to all access routers. That gives approximately the same, and possibly shorter, distance between the MAP and the Mobile Node (MN) regardless of the new location of the MN. This will reduce the delay since the distance is shorter. As future work, performance analysis is to be carried out for the proposed HMIPv6 and compared to the original HMIPv6.
Digital Article Identifier (DOI):
90
2248
New Approach for the Modeling and the Implementation of the Object-Relational Databases
Abstract: Design is the primordial part of the realization of
a computer system. Several tools have been used to help designers
describe their software. These tools have had great success in the
relational database domain, since they can generate an SQL script
modeling the database from an Entity/Association model. However,
with the evolution of the computing domain, relational databases
have shown their limits and the object-relational model has become
used more and more. Current design tools do not support all the new
concepts introduced by this model or the syntax of the SQL3 language.
We propose in this paper a tool to assist in the design and
implementation of object-relational databases, called "NAVIGTOOLS",
that allows the user to generate a script modeling its database
in the SQL3 language. This tool is based on the Entity/Association
and navigational models for modeling object-relational databases.
Digital Article Identifier (DOI):
89
13754
Contribution to the Query Optimization in the Object-Oriented Databases
Abstract: Object-oriented database management systems, which
appeared around 1986, had not met with success five years after
their birth. One of the major difficulties is query optimization.
We propose in this paper a new approach that enriches the
query optimization techniques existing in object-oriented
databases. Given the success of query optimization in the
relational model, our approach draws on these optimization
techniques and enriches them so that they can support the new
concepts introduced by object databases.
Digital Article Identifier (DOI):
88
10022
Genetic Programming Approach to Hierarchical Production Rule Discovery
Abstract: Automated discovery of hierarchical structures in
large data sets has been an active research area in the recent past.
This paper focuses on the issue of mining generalized rules with crisp
hierarchical structure using Genetic Programming (GP) approach to
knowledge discovery. The post-processing scheme presented in this
work uses flat rules as initial individuals of GP and discovers
hierarchical structure. Suitable genetic operators are proposed for the
suggested encoding. Based on the Subsumption Matrix (SM), an
appropriate fitness function is suggested. Finally, Hierarchical
Production Rules (HPRs) are generated from the discovered
hierarchy. Experimental results are presented to demonstrate the
performance of the proposed algorithm.
Digital Article Identifier (DOI):
87
2068
Moving From Problem Space to Solution Space
Abstract: Extracting and elaborating software requirements and
transforming them into a viable software architecture is still an
intricate task. This paper defines a solution architecture which is
based on the blurred amalgamation of the problem space and solution
space. The dependencies between domain constraints, requirements
and architecture, and their importance, are described; these are to be
considered collectively while evolving from the problem space to the
solution space. This paper proposes a revised version of the Twin Peaks
Model, named the Win Peaks Model, that reconciles software
requirements and architecture in a more consistent and adaptable
manner. Further, the conflict between stakeholders' win-requirements
is resolved by the proposed voting methodology, which is a simple
adaptation of the win-win requirements negotiation model and QARCC.
Digital Article Identifier (DOI):
86
11364
A Distinguishing Attack on the COSvd Cipher
Abstract: The COSvd cipher was proposed by Filiol et al. (2004). It is a strengthened version of the COS stream cipher family, denoted COSvd, that has been adopted for at least one commercial standard. We propose a distinguishing attack on this version and prove that it is distinguishable from a random stream. The COSvd cipher uses one S-Box (10×8) in the final part of the cipher. We focus on this S-Box and use its weakness for the distinguishing attack. In addition, we found a leak in the HNLL: the sub S-boxes are not selected uniformly. We use this property for an improved distinguishing attack.
Digital Article Identifier (DOI):
85
3873
A New Proxy Signature Scheme As Secure As ElGamal Signature
Abstract: Proxy signature helps the proxy signer to sign
messages on behalf of the original signer. It is very useful when
the original signer (e.g. the president of a company) is not
available to sign a specific document. If the original signer cannot
forge valid proxy signatures by impersonating the proxy
signer, the scheme will be robust in a virtual environment; thus the original
signer cannot shift any illegal action initiated by herself to the
proxy signer. In this paper, we propose a new proxy signature
scheme. The new scheme can prevent the original signer from
impersonating the proxy signer to sign messages. The proposed
scheme is based on the regular ElGamal signature. In addition,
the fair privacy of the proxy signer is maintained. That means,
the privacy of the proxy signer is preserved; and the privacy can
be revealed when it is necessary.
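For reference, a toy version of the underlying regular ElGamal signature on which the scheme builds (tiny, insecure parameters; the proxy-delegation and fair-privacy mechanisms of the proposed scheme are not reproduced):

import hashlib
import math
import random

p = 467           # small prime, for illustration only (insecure)
g = 2             # base modulo p
x = 127           # private key
y = pow(g, x, p)  # public key


def h(message: bytes) -> int:
    return int(hashlib.sha256(message).hexdigest(), 16) % (p - 1)


def sign(message: bytes):
    while True:
        k = random.randrange(2, p - 1)
        if math.gcd(k, p - 1) == 1:        # k must be invertible mod p-1
            break
    r = pow(g, k, p)
    s = ((h(message) - x * r) * pow(k, -1, p - 1)) % (p - 1)
    return r, s


def verify(message: bytes, r: int, s: int) -> bool:
    if not (0 < r < p):
        return False
    # Check g^H(m) == y^r * r^s (mod p).
    return pow(g, h(message), p) == (pow(y, r, p) * pow(r, s, p)) % p


if __name__ == "__main__":
    sig = sign(b"delegation warrant")
    print(verify(b"delegation warrant", *sig))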
Digital Article Identifier (DOI):
84
5552
Hybrid Intelligent Intrusion Detection System
Abstract: Intrusion Detection Systems are increasingly a key
part of systems defense. Various approaches to Intrusion Detection
are currently being used, but they are relatively ineffective. Artificial
Intelligence plays a driving role in security services. This paper
proposes a dynamic model of an Intelligent Intrusion Detection System
based on a specific AI approach to intrusion detection. The
techniques being investigated include neural networks and
fuzzy logic with network profiling, which uses simple data mining
techniques to process the network data. The proposed system is a
hybrid system that combines anomaly, misuse and host-based
detection. Simple fuzzy rules allow us to construct if-then rules that
reflect common ways of describing security attacks. For host-based
intrusion detection we use neural networks along with self-organizing
maps. Suspicious intrusions can be traced back to their
original source path, and any traffic from that particular source will
be redirected back to it in the future. Both network traffic and system
audit data are used as inputs.
Digital Article Identifier (DOI):
83
8945
Neural-Symbolic Machine-Learning for Knowledge Discovery and Adaptive Information Retrieval
Abstract: In this paper, a model for an information retrieval
system is proposed which takes into account that knowledge about
documents and information need of users are dynamic. Two
methods are combined, one qualitative or symbolic and the other
quantitative or numeric, which are deemed suitable for many
clustering contexts, data analysis, concept exploring and
knowledge discovery. These two methods may be classified as
inductive learning techniques. In this model, they are introduced to
build "long term" knowledge about past queries and concepts in a
collection of documents. The "long term" knowledge can guide
and assist the user to formulate an initial query and can be
exploited in the process of retrieving relevant information. The
different kinds of knowledge are organized in different points of
view. This may be considered an enrichment of the exploration
level which is coherent with the concept of document/query
structure.
Digital Article Identifier (DOI):
82
7207
Techniques with Statistics for Web Page Watermarking
Abstract: Information hiding, especially watermarking, is a
promising technique for the protection of intellectual property rights.
This technology has mainly been advanced for multimedia, but the same
has not been done for text. Web pages, like other documents, need
protection against piracy. In this paper, some techniques are
proposed to show how to hide information in web pages using some
features of the markup language used to describe these pages. Most
of the techniques proposed here use white space to hide
information, or exploit permitted variations in how the language
represents elements. Experiments on a very small page and analysis of five
thousand web pages show that these techniques have a wide
bandwidth available for information hiding, and they might form a
solid base to develop a robust algorithm for web page watermarking.
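A minimal sketch of one technique in this family, hiding bits in trailing white space at the ends of HTML lines (space for 0, tab for 1); this illustrates the general idea rather than the exact schemes evaluated in the paper:

def embed(html: str, bits: str) -> str:
    """Append a space (bit 0) or a tab (bit 1) to the first len(bits) lines."""
    lines = html.splitlines()
    if len(bits) > len(lines):
        raise ValueError("page too small for this payload")
    out = []
    for i, line in enumerate(lines):
        mark = (" " if bits[i] == "0" else "\t") if i < len(bits) else ""
        out.append(line.rstrip() + mark)
    return "\n".join(out)


def extract(html: str, n_bits: int) -> str:
    """Read the hidden bits back from the trailing whitespace."""
    return "".join("1" if line.endswith("\t") else "0"
                   for line in html.splitlines()[:n_bits])


if __name__ == "__main__":
    page = "<html>\n<body>\n<p>hello</p>\n<p>world</p>\n</body>\n</html>"
    marked = embed(page, "1011")
    print(extract(marked, 4))    # -> 1011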
Digital Article Identifier (DOI):
81
13943
Evolving Neural Networks using Moment Method for Handwritten Digit Recognition
Abstract: This paper proposes a neural network weight and
topology optimization using genetic evolution and the
backpropagation training algorithm. The proposed crossover and
mutation operators aim to adapt the network architectures and
weights during the evolution process. Through a specific inheritance
procedure, the weights are transmitted from the parents to their
offspring, which allows the re-exploitation of the already trained
networks and hence accelerates the global convergence of the
algorithm. In the preprocessing phase, a new feature extraction
method is proposed based on Legendre moments with the Maximum
Entropy Principle (MEP) as a selection criterion. This allows a global
search space reduction in the design of the networks. The proposed
method has been applied and tested on the well-known MNIST
database of handwritten digits.
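A small sketch of Legendre-moment feature extraction using the standard definition of the moments (the MEP-based selection of moments and the evolved network itself are not reproduced; the toy image stands in for an MNIST digit):

import numpy as np
from numpy.polynomial import legendre


def legendre_poly(order, t):
    """Evaluate the Legendre polynomial P_order at points t."""
    coeffs = np.zeros(order + 1)
    coeffs[order] = 1.0
    return legendre.legval(t, coeffs)


def legendre_moments(image, max_order=6):
    """Moments lambda_pq for p+q <= max_order, with pixels mapped to [-1, 1]."""
    h, w = image.shape
    ys, xs = np.linspace(-1, 1, h), np.linspace(-1, 1, w)
    dy, dx = 2.0 / (h - 1), 2.0 / (w - 1)
    moments = np.zeros((max_order + 1, max_order + 1))
    for p in range(max_order + 1):
        Pp = legendre_poly(p, ys)
        for q in range(max_order + 1 - p):
            Pq = legendre_poly(q, xs)
            norm = (2 * p + 1) * (2 * q + 1) / 4.0
            moments[p, q] = norm * (Pp[:, None] * Pq[None, :] * image).sum() * dy * dx
    return moments


if __name__ == "__main__":
    digit = np.zeros((28, 28))
    digit[10:18, 12:16] = 1.0          # toy "stroke" instead of a real digit
    print(legendre_moments(digit)[:3, :3])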
Digital Article Identifier (DOI):
80
12258
An Artificial Immune System for a Multi Agent Robotics System
Abstract: This paper explores an application of an adaptive learning mechanism for robots based on the natural immune system. Most of the research carried out so far is based either on the innate or the adaptive characteristics of the immune system; we present a combination of these to achieve behavior arbitration wherein a robot learns to detect vulnerable areas of a track and adapts to the required speed over such portions. The test bed comprises two Lego robots deployed simultaneously on two predefined, nearly concentric tracks, with the outer robot capable of helping the inner one when it misaligns. The helper robot works in a damage-control mode by realigning itself to guide the other robot back onto its track. The panic-stricken robot records the conditions under which it was misaligned and learns to detect and adapt under similar conditions, thereby making the overall system immune to such failures.
Digital Article Identifier (DOI):
79
1755
Slovenian Text-to-Speech Synthesis for Speech User Interfaces
Abstract: The paper presents the design concept of a unit-selection
text-to-speech synthesis system for the Slovenian language.
Due to its modular and upgradable architecture, the system can be
used in a variety of speech user interface applications, ranging from
server carrier-grade voice portal applications, desktop user interfaces
to specialized embedded devices.
Since memory and processing power requirements are important
factors for a possible implementation in embedded devices, lexica
and speech corpora need to be reduced. We describe a simple and
efficient implementation of a greedy subset selection algorithm that
extracts a compact subset of high coverage text sentences. The
experiment on a reference text corpus showed that the subset
selection algorithm produced a compact sentence subset with a small
redundancy.
The adequacy of the spoken output was evaluated by several
subjective tests as they are recommended by the International
Telecommunication Union ITU.
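The greedy subset selection can be sketched as a standard greedy set-cover loop; here words stand in for the phonetic units that a TTS corpus would actually track, so this is an illustration of the idea rather than the authors' implementation:

def greedy_cover(sentences):
    """Repeatedly pick the sentence that adds the most uncovered units."""
    units = [set(s.lower().split()) for s in sentences]
    target = set().union(*units)
    covered, chosen = set(), []
    while covered != target:
        best = max(range(len(sentences)),
                   key=lambda i: len(units[i] - covered))
        if not units[best] - covered:          # nothing new can be added
            break
        chosen.append(sentences[best])
        covered |= units[best]
    return chosen


if __name__ == "__main__":
    corpus = [
        "the quick brown fox",
        "the lazy dog",
        "quick brown dogs run",
        "a fox and a dog",
    ]
    print(greedy_cover(corpus))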
Digital Article Identifier (DOI):
78
7414
Utilizing Biological Models to Determine the Recruitment of the Irish Republican Army
Abstract: Sociological models (e.g., social network analysis, small-group dynamics and gang models) have historically been used to predict the behavior of terrorist groups. However, they may not be the most appropriate method for understanding the behavior of terrorist organizations, because these models were not initially intended to incorporate the violent behavior of their subjects. Rather, models that incorporate life-and-death competition between subjects, i.e., models utilized by scientists to examine the behavior of wildlife populations, may provide a more accurate analysis. This paper suggests the use of biological models to attain a more robust method for understanding the behavior of terrorist organizations compared to traditional methods. This study also describes how a biological population model incorporating predator-prey behavior factors can predict terrorist organizational recruitment behavior, for the purpose of understanding the factors that govern the growth and decline of terrorist organizations. The Lotka-Volterra model, which is based on a predator-prey relationship, is applied to a highly suggestive case study, that of the Irish Republican Army. This case study illuminates how a biological model can be utilized to understand the actions of a terrorist organization.
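For reference, the classical Lotka-Volterra predator-prey equations, dx/dt = a*x - b*x*y and dy/dt = -c*y + d*x*y, can be integrated numerically as below; the parameter values and initial populations are purely illustrative and are not calibrated to the Irish Republican Army case study (assumes SciPy):

from scipy.integrate import solve_ivp
import numpy as np


def lotka_volterra(t, state, a=1.0, b=0.1, c=1.5, d=0.075):
    x, y = state                      # x: "prey" pool, y: "predator" pool
    return [a * x - b * x * y, -c * y + d * x * y]


sol = solve_ivp(lotka_volterra, t_span=(0, 30), y0=[10.0, 5.0],
                t_eval=np.linspace(0, 30, 301))
print(sol.y[:, -1])                   # population sizes at the final time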
Digital Article Identifier (DOI):
77
8929
The Role of Classroom Management Efficacy in Predicting Teacher Burnout
Abstract: The purpose of this study was to examine to what
extent classroom management efficacy, marital status, gender, and
teaching experience predict burnout among primary school teachers.
Participants of this study were 523 (345 female, 178 male) teachers
who completed inventories. The results of multiple regression
analysis indicated that three dimensions of teacher burnout
(Emotional Exhaustion, Depersonalization, Personal
Accomplishment) were affected differently from four predictor
variables. Findings indicated that for the emotional exhaustion,
classroom management efficacy, marital status and teaching
experience; for depersonalization dimension, classroom management
efficacy and marital status and finally for the personal
accomplishment dimension, classroom management efficacy, gender,
and teaching experience were significant predictors.
Digital Article Identifier (DOI):
76
13419
Research on Self-Perceptions of Pre-Service Turkish Language Teachers in Turkey with Regard to Problem Solving Skills
Abstract: The aim of this research is to determine how pre-service Turkish teachers perceive themselves in terms of problem solving skills. Students attending the Department of Turkish Language Teaching of Gazi University Education Faculty in the 2005-2006 academic year constitute the study group (n = 270) of this research, in which the survey model was utilized. Data were obtained by the Problem Solving Inventory developed by Heppner & Peterson and a Personal Information Form. Within the settings of this research, the Cronbach Alpha reliability coefficient of the scale was found to be .87. Besides, the reliability coefficient obtained by the split-half technique, which splits odd- and even-numbered items of the scale, was found to be r = .81 (Split-Half Reliability). The findings of the research revealed that pre-service Turkish teachers were sufficiently qualified on the subject of problem solving skills, and statistical significance was found in favor of male candidates in terms of the "gender" variable. According to the "grade" variable, statistical significance was found in favor of 4th graders.
Digital Article Identifier (DOI):
75
5687
[The] Creative Art [of] Education
Abstract: In our current political climate of assessment and
accountability initiatives we are failing to prepare our children for a
participatory role in the creative economy. The field of education is
increasingly falling prey to didactic methodologies which train a
nation of competent test takers, foregoing the opportunity to educate
students to find problems and develop multiple solutions. Nowhere is
this more evident than in the area of art education. Due to a myriad of
issues including budgetary shortfalls, time constraints and a general
misconception that anyone who enjoys the arts is capable of teaching
the arts, our students are not developing the skills they require to
become fully literate in critical thinking and creative processing.
Although art integrated curriculum is increasingly being viewed as a
reform strategy for motivating students by offering alternative
presentation of concepts and representation of knowledge acquisition,
misinformed administrators are often excluding the art teacher from
the integration equation. The paper to follow addresses the problem
of the need for divergent thinking and conceptualization in our
schools. Furthermore, this paper explores the role of education, and
specifically, art education in the development of a creatively literate
citizenry.
Digital Article Identifier (DOI):
74
481
The Effects of the Impact of Instructional Immediacy on Cognition and Learning in Online Classes
Abstract: Current research has explored the impact of
instructional immediacy, defined as those behaviors that help build
close relationships or feelings of closeness, both on cognition and
motivation in the traditional classroom and online classroom;
however, online courses continue to suffer from higher dropout rates.
Based on Albert Bandura's Social Cognitive Theory, four primary
relationships or interactions in an online course will be explored in
light of how they can provide immediacy, thereby reducing student
attrition and improving cognitive learning. The four relationships are
teacher-student, student-student, student-content, and student-computer.
Results of a study conducted with in-service teachers
completing a 14-week online professional development technology
course will be examined to demonstrate immediacy strategies that
improve cognitive learning and reduce student attrition. Results of
the study reveal that students can be motivated through various
interactions and instructional immediacy behaviors which lead to
higher completion rates, improved self-efficacy, and cognitive
learning.
Digital Article Identifier (DOI):
73
6022
Digital Narrative as a Change Agent to Teach Reading to Media-Centric Students
Abstract: Because today's media-centric students have adopted
digital as their native form of communication, teachers are having an
increasingly difficult time motivating reluctant readers to read and
write. Our research has shown these text-averse individuals can learn
to understand the importance of reading and writing if the instruction
is based on digital narratives. While these students are naturally
attracted to story, they are better at consuming them than creating
them. Therefore, any intervention that utilizes story as its basis needs
to include instruction on the elements of story making. This paper
presents a series of digitally-based tools to identify potential
weaknesses of visually impaired visual learners and to help motivate
these and other media-centric students to select and complete books
that are assigned to them.
Digital Article Identifier (DOI):
72
11345
Virtual or Virtually U: Educational Institutions in Second Life
Abstract: Educational institutions are increasingly exploring the affordances of 3D virtual worlds for instruction and research, but few studies have been done to document current practices and uses of this emerging technology. This observational survey examines the virtual presences of 170 accredited educational institutions found in one such 3D virtual world called Second Life®, created by San Francisco-based Linden Lab®. The study focuses on what educational institutions look like in this virtual environment, the types of spaces educational institutions are creating or simulating, and what types of activities are being conducted.
Digital Article Identifier (DOI):
71
302
Increasing the Efficacy of Educators Teaching Online
Abstract: In order to provide and maintain effective pedagogy for the burgeoning virtual reality community, it is vital to have trained faculty in the institutions of higher education who will teach these courses and be able to make full use of their academic knowledge and expertise. As the number of online courses continues to grow, there is a need for these institutions to establish mentoring programs that will support the novice online instructor. The environment in which this takes place and the factors that ensure its success are critical to the adoption of the new instructional delivery format taught by both seasoned educators and adjunct instructors. Effective one-on-one mentoring promotes a professional, compassionate and collegial faculty who will provide a consistent and rigorous academic program for students online.
Digital Article Identifier (DOI):
70
2544
A Virtual Learning Environment for Deaf Children: Design and Evaluation
Abstract: The object of this research is the design and
evaluation of an immersive Virtual Learning Environment (VLE) for
deaf children. Recently we have developed a prototype immersive
VR game to teach sign language mathematics to deaf students in
grades K-4 [1] [2]. In this paper we describe a significant extension of the
prototype application. The extension includes: (1) user-centered
design and implementation of two additional interactive
environments (a clock store and a bakery), and (2) user-centered
evaluation including development of user tasks, expert panel-based
evaluation, and formative evaluation. This paper is one of the few to
focus on the importance of user-centered, iterative design in VR
application development, and to describe a structured evaluation
method.
Digital Article Identifier (DOI):
69
8160
Prospective Class Teachers' Computer Experiences and Computer Attitudes
Abstract: The main purpose of this research is to investigate the computer experiences and computer attitudes of prospective class teachers. The research also investigated differences in computer attitudes with respect to computer experiences, computer competencies and gender. Ninety prospective class teachers participated in the research. The Computer Attitude Scale-Marmara (CAS-M) and a questionnaire about their computer experiences and their opinions on the use of computers in the classroom setting were administered. The major findings are as follows: (1) 62% of prospective class teachers have a computer at home; (2) 50% of the computer owners have had their computers for less than three years; (3) no significant differences were found between computer attitudes and gender; (4) differences were found between the general computer attitudes and the computer-liking attitudes of prospective class teachers based on their computer competencies, in favor of the more competent ones.
Digital Article Identifier (DOI):
68
4780
Virtual Reality Classrooms Strategies for Creating a Social Presence
Abstract: Delivering course material via a virtual environment
is beneficial to today's students because it offers the interactivity,
real-time interaction and social presence that students of all ages
have come to accept in our gaming-rich community. It is essential
that the Net Generation, also known as Generation Why, has
exposure to learning communities that encompass interactivity to
form social and educational connections. As student and professor
become interconnected through collaboration and interaction in a
virtual learning space, relationships develop and students begin to
take on an individual identity. With this in mind, the research project
was developed to investigate the effect of virtual environments on
student satisfaction and the effectiveness of course delivery.
Furthermore, the project was designed to integrate interactive
(real-time) classes conducted in the Virtual Reality (VR) environment
with archived VR sessions for student use in retaining and reviewing
course content.
Digital Article Identifier (DOI):
67
2573
Online Programme of Excellence Model (OPEM)
Abstract: Finding effective ways of improving university quality assurance also requires retraining of staff. This article illustrates an Online Programme of Excellence Model (OPEM), based on the European quality assurance model, for improving participants' formative programme standards. The results of applying this OPEM indicate the necessity of quality policies that support the evaluators' competencies to improve formative programmes. The study concludes by outlining how faculty and agency staff can use OPEM for the internal and external quality assurance of formative programmes.
Digital Article Identifier (DOI):
66
13613
Learning and Teaching in the Panopticon: Ethical and Social Issues in Creating a Virtual Educational Environment
Abstract: This paper examines ethical and social issues which
have proved important when initiating and creating educational spaces within a virtual environment. It focuses on one project, identifying the key decisions made, the barriers to new practice
encountered and the impact these had on the project. It demonstrates
the importance of the 'backstage' ethical and social issues involved in
the creation of a virtual education community and offers conclusions,
and questions, which will inform future research and practice in this
area. These ethical issues are considered using Knobel's framework
of front-end, in-process and back-end concerns, and include
establishing social practices for the islands, allocating access rights,
considering personal safety and supporting researchers appropriately
within this context.
Digital Article Identifier (DOI):
65
15015
Virtual Reality for Mutual Understanding in Landscape Planning
Abstract: This paper argues that fostering mutual understanding in landscape planning is as much about the planners educating stakeholder groups as the stakeholders educating the planners. In other words, it is an epistemological agreement as to the meaning and nature of place, especially where an effort is made to go beyond the quantitative aspects, which can be achieved through the phenomenological experience of a Virtual Reality (VR) environment. This education needs to be a bi-directional process in which distance can be a temporal as well as a spatial separation of participants, and there needs to be a common framework of understanding in which neither 'side' is disadvantaged during the process of information exchange. It follows that a medium such as VR offers an effective way of overcoming some of the shortcomings of traditional media by taking advantage of continuing technological advances in Information Technology and Communications (ITC). In this paper we make particular reference to this as an extension to Geographical Information Systems (GIS). VR as a two-way communication tool offers considerable potential, particularly in the area of Public Participation GIS (PPGIS). Information-rich virtual environments that can operate over broadband networks are now possible and thus allow for the representation of large amounts of qualitative and quantitative information 'side-by-side'. Therefore, with broadband access becoming standard for households and enterprises alike, distributed virtual reality environments have great potential to contribute to enabling stakeholder participation and mutual learning within the planning context.
Digital Article Identifier (DOI):
64
2775
Toward a Model for Knowledge Development in Virtual Environments: Strategies for Student Ownership
Abstract: This article discusses the concept of student ownership of knowledge and seeks to determine how to move students from knowledge acquisition to knowledge application and ultimately to knowledge generation in a virtual setting. Instructional strategies for fostering student engagement in a virtual environment are critical to the learner's strategic ownership of the knowledge. A number of relevant theories that focus on learning, affect, needs and adult concerns are presented to provide a basis for exploring the transfer of knowledge from teacher to learner. A model under development is presented that combines the dimensions of knowledge approach, the teacher-student relationship with regard to knowledge authority, and teaching approach to demonstrate the recursive and scaffolded design for the creation of virtual learning environments.
Digital Article Identifier (DOI):
63
3775
Building Virtual Reality Environments for Distance Education on the Web: A Case Study in Medical Education
Abstract: The paper presents an investigation into the role of virtual reality and web technologies in the field of distance education. Within this framework, special emphasis is given to the building of web-based virtual learning environments so as to successfully fulfill their educational objectives. In particular, basic pedagogical methods are studied, focusing mainly on the efficient preparation, approach and presentation of learning content, and specific design rules are presented considering the hypermedia, virtual and educational nature of such applications. The paper also aims to highlight the educational benefits arising from the use of virtual reality technology in medicine and to study the emerging area of web-based medical simulations. Finally, an innovative virtual reality environment for distance education in medicine is demonstrated. The proposed environment reproduces conditions of the real learning process and enhances learning through a real-time interactive simulator.
Digital Article Identifier (DOI):
62
15368
Virtual Reality Models used on the Visualization of Construction Activities in Civil Engineering Education
Abstract: Three-dimensional geometric models have been used
to present architectural and engineering works, showing their final
configuration. When the clarification of a detail or of a construction
step is needed, these models are not appropriate. They do not allow
the observation of the construction progress of a building. Models
that can dynamically present changes in the building geometry are a
good support for the elaboration of projects. Techniques of geometric
modeling and virtual reality were used to obtain models that can
visually simulate the construction activity. The applications explain
the construction work of a cavity wall and a bridge. These models
allow the visualization of the physical progression of the work
following a planned construction sequence, the observation of details
of the form of every component of the works, and support the study
of the type and method of operation of the equipment applied in the
construction. These models present a distinct advantage as
educational aids in first-degree courses in Civil Engineering. The use
of Virtual Reality techniques in the development of educational
applications brings new perspectives to the teaching of subjects
related to the field of civil construction.
Digital Article Identifier (DOI):
61
6243
Promoting Reflection through Action Learning in a 3D Virtual World
Abstract: An international cooperation between educators in
Australia and the US has led to a reconceptualization of the teaching
of a library science course at Appalachian State University. The
pedagogy of Action Learning coupled with a 3D virtual learning
environment immerses students in a social constructivist learning
space that incorporates and supports interaction and reflection. The
intent of this study was to build a bridge between theory and practice
by providing students with a tool set that promoted personal and
social reflection, and created and scaffolded a community of practice.
In addition, action learning is an educational process whereby the
fifty graduate students drew on their own actions and experience to
improve performance.
Digital Article Identifier (DOI):
60
3623
The Effect of Facial Expressions on Students in Virtual Educational Environments
Abstract: The scope of this research was to study the relation between the facial expressions of three lecturers in a real academic lecture theatre and the reactions of the students to those expressions. The first experiment aimed to investigate the effectiveness of a virtual lecturer's expressions on the students' learning outcome in a virtual pedagogical environment. The second experiment studied the effectiveness of a single facial expression, i.e. the smile, on the students' performance. Both experiments involved virtual lectures, with virtual lecturers teaching real students. The results suggest that the students performed better by 86% in the lectures where the lecturer displayed facial expressions, compared with the lectures that did not use facial expressions. However, when simple or basic information was used, the facial expressions of the virtual lecturer had no substantial effect on the students' learning outcome. Finally, the appropriate use of smiles increased the interest of the students and consequently their performance.
Digital Article Identifier (DOI):
59
14203
A Virtual Reality Laboratory for Distance Education in Chemistry
Abstract: Simulations play a major role in education not only because they provide realistic models with which students can interact to acquire real-world experience, but also because they constitute safe environments in which students can repeat processes without any risk in order to grasp concepts and theories more easily. Virtual reality is widely recognized as a significant technological advance that can facilitate the learning process through the development of highly realistic 3D simulations supporting immersive and interactive features. The objective of this paper is to analyze the influence of the use of virtual reality in chemistry instruction as well as to present an integrated web-based learning environment for the simulation of chemical experiments. The proposed application constitutes a cost-effective solution for both schools and universities without appropriate infrastructure and a valuable tool for distance learning and life-long education in chemistry. Its educational objectives are the familiarization of students with the equipment of a real chemical laboratory and the execution of virtual volumetric analysis experiments with the active participation of students.
Digital Article Identifier (DOI):
58
3382
Utilizing Virtual Worlds in Education: The Implications for Practice
Abstract: Multi User Virtual Worlds are becoming a valuable educational tool. Learning experiences within these worlds focus on discovery and active experiences that both engage students and motivate them to explore new concepts. As educators, we need to explore these environments to determine how they can most effectively be used in our instructional practices. This paper explores the current application of virtual worlds to identify meaningful educational strategies that are being used to engage students and enhance teaching and learning.
Digital Article Identifier (DOI):
57
10954
Redefining Field Experiences: Virtual Environments in Teacher Education
Abstract: The explosion of interest in online gaming and
virtual worlds is leading many universities to investigate
possible educational applications of the new environments.
In this paper we explore the possibilities of 3D online worlds
for teacher education, particularly the field experience
component. Drawing upon two pedagogical examples, we
suggest that virtual simulations may, with certain limitations,
create safe spaces that allow preservice teachers to adopt
alternate identities and interact safely with the "other." In so
doing they may become aware of the constructed nature of
social categories and gain the essential pedagogical skill of
perspective-taking. We suggest that, ultimately, the ability to
be the principal creators of themselves in virtual environments
can increase their ability to do the same in the real world.
Digital Article Identifier (DOI):
56
10870
Train the Trainer: The Bricks in the Learning Community Scaffold of Professional Development
Abstract: Professional development is the focus of this study. It
reports on questionnaire data that examined the perceived
effectiveness of the Train the Trainer model of technology
professional development for elementary teachers. Eighty-three
selected teachers called Information Technology Coaches received
four half-day and one after-school in-service sessions. Subsequently,
coaches shared the information and skills acquired during training
with colleagues. Results indicated that participants felt comfortable
as Information Technology Coaches and felt well prepared because
of their technological professional development. Overall, participants
perceived the Train the Trainer model to be effective. The outcomes
of this study suggest that the use of the Train the Trainer model, a
known professional development model, can be an integral and
interdependent component of the newer more comprehensive
learning community professional development model.
Digital Article Identifier (DOI):
55
13506
Endogenous Fantasy-Based Serious Games: Intrinsic Motivation and Learning
Abstract: Current technological advances pale in comparison to the changes in social behaviors and 'sense of place' that have been enabled since the Internet came on the scene. Today's students view the Internet as both a source of entertainment and an educational tool. The development of virtual environments is a conceptual framework that needs to be addressed by educators, and it is important that they become familiar with who these virtual learners are and how they are motivated to learn. Massively multiplayer online role playing games (MMORPGs), if well designed, could become the vehicle of choice to deliver learning content. We suggest that these games, in order to accomplish these goals, must begin with well-established instructional design principles that are co-aligned with established principles of video game design, and that they have the opportunity to provide an instructional model of significant prescriptive power. The authors believe that game designers need to take advantage of the natural motivation player-learners have for playing games by developing them in such a way as to promote intrinsic motivation, content learning, transfer of knowledge, and naturalization.
Digital Article Identifier (DOI):
54
13249
Simulated Annealing Application for Structural Optimization
Abstract: Several methods are available for weight and shape
optimization of structures, among which Evolutionary Structural
Optimization (ESO) is one of the most widely used methods. In ESO,
however, the optimization criterion is completely case-dependent.
Moreover, only the improving solutions are accepted during the
search. In this paper a Simulated Annealing (SA) algorithm is used
for the structural optimization problem. This algorithm differs from
other random search methods by accepting non-improving solutions.
The SA algorithm is implemented so as to reduce the number of
finite element analyses (function evaluations).
Computational results show that SA can efficiently and effectively
solve such optimization problems within short search time.
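As a minimal illustration of the acceptance rule that distinguishes SA from purely greedy search, the Python sketch below accepts a worsening move with probability exp(-delta/T). The objective, the neighbourhood move and the cooling schedule are placeholder assumptions, not the implementation used in the paper.

import math
import random

def simulated_annealing(objective, neighbour, x0, t0=1.0, alpha=0.95, iters=2000):
    """Generic SA loop: non-improving moves are accepted with probability exp(-delta/T)."""
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = neighbour(x)                # candidate design (e.g. perturbed member sizes)
        fy = objective(y)               # one "function evaluation" (an FE analysis in the paper)
        delta = fy - fx
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x, fx = y, fy               # accept improving and, occasionally, worsening moves
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha                      # geometric cooling schedule (assumed)
    return best, fbest

# Toy usage: minimise a one-dimensional stand-in for structural weight.
best, fbest = simulated_annealing(
    objective=lambda x: (x - 3.0) ** 2 + 2.0,
    neighbour=lambda x: x + random.uniform(-0.5, 0.5),
    x0=0.0)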
Digital Article Identifier (DOI):
53
11207
Unit Commitment Solution Methods
Abstract: An effort to develop a unit commitment approach
capable of handling large power systems consisting of both thermal
and hydro generating units offers a large profitable return. In order to
be feasible, the method to be developed must be flexible, efficient
and reliable. In this paper, various proposed methods are described
along with their strengths and weaknesses. As all of these methods
have weaknesses of some sort, a comprehensive algorithm that
combines the strengths of the different methods and overcomes their
individual weaknesses would be a suitable approach for solving the
industry-grade unit commitment problem.
Digital Article Identifier (DOI):
52
10594
Emission Constrained Hydrothermal Scheduling Algorithm
Abstract: This paper presents an efficient emission constrained
hydrothermal scheduling algorithm that deals with nonlinear
functions such as the water discharge characteristics, thermal cost,
and transmission loss. It is then incorporated into the hydrothermal
coordination program. The program has been tested on a practical
utility system having 32 thermal and 12 hydro generating units. Test
results show that a slight increase in production cost causes a
substantial reduction in emission.
Digital Article Identifier (DOI):
51
1715
Rule-Based Fuzzy Logic Controller with Adaptable Reference
Abstract: This paper attempts to model and design a simple
fuzzy logic controller with Variable Reference. The Variable
Reference (VR) is featured as an adaptability element which is
obtained from two known variables – desired system-input and actual
system-output. A simple fuzzy rule-based technique is simulated to
show how the actual system-input is gradually tuned to a value
that closely matches the desired input. The designed controller is
implemented and verified on a simple heater which is controlled by
PIC Microcontroller harnessed by a code developed in embedded C.
The output response of the PIC-controlled heater is analyzed and
compared with the performance of conventional fuzzy logic
controllers. The novelty of this work lies in the fact that it gives
better performance while using fewer rules than conventional fuzzy
logic controllers.
Digital Article Identifier (DOI):
50
7734
Security Management System of Cellular Communication: Case Study
Abstract: Cellular communication is widely used all over the
world, and the number of handset users is increasing in response to
market demand. The important aspect addressed in this paper is the
security system of cellular communication, since it is important to
provide users with a secure channel for communication. A brief
description of the new GSM cellular network architecture is
provided. Limitations of cellular networks, their security issues and
the different types of attacks are discussed. The paper then reviews
some new security mechanisms that have been proposed by
researchers. Overall, this paper clarifies the security system and
services of cellular communication using GSM. Three Malaysian
communication companies are taken as case studies in this paper.
Digital Article Identifier (DOI):
49
4817
Performance Evaluation of Low Density Parity Check Codes
Abstract: This paper presents a study of one of the widely used
error-correcting codes, namely Low-Density Parity-Check (LDPC)
codes. The regular LDPC code is discussed; in particular, the codes
explained in this paper are the regular binary LDPC codes, also
known as Gallager codes.
Digital Article Identifier (DOI):
48
12430
SMaTTS: Standard Malay Text to Speech System
Abstract: This paper presents a rule-based text-to-speech (TTS)
synthesis system for Standard Malay (SM), namely SMaTTS. The
proposed system uses a sinusoidal method and some pre-recorded
wave files to generate speech. The use of a phone database
significantly decreases the amount of computer memory space used,
thus making the system very light and embeddable. The overall
system comprises two phases. The first is the Natural Language
Processing (NLP) phase, which consists of the high-level processing
of text analysis, phonetic analysis, text normalization and the
morphophonemic module. This module was designed specially for
SM to overcome a few problems in defining the rules for the SM
orthography system before the text can be passed to the DSP module.
The second phase is the Digital Signal Processing (DSP) phase,
which operates on the low-level process of speech waveform
generation. An intelligible and adequately natural-sounding
formant-based speech synthesis system with a light and user-friendly
Graphical User Interface (GUI) is introduced. A Standard Malay
phoneme set and an inclusive phone database have been constructed
carefully for this phone-based speech synthesizer. By applying
generative phonology, a comprehensive set of letter-to-sound (LTS)
rules and a pronunciation lexicon have been developed for SMaTTS.
For the evaluation tests, a Diagnostic Rhyme Test (DRT) word list
was compiled and several experiments were performed to evaluate
the quality of the synthesized speech by analyzing the Mean Opinion
Score (MOS) obtained. The overall performance of the system as
well as the room for improvement is thoroughly discussed.
Digital Article Identifier (DOI):
47
5717
Managing Legal, Consumers and Commerce Risks in Phishing
Abstract: Phishing is a newly emerged security issue of
e-commerce crime in the era of globalization. In this paper, the legal
frameworks of Malaysia, the United States and the United Kingdom
are analyzed, followed by a discussion of the critical issues that arise
from phishing activities. The results reveal that the inadequacy of the
current legal framework is the main challenge in governing this
epidemic. However, lack of awareness among consumers, disputes
over merchants' responsibility, and the lack of intrusion reports and
incentive arrangements also contribute to the proliferation of
phishing. Prevention is always better than cure. The paper closes
with some suggested best practices for consumers and corporations.
Digital Article Identifier (DOI):
46
11026
Secure Secret Recovery by using Weighted Personal Entropy
Abstract: Authentication plays a vital role in many secure
systems. Most of these systems require the user to log in with his or
her secret password or pass phrase before access is granted. This is to
ensure that all valuable information is kept confidential while also
guaranteeing its integrity and availability. However, to achieve this
goal, users are required to memorize high-entropy passwords or pass
phrases. Unfortunately, users often find it difficult to remember such
meaningless strings of data. This paper presents a new scheme which
assigns a weight to each personal question given to the user in
revealing the encrypted secrets or password. The focus of this
scheme is to offer fault tolerance by allowing users to forget the
answers to a subset of questions and still recover the secret and
achieve successful authentication. A comparison of the level of
security of the weight-based and weightless secret recovery schemes
is also discussed. The paper concludes with a few areas that require
further investigation in this research.
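A minimal Python sketch of the weighted-threshold idea described above: each question carries a weight, correctly answered questions contribute their weight, and recovery succeeds once the accumulated weight reaches a threshold. The questions, weights, hashing and threshold below are illustrative assumptions, not the parameters of the scheme in the paper.

import hashlib

def _h(text: str) -> str:
    # Store only hashes of normalised answers in this toy example (no salting, for brevity).
    return hashlib.sha256(text.strip().lower().encode()).hexdigest()

# Hypothetical enrolled questions: (hashed answer, weight). Higher weight = higher entropy.
ENROLLED = [
    (_h("rex"), 1),           # first pet's name
    (_h("blue"), 1),          # favourite colour
    (_h("kuala lumpur"), 3),  # city of birth
    (_h("17 june 1999"), 4),  # a memorable date
]
THRESHOLD = 6  # the secret is released only if the accumulated weight reaches this value

def recover(answers):
    """Return True if enough weighted questions are answered correctly."""
    score = sum(w for (hashed, w), a in zip(ENROLLED, answers) if _h(a) == hashed)
    return score >= THRESHOLD

# A user who forgets the low-weight answers can still authenticate:
print(recover(["", "", "Kuala Lumpur", "17 June 1999"]))  # True (3 + 4 >= 6)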
Digital Article Identifier (DOI):
45
3471
Service Architecture for 3rd Party Operator's Participation
Abstract: Next generation networks, with the idea of converging the service and control layers of existing networks (fixed, mobile and data) and the intention of providing services over an integrated network, have opened new horizons for telecom operators. On the other hand, economic pressures have caused operators to look for new sources of income, including new services, the subscription of more users, the promotion of greater use of network resources, and the easy participation of service providers or 3rd party operators in utilizing networks. Given this requirement, an architecture for the service layer based on next generation objectives is necessary. In this paper, a new architecture based on the IMS model is presented that describes the participation of 3rd party operators in the creation and implementation of services on an integrated telecom network.
Digital Article Identifier (DOI):
44
14196
Text Mining Technique for Data Mining Application
Abstract: Text mining is about applying knowledge discovery
techniques to unstructured text; this is also termed knowledge
discovery in text (KDT) or text data mining. The decision tree
approach is most useful for classification problems. With this
technique, a tree is constructed to model the classification process.
There are two basic steps in the technique: building the tree and
applying the tree to the database. This paper describes a proposed
C5.0 classifier that applies rulesets, cross-validation and boosting to
the original C5.0 in order to reduce the error rate. The feasibility and
the benefits of the proposed approach are demonstrated by means of
a medical data set, the hypothyroid data. It is shown that the
performance of a classifier on the training cases from which it was
constructed gives a poor estimate of its accuracy on new cases; a
better estimate is obtained by sampling or by using a separate test
file, so that either way the classifier is evaluated on cases that were
not used to build it. If the cases in hypothyroid.data and
hypothyroid.test were to be shuffled and divided into a new 2772-case
training set and a 1000-case test set, C5.0 might construct a different
classifier with a lower or higher error rate on the test cases. An
important feature of See5 is its ability to generate classifiers called
rulesets. The ruleset has an error rate of 0.5% on the test cases. The
standard errors of the means provide an estimate of the variability of
the results. One way to get a more reliable estimate of predictive
accuracy is f-fold cross-validation: the error rate of a classifier
produced from all the cases is estimated as the ratio of the total
number of errors on the hold-out cases to the total number of cases.
The Boost option with x trials instructs See5 to construct up to
x classifiers in this manner. Trials over numerous datasets, large and
small, show that on average 10-classifier boosting reduces the error
rate for test cases by about 25%.
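Since C5.0/See5 is a proprietary tool, the sketch below uses a scikit-learn decision tree and AdaBoost on synthetic data merely to illustrate f-fold cross-validation and 10-trial boosting as described above; it does not reproduce the hypothyroid files or the C5.0 ruleset option.

# Illustrative only: a scikit-learn tree and AdaBoost (decision stumps by default)
# stand in for C5.0; the data set is synthetic, not the hypothyroid files.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=3772, n_features=20, random_state=0)

tree = DecisionTreeClassifier(random_state=0)
boosted = AdaBoostClassifier(n_estimators=10, random_state=0)  # "boost with 10 trials" analogue

# f-fold cross-validation: every case is held out exactly once, giving a more
# reliable error estimate than resubstitution on the training cases.
for name, clf in [("single tree", tree), ("10-trial boosting", boosted)]:
    acc = cross_val_score(clf, X, y, cv=10)
    print(f"{name}: cross-validated error = {1 - acc.mean():.3f}")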
Digital Article Identifier (DOI):
43
3895
GridNtru: High Performance PKCS
Abstract: Cryptographic algorithms play a crucial role in the
information society by providing protection from unauthorized
access to sensitive data. It is clear that information technology will
become increasingly pervasive; hence we can expect the emergence
of ubiquitous or pervasive computing and ambient intelligence.
These new environments and applications will present new security
challenges, and there is no doubt that cryptographic algorithms and
protocols will form part of the solution. The efficiency of a public
key cryptosystem is mainly measured in computational overhead,
key size and bandwidth. In particular, the RSA algorithm is used in
many applications for providing security. Although the security
of RSA is beyond doubt, the evolution in computing power has
caused a growth in the necessary key length. The fact that most chips
on smart cards cannot process keys exceeding 1024 bits shows that
there is a need for an alternative. NTRU is such an alternative: it is a
collection of mathematical algorithms based on manipulating lists of
very small integers and polynomials. This allows NTRU to achieve
high speeds with minimal computing power. NTRU (Nth degree
Truncated Polynomial Ring Unit) is the first secure public key
cryptosystem not based on the factorization or discrete logarithm
problem. This means that, even given substantial computational
resources and time, an adversary should not be able to break the key.
Multi-party communication and the requirement of optimal resource
utilization create a present-day demand for applications that need
security enforcement techniques and can be enhanced with high-end
computing. This has prompted us to develop high-performance
NTRU schemes using approaches such as the use of high-end
computing hardware. Peer-to-peer (P2P) or enterprise grids are
proven approaches for developing high-end computing systems. By
utilizing them one can improve the performance of NTRU through
parallel execution. In this paper we propose and develop an
application for NTRU using the enterprise grid middleware called
Alchemi. An analysis and comparison of its performance for various
text files is presented.
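The sketch below shows only the cyclic ("star") polynomial multiplication in Z_q[x]/(x^N - 1) on which NTRU key generation, encryption and decryption are built; the toy parameters are far too small to be secure, and the grid/Alchemi parallelization discussed above is not reproduced.

def star_multiply(f, g, q):
    """Cyclic convolution of two degree-<N polynomials in Z_q[x]/(x^N - 1),
    the basic operation that NTRU key generation, encryption and decryption rely on."""
    n = len(f)
    h = [0] * n
    for i, fi in enumerate(f):
        if fi == 0:
            continue                      # small/ternary coefficients keep this cheap
        for j, gj in enumerate(g):
            h[(i + j) % n] = (h[(i + j) % n] + fi * gj) % q
    return h

# Toy parameters (illustration only, not secure).
f = [1, -1, 0, 1, 0, -1, 1]   # ternary polynomial, N = 7
g = [3, 14, 15, 9, 26, 5, 3]
print(star_multiply(f, g, q=32))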
Digital Article Identifier (DOI):
42
15790
Auction Theory: Bidder's Perspective in an English Auction Environment
Abstract: This paper provides an overview of auction theory literature. We present a general review on literature of various auctions and focus ourselves specifically on an English auction. We are interested in modelling bidder's behavior in an English auction environment. And hence, we present an overview of the New Zealand wool auction followed by a model that would describe a bidder's decision making behavior from the New Zealand wool auction. The mathematical assumptions in an English auction environment are demonstrated from the perspective of the New Zealand wool auction.
Digital Article Identifier (DOI):
41
12267
Using Heuristic Rules from Sentence Decomposition of Experts' Summaries to Detect Students' Summarizing Strategies
Abstract: Summarizing skills have been introduced into the English
syllabus of secondary schools in Malaysia to evaluate students'
comprehension of a given text; producing a summary requires
students to employ several strategies. This paper reports on our
effort to develop a computer-based summarization assessment system
that detects the strategies used by students in producing their
summaries. Sentence decomposition of expert-written summaries is
used to analyze how experts produce their summary sentences. From
the analysis, we identified seven summarizing strategies and their
rules, which are then transformed into a set of heuristic rules on how
to determine the summarizing strategies. We developed an algorithm
based on the heuristic rules and performed some experiments to
evaluate and support the proposed technique.
Digital Article Identifier (DOI):
40
5738
A Fiber Optic Interferometric Sensor for Dynamic Measurement
Abstract: An optical fiber Fabry-Perot interferometer (FFPI) is
proposed and demonstrated for dynamic measurements on a
mechanically vibrating target. A polished metal surface with a low
reflectance, adhered to a mechanical vibrator, was excited via a
function generator at various excitation frequencies. Output
interference fringes were generated by modulating the reference and
sensing signals at the output arm. A fringe-counting technique was
used for interpreting the displacement information on a dedicated
computer. The fiber interferometer was found to be capable of
displacement measurements from 1.28 μm to 96.01 μm. A
commercial displacement sensor was employed as a reference sensor
for investigating the measurement errors of the fiber sensor. A
maximum percentage measurement error of approximately 1.59%
was obtained.
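As a worked illustration of the fringe-counting principle, each full fringe of a Fabry-Perot cavity corresponds to a half-wavelength change in path length, so the displacement is roughly N times lambda/2. The 1550 nm wavelength in the sketch below is an assumed typical value, not one stated in the abstract.

def displacement_from_fringes(fringe_count: float, wavelength_nm: float = 1550.0) -> float:
    """Return target displacement in micrometres from a counted number of fringes.

    Each full fringe of a Fabry-Perot cavity corresponds to a lambda/2 change in the
    optical path length. The 1550 nm wavelength is an assumed, typical source value.
    """
    return fringe_count * (wavelength_nm / 2.0) / 1000.0  # nm -> um

# e.g. 124 fringes at an assumed 1550 nm correspond to about 96.1 um of displacement,
# the same order as the upper measurement limit quoted above.
print(f"{displacement_from_fringes(124):.2f} um")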
Digital Article Identifier (DOI):
39
14330
Inter-Phase Magnetic Coupling Effects on Sensorless SR Motor Control
Abstract: Control of the commutation of the switched reluctance (SR)
motor has been an area of interest for researchers for some time now,
with mixed success in addressing the inherent challenges. New
technologies, processing schemes and methods have been adopted to
make the sensorless SR drive a reality. There are a number of
conceptual, offline, analytical and online solutions in the literature
that have varying complexities and achieve equally varying degrees
of robustness and accuracy, depending on the method used to address
the challenges and the SR drive application. Magnetic coupling is
one such challenge when using active probing techniques to
determine the rotor position of an SR motor from the stator winding.
This paper studies the effect of back-of-core saturation on the
detected rotor position and presents results of measurements made on
a four-phase SR motor. The results show that even for a four-phase
motor, which is excited one phase at a time while the electrically
opposite phase is used for active position probing, back-of-core
saturation effects should not be ignored.
Digital Article Identifier (DOI):
38
8051
Modeling the Symptom-Disease Relationship by Using Rough Set Theory and Formal Concept Analysis
Abstract: Medical Decision Support Systems (MDSSs) are sophisticated, intelligent systems that can provide inference despite lack of information and uncertainty. In such systems, to model the uncertainty, various soft computing methods such as Bayesian networks, rough sets, artificial neural networks, fuzzy logic, inductive logic programming and genetic algorithms, as well as hybrid methods formed from combinations of these methods, are used. In this study, symptom-disease relationships are presented by a framework modeled with formal concept analysis and rough set theory, with diseases as objects and symptoms as attributes. After a concept lattice is formed, Bayes' theorem can be used to determine the relationships between attributes and objects. A discernibility relation, which forms the basis of rough sets, can be applied to the attribute data sets in order to reduce the attributes and decrease the complexity of computation.
Digital Article Identifier (DOI):
37
3323
Conversion of Modified Commercial Polyacrylonitrile Fibers to Carbon Fibers
Abstract: Carbon fibers are fabricated from different materials,
such as special polyacrylonitrile (PAN) fibers, rayon fibers and pitch.
Among these three groups of materials, PAN fibers are the most
widely used precursor for the manufacture of carbon fibers. The
process of fabricating carbon fibers from special PAN fibers includes
two steps: oxidative stabilization at low temperature and
carbonization at high temperature in an inert atmosphere. Due to the
high price of the raw material (special PAN fibers), carbon fibers are
still expensive.
In the present work the main goal is to make carbon fibers from
low-price commercial PAN fibers with modified chemical
compositions. The results show that, provided the stabilization
process is carried out completely, it is possible to produce carbon
fibers with desirable tensile strength from this type of PAN fiber. To
this end, the thermal characteristics of commercial PAN fibers were
investigated and, based upon the results obtained, the desirable
conditions for complete stabilization were achieved by making some
changes to the conventional stabilization procedure in terms of the
temperature and time variables.
Digital Article Identifier (DOI):
36
7311
Modified Fast and Exact Algorithm for Fast Haar Transform
Abstract: The wavelet transform, or wavelet analysis, is a recently
developed mathematical tool in applied mathematics. In numerical
analysis, wavelets also serve as a Galerkin basis to solve partial
differential equations. The Haar transform, or Haar wavelet
transform, has been used as the simplest and earliest example of an
orthonormal wavelet transform. Owing to its popularity in wavelet
analysis, there are several definitions and various generalizations of,
and algorithms for calculating, the Haar transform. The Fast Haar
Transform, FHT, is one of the algorithms which can reduce the
tedious calculation work in the Haar transform. In this paper, we
present a modified fast and exact algorithm for the FHT, namely the
Modified Fast Haar Transform, MFHT. The proposed algorithm or
procedure allows certain calculations in the decomposition process
to be omitted without affecting the results.
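For reference, here is a minimal Python sketch of the classical fast Haar decomposition (pairwise averages and differences applied recursively), the baseline FHT that the MFHT modifies; it is not the modified procedure proposed in the paper.

def fast_haar_transform(signal):
    """One-dimensional fast Haar decomposition of a length-2^k signal.

    Each level replaces pairs (a, b) by their average (a+b)/2 and detail (a-b)/2;
    the averages are decomposed again until a single approximation remains.
    """
    n = len(signal)
    assert n > 0 and n & (n - 1) == 0, "length must be a power of two"
    coeffs, approx = [], list(signal)
    while len(approx) > 1:
        avg = [(approx[2 * i] + approx[2 * i + 1]) / 2 for i in range(len(approx) // 2)]
        det = [(approx[2 * i] - approx[2 * i + 1]) / 2 for i in range(len(approx) // 2)]
        coeffs.append(det)        # detail coefficients kept level by level
        approx = avg
    return approx[0], coeffs      # final approximation plus all detail coefficients

print(fast_haar_transform([9, 7, 3, 5]))  # (6.0, [[1.0, -1.0], [2.0]])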
Digital Article Identifier (DOI):
35
4917
The Synthetic T2 Quality Control Chart and its Multi-Objective Optimization
Abstract: In some real applications of Statistical Process Control
it is necessary to design a control chart that does not react to small
process shifts but keeps good performance in detecting moderate and
large shifts in quality. In this work we develop a new quality control
chart, the synthetic T2 control chart, that can be designed to meet
this objective. A multi-objective optimization is carried out
employing Genetic Algorithms, finding the Pareto-optimal front of
non-dominated solutions for this optimization problem.
Digital Article Identifier (DOI):
34
14877
Mining Sequential Patterns Using I-PrefixSpan
Abstract: In this paper, we propose an improvement of the pattern
growth-based PrefixSpan algorithm, called I-PrefixSpan. The general
idea of I-PrefixSpan is to use a suitable data structure for the
Seq-Tree framework and a separator database to reduce the execution
time and memory usage. Thus, with I-PrefixSpan there is no
in-memory database stored after the index set is constructed. The
experimental results show that, using Java 2, this method improves
the speed of PrefixSpan by up to almost two orders of magnitude and
the memory usage by more than one order of magnitude.
Digital Article Identifier (DOI):
33
224
An Intelligent Approach of Rough Set in Knowledge Discovery Databases
Abstract: Knowledge Discovery in Databases (KDD) has
evolved into an important and active area of research because of
theoretical challenges and practical applications associated with the
problem of discovering (or extracting) interesting and previously
unknown knowledge from very large real-world databases. Rough
Set Theory (RST) is a mathematical formalism for representing
uncertainty that can be considered an extension of the classical set
theory. It has been used in many different research areas, including
those related to inductive machine learning and reduction of
knowledge in knowledge-based systems. One important concept
related to RST is that of a rough relation. In this paper we present
the current status of research on applying rough set theory to KDD,
which is helpful for handling the characteristics of real-world
databases. The main aim is to show how rough sets and rough set
analysis can be effectively used to extract knowledge from large
databases.
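A minimal Python sketch of the indiscernibility relation and the lower and upper approximations that underlie rough-set-based knowledge extraction; the decision table below is a hypothetical example, not data from the paper.

from collections import defaultdict

def equivalence_classes(table, attributes):
    """Group objects that are indiscernible on the chosen attributes."""
    classes = defaultdict(set)
    for obj, values in table.items():
        classes[tuple(values[a] for a in attributes)].add(obj)
    return list(classes.values())

def approximations(table, attributes, target):
    """Lower/upper approximation of a target concept under the given attributes."""
    lower, upper = set(), set()
    for block in equivalence_classes(table, attributes):
        if block <= target:
            lower |= block          # block certainly belongs to the concept
        if block & target:
            upper |= block          # block possibly belongs to the concept
    return lower, upper

# Hypothetical decision table: attribute values per object.
table = {1: {"a": 0, "b": 1}, 2: {"a": 0, "b": 1}, 3: {"a": 1, "b": 0}, 4: {"a": 1, "b": 1}}
target = {1, 3}                     # objects carrying the decision of interest
print(approximations(table, ["a", "b"], target))  # ({3}, {1, 2, 3})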
Digital Article Identifier (DOI):
32
11860
Volterra Filter for Color Image Segmentation
Abstract: Color image segmentation plays an important role in
computer vision and image processing areas. In this paper, the
features of Volterra filter are utilized for color image segmentation.
The discrete Volterra filter exhibits both linear and nonlinear
characteristics. The linear part smoothes the image features in
uniform gray zones and is used for getting a gross representation of
objects of interest. The nonlinear term compensates for the blurring
due to the linear term and preserves the edges which are mainly used
to distinguish the various objects. The truncated quadratic Volterra
filters are mainly used for edge preservation along with Gaussian noise
cancellation. In our approach, the segmentation is based on K-means
clustering algorithm in HSI space. Both the hue and the intensity
components are fully utilized. For hue clustering, the special cyclic
property of the hue component is taken into consideration. The
experimental results show that the proposed technique segments the
color image while preserving significant features and removing noise
effects.
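A minimal sketch of the clustering stage only (the Volterra pre-filtering is not reproduced), showing one common way to respect the cyclic property of hue by embedding it on the unit circle before K-means; the scikit-learn dependency and the random "image" are assumptions for illustration.

# Clustering stage only; hue and intensity would normally come from an RGB-to-HSI conversion.
import numpy as np
from sklearn.cluster import KMeans

def segment_hue_intensity(hue_deg, intensity, k=4):
    """Cluster pixels on (hue, intensity); hue is mapped onto the unit circle so that
    0 and 360 degrees are treated as neighbours (one way to honour its cyclic nature)."""
    h = np.deg2rad(hue_deg.ravel())
    features = np.column_stack([np.cos(h), np.sin(h), intensity.ravel()])
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
    return labels.reshape(hue_deg.shape)

# Toy usage with a random "image".
rng = np.random.default_rng(0)
hue = rng.uniform(0, 360, size=(64, 64))
inten = rng.uniform(0, 1, size=(64, 64))
print(segment_hue_intensity(hue, inten).shape)  # (64, 64) label map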
Digital Article Identifier (DOI):
31
1272
High-Individuality Voice Conversion Based on Concatenative Speech Synthesis
Abstract: Concatenative speech synthesis is a method that can
produce speech with the naturalness and high individuality of a
speaker by introducing a large speech corpus. Based on this method,
in this paper we propose a voice conversion method whose converted
speech has high individuality and naturalness. The authors also
conducted two subjective evaluation experiments to evaluate the
individuality and sound quality of the converted speech. From the
results, the following three facts have been confirmed: (a) the
proposed method can convert the individuality of speakers well, (b)
employing the framework of unit selection (especially the join cost)
of concatenative speech synthesis in conventional voice conversion
improves the sound quality of the converted speech, and (c) the
proposed method is robust against a difference in gender between the
source speaker and the target speaker.
Digital Article Identifier (DOI):
30
9297
Approximate Frequent Pattern Discovery Over Data Stream
Abstract: Frequent pattern discovery over a data stream is a hard
problem because the continuously generated nature of a stream does
not allow revisiting each data element. Furthermore, the pattern
discovery process must be fast to produce timely results. Based on
these requirements, we propose an approximate approach to tackle
the problem of discovering frequent patterns over a continuous
stream. Our approximation algorithm is intended to be applied to
process a stream prior to the pattern discovery process. The results of
approximate frequent pattern discovery are reported in the paper.
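The abstract does not name its approximation method; purely as a generic illustration of single-pass approximate frequent-item discovery over a stream, the sketch below implements the well-known lossy counting algorithm, which is not the algorithm proposed in the paper.

import math

def lossy_counting(stream, epsilon=0.01):
    """Single-pass approximate counting: estimated counts undercount by at most epsilon * N."""
    width = math.ceil(1 / epsilon)           # bucket width
    counts, deltas = {}, {}
    for n, item in enumerate(stream, start=1):
        bucket = math.ceil(n / width)
        if item in counts:
            counts[item] += 1
        else:
            counts[item] = 1
            deltas[item] = bucket - 1        # maximum possible undercount so far
        if n % width == 0:                   # prune infrequent items at bucket boundaries
            for key in [k for k in counts if counts[k] + deltas[k] <= bucket]:
                del counts[key], deltas[key]
    return counts

stream = ["a", "b", "a", "c", "a", "b", "a"] * 100
print(lossy_counting(stream, epsilon=0.05))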
Digital Article Identifier (DOI):
29
11662
On Pattern-Based Programming towards the Discovery of Frequent Patterns
Abstract: The problem of frequent pattern discovery is defined
as the process of searching for patterns such as sets of features or items that appear in data frequently. Finding such frequent patterns
has become an important data mining task because it reveals associations, correlations, and many other interesting relationships
hidden in a database. Most of the proposed frequent pattern mining
algorithms have been implemented with imperative programming
languages. Such a paradigm is inefficient when the set of patterns is
large and the frequent patterns are long. We suggest applying a
high-level declarative style of programming to the problem of
frequent pattern discovery. We consider two languages: Haskell and
Prolog. Our intuitive idea is that the problem of finding frequent
patterns should be efficiently and concisely implemented via a
declarative paradigm, since pattern matching is a fundamental feature
supported by most functional languages and by Prolog. Our frequent
pattern mining implementation using the Haskell and Prolog
languages confirms our hypothesis about the conciseness of the
program. Comparative performance studies on lines of code, speed
and memory usage of declarative versus imperative programming are
reported in the paper.
Digital Article Identifier (DOI):
28
6907
An Alternative Proof for the NP-completeness of Top Right Access point-Minimum Length Corridor Problem
Abstract: In the Top Right Access point Minimum Length Corridor (TRA-MLC) problem [1], a rectangular boundary partitioned into rectilinear polygons is given, and the problem is to find a corridor of least total length that includes the top right corner of the outer rectangular boundary. A corridor is a tree containing a set of line segments lying along the outer rectangular boundary and/or on the boundaries of the rectilinear polygons. The corridor must contain at least one point from the boundary of the outer rectangle and also from each of the rectilinear polygons. Gutierrez and Gonzalez [1] proved that the MLC problem, along with some of its restricted versions and variants, is NP-complete. In this paper, we give a shorter proof of the NP-completeness of TRA-MLC by finding the reduction in the following way.
Digital Article Identifier (DOI):
27
15279
Multi-matrix Real-coded Genetic Algorithm for Minimising Total Costs in Logistics Chain Network
Abstract: The importance of supply chain and logistics
management has been widely recognised. Effective management of
the supply chain can reduce costs and lead times and improve
responsiveness to changing customer demands. This paper proposes a
multi-matrix real-coded Genetic Algorithm (MRGA) based
optimisation tool that minimises the total costs associated with supply
chain logistics. Owing to the finite capacity constraints of all parties
within the chain, a Genetic Algorithm (GA) often produces infeasible
chromosomes during the initialisation and evolution processes. In the
proposed algorithm, a chromosome initialisation procedure and
crossover and mutation operations that always guarantee feasible
solutions were embedded.
of benchmarking dataset of logistic chain network, which are typical
of those faced by most global manufacturing companies. A half
fractional factorial design was carried out to investigate the influence
of alternative crossover and mutation operators by varying GA
parameters. The analysis of experimental results suggested that the
quality of solutions obtained is sensitive to the ways in which the
genetic parameters and operators are set.
Digital Article Identifier (DOI):
26
11240
Reactive Neural Control for Phototaxis and Obstacle Avoidance Behavior of Walking Machines
Abstract: This paper describes reactive neural control used to
generate phototaxis and obstacle avoidance behavior of walking
machines. It utilizes discrete-time neurodynamics and consists of
two main neural modules: neural preprocessing and modular neural
control. The neural preprocessing network acts as a sensory fusion
unit. It filters sensory noise and shapes sensory data to drive the
corresponding reactive behavior. On the other hand, modular neural
control based on a central pattern generator is applied for locomotion
of walking machines. It coordinates leg movements and can generate
omnidirectional walking. As a result, through a sensorimotor loop this
reactive neural controller enables the machines to explore a dynamic
environment, avoiding obstacles, turning toward a light source, and
then stopping near it.
Digital Article Identifier (DOI):
25
12186
Numerical Simulation of Wall Treatment Effects on the Micro-Scale Combustion
Abstract: To understand working features of a micro combustor,
a computer code has been developed to study combustion of
hydrogen–air mixtures in a series of chambers with the same aspect
ratio but various dimensions, from the millimeter to the micrometer level.
The prepared algorithm and the computer code are capable of
modeling mixture effects in different fluid flows including chemical
reactions, viscous and mass diffusion effects. The effect of various
heat transfer conditions at chamber wall, e.g. adiabatic wall, with
heat loss and heat conduction within the wall, on the combustion is
analyzed. These thermal conditions have strong effects on the
combustion especially when the chamber dimension goes smaller and
the ratio of surface area to volume becomes larger.
Both factors, larger heat loss through the chamber wall and smaller
chamber dimensions, may lead to thermal quenching of micro-scale
combustion. Through such systematic numerical analysis, a proper
operation space for the micro-combustor is suggested, which may be
used as a guideline for micro-combustor design. In addition, the
results reported in this paper
illustrate that the numerical simulation can be one of the most
powerful and beneficial tools for the micro-combustor design,
optimization and performance analysis.
Digital Article Identifier (DOI):
24
10171
Objective Performance of Compressed Image Quality Assessments
Abstract: Measurement of the quality of image compression is important for image processing applications. In this paper, we propose an objective image quality assessment to measure the quality of gray-scale compressed images, which correlates well with subjective quality measurement (MOS) and requires the least time. The new objective image quality measurement is developed from a few fundamental objective measurements to evaluate compressed image quality based on JPEG and JPEG2000. The reliability between each fundamental objective measurement and the subjective measurement (MOS) is found. From the experimental results, we found that the Maximum Difference measurement (MD) and a new proposed measurement, Structural Content Laplacian Mean Square Error (SCLMSE), are the suitable measurements that can be used to evaluate the quality of JPEG2000 and JPEG compressed images, respectively. In addition, the MD and SCLMSE measurements are scaled to make them equivalent to MOS, giving a rating of compressed image quality from 1 to 5 (unacceptable to excellent quality).
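A minimal Python sketch of the Maximum Difference (MD) measure named above for gray-scale images, with plain MSE shown for comparison; SCLMSE is the authors' own composite measure and is not reproduced here.

import numpy as np

def maximum_difference(original, compressed):
    """MD: the largest absolute pixel difference between the original and compressed image."""
    o = np.asarray(original, dtype=np.float64)
    c = np.asarray(compressed, dtype=np.float64)
    return float(np.max(np.abs(o - c)))

def mean_squared_error(original, compressed):
    """Plain MSE, included only for comparison with MD."""
    o = np.asarray(original, dtype=np.float64)
    c = np.asarray(compressed, dtype=np.float64)
    return float(np.mean((o - c) ** 2))

# Toy 8-bit gray-scale example.
orig = np.array([[10, 20], [30, 40]])
comp = np.array([[12, 18], [33, 40]])
print(maximum_difference(orig, comp), mean_squared_error(orig, comp))  # 3.0 4.25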
Digital Article Identifier (DOI):
23
5742
Predicting Extrusion Process Parameters Using Neural Networks
Abstract: The objective of this paper is to estimate realistic
principal extrusion process parameters by means of artificial neural
network. Conventionally, finite element analysis is used to derive
process parameters. However, the finite element analysis of the
extrusion model does not consider the manufacturing process
constraints in its modeling. Therefore, the process parameters
obtained through such an analysis remain highly theoretical.
Alternatively, process development in industrial extrusion is to a
great extent based on trial and error and often involves full-size
experiments, which are both expensive and time-consuming. The
artificial neural network-based estimation of the extrusion process
parameters prior to plant execution helps to make the actual extrusion
operation more efficient because more realistic parameters may be
obtained. It thus bridges the gap between simulation and the real
manufacturing execution system. In this work, a suitable neural
network is designed which is trained using an appropriate learning
algorithm. The network so trained is used to predict the
manufacturing process parameters.
Digital Article Identifier (DOI):
22
12325
Microstructure and Mechanical Behaviour of Rotary Friction Welded Titanium Alloys
Abstract: Ti-6Al-4V alloy has demonstrated a high strength to
weight ratio as well as good properties at high temperature. The
successful application of the alloy in some important areas depends
on suitable joining techniques. Friction welding has many
advantageous features to be chosen for joining Titanium alloys. The
present work investigates the feasibility of producing similar metal
joints of this Titanium alloy by rotary friction welding method. The
joints are produced at three different speeds and the performances of
the welded joints are evaluated by conducting microstructure studies,
Vickers Hardness and tensile tests at the joints. It is found that the
weld joints produced are sound and the ductile fractures in the tensile
weld specimens occur at locations away from the welded joints. It is
also found that a rotational speed of 1500 RPM can produce a very
good weld, with other parameters kept constant.
Digital Article Identifier (DOI):
21
8090
T-DOF PI Controller Design for a Speed Control of Induction Motor
Abstract: This paper presents the design and implementation of a
T-DOF PI controller for speed control of an induction motor. The
voltage source inverter type space vector pulse width modulation
technique is used in the drive system. This scheme makes it possible
to adjust the speed of the motor by controlling the frequency and
amplitude of the input voltage, while the ratio of input stator voltage
to frequency is kept constant. A T-DOF PI controller designed by the
root locus technique is also introduced to the system for regulating
and tracking the speed response. The experimental results from
testing a 120 watt induction motor from the no-load condition to the
rated condition show the effectiveness of the proposed control scheme.
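A minimal sketch of the constant volts-per-hertz relationship mentioned above; the rated values (220 V, 50 Hz) are assumptions for illustration, and the T-DOF PI design itself is not reproduced.

def vf_command(freq_hz, rated_voltage=220.0, rated_freq=50.0):
    """Return the stator voltage command that keeps V/f constant up to rated frequency.

    Rated values here are illustrative assumptions; above rated frequency the
    voltage is simply clamped (field-weakening region).
    """
    ratio = rated_voltage / rated_freq            # constant volts-per-hertz
    return min(freq_hz * ratio, rated_voltage)

for f in (10, 25, 50, 60):
    print(f"{f:5.1f} Hz -> {vf_command(f):6.1f} V")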
Digital Article Identifier (DOI):
20
403
A Method for Quality Inspection of Motors by Detecting Abnormal Sound
Abstract: Currently, the quality of motors is inspected by human
ears. In this paper, I propose two systems that use methods from
speech recognition to automate the inspection. The first system is
based on a linear processing method which uses K-means and the
Nearest Neighbor method, and the second is based on a non-linear
processing method which uses neural networks. Using motor sounds
in these systems, I successfully recognized 86.67% of the motor
sounds with the linear processing system and 97.78% with the
non-linear processing system.
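A minimal sketch of the linear-processing path (a K-means codebook per class followed by a nearest-neighbour decision) on hypothetical feature vectors; the actual acoustic features and recordings used in the paper are not reproduced.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
# Hypothetical feature vectors (e.g. spectral features) for normal/abnormal motor sounds.
normal = rng.normal(loc=0.0, scale=1.0, size=(50, 8))
abnormal = rng.normal(loc=2.0, scale=1.0, size=(50, 8))
X = np.vstack([normal, abnormal])
y = np.array([0] * 50 + [1] * 50)

# K-means compresses each class into a small codebook of prototype vectors,
# and a nearest-neighbour rule classifies new sounds against those prototypes.
prototypes, labels = [], []
for cls in (0, 1):
    km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X[y == cls])
    prototypes.append(km.cluster_centers_)
    labels += [cls] * 4
clf = KNeighborsClassifier(n_neighbors=1).fit(np.vstack(prototypes), labels)

test = rng.normal(loc=2.0, scale=1.0, size=(5, 8))   # unseen "abnormal" sounds
print(clf.predict(test))                              # expected: mostly class 1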
Digital Article Identifier (DOI):
19
11397
A Security Module for Car Appliances
Abstract: In this paper we discuss a security module for car
appliances that prevents theft and illegal use in other cars. We
propose an open structure including authentication and encryption,
embedding a security module in each appliance to protect it. Illegally
moving and using a car appliance equipped with the security module
without permission will render the appliance useless. This paper also
presents the component identification and deals with the relevant
procedures. Recovery from damage caused by a burglar is low in
cost. We expect this paper to offer new business opportunities to the
automotive and technology industries.
Digital Article Identifier (DOI):
18
6706
A Model-following Adaptive Controller for Linear/Nonlinear Plants using Radial Basis Function Neural Networks
Abstract: In this paper, we propose a method to design a
model-following adaptive controller for linear/nonlinear plants.
Radial basis function neural networks (RBF-NNs), which are known
for their stable learning capability and fast training, are used to
identify linear/nonlinear plants. Simulation results show that the
proposed method is effective in controlling both linear and nonlinear
plants with disturbance in the plant input.
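A minimal sketch of identifying a nonlinear plant with an RBF network whose output weights are fitted by least squares; the plant, the fixed centres and the kernel width are illustrative assumptions, and the model-following adaptive control law itself is not reproduced.

import numpy as np

def rbf_features(x, centers, width):
    """Gaussian radial basis activations for 1-D inputs."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

# Hypothetical nonlinear plant to identify: y = sin(x) + 0.1*x plus measurement noise.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 200)
y = np.sin(x) + 0.1 * x + 0.02 * rng.standard_normal(200)

centers = np.linspace(-3, 3, 15)             # fixed centres spread over the input range
Phi = rbf_features(x, centers, width=0.5)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # output weights fitted by least squares

x_test = np.linspace(-3, 3, 5)
y_hat = rbf_features(x_test, centers, 0.5) @ w
print(np.round(y_hat, 3))                    # RBF-NN approximation of the plant output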
Digital Article Identifier (DOI):
17
11165
The Development of Taiwanese Electronic Medical Record Systems Evaluation Instrument
Abstract: This study used Item Analysis, Exploratory Factor
Analysis (EFA), and Reliability Analysis (Cronbach's α) to
examine the questions selected by the Delphi method, based on the
concept of the “socio-technical system (STS)” and a user-centered
perspective. A structured questionnaire with seventy-four questions,
categorized into nine dimensions (healthcare
environment, organization behaviour, system quality, medical data
quality, service quality, safety quality, user usage, user satisfaction,
and organization net benefits), was developed to evaluate EMR systems in the
Taiwanese healthcare environment.
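The reliability analysis above uses Cronbach's α. A minimal sketch of the standard formula, assuming `scores` is a respondents-by-items array of questionnaire answers (the variable name is illustrative):

    import numpy as np

    def cronbach_alpha(scores):
        """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]                         # number of items
        item_vars = scores.var(axis=0, ddof=1)      # variance of each item
        total_var = scores.sum(axis=1).var(ddof=1)  # variance of the respondents' total scores
        return k / (k - 1) * (1.0 - item_vars.sum() / total_var)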
Digital Article Identifier (DOI):
16
13404
Creating Streamtubes Based on Mass Conservative Streamlines
Abstract: Streamtubes are used to visualize the expansion, contraction,
and various other properties of a fluid flow. They are useful in fluid
mechanics, engineering, and geophysics. The streamtube constructed
in this paper reveals only the flow expansion rate along a streamline.
Based on mass conservative streamlines, we show how to
construct the streamtube.
Digital Article Identifier (DOI):
15
6908
2D Fracture Analysis of the First Compression Piston Ring
Abstract: The incidence of mechanical fracture in
automobile piston rings prompted the development of a fracture analysis
method for this case. The three rings (two compression rings and one
oil ring) were smashed into several parts during the power test (after
manufacturing the engine), causing the piston and liner to be damaged.
Radial and oblique cracking occurred on the failed piston rings.
The aim of the fracture mechanics simulations presented in this paper
was the calculation of particular effective fracture mechanics
parameters, such as J-integrals and stress intensity factors. Crack
propagation angles were calculated as well. A two-dimensional
fracture analysis of the first compression ring has been developed in
this paper using the ABAQUS CAE6.5-1 software. Moreover, SEM
fractography of the fracture surfaces was carried out and is discussed in
this paper. The results of the numerical calculations constitute the basis for
further research on the real object.
Digital Article Identifier (DOI):
14
15167
GA based Optimal Sizing and Placement of Distributed Generation for Loss Minimization
Abstract: This paper addresses a novel technique for the placement of distributed generation (DG) in electric power systems. A GA-based approach for the sizing and placement of DG, aimed at minimizing system power loss under different loading conditions, is explained. Minimal system power loss is obtained under voltage and line loading constraints. The proposed strategy is applied to power distribution systems, and its effectiveness is verified through simulation results on 16-bus, 37-bus, and 75-bus test systems.
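One way to picture the GA search is the skeleton below. The chromosome encoding, the operators, and especially the `system_loss` stand-in (which in the paper would be a load-flow calculation with voltage and line-loading penalty terms) are illustrative assumptions, not the authors' implementation.

    import random

    N_BUSES, DG_MAX_MW = 37, 5.0            # assumed test-system size and DG limit
    POP, GENS, PC, PM = 40, 100, 0.8, 0.1   # assumed GA settings

    def system_loss(bus, size_mw):
        """Toy stand-in for a load-flow loss evaluation; replace with a real power flow
        plus voltage/line-loading penalty terms."""
        return (size_mw - 2.5) ** 2 + abs(bus - 18) * 0.01

    def random_individual():
        return (random.randrange(1, N_BUSES + 1), random.uniform(0.0, DG_MAX_MW))

    def ga_place_dg():
        pop = [random_individual() for _ in range(POP)]
        for _ in range(GENS):
            pop.sort(key=lambda ind: system_loss(*ind))    # lower loss is fitter
            nxt = pop[:2]                                  # elitism
            while len(nxt) < POP:
                a, b = random.sample(pop[:POP // 2], 2)    # parents from the better half
                child = (a[0], b[1]) if random.random() < PC else a
                if random.random() < PM:                   # mutate the DG bus
                    child = (random.randrange(1, N_BUSES + 1), child[1])
                nxt.append(child)
            pop = nxt
        return min(pop, key=lambda ind: system_loss(*ind))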
Digital Article Identifier (DOI):
13
12966
Artifacts in Spiral X-ray CT Scanners: Problems and Solutions
Abstract: Artifacts are among the most important factors
degrading CT image quality and play an important role in
diagnostic accuracy. In this paper, some artifacts that typically appear in
spiral CT are introduced. The different factors that cause these artifacts,
such as the patient, the equipment, and the interpolation algorithm, are
discussed, and new developments and image processing algorithms to
prevent or reduce them are presented.
Digital Article Identifier (DOI):
12
13222
Novel Ridge Orientation Based Approach for Fingerprint Identification Using Co-Occurrence Matrix
Abstract: In this paper we use the property of the co-occurrence
matrix for finding parallel lines in binary images for fingerprint
identification. In our proposed algorithm, we reduce the noise by
filtering the fingerprint images and then convert the fingerprint
images to binary images using a proper threshold. Next, we divide
the binary images into regions having parallel lines in the same
direction. The lines in each region have a specific angle that can be
used for comparison. This method is simple, performs the
comparison step quickly, and is robust in the presence of
noise.
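The co-occurrence property used above can be sketched as follows: for a binary image and a displacement (dr, dc), count how often pixel value i co-occurs with value j; in a region of parallel ridges the counts concentrate on (0,0) and (1,1) when the displacement follows the ridge direction. The implementation below is an illustrative sketch, not the authors' code.

    import numpy as np

    def cooccurrence(binary_img, dr, dc):
        """2x2 co-occurrence matrix of a 0/1 image for displacement (dr, dc)."""
        img = np.asarray(binary_img, dtype=int)
        h, w = img.shape
        a = img[max(0, -dr):h - max(0, dr), max(0, -dc):w - max(0, dc)]   # reference pixels
        b = img[max(0, dr):h - max(0, -dr), max(0, dc):w - max(0, -dc)]   # displaced pixels
        mat = np.zeros((2, 2), dtype=int)
        for i in (0, 1):
            for j in (0, 1):
                mat[i, j] = np.sum((a == i) & (b == j))
        return mat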
Digital Article Identifier (DOI):
11
3488
Development of Subjective Measures of Interestingness: From Unexpectedness to Shocking
Abstract: Knowledge Discovery in Databases (KDD) is the
process of extracting previously unknown but useful and significant
information from massive volumes of data. Data mining is
a stage in the overall KDD process that applies an algorithm to
extract interesting patterns. Usually, such algorithms generate a huge
volume of patterns. These patterns have to be evaluated using
interestingness measures that reflect the user's requirements.
Interestingness is defined in two ways: (i) objective measures
and (ii) subjective measures. Objective measures such as support and
confidence extract meaningful patterns based on the structure of the
patterns, while subjective measures such as unexpectedness and
novelty reflect the user's perspective. In this paper, we review the
more widespread and successful subjective measures and propose
a new subjective measure of interestingness, namely shocking.
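The objective measures named above have simple set-based definitions; a minimal sketch for a rule X → Y over a list of transactions (each a Python set of items):

    def support(transactions, itemset):
        """Fraction of transactions containing every item of `itemset`."""
        return sum(itemset <= t for t in transactions) / len(transactions)

    def confidence(transactions, x, y):
        """support(X ∪ Y) / support(X): how often Y appears given X."""
        return support(transactions, x | y) / support(transactions, x)

    # Example: with transactions = [{"a", "b"}, {"a"}, {"a", "b", "c"}],
    # support({"a", "b"}) = 2/3 and confidence({"a"} -> {"b"}) = 2/3.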
Digital Article Identifier (DOI):
10
12152
Application of Extreme Learning Machine Method for Time Series Analysis
Abstract: In this paper, we study the application of the Extreme
Learning Machine (ELM) algorithm for single-layer feedforward
neural networks to non-linear chaotic time series problems. In this
algorithm, the input weights and the hidden layer biases are randomly
chosen. The ELM formulation leads to solving a system of linear
equations in terms of the unknown weights connecting the hidden
layer to the output layer. The solution of this general system of
linear equations is obtained using the Moore-Penrose generalized
pseudoinverse. For the study of the method we
consider the time series generated by the Mackey-Glass delay
differential equation with different time delays, the Santa Fe A series, and the
UCR heart beat rate ECG time series. For the sigmoid,
sin, and hardlim activation functions, the optimal values of the
memory order and the number of hidden neurons that give the
best prediction performance in terms of root mean square error are
determined. It is observed that the results obtained are in close
agreement with the exact solutions of the problems considered,
which clearly shows that ELM is a very promising alternative
method for time series prediction.
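The training step described above can be sketched in a few lines: fix random input weights and biases, form the sigmoid hidden-layer matrix H, and solve for the output weights with the Moore-Penrose pseudoinverse. The sizes and the seed are illustrative values.

    import numpy as np

    def train_elm(X, y, n_hidden=50, seed=0):
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights (kept fixed)
        b = rng.standard_normal(n_hidden)                 # random hidden biases (kept fixed)
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))            # sigmoid hidden-layer outputs
        beta = np.linalg.pinv(H) @ y                      # least-squares output weights
        return W, b, beta

    def predict_elm(X, W, b, beta):
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
        return H @ beta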
Digital Article Identifier (DOI):
9
1335
Multi Objective Micro Genetic Algorithm for Combine and Reroute Problem
Abstract: Several approaches, such as linear programming, network
modeling, greedy heuristics, and decision support systems, are well-known
approaches to solving the irregular airline operation problem. This paper
presents an alternative approach based on a Multi Objective Micro Genetic
Algorithm. The aim of this research is to introduce the Multi
Objective Micro Genetic Algorithm as a tool to solve the irregular airline
operation, combine-and-reroute problem. The experimental results indicate
that the model can obtain optimal solutions within a few seconds.
Digital Article Identifier (DOI):
8
15786
Implementation of IEEE 802.15.4 Packet Analyzer
Abstract: A packet analyzer is a tool for debugging sensor
network systems and is convenient for developers. In this paper, we
introduce a new packet analyzer based on an embedded system. The
proposed packet analyzer is compatible with IEEE 802.15.4, the
wireless communication standard suited to sensor networks,
and is available for remote control through a server-client scheme
based on the Ethernet interface. To confirm the operation of the
packet analyzer, we developed two types of sensor nodes based
on the PIC4620 and ATmega128L microprocessors and tested the
functions of the proposed packet analyzer by capturing the packets
from the sensor nodes.
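A packet analyzer of this kind has to decode the IEEE 802.15.4 MAC header it captures. The following minimal sketch (not taken from the paper's implementation) decodes the 16-bit little-endian Frame Control field and the sequence number according to the standard's MAC header layout; addressing-field and payload handling are omitted.

    FRAME_TYPES = {0: "beacon", 1: "data", 2: "ack", 3: "mac command"}

    def parse_mac_header(frame: bytes):
        fcf = frame[0] | (frame[1] << 8)           # Frame Control field, little-endian
        return {
            "frame_type":      FRAME_TYPES.get(fcf & 0x7, "reserved"),
            "security":        bool(fcf & 0x0008),
            "frame_pending":   bool(fcf & 0x0010),
            "ack_request":     bool(fcf & 0x0020),
            "pan_id_compress": bool(fcf & 0x0040),
            "dest_addr_mode":  (fcf >> 10) & 0x3,  # 0 = none, 2 = short, 3 = extended
            "src_addr_mode":   (fcf >> 14) & 0x3,
            "sequence_number": frame[2],
        }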
Digital Article Identifier (DOI):
7
8236
Learning Classifier Systems Approach for Automated Discovery of Crisp and Fuzzy Hierarchical Production Rules
Abstract: This research presents a system for the post-processing of
data that takes mined flat rules as input and discovers crisp as well as
fuzzy hierarchical structures using the Learning Classifier System
approach. A Learning Classifier System (LCS) is a machine
learning technique that combines evolutionary computing,
reinforcement learning, supervised or unsupervised learning, and
heuristics to produce adaptive systems. An LCS learns by interacting
with an environment from which it receives feedback in the form of a
numerical reward. Learning is achieved by trying to maximize the
amount of reward received. A crisp description of a concept usually
cannot represent human knowledge completely and practically. In the
proposed Learning Classifier System, the initial population is constructed
as a random collection of HPR-trees (related production rules), and
crisp/fuzzy hierarchies are evolved. A fuzzy subsumption relation is
suggested for the proposed system, and a suitable fitness function based
on the Subsumption Matrix (SM) is proposed. Suitable genetic
operators are proposed for the chosen chromosome representation.
To implement reinforcement, a suitable reward and
punishment scheme is also proposed. Experimental results are
presented to demonstrate the performance of the proposed system.
Digital Article Identifier (DOI):
6
9633
Performance Evaluation and Modeling of a Conical Plunging Jet Aerator
Abstract: Aeration by a plunging water jet is an energetically more attractive way to effect oxygen transfer than conventional oxygenation systems. In the present study, a new type of conical plunging aeration device is fabricated to generate hollow inclined plunging jets (jet plunge angle of π/3) and to investigate its oxygen transfer capacity. The results suggest that the volumetric oxygen-transfer coefficient and the oxygen-transfer efficiency of the conical plunging jet aerator are competitive with those of other types of aeration systems. Relationships of the volumetric oxygen-transfer coefficient with jet power per unit volume and jet parameters are also proposed. The suggested relationships predict the volumetric oxygen-transfer coefficient within a scatter of ±15%. Further, the application of Support Vector Machines to the experimental data revealed its utility in the prediction of the volumetric oxygen-transfer coefficient and the development of conical plunging jet aerators.
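The Support Vector Machine step can be illustrated with a small regression sketch using scikit-learn's SVR. The feature set (jet velocity, jet length, nozzle diameter), the toy numbers, and the hyperparameters are assumptions for illustration only and are not the paper's data or settings.

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # X: assumed jet parameters per run; y: measured volumetric oxygen-transfer coefficient (1/s)
    X = np.array([[2.1, 0.30, 0.010],
                  [3.4, 0.45, 0.010],
                  [4.8, 0.60, 0.012]])     # toy placeholder values
    y = np.array([0.012, 0.021, 0.034])    # toy placeholder values

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.001))
    model.fit(X, y)
    print(model.predict([[3.0, 0.40, 0.010]]))   # predict the coefficient for a new jet setting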
Digital Article Identifier (DOI):
5
1191
An Analysis of Collapse Mechanism of Thin-Walled Circular Tubes Subjected to Bending
Abstract: Circular tubes have been widely used as structural
members in engineering applications. Therefore, their collapse behavior
has been studied for many decades, focusing on their energy absorption
characteristics. In order to predict the collapse behavior of such members,
one could rely on finite element codes or experiments.
These tools are helpful and highly accurate, but they are costly and require
extensive running time. Therefore, an approximate model of the tube
collapse mechanism is an attractive alternative for the early design stage. This
paper aims to develop a closed-form solution for a thin-walled
circular tube subjected to bending. It extends the model of Elchalakani et
al. (Int. J. Mech. Sci. 2002; 44:1117-1143) to include the
rate of energy dissipation of the rolling hinge in the circumferential
direction. The 3-D geometrical collapse mechanism was analyzed by
adding oblique hinge lines along the longitudinal tube within the
length of the plastically deforming zone. The model is based on the
principle of energy rate conservation: the rates of internal
energy dissipation were calculated for each hinge line, defined in
terms of the velocity field. Inextensional deformation and
perfectly plastic material behavior were assumed in the derivation of the
deformation energy rate. The analytical results were compared with
experimental results obtained from a number of
tubes having various D/t ratios. Good agreement between analysis
and experiment was achieved.
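The derivation rests on the energy rate balance; stated only as the general principle (the paper's specific hinge-line terms are not reproduced here), the external work rate of the bending moment equals the total internal dissipation rate of the hinge lines:

    \dot{W}_{\mathrm{ext}} = M\,\dot{\theta} = \sum_{i} \dot{E}_{\mathrm{int},i}

where M is the applied bending moment, \dot{\theta} the rotation rate of the deforming zone, and \dot{E}_{\mathrm{int},i} the dissipation rate of the i-th hinge line.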
Digital Article Identifier (DOI):
4
4854
Enhancing K-Means Algorithm with Initial Cluster Centers Derived from Data Partitioning along the Data Axis with the Highest Variance
Abstract: In this paper, we propose an algorithm to compute
initial cluster centers for K-means clustering. The data in a cell is
partitioned using a cutting plane that divides the cell into two smaller cells.
The plane is perpendicular to the data axis with the highest variance
and is designed to reduce the sum of squared errors of the two cells as
much as possible, while at the same time keeping the two cells as far apart
as possible. Cells are partitioned one at a time until the number of
cells equals the predefined number of clusters, K. The centers of
the K cells become the initial cluster centers for K-means. The
experimental results suggest that the proposed algorithm is effective,
converging to better clustering results than those of the random
initialization method. The research also indicates that the proposed
algorithm greatly improves the likelihood of every cluster
containing some data.
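A minimal sketch of the initialization idea: repeatedly split the cell with the largest spread along its highest-variance axis until K cells exist, then use the cell centroids as seeds. The split-selection and cut-position rules below are simplifications of the paper's sum-of-squared-error criterion, and the sketch assumes each cut actually separates the data (non-degenerate cells).

    import numpy as np

    def initial_centers(X, k):
        cells = [X]
        while len(cells) < k:
            # split the cell with the largest total within-cell spread
            i = max(range(len(cells)),
                    key=lambda j: cells[j].var(axis=0).sum() * len(cells[j]))
            cell = cells.pop(i)
            axis = int(np.argmax(cell.var(axis=0)))    # axis with the highest variance
            cut = cell[:, axis].mean()                 # cutting-plane position
            left, right = cell[cell[:, axis] <= cut], cell[cell[:, axis] > cut]
            cells += [left, right]
        return np.vstack([c.mean(axis=0) for c in cells])   # centroids seed K-means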
Digital Article Identifier (DOI):
3
12700
Searching for Similar Informational Articles in the Internet Channel
Abstract: In terms of total online audience, newspapers are the most successful form of online content to date. The online audience for newspapers continues to demand higher-quality services, including personalized news services. News providers should be able to offer suitable users appropriate content. In this paper, a news article recommender system is suggested based on a user's preferences when he or she visits an Internet news site and reads the published articles. This system helps raise user satisfaction and increase customer loyalty toward the content provider.
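One common way to realize such a similar-article recommender, shown here as a hedged sketch rather than the paper's method, is to represent articles as TF-IDF vectors and rank candidates by cosine similarity to what the user has already read.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def recommend(read_articles, candidate_articles, top_n=5):
        """Return the top_n candidate articles most similar to anything the user has read."""
        vec = TfidfVectorizer(stop_words="english")
        matrix = vec.fit_transform(candidate_articles + read_articles)
        cand = matrix[:len(candidate_articles)]
        read = matrix[len(candidate_articles):]
        scores = cosine_similarity(cand, read).max(axis=1)   # best match to any read article
        ranked = scores.argsort()[::-1][:top_n]
        return [(candidate_articles[i], float(scores[i])) for i in ranked]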
Digital Article Identifier (DOI):
2
9585
Using Quality Models to Evaluate National ID Systems: The Case of the UAE
Abstract: This paper presents findings from an evaluation study carried out to review the UAE national ID card software. The paper consults the relevant literature to explain many of the concepts and frameworks discussed herein. The findings of the evaluation work, which was primarily based on the ISO 9126 standard for software quality measurement, highlighted many practical areas that, if taken into account, are argued to increase the likelihood of success of similar system implementation projects.
Digital Article Identifier (DOI):
1
1287
Data Oriented Modeling of Uniform Random Variable: Applied Approach
Abstract: In this paper we introduce a new data-oriented model
of the uniform random variable that is well matched with computing systems. Owing to this conformity with the structure of current computers, the model can be used efficiently in statistical inference.
Digital Article Identifier (DOI):