175
9594
New Technologies for Modeling of Gas Turbine Cooled Blades
Abstract: In contrast to existing methods, which do not take multiconnectivity in the broad sense of the term into account, we develop mathematical models and highly effective combined (BIEM and FDM) numerical methods for calculating the stationary and quasi-stationary temperature field of the profile part of a convectively cooled blade, from the point of view of implementation on a PC. The theoretical substantiation of these methods is given by appropriate theorems. To this end, converging quadrature processes have been developed and error estimates have been obtained in terms of A. Zygmund moduli of continuity. For the visualization of profiles, the least-squares method with automatic conjugation, splines, smooth replenishment and neural nets are used. Boundary conditions of heat exchange are determined from the solution of the corresponding integral equations and from empirical relationships. The reliability of the designed methods is confirmed by computational and experimental investigations of the heat and hydraulic characteristics of a gas turbine first-stage nozzle blade.
Digital Article Identifier (DOI):
174
10826
Numerical Solution of Linear Ordinary Differential Equations in Quantum Chemistry by Clenshaw Method
Abstract: As we know, most differential equations describing physical phenomena cannot be solved analytically. Even when the series method applies, it sometimes needs an appropriate change of variable, and even when a closed-form solution exists, it may be so complicated that using it to obtain an image or to examine the structure of the system is impossible. For example, for the Schrodinger equation we come to a three-term recursion relation, and working with it takes at least a little time to get a series solution [6]. For this reason a change of variable is used, and when we consider the orbital angular momentum [1], a further equation must be solved; as we can observe, working with these equations is tedious. In this paper, after introducing the Clenshaw method, which is a kind of spectral method, we try to solve some equations of this type.
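The flavor of such a spectral method can be shown on a toy problem. The sketch below is our illustration, not the paper's implementation; it uses Chebyshev collocation, a close relative of Clenshaw's Chebyshev-series approach, to solve the model boundary value problem u''(x) = e^x, u(-1) = u(1) = 0, whose exact solution is u(x) = e^x - x sinh(1) - cosh(1):

```python
import numpy as np

def cheb(N):
    # Chebyshev differentiation matrix on the N+1 Gauss-Lobatto
    # points x_j = cos(pi*j/N) (the classic spectral construction).
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1); c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # diagonal via row-sum trick
    return D, x

def solve_bvp(N=16):
    # Solve u'' = exp(x), u(-1) = u(1) = 0, by collocation:
    # square the differentiation matrix and drop boundary rows/columns.
    D, x = cheb(N)
    D2 = (D @ D)[1:N, 1:N]
    u = np.zeros(N + 1)
    u[1:N] = np.linalg.solve(D2, np.exp(x[1:N]))
    return x, u
```

With N = 16 the error against the exact solution is already near machine precision, which is the practical appeal of spectral methods over term-by-term series manipulation.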
Digital Article Identifier (DOI):
173
9707
Fast and Accurate Control Chart Pattern Recognition using a New Cluster-k-Nearest Neighbor
Abstract: By taking advantage of both k-NN, which is highly accurate, and the K-means cluster method, which is able to reduce classification time, we introduce Cluster-k-Nearest Neighbor: a "variable k"-NN that works with the centroid, or mean point, of each subclass generated by a clustering algorithm. In general the K-means cluster algorithm is not stable in terms of accuracy; for that reason we develop another algorithm for clustering our space which gives higher accuracy than K-means, fewer subclasses, stability, and a classification time bounded with respect to the data size. We find between 96% and 99.7% accuracy in the classification of 6 different types of time series using the K-means cluster algorithm, and 99.7% using the new clustering algorithm.
Digital Article Identifier (DOI):
172
14503
Acceptance Single Sampling Plan with a Fuzzy Parameter Using the Poisson Distribution
Abstract: The purpose of this paper is to present the acceptance single sampling plan for the case in which the fraction of nonconforming items is a fuzzy number, modeled with the fuzzy Poisson distribution. We show that the operating characteristic (OC) curve of the plan is a band with upper and lower bounds whose width depends on the ambiguity of the proportion parameter in the lot when the sample size and acceptance number are fixed. Finally, we complete the discussion with a numerical example and compare the OC bands obtained with the binomial distribution to those obtained with the Poisson distribution.
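For a crisp fraction nonconforming p the OC value is P(D <= c) with D ~ Poisson(np); since this probability decreases in p, a fuzzy proportion spanning [p_low, p_high] produces exactly the OC band described above. A minimal sketch (our notation, not the paper's):

```python
from math import exp, factorial

def oc_poisson(p, n, c):
    # Probability of acceptance under a Poisson model:
    # P(D <= c) with D ~ Poisson(n * p).
    lam = n * p
    return sum(exp(-lam) * lam**d / factorial(d) for d in range(c + 1))

def oc_band(p_low, p_high, n, c):
    # OC is decreasing in p, so the fuzzy band is [OC(p_high), OC(p_low)].
    return oc_poisson(p_high, n, c), oc_poisson(p_low, n, c)
```

For example, with n = 100 and c = 2, a fuzzy proportion [0.005, 0.02] yields a band of acceptance probabilities rather than a single OC value.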
Digital Article Identifier (DOI):
171
392
The Application of Homotopy Method In Solving Electrical Circuit Design Problem
Abstract: This paper describes a simple implementation of the homotopy (also called continuation) algorithm for determining the proper resistance of a resistor to dissipate energy at a specified rate in an electric circuit. The homotopy algorithm can be considered a development of classical methods in numerical computing such as the Newton-Raphson and fixed-point methods. In homotopy methods, an embedding parameter is used to control the convergence. The method proposed in this work utilizes a special homotopy called the Newton homotopy. A numerical example solved in MATLAB is given to show the effectiveness of the proposed method.
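A Newton homotopy H(x, t) = f(x) - (1 - t) f(x0) deforms a problem that the initial guess already solves (t = 0) into the target problem f(x) = 0 (t = 1). The sketch below traces t with Newton corrections at each step; the circuit (a source V with internal resistance r, and a load R sized to dissipate a target power) is our own hypothetical example, not the paper's:

```python
import numpy as np

def newton_homotopy(f, df, x0, steps=20, newton_iters=5):
    # H(x, t) = f(x) - (1 - t) * f(x0); march t from 0 to 1,
    # applying a few Newton corrections at each continuation step.
    fx0 = f(x0)
    x = x0
    for t in np.linspace(0.0, 1.0, steps + 1)[1:]:
        for _ in range(newton_iters):
            x = x - (f(x) - (1 - t) * fx0) / df(x)
    return x

# Hypothetical circuit: pick the load R so that it dissipates
# P_target watts: f(R) = V^2 * R / (R + r)^2 - P_target.
V, r, P_target = 12.0, 1.0, 20.0
f = lambda R: V**2 * R / (R + r)**2 - P_target
df = lambda R: V**2 * (r - R) / (R + r)**3
R = newton_homotopy(f, df, x0=2.0)
```

Starting from x0 = 2, the continuation path tracks the root R = 5 ohms of 5R^2 - 26R + 5 = 0 (the other root, 0.2 ohms, lies on a different branch).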
Digital Article Identifier (DOI):
170
3374
Panoramic Sensor Based Blind Spot Accident Prevention System
Abstract: Many automotive accidents are due to blind spots and driver inattentiveness. A blind spot is the area that is invisible from the driver's viewpoint without head rotation. Several methods are available for assisting drivers. The simplest are rear-view mirrors and wide-angle lenses; however, these require human attention, so their effectiveness depends on the driver. Another, automated, approach makes use of sensors such as sonar or radar to gather range information, which is then processed and used for detecting an impending collision. The disadvantages of such systems are low angular resolution and limited sensing volume. This paper presents a panoramic-sensor-based automotive vehicle monitoring system.
Digital Article Identifier (DOI):
169
14100
Detecting and Measuring Fabric Pills Using Digital Image Analysis
Abstract: In this paper a novel method is presented for evaluating fabric pills using digital image processing techniques. The work provides a technique for detecting pills and also for measuring their heights, surfaces and volumes. Measuring the intensity of defects by human vision is an inaccurate method for quality control, and this problem motivated employing digital image processing for detecting defects of the fabric surface. Earlier systems were limited to measuring the surface of defects, but in the presented method the height and volume of defects are also measured, which leads to more accurate quality control. An algorithm was developed to first find the pills and then measure their average intensity using three criteria: height, surface and volume. The results showed a meaningful relation between the number of rotations and the quality of the pilled fabrics.
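A minimal version of the measurement step can be sketched as follows, assuming the fabric has already been converted to a height map (our assumption, not the paper's exact procedure): threshold the map, find connected pill regions, and report area, peak height and volume per region.

```python
import numpy as np
from collections import deque

def measure_pills(height_map, thresh):
    # height_map: 2-D array of surface heights; pixels above `thresh`
    # count as pill material.  For each 4-connected region we report
    # surface area (pixel count), peak height, and volume (sum of
    # heights above the threshold plane).
    mask = height_map > thresh
    seen = np.zeros_like(mask, dtype=bool)
    H, W = mask.shape
    pills = []
    for i in range(H):
        for j in range(W):
            if mask[i, j] and not seen[i, j]:
                q = deque([(i, j)])
                seen[i, j] = True
                area, peak, vol = 0, 0.0, 0.0
                while q:                      # BFS over one region
                    y, x = q.popleft()
                    area += 1
                    peak = max(peak, float(height_map[y, x]))
                    vol += float(height_map[y, x]) - thresh
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                pills.append({"area": area, "height": peak, "volume": vol})
    return pills
```

Averaging these per-region measurements over an image gives exactly the kind of height/surface/volume criteria the abstract describes.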
Digital Article Identifier (DOI):
168
5433
A Novel Computer Vision Method for Evaluating Deformations of Fibers Cross Section in False Twist Textured Yarns
Abstract: Over the last five decades, textured yarns of polyester fiber produced by the false-twist method have been the most important and most mass-produced man-made fibers. Many parameters of the cross section affect the physical and mechanical properties of textured yarns: surface area, perimeter, equivalent diameter, large diameter, small diameter, convexity, stiffness, eccentricity, and hydraulic diameter. These parameters were evaluated by digital image processing techniques. To find trends between production criteria and the evaluated cross-section parameters, three criteria of the production line (temperature, drafting ratio, and D/Y ratio) were adjusted and different types of yarns were produced. Finally, the relations between the production criteria and the cross-section parameters were considered. The results showed that the presented technique can recognize and measure the parameters of the fiber cross section with acceptable accuracy. The optimum adjustment conditions were also estimated from the results of the image analysis.
Digital Article Identifier (DOI):
167
10199
Time Series Forecasting Using Independent Component Analysis
Abstract: The paper presents a method for multivariate time series forecasting that uses Independent Component Analysis (ICA) as a preprocessing tool. The idea of the approach is to do the forecasting in the space of independent components (sources) and then to transform the results back to the original time series space. The forecasting can be done separately, and with a different method, for each component, depending on its time structure. The paper also reviews the main algorithms for independent component analysis in the case of instantaneous mixture models, using second- and higher-order statistics. The method has been applied in simulation to an artificial multivariate time series with five components, generated from three sources and a randomly generated mixing matrix.
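A compact sketch of the preprocessing step, using a minimal symmetric FastICA (one of several ICA algorithms of the kind the paper reviews; function names are ours): after unmixing, each recovered source can be forecast independently and the forecasts mixed back through the (pseudo-)inverse of the unmixing matrix.

```python
import numpy as np

def whiten(X):
    # X: (n_signals, n_samples). Remove the mean and decorrelate
    # to unit variance (a standard ICA preprocessing step).
    Xc = X - X.mean(axis=1, keepdims=True)
    cov = Xc @ Xc.T / Xc.shape[1]
    d, E = np.linalg.eigh(cov)
    W = E @ np.diag(d ** -0.5) @ E.T
    return W @ Xc, W

def fastica(X, n_iter=300, seed=0):
    # Minimal symmetric FastICA with the tanh nonlinearity.
    # Returns the estimated sources and the unmixing matrix
    # (which applies to the mean-removed data).
    Z, Wwhite = whiten(X)
    n = Z.shape[0]
    rng = np.random.default_rng(seed)
    B = rng.normal(size=(n, n))
    for _ in range(n_iter):
        G = np.tanh(B @ Z)
        Bnew = G @ Z.T / Z.shape[1] - np.diag((1 - G**2).mean(axis=1)) @ B
        U, _, Vt = np.linalg.svd(Bnew)   # symmetric decorrelation:
        B = U @ Vt                       # B <- (B B^T)^(-1/2) B
    return B @ Z, B @ Wwhite
```

Forecasting then happens per row of the returned source matrix (e.g., with an AR model chosen per component), and the forecast vector is mapped back with `np.linalg.pinv` of the unmixing matrix plus the removed mean.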
Digital Article Identifier (DOI):
166
5631
FHOJ: A New Java Benchmark Framework
Abstract: There are several existing Java benchmarks (application benchmarks, micro-benchmarks, and mixtures of both), such as Java Grande, SPECjvm98, CaffeineMark and HBench. But none of them deals with the behavior of multitasking operating systems, and as a result their outputs do not satisfy performance evaluation engineers. The behavior of a multitasking operating system depends on the schedule management it employs: different processes can have different priorities when sharing the same resources. Time measured from when an application starts until it finishes does not reflect the real time the system needs to run the program, so a new approach to this problem is needed. In this paper we therefore present a new Java benchmark, named the FHOJ benchmark, which directly deals with the multitasking behavior of a system. Our study shows that in some cases results from the FHOJ benchmark are far more reliable than those from some existing Java benchmarks.
Digital Article Identifier (DOI):
165
14157
Designing and Implementing an Innovative Course about World Wide Web, Based on the Conceptual Representations of Students
Abstract: The Internet is nowadays included in all national curricula of the elementary school. A comparative study of their goals leads to the conclusion that a complete curriculum should aim at students' acquisition of the abilities to navigate and search for information, and should additionally emphasize the evaluation of the information provided by the World Wide Web. In a constructivist knowledge framework, the design of a course has to take the conceptual representations of students into consideration. This paper presents the conceptual representations about the World Wide Web of eleven-year-old students attending the sixth grade of Greek elementary school, and their use in the design and implementation of an innovative course.
Digital Article Identifier (DOI):
164
12699
Identifying New Sequence Features for Exon-Intron Discrimination by Rescaled-Range Frameshift Analysis
Abstract: To identify discriminative sequence features between exons and introns, a new paradigm, rescaled-range frameshift analysis (RRFA), is proposed. By RRFA, two new sequence features, frameshift sensitivity (FS) and accumulative penta-mer complexity (APC), were discovered and further integrated into a new larger-scale feature, persistency in anti-mutation (PAM). Feature-validation experiments were performed on six model organisms to test the discriminative power. All the experimental results strongly support that FS, APC and PAM are distinguishing features between exons and introns. These newly identified sequence features provide new insights into the sequence composition of genes, and they have great potential to form a new basis for recognizing exon-intron boundaries in gene sequences.
Digital Article Identifier (DOI):
163
11579
An AHP-Delphi Multi-Criteria Usage Cases Model with Application to Citrogypsum Decisions, Case Study: Kimia Gharb Gostar Industries Company
Abstract: Today the advantage of biotechnology over other technologies, especially in environmental issues, is indisputable. Kimia Gharb Gostar Industries Company, the largest producer of citric acid in the Middle East, applies biotechnology to this end. Citrogypsum is a by-product of citric acid production and is considered a significant residue of this company. This paper summarizes citric acid production and the conditions of Citrogypsum production in the company, together with a definition of Citrogypsum and its applications worldwide. Based on this information and an evaluation of present conditions regarding Iran's need for Citrogypsum, the best priority was identified, with emphasis on strategy selection and proper programming for self-sufficiency. The Delphi technique was used to elicit expert opinions about criteria for evaluating the usages. The criteria identified by the experts were profitability, production capacity, degree of investment, marketability, ease of production and production time. The Analytic Hierarchy Process (AHP) and Expert Choice software were used to compare the alternatives on the criteria derived from the Delphi process.
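The AHP step can be illustrated with a few lines of NumPy (a generic textbook computation, not the company's actual data): the priority vector is the principal eigenvector of the pairwise-comparison matrix, and Saaty's consistency ratio checks whether the expert judgments are coherent.

```python
import numpy as np

def ahp_weights(M):
    # Principal right eigenvector of a pairwise-comparison matrix,
    # normalized to sum to 1 (the classic AHP priority vector).
    vals, vecs = np.linalg.eig(M)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    v = np.abs(v)
    return v / v.sum()

def consistency_ratio(M):
    # Saaty's CR = CI / RI with CI = (lambda_max - n) / (n - 1).
    n = M.shape[0]
    lam = np.max(np.real(np.linalg.eigvals(M)))
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]   # Saaty's random indices
    return ci / ri
```

A perfectly consistent matrix (every entry M[i][j] equal to w[i]/w[j]) recovers the weights exactly and has a consistency ratio of zero; in practice a CR below 0.1 is the usual acceptance rule.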
Digital Article Identifier (DOI):
162
15
The Impact of Germination and In Vitro Digestion on the Formation of Angiotensin Converting Enzyme (ACE) Inhibitory Peptides from Lentil Proteins Compared to Whey Proteins
Abstract: Biologically active peptides are of particular interest in food science and human nutrition because they have been shown to play several physiological roles. In this study, in vitro gastrointestinal digestion of lentil and whey proteins produced high angiotensin-I converting enzyme (ACE) inhibitory activity, with 75.5±1.9% and 91.4±2.3% inhibition, respectively. High ACE inhibitory activity was also observed in lentil after 5 days of germination (84.3±1.2%). Fractionation by reverse-phase chromatography gave inhibitory activities as high as 86.3±2.0% for lentil, 94.8±1.8% for whey and 93.7±1.7% for lentil at the 5th day of germination. Further purification by HPLC resulted in several inhibitory peptides with IC50 values ranging from 0.064 to 0.164 mg/ml. These results demonstrate that lentil proteins are a good source of peptides with ACE inhibitory activity that can be released by germination or gastrointestinal digestion. Despite the lower bioactivity in comparison with whey proteins, the incorporation of lentil proteins in functional food formulations and natural drugs looks promising.
Digital Article Identifier (DOI):
161
15471
DD Models for Reports Building
Abstract: In general, reports are a form of representing data in such a way that the user gets the information he needs. They can be built in various ways, from the simplest ("select from") to the most complex (results derived from different sources/tables with complex formulas applied). Furthermore, the rules for calculations can be written as hard-coded program logic or built into the database for use by dynamic code. This paper introduces two types of reports defined in the DB structure. The main goal is to manage calculations in an optimal way, keeping the maintenance of reports as simple and smooth as possible.
Digital Article Identifier (DOI):
160
12230
Trust Enhanced Dynamic Source Routing Protocol for Adhoc Networks
Abstract: Nodes in a mobile ad hoc network (MANET) do not rely on a central infrastructure but relay packets originated by other nodes. Mobile ad hoc networks can work properly only if the participating nodes collaborate in routing and forwarding; for individual nodes, though, it might be advantageous not to collaborate. In this conceptual paper we propose a new approach, based on relationships among the nodes, which makes them cooperate in an ad hoc environment. A trust unit is used to calculate the trust values of each node in the network, and the calculated trust values are used by a relationship estimator to determine the relationship status of nodes. The proposed enhanced protocol was compared with the standard DSR protocol and the results were analyzed using the network simulator ns-2.
Digital Article Identifier (DOI):
159
2428
Optimal Solution of Constraint Satisfaction Problems
Abstract: An optimal solution to a large number of constraint satisfaction problems can be found using the technique of substitution and elimination of variables, analogous to the technique used to solve systems of equations. A decision function f(A) = max(A²) is used to determine which variables to eliminate. The algorithm can be expressed in six lines and is remarkable in both its simplicity and its ability to find an optimal solution. However, it is inefficient in that it needs to square the updated A matrix after each variable elimination. To overcome this inefficiency the algorithm is analyzed, and it is shown that the A matrix only needs to be squared once, at the first step of the algorithm, and then incrementally updated for subsequent steps, resulting in a significant improvement and an algorithm complexity of O(n³).
Digital Article Identifier (DOI):
158
11945
Mobile Robot Path Planning Utilizing Probability Recursive Function
Abstract: In this work a software simulation model has been
proposed for two driven wheels mobile robot path planning; that can
navigate in dynamic environment with static distributed obstacles.
The work involves utilizing Bezier curve method in a proposed N
order matrix form; for engineering the mobile robot path. The Bezier
curve drawbacks in this field have been diagnosed. Two directions:
Up and Right function has been proposed; Probability Recursive
Function (PRF) to overcome those drawbacks.
PRF functionality has been developed through a proposed;
obstacle detection function, optimization function which has the
capability of prediction the optimum path without comparison
between all feasible paths, and N order Bezier curve function that
ensures the drawing of the obtained path.
The simulation results that have been taken showed; the mobile
robot travels successfully from starting point and reaching its goal
point. All obstacles that are located in its way have been avoided.
This navigation is being done successfully using the proposed PRF
techniques.
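The N-order Bezier drawing step is standard and easy to sketch (our illustration, not the paper's matrix formulation): a curve of order N is a Bernstein-weighted blend of N+1 control points, which a planner could take to be the waypoints returned by a path search.

```python
import numpy as np
from math import comb

def bezier(control_points, n_samples=100):
    # Evaluate an N-order Bezier curve from its N+1 control points
    # using the Bernstein polynomial basis:
    # B(t) = sum_i C(N, i) * t^i * (1 - t)^(N - i) * P_i.
    P = np.asarray(control_points, dtype=float)
    N = len(P) - 1
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    curve = np.zeros((n_samples, P.shape[1]))
    for i in range(N + 1):
        curve += comb(N, i) * t**i * (1 - t)**(N - i) * P[i]
    return curve
```

The curve interpolates the first and last control points and stays inside the convex hull of all of them, which is the property that makes Bezier paths convenient for obstacle clearance checks.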
Digital Article Identifier (DOI):
157
12131
A Pilot Study for the Optimization of Routes for Waste Collection Vehicles for the Göçmenköy District of Lefkoşa
Abstract: A pilot project was carried out in 2007 by senior students of Cyprus International University, aiming to minimize the total cost of waste collection in Northern Cyprus. Many developed and developing countries have cut their transportation costs (which constitute 30-40% of total costs) by about 40% by implementing network models for their route assignments. Accordingly, a network model was implemented in the Göçmenköy district to optimize and standardize waste collection. The work environment of the employees was also redesigned to provide maximum ergonomics and to increase productivity, efficiency and safety. Following the collection of the required data, including waste densities, road lengths and population, a model was constructed to allocate the optimal route assignment for the waste collection trucks in the Göçmenköy district.
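As a baseline for such route assignment (a generic greedy heuristic, not the network model used in the project), one can construct a route by always driving to the closest unvisited collection point:

```python
import numpy as np

def nearest_neighbor_route(dist, start=0):
    # Greedy route construction over a distance matrix: from each
    # stop, drive to the closest unvisited stop.  A simple baseline
    # against which an optimized network model can be compared.
    n = dist.shape[0]
    route, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        last = route[-1]
        nxt = min(unvisited, key=lambda j: dist[last, j])
        route.append(nxt)
        unvisited.remove(nxt)
    return route
```

Comparing the total length of such a baseline route with the network-model route is one way to quantify the savings the abstract reports.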
Digital Article Identifier (DOI):
156
2403
Developing New Processes and Optimizing Performance Using Response Surface Methodology
Abstract: Response surface methodology (RSM) is a very efficient tool for gaining practical insight into the development of new processes and their optimization. The methodology helps engineers build a mathematical model that represents the behavior of a system as a convincing function of the process parameters. In this paper the sequential nature of RSM is surveyed for process engineers, and its relationship to design of experiments (DOE), regression analysis and robust design is reviewed. The proposed four-step procedure, carried out in two phases, can help system analysts resolve the parameter design problem for one or more responses. To check the accuracy of the designed model, residual analysis and the prediction error sum of squares (PRESS) are described. It is believed that the proposed procedure can resolve a complex parameter design problem with one or more responses, and it can be applied to areas with large data sets where a number of responses are to be optimized simultaneously. In addition, the proposed procedure is relatively simple and can be implemented easily using ready-made standard statistical packages.
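The core RSM computation is small enough to sketch: fit a second-order polynomial response surface by least squares and solve for the stationary point where the gradient vanishes. The two-factor setup below is our own minimal example, not the paper's procedure:

```python
import numpy as np

def fit_quadratic_rs(X, y):
    # Least-squares fit of the second-order model
    # y = b0 + b1 x1 + b2 x2 + b11 x1^2 + b22 x2^2 + b12 x1 x2.
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def stationary_point(beta):
    # Set the gradient to zero:
    # [2*b11, b12; b12, 2*b22] @ x = -[b1; b2].
    _, b1, b2, b11, b22, b12 = beta
    B = np.array([[2 * b11, b12], [b12, 2 * b22]])
    return np.linalg.solve(B, -np.array([b1, b2]))
```

Whether the stationary point is a maximum, minimum or saddle is read off the eigenvalues of the matrix B, which is the canonical analysis step of RSM.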
Digital Article Identifier (DOI):
155
9023
Using Structural Equation Modeling in Causal Relationship Design for Balanced-Scorecards' Strategic Map
Abstract: Through the 1980s, management accounting researchers described the increasing irrelevance of traditional control and performance measurement systems. The Balanced Scorecard (BSC) is a critical business tool for many organizations; it is a performance measurement system which translates mission and strategy into objectives. The strategy map approach is a development of the BSC in which certain causal relations must be established. To recognize these relations, experts usually rely on experience; it is also possible to use regression for the same purpose. Structural Equation Modeling (SEM), one of the most powerful methods of multivariate data analysis, obtains more appropriate results than traditional methods such as regression. In the present paper, we propose SEM for the first time to identify the relations between objectives in the strategy map, together with a test to measure the importance of the relations. In SEM, factor analysis and hypothesis testing are done in the same analysis, and SEM is known to be better than other techniques at supporting analysis and reporting. Our approach provides a framework which permits experts to design the strategy map by applying a comprehensive and scientific method together with their experience. This scheme is therefore a more reliable method in comparison with previously established methods.
Digital Article Identifier (DOI):
154
11602
Applications of Stable Distributions in Time Series Analysis, Computer Sciences and Financial Markets
Abstract: In this paper we first introduce stable distributions, stable processes and their characteristics. The α-stable distribution family has received great interest in the last decade due to its success in modeling data which are too impulsive to be accommodated by the Gaussian distribution. In the second part, we present major applications of the α-stable distribution in telecommunications, in computer science (such as network delays and signal processing) and in financial markets. Finally, we focus on using stable distributions to estimate measures of risk in stock markets and show simulated data with statistical software.
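Simulating α-stable data, as done in the last part, is commonly handled with the Chambers-Mallows-Stuck transform; the sketch below covers the symmetric (β = 0) case. α = 1 reproduces the standard Cauchy distribution and α = 2 a Gaussian (with variance 2 under this parameterization):

```python
import numpy as np

def sample_alpha_stable(alpha, size, seed=0):
    # Chambers-Mallows-Stuck sampler for a symmetric alpha-stable
    # variate (beta = 0, unit scale), alpha in (0, 2]:
    # X = sin(a*U)/cos(U)^(1/a) * (cos(U - a*U)/W)^((1-a)/a)
    # with U ~ Uniform(-pi/2, pi/2) and W ~ Exponential(1).
    rng = np.random.default_rng(seed)
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
            * (np.cos(U - alpha * U) / W) ** ((1 - alpha) / alpha))
```

The heavy tails for α < 2 are exactly the "impulsive" behavior the abstract refers to: sample variances fail to settle down, which is why risk measures based on quantiles are preferred for such data.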
Digital Article Identifier (DOI):
153
6637
Fuel Cell/DC-DC Convertor Control by Sliding Mode Method
Abstract: A fuel cell system requires a regulating circuit for voltage and current in order to control power when connected to other generating devices or to a load. In this paper the fuel cell system and converter, which together form a multi-variable system, are controlled using the sliding mode method. The use of a weighting matrix in the design procedure makes it possible to regulate the speed of control. Simulation results show the robustness and accuracy of the proposed controller in controlling the desired outputs.
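The essence of sliding mode control can be shown on a first-order toy system (our illustration, unrelated to the paper's fuel cell model): for dx/dt = d(t) + u with a bounded unknown disturbance |d| <= 1, the discontinuous law u = -k sign(x) with k > 1 drives x to a small neighborhood of zero and keeps it there despite the disturbance.

```python
import numpy as np

def simulate_smc(x0=1.0, k=2.0, dt=1e-3, steps=5000):
    # Euler simulation of dx/dt = d(t) + u with the sliding surface
    # s = x and the switching control u = -k * sign(x).  Because
    # k exceeds the disturbance bound, the state reaches the surface
    # in finite time and then chatters in a band of width ~ (k+1)*dt.
    x = x0
    xs = []
    for i in range(steps):
        d = np.sin(2 * np.pi * i * dt)      # unknown bounded disturbance
        u = -k * np.sign(x)
        x += (d + u) * dt
        xs.append(x)
    return np.array(xs)
```

The chattering band shrinks with the time step, which is why practical designs smooth the sign function (boundary layers) or use higher-order sliding modes; the multi-variable, weighted design in the paper is the matrix generalization of this scalar idea.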
Digital Article Identifier (DOI):
152
12505
The Recreation Technique Model from the Perspective of Environmental Quality Elements
Abstract: Improvements in the quality of environmental elements can increase the recreational opportunities in a certain area (destination). The recreation-demand technique focuses on the choice of certain destinations for recreational purposes. The basic trade-off taken into consideration is the one between the satisfaction gained from staying in an area and the value, expressed in money and time, allocated to it. The number of tourists in the respective area, the duration of their stay and the money spent, including transportation, provide information on how individuals rank the place or certain aspects of the area (such as the quality of the environmental elements).
For the statistical analysis of the environmental benefits offered by an area through the recreation-demand technique, the following stages are suggested:
- characterization of the reference area based on the statistical variables considered;
- estimation of the environmental benefit by comparing the reference area with other similar areas (having the same environmental characteristics), from the perspective of the statistical variables considered.
The model faces a series of difficulties concerning the choice of the reference area and the correct transformation of time into money.
Digital Article Identifier (DOI):
151
4715
Low Dimensional Representation of Dorsal Hand Vein Features Using Principal Component Analysis (PCA)
Abstract: The quest to provide a more secure identification system has led to a rise in the development of biometric systems. The dorsal hand vein pattern is an emerging biometric which has lately attracted the attention of many researchers. Different approaches have been used to extract vein patterns and match them. In this work, Principal Component Analysis (PCA), a method that has been successfully applied to human faces and hand geometry, is applied to the dorsal hand vein pattern. PCA is used to obtain eigenveins, a low-dimensional representation of vein pattern features. Low-cost CCD cameras were used to obtain the vein images. The vein pattern was extracted by applying morphology, and noise reduction filters were applied to enhance the patterns. The system has been successfully tested on a database of 200 images using a threshold value of 0.9. The results obtained are encouraging.
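The eigenvein idea follows the eigenface recipe, which can be sketched as below (a generic PCA-matching sketch on synthetic vectors, not the paper's pipeline; the 0.9 acceptance threshold mirrors the value quoted in the abstract):

```python
import numpy as np

def pca_features(images, n_components=4):
    # images: (n_samples, n_pixels) flattened vein images.  The rows
    # of Vt from the SVD of the centered data are the "eigenveins".
    mean = images.mean(axis=0)
    X = images - mean
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:n_components], mean

def project(W, mean, image):
    # Coordinates of one image in the eigenvein subspace.
    return W @ (image - mean)

def match(W, mean, gallery, probe, threshold=0.9):
    # Cosine-similarity matching in eigenvein space; a probe is
    # accepted only if its best similarity clears the threshold.
    g = np.array([project(W, mean, im) for im in gallery])
    p = project(W, mean, probe)
    sims = g @ p / (np.linalg.norm(g, axis=1) * np.linalg.norm(p) + 1e-12)
    best = int(np.argmax(sims))
    return best if sims[best] >= threshold else None
```

The point of the low-dimensional representation is that matching happens over a handful of coefficients per image instead of over full-resolution vein maps.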
Digital Article Identifier (DOI):
150
4571
An Advanced Method for Speech Recognition
Abstract: In this paper, in consideration of the deficiencies of the available techniques for speech recognition, an advanced method is presented that is able to classify speech signals with high accuracy (98%) in minimal time. In the presented method, the recorded signal is first preprocessed; this stage includes denoising with Mel-frequency cepstral analysis and feature extraction using discrete wavelet transform (DWT) coefficients. These features are then fed to a multilayer perceptron (MLP) network for classification. Finally, after training of the neural network, the effective features are selected with the UTA algorithm.
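The DWT feature extraction step can be sketched with the simplest wavelet, the Haar wavelet (our simplification; the abstract does not state which wavelet is used): each level splits the signal into approximation and detail coefficients, and the per-level detail energies form a compact feature vector for an MLP.

```python
import numpy as np

def haar_dwt(signal):
    # One level of the Haar discrete wavelet transform:
    # approximation (low-pass) and detail (high-pass) coefficients.
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:
        s = s[:-1]
    a = (s[0::2] + s[1::2]) / np.sqrt(2.0)
    d = (s[0::2] - s[1::2]) / np.sqrt(2.0)
    return a, d

def dwt_features(signal, levels=3):
    # Energies of the detail coefficients at each level, plus the
    # final approximation energy: a compact classifier input.
    feats, a = [], np.asarray(signal, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats.append(float(np.sum(d ** 2)))
    feats.append(float(np.sum(a ** 2)))
    return feats
```

Because the Haar transform is orthonormal, the feature energies at all levels sum to the energy of the input signal, which makes the features easy to sanity-check.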
Digital Article Identifier (DOI):
149
3377
Detection and Pose Estimation of People in Images
Abstract: Detection, feature extraction and pose estimation of people in images and video are made challenging by the variability of human appearance, the complexity of natural scenes and the high dimensionality of articulated body models; they have also been important fields in image, signal and vision computing in recent years. In this paper, four types of people in 2D images are tested and classified. The system extracts the size of a person from the image and assigns one of four body types: tall fat, short fat, tall thin and short thin. Fat and thin are determined from the human body that has been extracted from the image. The system also extracts the dimensions of the human body, such as length and width, and shows them in the output.
Digital Article Identifier (DOI):
148
12961
The Approximate Solution of Linear Fuzzy Fredholm Integral Equations of the Second Kind by Using Iterative Interpolation
Abstract: In this paper, we propose a numerical method for the approximate solution of fuzzy Fredholm functional integral equations of the second kind by using an iterative interpolation. For this purpose, we convert the linear fuzzy Fredholm integral equations to a crisp linear system of integral equations. The proposed method is illustrated by some fuzzy integral equations in numerical examples.
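For the crisp system that the fuzzy equations are converted to, the classical successive-approximation (Picard) iteration with a quadrature rule is a natural sketch (our simplification, not the paper's iterative interpolation):

```python
import numpy as np

def solve_fredholm2(f, k, lam, n=201, iters=100):
    # Picard iteration for the crisp second-kind equation
    #   u(x) = f(x) + lam * integral_0^1 k(x, t) u(t) dt,
    # discretized with the trapezoidal rule on n nodes.  Converges
    # when |lam| times the kernel norm is below 1 (a contraction).
    x = np.linspace(0.0, 1.0, n)
    w = np.full(n, 1.0 / (n - 1))
    w[0] = w[-1] = 0.5 / (n - 1)          # trapezoid end weights
    K = k(x[:, None], x[None, :])
    u = f(x)
    for _ in range(iters):
        u = f(x) + lam * K @ (w * u)
    return x, u
```

For the separable test kernel k(x, t) = x t with f(x) = x and lam = 1/2, the exact solution is u(x) = 1.2 x, which the iteration reproduces up to quadrature error.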
Digital Article Identifier (DOI):
147
10237
Vision Based Hand Gesture Recognition
Abstract: With the development of ubiquitous computing, current user interaction approaches with keyboard, mouse and pen are not sufficient; due to the limitations of these devices, the usable command set is also limited. Direct use of the hands as an input device is an attractive method for providing natural Human-Computer Interaction, which has evolved from text-based interfaces through 2D graphical interfaces and multimedia-supported interfaces to fully fledged multi-participant Virtual Environment (VE) systems. Imagine the human-computer interaction of the future: a 3D application where you can move and rotate objects simply by moving and rotating your hand, all without touching any input device. In this paper a review of vision-based hand gesture recognition is presented. The existing approaches are categorized into 3D model-based approaches and appearance-based approaches, highlighting their advantages and shortcomings and identifying the open issues.
Digital Article Identifier (DOI):
146
14481
New Feed-Forward/Feedback Generalized Minimum Variance Self-tuning Pole-placement Controller
Abstract: A new feed-forward/feedback generalized minimum variance pole-placement controller, which incorporates the robustness of classical pole placement into the flexibility of generalized minimum variance self-tuning control for Single-Input Single-Output (SISO) systems, is proposed in this paper. The design provides the user with an adaptive mechanism which ensures that the closed-loop poles are located at their pre-specified positions. In addition, the feed-forward/feedback structure of the controller overcomes certain limitations of similar pole-placement control designs whilst retaining the simplicity of the adaptation mechanisms used in other designs. The controller tracks set-point changes with the desired speed of response, penalizes excessive control action, and can be applied to non-minimum-phase systems. Moreover, at steady state, it is able to regulate constant load disturbances to zero. Example simulation results using both simulated and real plant models demonstrate the effectiveness of the proposed controller.
Digital Article Identifier (DOI):
145
4873
Implementation of an On-Line PD Measurement System Using HFCT
Abstract: In order to perform on-line measurement and detection of partial discharge (PD) signals, a total solution composed of an HFCT, an A/D converter and a complete software package is proposed. The software package includes compensation of the HFCT contribution, filtering and noise reduction using the wavelet transform, and soft calibration routines. The results have shown good performance and high accuracy.
Digital Article Identifier (DOI):
144
13301
Effect of Azospirillum Bacteria in Reducing Nitrogen Fertilizer (Urea) and Its Interaction with Streptomyces sp. as Biological Control in Sustainable Wheat (Triticum aestivum) Culture
Abstract: An experiment was conducted in October 2008 on the ability of plant-associated biofertilizers to replace chemical fertilizers, on the amount of chemical N fertilizer that can be saved when using these biofertilizers, and on the interaction of the biofertilizers with each other. The field experiment was carried out in Persepolis (Throne of Jamshid) and arranged as a factorial on the basis of a randomized complete block design with three replications. Azospirillum sp. bacteria were admixed at a concentration of 10^8 cfu/g and inoculated onto the wheat seeds, and Streptomyces sp. was used in an amount of 550 g/ha, coated on clay. To quantify the reduction of chemical fertilizer, four levels of N fertilizer from a urea source (N0=0, N1=60, N2=120, N3=180) were used. The results indicated significant differences between the nitrogen fertilizer levels in all of the measured characteristics. The admixed Azospirillum sp. showed significant differences between its levels in characteristics such as the number of fertile ears, number of grains per ear, grain yield, grain protein percentage, leaf area index and agronomic fertilizer use efficiency. In the interaction of Streptomyces with Azospirillum sp., the actinomycete did not show any statistically significant differences between its levels.
Digital Article Identifier (DOI):
143
2767
Power Generation Scheduling of Thermal Units Considering Gas Pipelines Constraints
Abstract: With the growth of electricity generation from gas
energy, gas pipeline reliability can substantially impact electric
generation. A physical disruption to a pipeline or to a compressor
station can interrupt the flow of gas or reduce the pressure and lead
to loss of multiple gas-fired electric generators, which could
dramatically reduce the supplied power and threaten the power
system security. Gas pressure drop during peak loading periods is
a common problem in networks without enough transportation
capacity; it limits gas transportation and causes many problems
for thermal power systems in supplying their demand. For a
feasible generation scheduling plan in networks with insufficient
gas transportation capacity, it is required to consider gas pipeline
constraints in solving the optimization problem and to evaluate the
impacts of gas consumption in power plants on gas pipeline
operating conditions. This paper studies the operation of gas-fired
power plants in critical conditions, when the demands for gas and
electricity peak together. An integrated gas-electric model is used
to consider the gas pipeline constraints in the economic dispatch
problem of gas-fueled thermal generator units.
Digital Article Identifier (DOI):
142
8222
Interoperable CNC System for Turning Operations
Abstract: The changing economic climate has made global
manufacturing a growing reality over the last decade, forcing
companies from all over the world to collaborate beyond
geographic boundaries in the design, manufacture and assembly
of products. The ISO10303 and
ISO14649 Standards (STEP and STEP-NC) have been
developed to introduce interoperability into manufacturing
enterprises so as to meet the challenge of responding to
production on demand. This paper describes and illustrates a
STEP compliant CAD/CAPP/CAM System for the manufacture
of rotational parts on CNC turning centers. The information
models to support the proposed system together with the data
models defined in the ISO14649 standard used to create the NC
programs are also described. A structured view of a STEP
compliant CAD/CAPP/CAM system framework supporting the
next generation of intelligent CNC controllers for turn/mill
component manufacture is provided. Finally a proposed
computational environment for a STEP-NC compliant system
for turning operations (SCSTO) is described. SCSTO is the
experimental part of the research supported by the specification
of information models and constructed using a structured
methodology and object-oriented methods. SCSTO was
developed to generate a Part 21 file based on machining
features to support the interactive generation of process plans
utilizing feature extraction. A case study component has been
developed to prove the concept for using the milling and turning
parts of ISO14649 to provide a turn-mill CAD/CAPP/CAM
environment.
Digital Article Identifier (DOI):
141
2062
STEP-NC-Compliant Systems for the Manufacturing Environment
Abstract: The paper provides a literature review of STEP-NC
compliant research around the world. The first part of this paper
focuses on projects based on STEP compliance, followed by research
and development in this area based on machining operations. The
literature relating to relevant STEP standards and their application in
the area of turning centers is then reviewed. This review covers the
various research works carried out over the evolution of STEP-NC
in CNC manufacturing activities. The paper concludes with a
discussion of the applications in this particular area.
Digital Article Identifier (DOI):
140
10302
Performance Assessment and Optimization of the After-Sale Networks
Abstract: The after-sales activities are nowadays acknowledged
as a relevant source of revenue, profit and competitive advantage in
most manufacturing industries. Top and middle management,
therefore, should focus on the definition of a structured business
performance measurement system for the after-sales business. The
paper aims at filling this gap, and presents an integrated methodology
for the after-sales network performance measurement, and provides
an empirical application to automotive case companies and their
official service network. This is the first study that presents an
integrated multivariate approach for total assessment and
improvement of after-sales services.
Digital Article Identifier (DOI):
139
11780
Neutral to Earth Voltage Analysis in Harmonic Polluted Distribution Networks with Multi-Grounded Neutrals
Abstract: A multiphase harmonic load flow algorithm is developed based on the backward/forward sweep to examine the effects of various factors on the neutral-to-earth voltage (NEV), including unsymmetrical system configuration, load unbalance and harmonic injection. The proposed algorithm comprises fundamental-frequency and harmonic-frequency power flows. The algorithm and the associated models are tested on the IEEE 13-bus system. The magnitude of NEV is investigated under various conditions of the number of grounding rods per feeder length, the grounding rod resistance and the grounding resistance of the infeeding source. Additionally, the harmonic injection of nonlinear loads has been considered and its influence on NEV under different conditions is shown.
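A minimal sketch of the backward/forward sweep at a single frequency, assuming a radial feeder with constant-current loads (the bus data and the constant-current assumption are illustrative; the paper's multiphase, multi-grounded model is far richer):

```python
def sweep(v_source, z_lines, i_loads, iters=20):
    """Backward/forward sweep on a radial feeder.

    z_lines[k]: impedance of the line feeding bus k+1;
    i_loads[k]: load current drawn at bus k+1 (complex).
    Returns the list of bus voltages, v[0] being the source.
    """
    n = len(z_lines)
    v = [v_source] * (n + 1)
    for _ in range(iters):
        # Backward sweep: accumulate branch currents from the feeder end.
        i_branch = [0j] * n
        acc = 0j
        for k in range(n - 1, -1, -1):
            acc += i_loads[k]
            i_branch[k] = acc
        # Forward sweep: update bus voltages from the source outward.
        for k in range(n):
            v[k + 1] = v[k] - z_lines[k] * i_branch[k]
    return v
```

With voltage-dependent load models the two sweeps alternate until the voltages converge; for constant-current loads a single pass already gives the exact answer.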
Digital Article Identifier (DOI):
138
394
Defect Detection of Tiles Using 2D-Wavelet Transform and Statistical Features
Abstract: In this article, a method is offered to classify
normal and defective tiles using the wavelet transform and artificial
neural networks. The proposed algorithm calculates the max and min
medians as well as the standard deviation and average of the detail
images obtained from wavelet filters, then forms feature vectors
and attempts to classify the given tile using a perceptron neural
network with a single hidden layer. In this study, along with the
proposal of using the median of optimum points as the basic feature and
its comparison with the rest of the statistical features in the wavelet
field, the relative advantages of the Haar wavelet are investigated. This
method has been tested on a number of various tile designs
and, on average, it has been valid for over 90% of the cases. Amongst
the other advantages, high speed and a low computational load are
prominent.
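A sketch of the detail-image statistics described above, using a hand-rolled one-level 2-D Haar transform (the filter normalization and the exact feature set are assumptions; the paper also uses max/min medians of optimum points):

```python
from statistics import mean, median, pstdev

def haar2d(img):
    """One-level 2-D Haar transform; returns (LL, LH, HL, HH) subbands."""
    def pairs(row):
        a = [(row[2 * i] + row[2 * i + 1]) / 2 for i in range(len(row) // 2)]
        d = [(row[2 * i] - row[2 * i + 1]) / 2 for i in range(len(row) // 2)]
        return a, d
    # Filter the rows into low-pass and high-pass halves.
    lo, hi = zip(*(pairs(r) for r in img))
    def cols(mat):
        t = [list(c) for c in zip(*mat)]          # transpose to reach columns
        out = [pairs(c) for c in t]
        out_a = [a for a, _ in out]
        out_d = [d for _, d in out]
        return [list(r) for r in zip(*out_a)], [list(r) for r in zip(*out_d)]
    ll, lh = cols(list(lo))
    hl, hh = cols(list(hi))
    return ll, lh, hl, hh

def features(img):
    """Feature vector: median, mean and std of each detail subband."""
    _, lh, hl, hh = haar2d(img)
    feats = []
    for band in (lh, hl, hh):
        flat = [v for row in band for v in row]
        feats += [median(flat), mean(flat), pstdev(flat)]
    return feats
```

A perfectly uniform tile yields all-zero detail statistics, so any nonzero feature signals surface variation that the perceptron can then classify.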
Digital Article Identifier (DOI):
137
10989
Anomaly Detection Using a Neuro-Fuzzy System
Abstract: As network-based technologies become
omnipresent, demands to secure networks/systems against threats
increase. One of the effective ways to achieve higher security is
through the use of intrusion detection systems (IDS), which are
software tools to detect anomalies in a computer or network. In this
paper, an IDS has been developed using an improved machine
learning based algorithm, the Locally Linear Neuro-Fuzzy Model
(LLNF), for classification, whereas this model was originally used for
system identification. A key technical challenge in IDS and LLNF
learning is the curse of high dimensionality. Therefore a feature
selection phase is proposed which is applicable to any IDS. By
investigating the use of three feature selection algorithms in this
model, it is shown that adding a feature selection phase reduces the
computational complexity of our model. Feature selection algorithms
require the use of a feature goodness measure. The use of both a
linear measure (the linear correlation coefficient) and a non-linear
measure (mutual information) is investigated.
Digital Article Identifier (DOI):
136
15483
User's Hand Effect on TIS of Different GSM900/1800 Mobile Phone Models Using FDTD Method
Abstract: This paper predicts the effect of the user's hand-hold
position on the Total Isotropic Sensitivity (TIS) of GSM900/1800
mobile phone antennas under realistic in-use conditions, where different
semi-realistic mobile phone models, i.e., candy bar and clamshell, as
well as different antenna types, i.e., external and internal, are
simulated using a FDTD-based platform. A semi-realistic hand model
consisting of three tissues and the SAM head are used in simulations.
The results show a considerable impact on the TIS of the adopted mobile
phone models owing to the user's hand presence at different
positions, where a maximum level of TIS is obtained while grasping
the upper part of the mobile phone against the head. Maximum TIS
levels are recorded in talk position for mobile phones with external
antenna and maximum differences in TIS levels due to the hand-hold
alteration are recorded for clamshell-type phones.
Digital Article Identifier (DOI):
135
13687
A Computer Model of Language Acquisition – Syllable Learning – Based on Hebbian Cell Assemblies and Reinforcement Learning
Abstract: Investigating language acquisition is one of the most
challenging problems in the study of language. Syllable
learning, as a level of language acquisition, has considerable
significance since it plays an important role in language acquisition.
Because it is impossible to study language acquisition directly
with children, especially in its developmental phases, computer
models are useful in examining language acquisition. In this
paper a computer model of early language learning for syllable
learning is proposed. It is guided by a conceptual model of syllable
learning which is named Directions Into Velocities of Articulators
model (DIVA). The computer model uses simple associational and
reinforcement learning rules within neural network architecture
which are inspired by neuroscience. Our simulation results verify the
ability of the proposed computer model in producing phonemes
during babbling and early speech. Also, it provides a framework for
examining the neural basis of language learning and communication
disorders.
Digital Article Identifier (DOI):
134
4101
Detection of Power Quality Disturbances using Wavelet Transform
Abstract: This paper presents features that characterize power
quality disturbances from recorded voltage waveforms using wavelet
transform. The discrete wavelet transform has been used to detect
and analyze power quality disturbances. The disturbances of interest
include sag, swell, outage and transient. A power system network has
been simulated by Electromagnetic Transients Program. Voltage
waveforms at strategic points have been obtained for analysis, which
includes different power quality disturbances. The wavelet transform has
been chosen to perform feature extraction. The outputs of the feature
extraction are the wavelet coefficients representing the power quality
disturbance signal. Wavelet coefficients at different levels reveal the
time localizing information about the variation of the signal.
Digital Article Identifier (DOI):
133
589
Secure Protocol for Short Message Service
Abstract: Short Message Service (SMS) has grown in
popularity over the years and has become a common way of
communication; it is a service provided through the Global System
for Mobile Communications (GSM) that allows users to send text
messages to others.
SMS is usually used to transport unclassified information, but
with the rise of mobile commerce it has become a popular tool for
transmitting sensitive information between the business and its
clients. By default SMS does not guarantee confidentiality and
integrity to the message content.
In the mobile communication systems, security (encryption)
offered by the network operator only applies on the wireless link.
Data delivered through the mobile core network may not be
protected. Existing end-to-end security mechanisms are provided
at the application level and are typically based on public-key
cryptosystems.
The main concern in a public-key setting is the authenticity of
the public key; this issue can be resolved by identity-based
(ID-based) cryptography, where the public key of a user can be derived
from public information that uniquely identifies the user.
This paper presents an encryption mechanism based on the
ID-based scheme using elliptic curves to provide end-to-end security
for SMS. This mechanism has been implemented over the standard
SMS network architecture and the encryption overhead has been
estimated and compared with RSA scheme. This study indicates
that the ID-based mechanism has advantages over the RSA
mechanism in key distribution and scalability of increasing
security level for mobile service.
Digital Article Identifier (DOI):
132
11102
Discovery of Quantified Hierarchical Production Rules from Large Set of Discovered Rules
Abstract: Automated rule discovery is, due to its applicability, one of the most fundamental and important methods in KDD. It has been an active research area in the recent past. Hierarchical representation allows us to easily manage the complexity of knowledge, to view the knowledge at different levels of detail, and to focus our attention on the interesting aspects only. One such efficient and easy-to-understand system is the Hierarchical Production Rule (HPR) system. An HPR, a standard production rule augmented with generality and specificity information, is of the following form: Decision <decision> If <condition> Generality <general-information> Specificity <specific-information>. HPR systems are capable of handling the taxonomical structures inherent in knowledge about the real world. This paper focuses on the issue of mining quantified rules with a crisp hierarchical structure using a Genetic Programming (GP) approach to knowledge discovery. The post-processing scheme presented in this work uses quantified production rules as the initial individuals of GP and discovers the hierarchical structure. In the proposed approach, rules are quantified by using Dempster-Shafer theory. Suitable genetic operators are proposed for the suggested encoding. Based on the Subsumption Matrix (SM), an appropriate fitness function is suggested. Finally, Quantified Hierarchical Production Rules (QHPRs) are generated from the discovered hierarchy, using Dempster-Shafer theory. Experimental results are presented to demonstrate the performance of the proposed algorithm.
Digital Article Identifier (DOI):
131
9474
Discovery of Fuzzy Censored Production Rules from Large Set of Discovered Fuzzy if then Rules
Abstract: A Censored Production Rule (CPR) is an extension of the standard
production rule that is concerned with the problems of reasoning with
incomplete information subject to resource constraints and of
reasoning efficiently with exceptions. A CPR has the form: IF A
(Condition) THEN B (Action) UNLESS C (Censor), where C is the
exception condition. Fuzzy CPRs are obtained by augmenting an
ordinary fuzzy production rule "If X is A then Y is B" with an
exception condition and are written in the form "If X is A then Y is B
Unless Z is C". Such rules are employed in situations in which the
fuzzy conditional statement "If X is A then Y is B" holds frequently
and the exception condition "Z is C" holds rarely. Thus the "If X is A
then Y is B" part of the fuzzy CPR expresses important information,
while the Unless part acts only as a switch that changes the polarity of
"Y is B" to "Y is not B" when the assertion "Z is C" holds. The
proposed approach is an attempt to discover fuzzy censored
production rules from a set of discovered fuzzy if-then rules in the
form:
A(X) → B(Y) || C(Z).
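The switching behaviour of the Unless part can be sketched with a toy evaluator (the crisp membership threshold and firing semantics below are illustrative assumptions, not the paper's fuzzy machinery):

```python
def censored_rule(mu_a, mu_c, threshold=0.5):
    """Evaluate 'If X is A then Y is B Unless Z is C'.

    mu_a: membership degree of the assertion 'X is A';
    mu_c: membership degree of the censor 'Z is C'.
    Returns the fired conclusion, or None when the rule does not fire.
    """
    if mu_a < threshold:
        return None                       # condition A does not hold
    # The censor flips the polarity of the conclusion when it holds.
    return "Y is not B" if mu_c >= threshold else "Y is B"
```

Because the censor is expected to hold rarely, a resource-bounded reasoner can fire "Y is B" immediately and check the censor only when time permits.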
Digital Article Identifier (DOI):
130
10452
Auto Classification for Search Intelligence
Abstract: This paper proposes an auto-classification algorithm
of Web pages using Data mining techniques. We consider the
problem of discovering association rules between terms in a set of
Web pages belonging to a category in a search engine database, and
present an auto-classification algorithm for solving this problem that
is fundamentally based on the Apriori algorithm. The proposed
technique has two phases. The first phase is a training phase where
human experts determine the categories of different Web pages, and
the supervised Data mining algorithm will combine these categories
with appropriate weighted index terms according to the highest
supported rules among the most frequent words. The second phase is
the categorization phase where a web crawler will crawl through the
World Wide Web to build a database categorized according to the
result of the data mining approach. This database contains URLs and
their categories.
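The Apriori-style support counting that underlies the training phase can be sketched as follows (restricted to term pairs for brevity; the page data are illustrative):

```python
from collections import Counter
from itertools import combinations

def frequent_term_pairs(pages, min_support):
    """Count co-occurring term pairs across pages and keep those whose
    support (number of pages containing both terms) meets min_support."""
    counts = Counter()
    for terms in pages:
        # Deduplicate and sort so each pair is counted once per page.
        for pair in combinations(sorted(set(terms)), 2):
            counts[pair] += 1
    return {pair: c for pair, c in counts.items() if c >= min_support}
```

Association rules between a category label and its highest-support term sets then give the weighted index terms used during categorization.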
Digital Article Identifier (DOI):
129
4130
Some Issues on Integrating Telepresence Technology into Industrial Robotic Assembly
Abstract: Since the 1940s, many promising telepresence
research results have been obtained. However, telepresence
technology still has not reached industrial usage. As human
intelligence is necessary for successful execution of most manual
assembly tasks, the ability of the human is hindered in some cases,
such as the assembly of heavy parts of small/medium lots or
prototypes. In such a case of manual assembly, the help of industrial
robots is mandatory. The telepresence technology can be considered
as a solution for performing assembly tasks, where the human
intelligence and haptic sense are needed to identify and minimize the
errors during an assembly process and a robot is needed to carry
heavy parts. In this paper, preliminary steps to integrate the
telepresence technology into industrial robot systems are introduced.
The system described here combines both, the human haptic sense
and the industrial robot capability to perform a manual assembly task
remotely using a force-feedback joystick. The mapping between the
joystick's Degrees of Freedom (DOF) and the robot's is
introduced. Simulation and experimental results are shown and future
work is discussed.
Digital Article Identifier (DOI):
128
13319
Position Control of an AC Servo Motor Using VHDL and FPGA
Abstract: In this paper, a new method of controlling the position of an AC servomotor using a Field Programmable Gate Array (FPGA) is presented. The FPGA controller is used to generate the direction and the number of pulses required to rotate through a given angle. Pulses are sent as a square wave; the number of pulses determines the angle of rotation and the frequency of the square wave determines the speed of rotation. The proposed control scheme has been realized using a XILINX FPGA SPARTAN XC3S400 and tested using a MUMA012PIS model Alternating Current (AC) servomotor. Experimental results show that the position of the AC servomotor can be controlled effectively. Keywords: Alternating Current (AC), Field Programmable Gate Array (FPGA), Liquid Crystal Display (LCD).
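The pulse-train command described can be modelled back-of-the-envelope as follows (the pulses-per-revolution count and the sign convention are assumed values, not the paper's controller parameters):

```python
def pulse_command(angle_deg, rpm, pulses_per_rev=2000):
    """Return (direction, pulse count, square-wave frequency in Hz)
    for a commanded move of angle_deg at the given speed in rpm."""
    direction = 1 if angle_deg >= 0 else -1
    # The number of pulses fixes the rotation angle...
    n_pulses = round(abs(angle_deg) / 360.0 * pulses_per_rev)
    # ...and the square-wave frequency fixes the rotation speed.
    freq_hz = rpm / 60.0 * pulses_per_rev
    return direction, n_pulses, freq_hz
```

In hardware the same arithmetic reduces to a counter preset (pulse count) and a clock divider (frequency), which is what makes the scheme a natural fit for an FPGA.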
Digital Article Identifier (DOI):
127
12992
Effectiveness of Contourlet vs Wavelet Transform on Medical Image Compression: a Comparative Study
Abstract: The Discrete Wavelet Transform (DWT) has demonstrated
performance far superior to the earlier Discrete Cosine Transform (DCT)
and standard JPEG in natural as well as medical image compression. Due
to its localization properties in both the spatial and transform domains,
the quantization error introduced in the DWT does not propagate
globally as in the DCT. Moreover, the DWT is a global approach that avoids
block artifacts as in the JPEG. However, recent reports on natural
image compression have shown the superior performance of
contourlet transform, a new extension to the wavelet transform in two
dimensions using nonseparable and directional filter banks,
compared to DWT. It is mostly due to the optimality of contourlet in
representing the edges when they are smooth curves. In this work, we
investigate this fact for medical images, especially for CT images,
which has not been reported yet. To do that, we propose a
compression scheme in transform domain and compare the
performance of both DWT and contourlet transform in PSNR for
different compression ratios (CR) using this scheme. The results
obtained using different type of computed tomography images show
that the DWT has still good performance at lower CR but contourlet
transform performs better at higher CR.
Digital Article Identifier (DOI):
126
736
Fast Database Indexing for Large Protein Sequence Collections Using Parallel N-Gram Transformation Algorithm
Abstract: With the rapid development in the field of life
sciences and the flooding of genomic information, the need for faster
and scalable searching methods has become urgent. One of the
approaches that have been investigated is indexing. The indexing methods
have been categorized into three categories, which are the length-based
index algorithms, transformation-based algorithms and mixed
techniques-based algorithms. In this research, we focused on the
transformation based methods. We embedded the N-gram method
into the transformation-based method to build an inverted index
table. We then applied the parallel methods to speed up the index
building time and to reduce the overall retrieval time when querying
the genomic database. Our experiments show that the use of N-Gram
transformation algorithm is an economical solution; it saves time and
space too. The result shows that the size of the index is smaller than
the size of the dataset when the size of N-Gram is 5 and 6. The
parallel N-Gram transformation algorithm's results indicate that the
use of parallel programming with large datasets is promising and
can be improved further.
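A serial sketch of the N-gram inverted-index idea (the sequences and query strategy are illustrative; the paper's contribution is the parallel construction of such a table):

```python
from collections import defaultdict

def build_index(sequences, n=3):
    """Inverted index mapping each n-gram to the (sequence id, offset)
    pairs where it occurs."""
    index = defaultdict(list)
    for sid, seq in enumerate(sequences):
        for i in range(len(seq) - n + 1):
            index[seq[i:i + n]].append((sid, i))
    return index

def query(index, pattern, n=3):
    """Return ids of sequences containing every n-gram of the pattern."""
    grams = [pattern[i:i + n] for i in range(len(pattern) - n + 1)]
    hits = [set(sid for sid, _ in index.get(g, [])) for g in grams]
    return set.intersection(*hits) if hits else set()
```

Because many n-grams repeat across protein sequences, the index can end up smaller than the raw dataset, which matches the space savings the abstract reports for n = 5 and 6.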
Digital Article Identifier (DOI):
125
15446
Analysis and Classification of HIV-1 Sub-Type Viruses by AR Model through Artificial Neural Networks
Abstract: The HIV-1 genome is highly heterogeneous. Due to this
variation, the features of the HIV-1 genome lie in a wide range. For this
reason, the infectivity of the virus changes depending on
different chemokine receptors. From this point of view, R5 HIV
viruses use the CCR5 coreceptor, X4 viruses use CXCR4, and
R5X4 viruses can utilize both coreceptors. Recently, in
bioinformatics, studies have attempted to classify R5X4 viruses using
experiments on the HIV-1 genome.
In this study, R5X4-type HIV viruses were classified using an
Auto Regressive (AR) model through Artificial Neural Networks
(ANNs). The statistical data of R5X4, R5 and X4 viruses was
analyzed by using signal processing methods and ANNs. Accessible
residues of these virus sequences were obtained and modeled by AR
model since the dimension of residues is large and different from
each other. Finally the pre-processed data was used to evolve various
ANN structures for determining R5X4 viruses. Furthermore ROC
analysis was applied to ANNs to show their real performances. The
results indicate that R5X4 viruses were successfully classified, with high
sensitivity and specificity values in training and testing ROC analysis
for the RBF network, which gives the best performance among the ANN structures.
Digital Article Identifier (DOI):
124
5248
A Decision Boundary based Discretization Technique using Resampling
Abstract: Many supervised induction algorithms require discrete
data, even though real data often comes in both discrete
and continuous formats. Quality discretization of continuous
attributes is an important problem that has effects on speed,
accuracy and understandability of the induction models. Usually,
discretization and other types of statistical processes are applied
to subsets of the population as the entire population is practically
inaccessible. For this reason we argue that the discretization
performed on a sample of the population is only an estimate of
the entire population. Most of the existing discretization methods,
partition the attribute range into two or several intervals using
a single or a set of cut points. In this paper, we introduce a
technique by using resampling (such as bootstrap) to generate
a set of candidate discretization points and thus, improving the
discretization quality by providing a better estimation towards
the entire population. Thus, the goal of this paper is to observe
whether the resampling technique can lead to better discretization
points, which opens up a new paradigm for the construction of
soft decision trees.
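The resampling idea can be sketched as follows, collecting one candidate cut point per bootstrap replicate (the boundary estimator, namely the midpoint of the closest opposite-class pair, is an illustrative choice rather than the paper's exact criterion):

```python
import random

def bootstrap_cut_points(values, labels, n_boot=200, seed=0):
    """Resample (value, label) pairs with replacement; for each replicate,
    record the midpoint between the closest pair of opposite-class values
    as a candidate discretization point."""
    rng = random.Random(seed)
    data = list(zip(values, labels))
    cuts = []
    for _ in range(n_boot):
        sample = [rng.choice(data) for _ in data]
        sample.sort()
        best = None
        for (v1, c1), (v2, c2) in zip(sample, sample[1:]):
            if c1 != c2 and (best is None or v2 - v1 < best[0]):
                best = (v2 - v1, (v1 + v2) / 2)
        if best is not None:           # skip replicates with one class only
            cuts.append(best[1])
    return cuts
```

The spread of the collected cut points estimates the sampling variability of the decision boundary, which is exactly the uncertainty a soft decision tree can exploit.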
Digital Article Identifier (DOI):
123
8067
A Web Service Security Method for SOA Development
Abstract: Web services provide significant new benefits for
SOA-based applications, but they also expose significant new security
risks. There are a huge number of WS security standards and
processes. At present, there is still a lack of a comprehensive
approach which offers methodical development in the construction
of secure WS-based SOAs. Thus, the main objective of this paper is
to address this need by presenting a comprehensive method for
guaranteeing Web Services Security in SOA. The proposed method defines
three stages, Initial Security Analysis, Architectural Security
Guaranty and WS Security Standards Identification. These facilitate,
respectively, the definition and analysis of WS-specific security
requirements, the development of a WS-based security architecture
and the identification of the related WS security standards that the
security architecture must articulate in order to implement the
security services.
Digital Article Identifier (DOI):
122
14113
Web Pages Aesthetic Evaluation Using Low-Level Visual Features
Abstract: Web sites are rapidly becoming the preferred media
choice for our daily works such as information search, company
presentation, shopping, and so on. At the same time, we live in a
period where visual appearances play an increasingly important
role in our daily life. In spite of designers' efforts to develop web
sites which are both user-friendly and attractive, it is difficult
to ensure the outcome's aesthetic quality, since visual
appearance is a matter of individual self-perception and opinion.
In this study, an attempt is made to develop an automatic system for
the aesthetic evaluation of web pages, which are the building blocks of
web sites. Based on image processing techniques and artificial
neural networks, the proposed method is able to categorize
the input web page according to its visual appearance and aesthetic
quality. The employed features are multiscale/multidirectional
textural and perceptual color properties of the web pages, fed to a
perceptron ANN which has been trained as the evaluator. The
method is tested using university web sites and the results
suggested that it would perform well in the web page aesthetic
evaluation tasks with around 90% correct categorization.
Digital Article Identifier (DOI):
121
11355
Toward An Agreement on Semantic Web Architecture
Abstract: There are many problems associated with the World Wide
Web: getting lost in hyperspace, web content that is still accessible only
to humans, and difficulties of web administration. The solution to these
problems is the Semantic Web, which is considered to be the extension
of the current web that presents information in both human-readable and
machine-processable form. The aim of this study is to reach a new
generic foundation architecture for the Semantic Web, because there
is no clear architecture for it: there are four versions, but up to
now there is no agreement on one of these versions, nor is there a
clear picture of the relation between the different layers and
technologies inside this architecture. This can be done by building on
the ideas of the previous versions as well as Gerber's evaluation method,
as a step toward agreement on one Semantic Web architecture.
Digital Article Identifier (DOI):
120
14329
Incremental Mining of Shocking Association Patterns
Abstract: Association rules are an important problem in data
mining. Massively increasing volume of data in real life databases
has motivated researchers to design novel and incremental algorithms
for association rules mining. In this paper, we propose an incremental
association rules mining algorithm that integrates shocking
interestingness criterion during the process of building the model. A
new interesting measure called shocking measure is introduced. One
of the main features of the proposed approach is to capture the user
background knowledge, which is monotonically augmented. The
incremental model that reflects the changing data and the user's beliefs
is attractive in order to make the overall KDD process more
effective and efficient. We implemented the proposed approach,
experimented with it on some public datasets and found the results quite
promising.
Digital Article Identifier (DOI):
119
1569
Model-Based Small Area Estimation with Application to Unemployment Estimates
Abstract: The problem of Small Area Estimation (SAE) is complex because of the various information sources and insufficient data. In this paper, an approach to SAE is presented for decision-making at the national, regional and local levels. We propose an Empirical Best Linear Unbiased Predictor (EBLUP) as an estimator in order to combine several information sources to evaluate various indicators. First, we present the Urban Audit project and its environmental, social and economic indicators. Secondly, we propose an approach for decision-making in order to estimate the indicators. An application is used to validate the theoretical proposal. Finally, a decision support system is presented, based on an open-source environment.
Digital Article Identifier (DOI):
118
15606
A Decision Support Tool for Evaluating Mobility Projects
Abstract: Success is a European project that will implement several clean transport offers in three European cities and evaluate their environmental impacts. The goal of these measures is to improve urban mobility, i.e. the movement of residents inside cities, for example park-and-ride, electric vehicles, hybrid buses and bike sharing. A list of 28 criteria and 60 measures has been established for the evaluation of these transport projects. The evaluation criteria can be grouped into: transport, environment, social, economic and fuel consumption. This article proposes a decision support system that encapsulates a hybrid approach based on fuzzy logic, multicriteria analysis and belief theory for the evaluation of the impacts of urban mobility solutions. A web-based tool called DeSSIA (Decision Support System for Impacts Assessment) has been developed that treats complex data. The tool has several functionalities, starting from data integration (import of data), moving to the evaluation of projects, and finishing with the graphical display of results. The tool's development is based on the MVC (Model, View, Controller) concept, a design model suited to the creation of software that imposes a separation between data, their treatment and their presentation. Effort has been laid on the ergonomic aspects of the application. It has code compatible with the latest norms (XHTML, CSS) and has been validated by the W3C (World Wide Web Consortium). The main ergonomic aspect focuses on the usability of the application and its ease of learning and adoption. Through the use of technologies such as AJAX (Asynchronous JavaScript and XML), the application is faster and more user-friendly. The positive points of our approach are that it treats heterogeneous data (qualitative, quantitative) from various information sources (human experts, surveys, sensors, models, etc.).
Digital Article Identifier (DOI):
117
12444
Some Discrete Propositions in IVSs
Abstract: The aim of this paper is to exhibit some properties of the
local topologies of an IVS. Also, we introduce the ISG structure as an
interesting structure of semigroups in IVSs.
Digital Article Identifier (DOI):
116
11881
Impact of Viscous and Heat Relaxation Loss on the Critical Temperature Gradients of Thermoacoustic Stacks
Abstract: A stack with a small critical temperature gradient is
desirable for a standing wave thermoacoustic engine to obtain a low
onset temperature difference (the minimum temperature difference to
start the engine's self-oscillation). The viscous and heat relaxation loss in
the stack determines the critical temperature gradient. In this work, a
dimensionless critical temperature gradient factor is obtained based
on the linear thermoacoustic theory. It is indicated that the
impedance determines the proportion between the viscous loss, heat
relaxation losses and the power production from the heat energy. It
reveals the effects of the channel dimensions, geometrical
configuration and the local acoustic impedance on the critical
temperature gradient in stacks. The numerical analysis shows that
there exists a possible optimum combination of these parameters
which leads to the lowest critical temperature gradient. Furthermore,
several different geometries have been tested and compared
numerically.
Digital Article Identifier (DOI):
115
10616
Vortex Shedding at the End of Parallel-plate Thermoacoustic Stack in the Oscillatory Flow Conditions
Abstract: This paper investigates vortex shedding processes
occurring at the end of a stack of parallel plates, due to an oscillating
flow induced by an acoustic standing wave within an acoustic
resonator. Here, Particle Image Velocimetry (PIV) is used to quantify
the vortex shedding processes within an acoustic cycle
phase-by-phase, in particular during the “ejection” of the fluid out of
the stack. Standard hot-wire anemometry measurement is also applied
to detect the velocity fluctuations near the end of the stack.
Combination of these two measurement techniques allowed a detailed
analysis of the vortex shedding phenomena. The results obtained show
that, as the Reynolds number varies (by varying the plate thickness
and drive ratio), different flow patterns of vortex shedding are
observed by the PIV measurement. On the other hand, the
time-dependent hot-wire measurements allow detailed frequency
spectra of the velocity signal to be obtained, which are used for calculating
characteristic Strouhal numbers. The impact of the plate thickness and
the Reynolds number on the vortex shedding pattern has been
discussed. Furthermore, a detailed map of the relationship between the
Strouhal number and Reynolds number has been obtained and
discussed.
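As background to the abstract above (standard definitions, not taken from the paper itself), the characteristic Strouhal number it mentions is conventionally St = f·L/U, where f is the dominant shedding frequency taken from the velocity spectrum, L a length scale such as the plate thickness, and U a velocity amplitude. A minimal sketch on synthetic data:

```python
import numpy as np

def strouhal_number(signal, dt, length_scale, velocity_amplitude):
    """Estimate a characteristic Strouhal number St = f * L / U from a
    velocity signal: take the dominant frequency of its spectrum."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(n, dt)
    f_peak = freqs[np.argmax(spectrum)]  # dominant shedding frequency
    return f_peak * length_scale / velocity_amplitude

# Synthetic velocity fluctuation with a 100 Hz shedding component
dt = 1e-4
t = np.arange(0, 1.0, dt)
u = 0.3 * np.sin(2 * np.pi * 100 * t)
st = strouhal_number(u, dt, length_scale=0.005, velocity_amplitude=2.0)
# f_peak = 100 Hz, L = 5 mm, U = 2 m/s -> St = 0.25
```

In practice the hot-wire spectrum would replace the synthetic sine, and the choice of L (plate thickness) and U (acoustic velocity amplitude) follows the conventions of the study.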
Digital Article Identifier (DOI):
114
15531
A Survey on Performance Tools for OpenMP
Abstract: Advances in processor architecture, such as multicore,
increase the complexity of parallel computer systems.
With multi-core architecture there are different parallel languages
that can be used to run parallel programs. One of these languages is
OpenMP, which is embedded in C/C++ or FORTRAN. Because of this
new architecture and the complexity, it is very important to evaluate
the performance of OpenMP constructs, kernels, and application
programs on multi-core systems. Performance analysis is the activity of
collecting information about the execution characteristics of a
program. Performance tools consist of at least three interfacing
software layers: instrumentation, measurement, and
analysis. The instrumentation layer defines the measured
performance events. The measurement layer determines what
performance event is actually captured and how it is measured by the
tool. The analysis layer processes the performance data and
summarizes it into a form that can be displayed in performance tools.
In this paper, a number of OpenMP performance tools are surveyed,
explaining how each is used to collect, analyse, and display
performance data.
Digital Article Identifier (DOI):
113
11583
Optimal DG Placement in Distribution systems Using Cost/Worth Analysis
Abstract: DG application has received increasing attention during
recent years. The impact of DG on various aspects of distribution system
operation, such as reliability and energy loss, depends highly on the DG
location in the distribution feeder. Optimal DG placement is an important
subject which has not yet been fully discussed.
This paper presents an optimization method to determine optimal DG
placement, based on a cost/worth analysis approach. This method
considers technical and economic factors such as energy loss, load point
reliability indices and DG costs, and particularly, portability of DG. The
proposed method is applied to a test system and the impacts of different
parameters such as load growth rate and load forecast uncertainty (LFU)
on optimum DG location are studied.
Digital Article Identifier (DOI):
112
11508
Reliability Analysis in Electrical Distribution System Considering Preventive Maintenance Applications on Circuit Breakers
Abstract: This paper presents the results of a study of preventive maintenance applications and the modeling of failure rates in the breakers of electrical distribution systems. This is a critical issue in the reliability assessment of a system. In the analysis conducted in this paper, the impacts of failure rate variations caused by preventive maintenance are examined. This is considered part of a Reliability Centered Maintenance (RCM) application program. A number of load point reliability indices are derived using the mathematical model of the failure rate, which is established using data observed in a distribution system.
Digital Article Identifier (DOI):
111
4972
Reliability-based Selection of Wind Turbines for Large-Scale Wind Farms
Abstract: This paper presents a reliability-based approach to select appropriate wind turbine types for a wind farm, considering site-specific wind speed patterns. An actual wind farm in the northern region of Iran, with one year of registered wind speeds, is studied in this paper. An analytic approach based on the total probability theorem is utilized to model the probabilistic behavior of both the turbines' availability and the wind speed. Well-known probabilistic reliability indices such as loss of load expectation (LOLE), expected energy not supplied (EENS) and incremental peak load carrying capability (IPLCC) for wind power integration in the Roy Billinton Test System (RBTS) are examined. The most appropriate turbine type, achieving the highest reliability level, is chosen for the studied wind farm.
Digital Article Identifier (DOI):
110
5132
Tree-on-DAG for Data Aggregation in Sensor Networks
Abstract: Computing and maintaining network structures for efficient
data aggregation incurs high overhead for dynamic events
where the set of nodes sensing an event changes with time. Moreover,
structured approaches are sensitive to the waiting time that is used
by nodes to wait for packets from their children before forwarding
the packet to the sink. An optimal routing and data aggregation
scheme for wireless sensor networks is proposed in this paper. We
propose Tree on DAG (ToD), a semistructured approach that uses
Dynamic Forwarding on an implicitly constructed structure composed
of multiple shortest path trees to support network scalability. The key
principle behind ToD is that adjacent nodes in the graph will have
low stretch in at least one of these trees, thus resulting in early
aggregation of packets. Based on simulations on a 2,000-node Mica2-
based network, we conclude that efficient aggregation in large-scale
networks can be achieved by our semistructured approach.
Digital Article Identifier (DOI):
109
14399
An Enhanced Slicing Algorithm Using Nearest Distance Analysis for Layer Manufacturing
Abstract: Although the STL (stereolithography) file format is
widely used as a de facto industry standard in the rapid prototyping
industry, due to its simplicity and its ability to tessellate almost all
surfaces, there are always defects and shortcomings in its
usage, many of which are difficult to correct manually. In
processing complex models, the size of the file and the number of its
defects grow considerably; therefore, correcting STL files becomes difficult.
In this paper, by optimizing the existing algorithms, the size of the files
and the computer memory needed to process them are reduced. Regardless
of the type and extent of the errors in STL files, a tail-to-head
searching method and an analysis of the nearest distance between tails
and heads were used. As a result, STL models are sliced
rapidly, and fully closed contours are produced effectively and without errors.
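As an illustration of the tail-to-head idea (a hypothetical sketch, not the paper's implementation; function names and the tolerance are assumptions), unordered slice segments can be chained into closed contours by always taking the remaining segment whose tail lies nearest to the current head:

```python
import math

def chain_segments(segments, tol=1e-6):
    """Chain unordered 2-D line segments (tail, head) into closed contours
    by repeatedly searching for the segment whose tail is nearest to the
    current head (the tail-to-head search described in the abstract)."""
    remaining = list(segments)
    contours = []
    while remaining:
        tail, head = remaining.pop(0)
        contour = [tail, head]
        while True:
            # nearest-distance analysis: pick the segment starting
            # closest to the current head of the contour
            best_i, best_d = None, float("inf")
            for i, (t, h) in enumerate(remaining):
                d = math.dist(contour[-1], t)
                if d < best_d:
                    best_i, best_d = i, d
            if best_i is None or best_d > tol:
                break
            _, h = remaining.pop(best_i)
            contour.append(h)
            if math.dist(contour[-1], contour[0]) <= tol:  # contour closed
                break
        contours.append(contour)
    return contours

# A unit square given as four unordered segments
square = [((0, 0), (1, 0)), ((1, 1), (0, 1)), ((1, 0), (1, 1)), ((0, 1), (0, 0))]
loops = chain_segments(square)
# -> one closed contour visiting all four corners
```

A real slicer would additionally have to orient reversed segments and cope with gaps larger than the tolerance, which is where the error tolerance of the nearest-distance analysis matters.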
Digital Article Identifier (DOI):
108
15054
Manufacture of Electroless Nickel/YSZ Composite Coatings
Abstract: The paper discusses optimising work on a method of processing ceramic/metal composite coatings for various applications, and is based on preliminary work on processing anodes for solid oxide fuel cells (SOFCs). The composite coating is manufactured by the electroless co-deposition of nickel and yttria-stabilised zirconia (YSZ) simultaneously on to a ceramic substrate. The effects on coating characteristics of substrate surface treatments and of electroless nickel bath parameters, such as pH and agitation method, are also investigated. Characterisation of the resulting deposit by scanning electron microscopy (SEM) and energy dispersive X-ray analysis (EDXA) is also discussed.
Digital Article Identifier (DOI):
107
5413
Ageing Assessment of Insulation Systems by Absorption/Resorption Currents
Abstract: Degradation of polymeric insulation systems of
electrical equipment increases the space charge density and the
concentration of electrical dipoles. Consequently, the maximum
values and the slopes of absorption/resorption (A/R) currents can
change with insulation system ageing. In this paper, an analysis of
the nature of the A/R currents and the importance of their
components, especially the polarization current and the current given
by the space charge, is presented. The experimental study concerns
the A/R currents measurements of plane samples (made from
CALMICAGLAS tapes), virgin and thermally accelerated aged. The
obtained results show that the ageing process produces an increase in
the values and a decrease in the slopes of the A/R currents. Finally, the
possibility of estimating the insulation's ageing state and lifetime from
A/R current measurements is discussed.
Digital Article Identifier (DOI):
106
10549
A Model Predicting the Microbiological Quality of Aquacultured Sea Bream (Sparus aurata) According to Physicochemical Data: An Application in Western Greece Fish Aquaculture
Abstract: Monitoring of the microbial flora in aquacultured sea bream, in relation to the physicochemical parameters of the rearing seawater, led to a model describing the influence of the latter on the quality of the fisheries. Fish were sampled over eight months from four aqua farms in Western Greece and analyzed for psychrotrophic and H2S-producing bacteria, Salmonella sp. and heterotrophic plate count (PCA), with simultaneous physical evaluation. Temperature, dissolved oxygen, pH, conductivity, TDS, salinity, NO3- and NH4+ ions were recorded. Temperature, dissolved oxygen and conductivity were correlated, respectively, to PCA, Pseudomonas sp. and Shewanella sp. counts. These parameters were the inputs of the model, which gave as outputs the predicted PCA, Vibrio sp., Pseudomonas sp. and Shewanella sp. counts, and the fish microbiological quality. The present study provides, for the first time, a ready-to-use predictive model of fisheries hygiene, leading to an effective management system for the optimization of aquaculture fisheries quality.
Digital Article Identifier (DOI):
105
4831
Academic Program Administration via Semantic Web – A Case Study
Abstract: Generally, administrative systems in an academic
environment are disjoint and support independent queries. The
objective in this work is to semantically connect these independent
systems to provide support to queries run on the integrated platform.
The proposed framework, by enriching educational material in the
legacy systems, provides a value-added semantics layer where
activities such as annotation, query and reasoning can be carried out
to support management requirements. We discuss the development of
this ontology framework with a case study of UAE University
program administration to show how semantic web technologies can
be used by administration to develop student profiles for better
academic program management.
Digital Article Identifier (DOI):
104
2941
Packaging and Interconnection Technologies of Power Devices, Challenges and Future Trends
Abstract: Standard packaging and interconnection technologies
of power devices have difficulties meeting the increasing thermal
demands of new application fields of power electronics devices.
Main restrictions are the decreasing reliability of bond-wires and
solder layers with increasing junction temperature. In the last few
years intensive efforts have been invested in developing new
packaging and interconnection solutions which may open a path to
future application of power devices. In this paper, the main failure
mechanisms of power devices are described, and the principles of new
packaging and interconnection concepts and their power cycling
reliability are presented.
Digital Article Identifier (DOI):
103
7248
Blood Cell Dynamics in a Simple Shear Flow using an Implicit Fluid-Structure Interaction Method Based on the ALE Approach
Abstract: A numerical method is developed for simulating
the motion of particles with arbitrary shapes in an effectively
infinite or bounded viscous flow. The particle translational and
angular motions are numerically investigated using a fluid-structure
interaction (FSI) method based on the Arbitrary-Lagrangian-Eulerian
(ALE) approach and the dynamic mesh method (smoothing and
remeshing) in FLUENT (ANSYS Inc., USA). Also, the effects of
arbitrary shapes on the dynamics are studied using the FSI method
which could be applied to the motions and deformations of a single
blood cell and multiple blood cells, and the primary thrombogenesis
caused by platelet aggregation. It is expected that, combined with a
sophisticated large-scale computational technique, the simulation
method will be useful for understanding the overall properties of blood
flow from blood cellular level (microscopic) to the resulting
rheological properties of blood as a mass (macroscopic).
Digital Article Identifier (DOI):
102
14924
Characteristics of Hemodynamics in a Bileaflet Mechanical Heart Valve using an Implicit FSI Method
Abstract: Human heart valves diseased by congenital heart
defects, rheumatic fever, bacterial infection or cancer may develop
stenosis or insufficiency. Treatment may be with medication but
often involves valve repair or replacement (insertion of an artificial
heart valve). Bileaflet mechanical heart valves (BMHVs) are widely
implanted to replace the diseased heart valves, but still suffer from
complications such as hemolysis, platelet activation, tissue
overgrowth and device failure. These complications are closely related
to both flow characteristics through the valves and leaflet dynamics. In
this study, the physiological flow interacting with the moving leaflets
in a bileaflet mechanical heart valve (BMHV) is simulated with a
strongly coupled implicit fluid-structure interaction (FSI) method
which is newly organized based on the Arbitrary-Lagrangian-Eulerian
(ALE) approach and the dynamic mesh method (remeshing) of
FLUENT. The simulated results are in good agreement with previous
experimental studies. This study shows the applicability of the present
FSI model to the complicated physics interacting between fluid flow
and moving boundary.
Digital Article Identifier (DOI):
101
12710
Multi-Enterprise Tie and Co-Operation Mechanism in Mexican Agro Industry SMEs
Abstract: The aim of this paper is to explain what a multi-enterprise tie is, what evidence its analysis provides, and how the cooperation mechanism influences the establishment of a multi-enterprise tie. The study focuses on businesses of smaller dimension, geographically dispersed, whose businessmen are learning to cooperate in an international environment. The empirical evidence obtained so far permits the following conclusions: the tie is not long-lasting, it has an end; opportunism is an opportunity to learn; the multi-enterprise tie is a space in which to learn about the cooperation mechanism; the local tie permits a businessman to alternate between competition and cooperation strategies; the disappearance of a tie is a learning experience for a businessman, diminishing the possibility of failure in the next tie; the cooperation mechanism tends to eliminate hierarchical relations; the multi-enterprise tie diminishes asymmetries and permits SMEs to be in a better position when they negotiate with large companies; and the multi-enterprise tie impacts positively on the local system. The empirical evidence was collected through the following instruments: direct observation at a business encounter attended by the businesses in 2003 (202 Mexican agro-industry SMEs), a survey applied in 2004 (129), a questionnaire applied in 2005 (86 businesses), field visits to the businesses during the period 2006-2008, and a survey applied by telephone in 2008 (55 Mexican agro-industry SMEs).
Digital Article Identifier (DOI):
100
5861
Salient Points Reduction for Content-Based Image Retrieval
Abstract: Salient points are frequently used to represent local
properties of the image in content-based image retrieval. In this paper,
we present a reduction algorithm that extracts the locally most salient
points such that they not only give a satisfying representation of an
image, but also make the image retrieval process efficient. The
algorithm recursively reduces the continuous point set by their
corresponding saliency values under a top-down approach. The
resulting salient points are evaluated with an image retrieval system
using the Hausdorff distance. The experiments show that our method
is robust and that the extracted salient points provide better retrieval
performance compared with other point detectors.
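For reference, the Hausdorff distance used in the retrieval evaluation has a standard definition (this is textbook material, not code from the paper) and can be computed directly for small point sets:

```python
import math

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two 2-D point sets,
    i.e. the largest nearest-neighbour gap in either direction."""
    def directed(P, Q):
        return max(min(math.dist(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))

a = [(0, 0), (1, 0), (0, 1)]
b = [(0, 0), (1, 0), (0, 2)]
d = hausdorff(a, b)  # farthest nearest-neighbour gap: |(0,1)-(0,2)| = 1.0
```

The brute-force version above is O(|A|·|B|), which is acceptable precisely because the reduction algorithm keeps the salient point sets small.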
Digital Article Identifier (DOI):
99
3392
Design and Simulation of a Concentrated Luneberg Antenna
Abstract: The Luneberg lens is a new generation of antenna,
developed in the last few years, that has established itself strongly in
the microwave, communications and telescope areas. The idea of this
research is to improve the radiation pattern by decreasing the side
lobes and increasing the main lobe. The new design is proposed to
work in the X-band. The simulated results and analysis are presented.
Digital Article Identifier (DOI):
98
15621
Haptics Enabled Offline AFM Image Analysis
Abstract: Current advancements in nanotechnology are dependent
on the capabilities that can enable nano-scientists to extend their eyes
and hands into the nano-world. For this purpose, a haptics (devices
capable of recreating tactile or force sensations) based system for
AFM (Atomic Force Microscope) is proposed. The system enables
the nano-scientists to touch and feel the sample surfaces, viewed
through AFM, in order to provide them with better understanding of
the physical properties of the surface, such as roughness, stiffness and
shape of molecular architecture. At this stage, the proposed work uses
offline images produced using AFM and performs image analysis to
create virtual surfaces suitable for haptic force analysis. The research
work is in the process of extension from an offline to an online process
where interaction will be done directly on the material surface for
realistic analysis.
Digital Article Identifier (DOI):
97
8294
The Variable Step-Size Gauss-Seidel Pseudo Affine Projection Algorithm
Abstract: In this paper, a new pseudo affine projection (AP)
algorithm based on Gauss-Seidel (GS) iterations is proposed for
acoustic echo cancellation (AEC). It is shown that the algorithm is
robust against near-end signal variations (including double-talk).
Digital Article Identifier (DOI):
96
13634
Empirical Study of Real Retail Trade Turnover
Abstract: This paper deals with an econometric analysis of real
retail trade turnover. It is part of an extensive scientific research
project on modern trends in the Croatian national economy. At the end
of the transition period, Croatia confronts the challenges and
problems of a high-consumption society. In this environment, real
retail trade turnover, average monthly real wages and household loans
are chosen as the crucial economic variables for the analysis. For the
complete procedure of multiple econometric analysis, the database
had to be adjusted: it was necessary to deflate the original national
statistics on retail trade turnover using consumer price indices, as
well as to seasonally adjust their contemporary behavior. In
establishing the model it was necessary to apply procedures to
overcome the autocorrelation and collinearity problems. Moreover, a
specific appropriate econometric instrument has been applied for the
case of a time-series shift. It should be emphasized that the whole
methodological procedure is based on real Croatian national economy
time-series.
Digital Article Identifier (DOI):
95
8368
Numerical Optimization within Vector of Parameters Estimation in Volatility Models
Abstract: In this paper the usefulness of the quasi-Newton iteration
procedure for estimating the parameters of the conditional variance
equation within the BHHH algorithm is presented. Analytical
maximization of the likelihood function using first and second
derivatives is too complex when the variance is time-varying. The
advantage of the BHHH algorithm in comparison with other
optimization algorithms is that it requires no third derivatives, with
assured convergence. To simplify the optimization procedure, the BHHH
algorithm uses the approximation of the matrix of second derivatives
according to the information identity. However, parameter estimation in
an a/symmetric GARCH(1,1) model assuming a normal distribution of
returns is not that simple, i.e. it is difficult to solve analytically.
The maximum of the likelihood function can be found by an iteration
procedure until no further increase can be achieved. Because the
solutions of the numerical optimization are very sensitive to the
initial values, starting parameters for the GARCH(1,1) model are defined.
The number of iterations can be reduced using starting values close
to the global maximum. The optimization procedure is illustrated in the
framework of modeling volatility on a daily basis of the most liquid
stocks on the Croatian capital market: Podravka stocks (food industry),
Petrokemija stocks (fertilizer industry) and Ericsson Nikola Tesla
stocks (information and communications industry).
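To make the objective concrete, here is a minimal sketch of the Gaussian GARCH(1,1) likelihood that BHHH-type iterations maximise (illustrative code, not the authors'; the symmetric variance recursion and the sample-variance initialisation are assumptions):

```python
import math

def garch_neg_loglik(params, returns):
    """Negative Gaussian log-likelihood of a GARCH(1,1) model,
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1};
    a numerical optimiser would minimise this over (omega, alpha, beta)."""
    omega, alpha, beta = params
    sigma2 = sum(r * r for r in returns) / len(returns)  # start at sample variance
    nll = 0.0
    for r in returns:
        nll += 0.5 * (math.log(2 * math.pi) + math.log(sigma2) + r * r / sigma2)
        sigma2 = omega + alpha * r * r + beta * sigma2  # variance recursion
    return nll

# Toy return series; realistic starting values keep alpha + beta < 1
rets = [0.01, -0.02, 0.015, -0.005, 0.02]
nll = garch_neg_loglik((1e-6, 0.05, 0.90), rets)
```

Starting values such as omega near the unconditional variance times (1 - alpha - beta), alpha around 0.05 and beta around 0.9 are the kind of "close to the global maximum" initialisation the abstract refers to.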
Digital Article Identifier (DOI):
94
9739
E-Business Security: Methodological Considerations
Abstract: A great deal of research work in the field of information
systems security has been based on a positivist paradigm. Applying
the reductionism concept of the positivist paradigm to information
security means missing the bigger picture; this lack of holism
could be one of the reasons why security is still overlooked,
comes as an afterthought, or is perceived from a purely technical
dimension. We need to reshape our thinking and attitudes towards
security, especially in a complex and dynamic environment such as e-
Business, to develop a holistic understanding of e-Business security in
relation to its context as well as considering all the stakeholders in
the problem area. In this paper we argue the suitability of, and need for,
a more inductive, interpretive approach and qualitative research methods
to investigate e-Business security. Our discussion is based on a
holistic framework of enquiry, the nature of the research problem, the
underlying theoretical lens and the complexity of the e-Business
environment. At the end we present a research strategy for
developing a holistic framework for the understanding of e-Business
security problems in the context of developing countries, based on an
interdisciplinary inquiry which considers their needs and
requirements.
Digital Article Identifier (DOI):
93
14412
New Identity Management Scheme and its Formal Analysis
Abstract: As the Internet technology has developed rapidly, the
number of identities (IDs) managed by each individual person has
increased and various ID management technologies have been
developed to assist users. However, most of these technologies are
vulnerable to the existing hacking methods such as phishing attacks
and key-logging. If the administrator's password is exposed, an
attacker can access the entire contents of the stolen user's data files on
other devices. To solve these problems, we propose here a new ID
management scheme based on a Single Password Protocol. The paper
presents the details of the new scheme as well as a formal analysis of
the method using BAN Logic.
Digital Article Identifier (DOI):
92
13120
Measuring Pressure Wave Velocity in a Hydraulic System
Abstract: Pressure wave velocity in a hydraulic system was
determined using piezo pressure sensors without removing fluid from
the system. The measurements were carried out in a low pressure
range (0.2 – 6 bar) and the results were compared with the results of
other studies. This method is not as accurate as measurement with
separate measurement equipment, but the fluid is in the actual
machine the whole time and the effect of air is taken into
consideration if air is present in the system. The amount of air is
estimated by calculations and comparisons with other studies.
This measurement equipment can also be installed in an existing
machine and it can be programmed so that it measures in real time.
Thus, it could be used e.g. to control dampers.
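One common way to extract a wave velocity from two pressure sensor signals (an assumption here, not necessarily the authors' exact procedure) is to divide the sensor spacing by the time delay at the peak of the signals' cross-correlation:

```python
import numpy as np

def wave_velocity(p1, p2, dt, sensor_spacing):
    """Estimate pressure-wave velocity from two piezo pressure signals:
    the propagation delay is taken at the peak of their cross-correlation."""
    p1 = p1 - np.mean(p1)
    p2 = p2 - np.mean(p2)
    corr = np.correlate(p2, p1, mode="full")
    lag = np.argmax(corr) - (len(p1) - 1)   # samples by which p2 lags p1
    return sensor_spacing / (lag * dt)

# Synthetic pulse arriving at sensor 2 a delay of 0.5 ms after sensor 1
dt = 1e-5
t = np.arange(0, 0.01, dt)
pulse = np.exp(-((t - 0.002) / 2e-4) ** 2)
delayed = np.exp(-((t - 0.0025) / 2e-4) ** 2)
c = wave_velocity(pulse, delayed, dt, sensor_spacing=0.7)  # sensors 0.7 m apart
# 0.7 m / 0.5 ms = 1400 m/s, a plausible value for hydraulic oil
```

In a real system the measured velocity would be lower than in pure oil whenever entrained air is present, which is exactly the effect the abstract says the in-machine measurement captures.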
Digital Article Identifier (DOI):
91
561
Optimizing Spatial Trend Detection By Artificial Immune Systems
Abstract: Spatial trends are one of the valuable patterns in geo
databases. They play an important role in data analysis and
knowledge discovery from spatial data. A spatial trend is a regular
change of one or more non spatial attributes when spatially moving
away from a start object. Spatial trend detection is a graph search
problem; therefore, heuristic methods can be a good solution. The artificial
immune system (AIS) is a special method for searching and
optimizing. AIS is a novel evolutionary paradigm inspired by the
biological immune system. The models based on immune system
principles, such as the clonal selection theory, the immune network
model or the negative selection algorithm, have been finding
increasing applications in fields of science and engineering.
In this paper, we develop a novel immunological algorithm based
on the clonal selection algorithm (CSA) for spatial trend detection. We
create a neighborhood graph and neighborhood paths, and then select
the spatial trends whose affinity is high as antibodies. In an
evolutionary process with the artificial immune algorithm, the affinity of
low-affinity trends is increased by mutation until the stop condition is satisfied.
Digital Article Identifier (DOI):
90
6853
Compensation-Based Current Decomposition
Abstract: This paper deals with the current space-vector
decomposition in three-phase, three-wire systems on the basis of
some case studies. We propose four components of the current
space-vector in terms of DC and AC components of the instantaneous
active and reactive powers. The term of supplementary useless
current vector is also pointed out. The analysis shows that the current
decomposition which respects the definition of the instantaneous
apparent power vector is useful for compensation reasons only if the
supply voltages are sinusoidal. A modified definition of the
components of the current is proposed for the operation under
nonsinusoidal voltage conditions.
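For context, the instantaneous active and reactive powers underlying such decompositions are commonly defined in the Clarke (alpha-beta) frame. A small sketch of the standard p-q definitions (textbook forms, not the paper's modified components):

```python
import numpy as np

def clarke(a, b, c):
    """Power-invariant Clarke transform of three-phase quantities."""
    alpha = np.sqrt(2 / 3) * (a - 0.5 * b - 0.5 * c)
    beta = np.sqrt(2 / 3) * (np.sqrt(3) / 2) * (b - c)
    return alpha, beta

def instantaneous_powers(va, vb, vc, ia, ib, ic):
    """Instantaneous active and reactive power p, q in the alpha-beta frame."""
    v_al, v_be = clarke(va, vb, vc)
    i_al, i_be = clarke(ia, ib, ic)
    p = v_al * i_al + v_be * i_be          # instantaneous active power
    q = v_be * i_al - v_al * i_be          # instantaneous reactive power
    return p, q

# Balanced sinusoidal voltages with in-phase currents -> constant p, zero q
t = np.linspace(0, 0.04, 2000)
w = 2 * np.pi * 50
va, vb, vc = (np.cos(w * t + ph) for ph in (0, -2 * np.pi / 3, 2 * np.pi / 3))
ia, ib, ic = (np.cos(w * t + ph) for ph in (0, -2 * np.pi / 3, 2 * np.pi / 3))
p, q = instantaneous_powers(va, vb, vc, ia, ib, ic)
```

Under nonsinusoidal voltages p and q acquire AC components, which is precisely the case for which the abstract proposes modified current components.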
Digital Article Identifier (DOI):
89
4894
Shape Optimization of Permanent Magnet Motors Using the Reduced Basis Technique
Abstract: In this paper, a tooth shape optimization method for
cogging torque reduction in Permanent Magnet (PM) motors is
developed by using the Reduced Basis Technique (RBT) coupled by
Finite Element Analysis (FEA) and Design of Experiments (DOE)
methods. The primary objective of the method is to reduce the
enormous number of design variables required to define the tooth
shape. RBT is a weighted combination of several basis shapes. The
aim of the method is to find the best combination using the weights
for each tooth shape as the design variables. A multi-level design
process is developed to find suitable basis shapes or trial shapes at
each level that can be used in the reduced basis technique. Each level
is treated as a separated optimization problem until the required
objective – minimum cogging torque – is achieved. The process is
started with geometrically simple basis shapes that are defined by
their shape co-ordinates. The experimental design of Taguchi method
is used to build the approximation model and to perform
optimization. This method is demonstrated on the tooth shape
optimization of an 8-pole/12-slot PM motor.
Digital Article Identifier (DOI):
88
14394
Effect of Recycle Gas on Activity and Selectivity of Co-Ru/Al2O3 Catalyst in Fischer-Tropsch Synthesis
Abstract: In industrial scale of Gas to Liquid (GTL) process in
Fischer-Tropsch (FT) synthesis, a part of reactor outlet gases such as
CO2 and CH4 as side reaction products, is usually recycled. In this
study, the influence of CO2 and CH4 on the performance and
selectivity of Co-Ru/Al2O3 catalyst is investigated by injection of
these gases (0-20 vol. % of feed) to the feed stream. The effect of
temperature and feed flow rate are also inspected. The results show
that low amounts of CO2 in the feed stream do not change the
catalyst activity significantly, but increasing the amount of CO2 (more
than 10 vol. %) causes the CO conversion to decrease and the
selectivity of heavy components to increase. Methane acts as an inert
gas and does not affect the catalyst performance. Increasing the feed flow
rate has a negative effect on both CO conversion and heavy component
selectivity. By raising the temperature, CO conversion will increase
but there are more volatile components in the product. The effect of
CO2 on the catalyst deactivation is also investigated carefully and a
mechanism is suggested to explain the influence of CO2 on
catalyst deactivation.
Digital Article Identifier (DOI):
87
6014
New Product-Type Estimators for the Population Mean Using Quartiles of the Auxiliary Variable
Abstract: In this paper, we suggest new product-type estimators for the population mean of the variable of interest, exploiting the first or the third quartile of the auxiliary variable. We obtain the mean square error equations and the bias for the estimators. We study the properties of these estimators using the simple random sampling (SRS) and ranked set sampling (RSS) methods. It is found that SRS and RSS produce approximately unbiased estimators of the population mean. However, the RSS estimators are more efficient than those obtained using SRS, based on the same number of measured units, for all values of the correlation coefficient.
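The abstract does not state the exact estimator form, so as a purely illustrative sketch we use one plausible quartile-adjusted product-type estimator, ybar_p = ybar (xbar + Q1)/(Xbar + Q1), and check its approximate unbiasedness under SRS by Monte Carlo (all names and the population below are hypothetical):

```python
import random, statistics

def product_estimator(y_sample, x_sample, X_bar, q):
    """Illustrative quartile-adjusted product-type estimator
    (not necessarily the paper's form):
        ybar_p = ybar * (xbar + q) / (Xbar + q)."""
    ybar = statistics.mean(y_sample)
    xbar = statistics.mean(x_sample)
    return ybar * (xbar + q) / (X_bar + q)

random.seed(1)
# Negatively correlated (y, x) population, as product estimators assume
pop_x = [random.uniform(1, 9) for _ in range(5000)]
pop_y = [10 - x + random.gauss(0, 0.5) for x in pop_x]
X_bar = statistics.mean(pop_x)
q1 = statistics.quantiles(pop_x, n=4)[0]  # first quartile of auxiliary variable

# Monte Carlo bias check under simple random sampling (SRS), n = 30
est = []
for _ in range(500):
    idx = random.sample(range(len(pop_x)), 30)
    est.append(product_estimator([pop_y[i] for i in idx],
                                 [pop_x[i] for i in idx], X_bar, q1))
bias = statistics.mean(est) - statistics.mean(pop_y)  # close to zero
```

Repeating the experiment with ranked-set samples in place of `random.sample` would reproduce the SRS-versus-RSS efficiency comparison the abstract describes.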
Digital Article Identifier (DOI):
86
9983
Evidence of the Long-run Equilibrium between Money Demand Determinants in Croatia
Abstract: In this paper the real money demand function is analyzed
within a multivariate time-series framework. The cointegration approach
(Johansen procedure) is used, assuming interdependence between the
money demand determinants, which are nonstationary variables. This
helps us to understand the behavior of money demand in Croatia,
revealing the significant influences between the endogenous variables in
the vector autoregression (VAR) system, i.e. the vector error correction
model (VECM). Exogeneity of the explanatory variables is tested.
The long-run money demand function is estimated, indicating a slow speed
of adjustment in removing the disequilibrium. The empirical results
provide evidence that real industrial production and the exchange
rate explain most of the variation in money demand in the long run,
while the interest rate is significant only in the short run.
Digital Article Identifier (DOI):
85
9789
An Expectation of the Rate of Inflation According to Inflation-Unemployment Interaction in Croatia
Abstract: Based on the interaction between inflation and
unemployment, the expected rate of inflation in Croatia is estimated.
The interaction between inflation and unemployment is described by
a model based on three first-order differential (i.e., difference)
equations: the Phillips relation, the adaptive expectations equation
and the monetary-policy equation. The resulting equation is a
second-order differential (difference) equation that describes the
time path of inflation. Data on the rates of inflation and
unemployment are used for parameter estimation. On the basis of the
estimated time paths, a stability and convergence analysis is carried
out for the rate of inflation.
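The time path described by such a second-order difference equation can be simulated directly. The sketch below uses hypothetical coefficients and initial values (not the estimates obtained in the paper) to illustrate the convergence check against the intertemporal equilibrium.

```python
# Time path of inflation from a generic second-order difference equation
#   pi[t+2] + a1*pi[t+1] + a2*pi[t] = b.
# Coefficients and initial values below are hypothetical, not the
# estimates obtained in the paper.

def inflation_path(a1, a2, b, pi0, pi1, steps):
    path = [pi0, pi1]
    for _ in range(steps):
        path.append(b - a1 * path[-1] - a2 * path[-2])
    return path

def equilibrium(a1, a2, b):
    # Intertemporal equilibrium (particular solution): pi* = b / (1 + a1 + a2).
    return b / (1 + a1 + a2)

# The path converges to pi* when both roots of x**2 + a1*x + a2 = 0
# lie inside the unit circle (here the roots are 0.5 and 0.6).
path = inflation_path(a1=-1.1, a2=0.3, b=0.04, pi0=0.10, pi1=0.09, steps=100)
pi_star = equilibrium(-1.1, 0.3, 0.04)
print(round(pi_star, 4), round(path[-1], 4))
```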
Digital Article Identifier (DOI):
84
3989
Comparative Analysis of the Stochastic and Parsimonious Interest Rates Models on Croatian Government Market
Abstract: The paper provides a discussion of the most relevant
aspects of yield curve modeling. Two classes of models are
considered: stochastic and parsimonious function based, through the
approaches developed by Vasicek (1977) and Nelson and Siegel
(1987). Yield curve estimates for Croatia are presented, their
dynamics are analyzed, and finally a comparative analysis of the
models is conducted.
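For reference, the Nelson and Siegel (1987) curve named in the abstract has a well-known three-factor form. The sketch below evaluates it with hypothetical parameter values, not the Croatian estimates from the paper.

```python
import math

# The Nelson and Siegel (1987) parsimonious yield curve, one of the two
# model classes compared in the paper. Parameter values here are
# hypothetical, not the Croatian estimates.

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    # tau: maturity in years; lam > 0: exponential decay parameter.
    x = tau / lam
    slope = (1 - math.exp(-x)) / x
    return beta0 + beta1 * slope + beta2 * (slope - math.exp(-x))

# Short end tends to beta0 + beta1; long end tends to beta0.
maturities = (0.25, 1, 2, 5, 10, 30)
curve = [nelson_siegel(t, beta0=0.05, beta1=-0.02, beta2=0.01, lam=1.5)
         for t in maturities]
print([round(y, 4) for y in curve])
```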
Digital Article Identifier (DOI):
83
5173
Determining Optimal Demand Rate and Production Decisions: A Geometric Programming Approach
Abstract: In this paper a nonlinear model is presented to
demonstrate the relation between production and marketing
departments. By introducing functions such as the pricing cost and
market share loss functions, we attempt to show aspects of market
modelling that have not been considered before. The proposed model
is a constrained signomial geometric programming model. To solve
the model, after variable modifications, an iterative technique based
on the concept of the geometric mean is introduced to solve the
resulting non-standard posynomial model, which can be applied to a
wide variety of models in non-standard posynomial geometric
programming form. Finally, a numerical analysis is presented to
validate the proposed model.
Digital Article Identifier (DOI):
82
4950
Validation and Selection between Machine Learning Technique and Traditional Methods to Reduce Bullwhip Effects: a Data Mining Approach
Abstract: The aim of this paper is to present a three-step
methodology to forecast supply chain demand. In the first step,
various data mining techniques are applied in order to prepare data
for entry into the forecasting models. In the second step, the
modeling step, an artificial neural network and a support vector
machine are presented, after defining the Mean Absolute Percentage
Error (MAPE) index for measuring error. The structure of the
artificial neural network is selected based on previous researchers'
results, and in this article the accuracy of the network is increased by
using sensitivity analysis. The best forecast from the classical
forecasting methods (moving average, exponential smoothing, and
exponential smoothing with trend) is obtained from the prepared
data, and this forecast is compared with the results of the support
vector machine and the proposed artificial neural network. The
results show that the artificial neural network forecasts more
precisely than the other methods. Finally, the stability of the
forecasting methods is analyzed using raw data, and the effectiveness
of the clustering analysis is measured.
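As a sketch of the classical baselines named in the abstract, the code below computes one-step-ahead moving average and simple exponential smoothing forecasts and scores them with MAPE. The demand series and parameters are illustrative, not the paper's data.

```python
def moving_average_forecast(series, window):
    # One-step-ahead forecasts for series[window:], each the mean of the
    # previous `window` observations.
    return [sum(series[t - window:t]) / window for t in range(window, len(series))]

def exp_smoothing_forecast(series, alpha):
    # One-step-ahead forecasts for series[1:]: f(t+1) = f(t) + alpha*(y(t) - f(t)).
    f, out = series[0], []
    for y in series[:-1]:
        f = f + alpha * (y - f)
        out.append(f)
    return out

def mape(actual, forecast):
    # Mean Absolute Percentage Error, in percent.
    return 100 * sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / len(actual)

demand = [120, 132, 128, 141, 150, 147, 156, 162, 158, 170]  # illustrative data
ma = moving_average_forecast(demand, window=3)
es = exp_smoothing_forecast(demand, alpha=0.4)
print(round(mape(demand[3:], ma), 2), round(mape(demand[1:], es), 2))
```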
Digital Article Identifier (DOI):
81
9955
Integrating the Theory of Constraints and Six Sigma in Manufacturing Process Improvement
Abstract: Six Sigma is a well-known discipline that reduces
variation using complex statistical tools and the DMAIC model. By
integrating Goldratt's Theory of Constraints, the Five Focusing
Steps and System Thinking tools, Six Sigma projects can be selected
where they can have the greatest impact in the company. This
research defines an integrated model of Six Sigma and constraint
management that provides a step-by-step guide using the original
methodologies from each discipline. It is evaluated in a case study
from the production line of an automobile V8 engine monoblock,
resulting in an increase in line capacity from 18.7 pieces per hour to
22.4 pieces per hour, a 60% reduction of work-in-process and a
variation decrease of 0.73%.
Digital Article Identifier (DOI):
80
9267
Tracing Quality Cost in a Luggage Manufacturing Industry
Abstract: Quality costs are the costs associated with preventing,
finding, and correcting defective work. Since the main language of
corporate management is money, quality-related costs act as a means
of communication between the staff of quality engineering
departments and the company managers. The objective of quality
engineering is to minimize the total quality cost across the life of the
product. Quality costs provide a benchmark against which
improvement can be measured over time, and a rupee-based report
on quality improvement efforts. They are an effective tool to
identify, prioritize and select quality improvement projects. A review
of the literature showed that a simplified methodology for collecting
quality cost data in a manufacturing industry was required. A
quantified standard methodology is proposed for collecting data on
the various elements of the quality cost categories for the
manufacturing industry. In the light of research carried out so far, it
is also felt necessary to standardise the cost elements in each of the
prevention, appraisal, internal failure and external failure cost
categories. Here an attempt is made to standardise the various cost
elements applicable to the manufacturing industry, and data are
collected using the proposed quantified methodology. This paper
discusses a case study carried out in a luggage manufacturing
industry.
Digital Article Identifier (DOI):
79
339
On the Variability of Tool Wear and Life at Disparate Operating Parameters
Abstract: The stochastic nature of tool life derived from conventional discrete-wear data in experimental tests usually arises from many individual and interacting parameters. It is common practice in batch production to continually use the same tool to machine different parts, using disparate machining parameters. In such an environment, the optimal points at which tools have to be changed, while achieving minimum production cost and maximum production rate within the surface roughness specifications, have not been adequately studied. In the current study, two relevant aspects are investigated using coated and uncoated inserts in turning operations: (i) the accuracy of using machinability information from fixed-parameter testing procedures when variable-parameter situations emerge, and (ii) the credibility of tool life machinability data from prior discrete testing procedures in non-stop machining. A novel technique is proposed and verified to normalize conventional fixed-parameter machinability data to suit cases where parameters have to be changed for the same tool. An experimental investigation has also been established to evaluate the error in tool life assessment when machinability from discrete testing procedures is employed in uninterrupted practical machining.
Digital Article Identifier (DOI):
78
4683
Production Planning and Measuring Method for Non Patterned Production System Using Stock Cutting Model
Abstract: Simple methods for planning and measuring a
non-patterned production system are developed from the basic
definition of working efficiency. Processing time is assigned as the
variable and is used to write the equation of production efficiency.
This equation is then used to develop a planning method for the
production of interest using the one-dimensional stock cutting
problem. Application of the developed method shows that production
efficiency and production planning can be determined effectively.
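One-dimensional stock cutting can be illustrated with a simple first-fit decreasing heuristic. The sketch below is only an illustration of the underlying problem, not the planning method developed in the paper; the order lengths are made up.

```python
# First-fit decreasing for one-dimensional stock cutting: pack ordered cut
# lengths into as few fixed-length stock bars as possible. An illustration
# of the underlying problem, not the paper's planning method.

def first_fit_decreasing(orders, stock_length):
    bars = []  # each bar holds a list of cut lengths
    for length in sorted(orders, reverse=True):
        if length > stock_length:
            raise ValueError("order longer than stock")
        for bar in bars:
            if sum(bar) + length <= stock_length:
                bar.append(length)  # fits in an already-opened bar
                break
        else:
            bars.append([length])   # open a new bar
    return bars

bars = first_fit_decreasing([45, 30, 30, 25, 20, 20, 10], stock_length=60)
# A simple efficiency measure: total cut length over total stock consumed.
utilization = sum(map(sum, bars)) / (len(bars) * 60)
print(len(bars), round(utilization, 3))
```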
Digital Article Identifier (DOI):
77
6947
A New Approach for Prioritization of Failure Modes in Design FMEA using ANOVA
Abstract: The traditional Failure Mode and Effects Analysis
(FMEA) uses Risk Priority Number (RPN) to evaluate the risk level
of a component or process. The RPN index is determined by
calculating the product of severity, occurrence and detection indexes.
The most critically debated disadvantage of this approach is that
various sets of these three indexes may produce an identical value of
RPN. This research paper seeks to address the drawbacks in
traditional FMEA and to propose a new approach to overcome these
shortcomings. The Risk Priority Code (RPC) is used to prioritize
failure modes, when two or more failure modes have the same RPN.
A new method is proposed to prioritize failure modes, when there is a
disagreement in ranking scale for severity, occurrence and detection.
An Analysis of Variance (ANOVA) is used to compare means of
RPN values. SPSS (Statistical Package for the Social Sciences)
statistical analysis package is used to analyze the data. The results
presented are based on two case studies. It is found that the proposed
methodology resolves the limitations of the traditional FMEA
approach.
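The RPN index the paper criticizes is simply the product of the three indexes. The toy data below (invented for illustration, not from the paper's case studies) shows how distinct severity/occurrence/detection sets can collide on the same RPN.

```python
# Traditional FMEA risk priority number: RPN = severity * occurrence * detection.
# The failure-mode data below are invented to illustrate the tie problem the
# paper addresses: different index sets can produce an identical RPN.

def rpn(severity, occurrence, detection):
    return severity * occurrence * detection

failure_modes = {
    "seal leak":    (8, 3, 5),
    "shaft crack":  (5, 8, 3),  # same RPN as "seal leak", different risk profile
    "sensor drift": (4, 6, 2),
}

ranked = sorted(failure_modes, key=lambda k: rpn(*failure_modes[k]), reverse=True)
for name in ranked:
    print(name, rpn(*failure_modes[name]))
```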
Digital Article Identifier (DOI):
76
4979
Case on Manufacturing Cell Formation Using Production Flow Analysis
Abstract: This paper offers a case study in which methodological
aspects of cell design for transforming the production process are
applied. The cell redesign in this work is tightly focused on
optimizing material flows under real manufacturing conditions.
Accordingly, several individual techniques were aggregated into a
compact methodical procedure with the aim of building one-piece
flow production. The case study concentrated on the relatively
typical situation of transformation from batch production to cellular
manufacturing.
Digital Article Identifier (DOI):
75
5035
A Goal Programming Approach for Plastic Recycling System in Thailand
Abstract: Plastic waste is a big issue in Thailand, but the amount of recycled plastic in Thailand is still low due to high investment and operating costs. Hence, the rest of the plastic waste is burnt or sent to landfills. In order to be financially viable, an effective reverse logistics infrastructure is required to support product recovery activities. However, there is a conflict between reducing cost and raising the level of environmental protection. The purpose of this study is to build a goal programming (GP) model that can be used to help analyze the proper planning of Thailand's plastic recycling system, which involves multiple objectives. This study considers three objectives: reducing total cost, increasing the amount of plastic recovered, and raising the amount of desired plastic materials in the recycling process. The results from two priority structures show that it is necessary to raise the total cost budget in order to achieve the targets on the amount of recycled plastic and desired plastic materials.
Digital Article Identifier (DOI):
74
10694
Deep Web Content Mining
Abstract: The rapid expansion of the web is causing the constant
growth of information, leading to several problems such as the
increased difficulty of extracting potentially useful knowledge. Web
content mining confronts this problem by gathering explicit
information from different web sites for access and knowledge
discovery. Query interfaces of web databases share common building
blocks. After extracting information with a parsing approach, we use
a new data mining algorithm to match a large number of database
schemas at a time. Using this algorithm increases the speed of
information matching. In addition, instead of simple 1:1 matching, it
performs complex (m:n) matching between query interfaces. In this
paper we present a novel correlation mining algorithm that matches
correlated attributes at lower cost. This algorithm uses the Jaccard
measure to distinguish positively and negatively correlated
attributes. After that, the system matches the user query against the
different query interfaces in a specific domain and finally chooses the
query interface nearest to the user query to answer it.
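The Jaccard measure used to separate correlated attributes is |A ∩ B| / |A ∪ B| over the sets of interfaces in which each attribute occurs. The sketch below applies it to hypothetical attribute/interface data; the names are invented, not from the paper.

```python
# Jaccard measure between two attributes, each represented by the set of
# query interfaces it occurs in. Attribute and interface names are
# hypothetical, invented for illustration.

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 0.0

occurs = {
    "author": {"if1", "if2", "if3", "if4"},
    "title":  {"if2", "if3", "if4", "if5"},
    "isbn":   {"if6"},
}

print(round(jaccard(occurs["author"], occurs["title"]), 2))  # frequently co-occurring attributes
print(round(jaccard(occurs["author"], occurs["isbn"]), 2))   # rarely co-occurring attributes
```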
Digital Article Identifier (DOI):
73
705
An Efficient Approach to Mining Frequent Itemsets on Data Streams
Abstract: The increasing importance of data streams arising in a
wide range of advanced applications has led to the extensive study of
mining frequent patterns. Mining data streams poses many new
challenges, amongst which are the one-scan nature, the unbounded
memory requirement and the high arrival rate of data streams. In this
paper, we propose a new approach for mining itemsets on data
streams. Our approach, SFIDS, has been developed based on the
FIDS algorithm. The main aims were to keep some advantages of the
previous approach, resolve some of its drawbacks, and consequently
to improve run time and memory consumption. Our approach has the
following advantages: it uses a lattice-like data structure for keeping
frequent itemsets, and it separates regions from each other by
deleting common nodes, which decreases the search space, memory
consumption and run time. Finally, under the CPU constraint, when
an increasing data arrival rate overloads the system, SFIDS
automatically detects this situation and discards some of the
unprocessed data. We guarantee that the error of the results is
bounded by a user pre-specified threshold, based on a probabilistic
technique. Final results show that the SFIDS algorithm attains about
a 50% run time improvement over the FIDS approach.
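The bounded-error guarantee mentioned in the abstract is of the kind provided by standard stream-frequency techniques such as lossy counting. The sketch below shows that standard technique for single items; it is an illustrative baseline, not the SFIDS algorithm itself.

```python
# Lossy counting: a standard technique that keeps stream-frequency error
# within a user-specified epsilon. Shown for single items as an
# illustrative baseline; this is not the SFIDS algorithm itself.

def lossy_count(stream, epsilon):
    width = int(1 / epsilon)  # bucket width
    counts, bucket = {}, 1    # item -> (count, max possible undercount delta)
    for n, item in enumerate(stream, 1):
        f, delta = counts.get(item, (0, bucket - 1))
        counts[item] = (f + 1, delta)
        if n % width == 0:    # bucket boundary: prune infrequent entries
            counts = {k: (f, d) for k, (f, d) in counts.items() if f + d > bucket}
            bucket += 1
    return counts  # stored f satisfies: true_count - epsilon*n <= f <= true_count

stream = ["a", "b", "a", "c", "a", "b", "a", "d", "a", "b"] * 30
frequent = lossy_count(stream, epsilon=0.1)
print(sorted(frequent))
```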
Digital Article Identifier (DOI):
72
14661
Small and Silly? or Private Pitfall of Small and Medium-Sized Enterprises
Abstract: Knowledge and related notions have become more and
more important, and today we speak about a knowledge-based
society. Many small and big companies have reacted to these new
challenges, but there is a deep abyss between the knowledge
conception and practice of professional researchers and that of
company life. The question of this research was: how can small and
medium-sized companies meet the demands of the new economy?
Questionnaires were used in this research, and a particular segment
of the domestic knowledge-based economy was focused on. The
researchers wanted to know what the sources of success are and how
they relate to questions of knowledge acquisition, knowledge transfer
and knowledge utilization in small and medium-sized companies.
These companies know that they have to change their behaviour and
thinking, but they are not yet at a level where they can compete with
bigger or multinational companies.
Digital Article Identifier (DOI):
71
11192
A Modified Fuzzy C-Means Algorithm for Natural Data Exploration
Abstract: In data mining, fuzzy clustering algorithms have
demonstrated advantages over crisp clustering algorithms in dealing
with the challenges posed by large collections of vague and uncertain
natural data. This paper reviews the concepts of fuzzy logic and
fuzzy clustering. The classical fuzzy c-means algorithm is presented
and its limitations are highlighted. Based on the study of the fuzzy
c-means algorithm and its extensions, we propose a modification to
the c-means algorithm to overcome its limitations in calculating the
new cluster centers and in finding the membership values with
natural data. The efficiency of the new modified method is
demonstrated on real data collected for Bhutan's Gross National
Happiness (GNH) program.
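For reference, the classical fuzzy c-means baseline that the paper modifies alternates the standard membership and center updates. The sketch below shows that classical loop on toy 1-D data; it is not the paper's modified algorithm.

```python
# One classical fuzzy c-means loop (the baseline the paper modifies),
# on toy 1-D data for brevity. Deterministic initialization from the data.

def fcm(data, c, m=2.0, iters=50):
    centers = list(data[:c])  # simple deterministic initialization
    u = []
    for _ in range(iters):
        # Membership update: u_ik proportional to d_ik^(-2/(m-1)), normalized.
        u = []
        for x in data:
            d = [abs(x - v) or 1e-12 for v in centers]
            w = [dist ** (-2.0 / (m - 1)) for dist in d]
            s = sum(w)
            u.append([wi / s for wi in w])
        # Center update: weighted mean with weights u_ik^m.
        centers = [
            sum(u[k][i] ** m * data[k] for k in range(len(data)))
            / sum(u[k][i] ** m for k in range(len(data)))
            for i in range(c)
        ]
    return centers, u

data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.7]
centers, u = fcm(data, c=2)
print([round(v, 2) for v in sorted(centers)])
```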
Digital Article Identifier (DOI):
70
15463
Design Analysis of a Slotted Microstrip Antenna for Wireless Communication
Abstract: In this paper, a new bandwidth-enhancement design
technique that improves the performance of a conventional
microstrip patch antenna is proposed. The paper presents a novel
wideband probe-fed inverted slotted microstrip patch antenna. The
design adopts contemporary techniques: coaxial probe feeding, an
inverted patch structure and a slotted patch. The composite effect of
integrating these techniques and introducing the proposed patch
offers a low-profile, broadband, high-gain antenna with a low
cross-polarization level. Results for the VSWR, gain and co- and
cross-polarization patterns are presented. The antenna, operating in
the 1.80-2.36 GHz band, shows an impedance bandwidth (2:1
VSWR) of 27% and a gain of 10.18 dBi with a gain variation of 1.12
dBi. Good radiation characteristics, including a cross-polarization
level in the xz-plane below -42 dB, have been obtained.
Digital Article Identifier (DOI):
69
9848
Digital Learning Environments for Joint Master in Science Programmes in Building and Construction in Europe: Experimenting with Tools and Technologies
Abstract: Recent developments in information and communication
technologies (ICT) have created excellent conditions for profoundly
enhancing traditional learning and teaching practices. New modes of
teaching in higher education can greatly enhance one's ability to
proactively construct his or her personal learning universe. These
developments have made digital learning environments widely
available and accessible. In addition, there is a trend towards
enlargement and specialization in higher education in Europe, as a
result of which existing Master of Science (MSc) programmes are
merged or new programmes are established and offered to students
as joint MSc programmes. In these joint MSc programmes, the need
for (common) digital learning environments capable of surmounting
the barriers of time and location has become evident. This paper
discusses past and ongoing efforts to establish such common digital
learning environments in two joint MSc programmes in Europe and
discusses the way technology-based learning environments affect the
traditional way of learning.
Digital Article Identifier (DOI):
68
207
Using Weblog to Promote Critical Thinking – An Exploratory Study
Abstract: The weblog is an Internet tool that is believed to possess
great potential to facilitate learning in education. This study asks
whether weblogs can be used to promote students' critical thinking.
A group of secondary two students from a Singapore school wrote
weblogs as a substitute for their traditional handwritten assignments.
The topics for the weblogging were taken from the History syllabus
but modified to suit the purpose of this study. Weblogs from the
students were collected and analysed using a known coding system
for measuring critical thinking. Results show that the topic for
blogging is crucial in determining the types of critical thinking
employed by the students. Students displayed critical thinking traits
in the areas of information sourcing, linking information to
arguments, and justifying viewpoints. Students were more critical
when the information for writing on a topic was readily available;
otherwise, they tended to be less critical and more subjective. The
study also found that students lack the ability to source external
information, suggesting that they may need to be taught information
literacy in order to widen their use of critical thinking skills.
Digital Article Identifier (DOI):
67
10124
The Experiences of South-African High-School Girls in a Fab Lab Environment
Abstract: This paper reports on an effort to address the inequality
in girls' and women's access to science, engineering and technology
(SET) education and careers by raising awareness of SET among
secondary school girls in South Africa. Girls participated in the
hands-on, high-tech rapid prototyping environment of a fabrication
laboratory aimed at stimulating creativity and innovation as part of a
Fab Kids initiative. The Fab Kids intervention is about creating a
SET pipeline as part of the Young Engineers and Scientists of Africa
Initiative. The methodology was based on a real-world situation and
a hands-on approach. In the process, participants acquired a number
of skills, including computer-aided design, research, communication,
teamwork, technical drawing, writing and problem-solving skills.
Exposure to technology enhanced the girls' confidence in being able
to handle technology-related tasks.
Digital Article Identifier (DOI):
66
4073
Explorative Data Mining of Constructivist Learning Experiences and Activities with Multiple Dimensions
Abstract: This paper discusses the use of explorative data
mining tools that allow the educator to explore new relationships
between reported learning experiences and actual activities,
even if there are multiple dimensions with a large number
of measured items. The underlying technology is based on
the so-called Compendium Platform for Reproducible Computing
(http://www.freestatistics.org), which was built on top of the
computational R Framework (http://www.wessa.net).
Digital Article Identifier (DOI):
65
11887
Social Influence in the Adoption Process and Usage of Innovation: Gender Differences
Abstract: The purpose of this study is to determine in what ways
elementary education prospective teachers are informed about
innovations and to explain the role of social influence in the usage
process of a technological innovation in terms of gender. The study
group consisted of 300 prospective teachers: 234 females and 66
males. Data were collected by a questionnaire developed by the
researchers. The results showed that, while prospective teachers are
informed about innovations most frequently by the mass media, they
rarely seek expert advice. In addition, the analysis showed that the
social influence on females was significantly higher than on males in
the usage process of a technological innovation.
Digital Article Identifier (DOI):
64
2752
The Usage of Social Networks in Educational Context
Abstract: The possible advantages of technology in the educational
context have required defining the boundaries of formal and informal
learning. The increasing opportunity for ubiquitous learning with
technological support has raised the question of how to discover the
potential of individuals in spontaneous environments such as social
networks. This is related to the question of the purposes for which
social networks are being used. Social networks provide various
advantages in the educational context, such as collaboration,
knowledge sharing, common interests, active participation and
reflective thinking. Consequently, the purpose of this study is to
propose a new model that determines the factors affecting the
adoption of social network applications for use in the educational
context. While developing the model proposal, the existing adoption
and diffusion models were reviewed; an original perspective was
considered more suitable than adopting existing diffusion or
acceptance models wholesale, because education differs in nature
from other organizations. In the proposed model, social factors,
perceived ease of use, perceived usefulness and innovativeness are
the four direct constructs that affect the adoption process.
Facilitating conditions, image, subjective norms and community
identity are incorporated into the model as antecedents of these four
direct constructs.
Digital Article Identifier (DOI):
63
11047
Instructional Design Using the Virtual Ecological Pond for Science Education in Elementary Schools
Abstract: Ecological ponds can be a good teaching tool for
science teachers, but they must be built and maintained properly to
provide students with a safe and suitable learning environment.
Hence, many schools are unable to build an ecological pond. This
study used virtual reality technology to develop a web-based virtual
ecological pond. Supported by situated learning theory and the
instructional design of the "Aquatic Life" learning unit, elementary
school students can actively explore the virtual ecological pond to
observe aquatic animals and plants and learn the concept of
ecological conservation. A teaching experiment was conducted to
investigate the learning effectiveness and practicability of this
instructional design, and the results showed that students improved a
great deal in learning about aquatic life. They found the virtual
ecological pond interesting, easy to operate and helpful for
understanding the aquatic ecological system. Therefore, it is useful in
elementary science education.
Digital Article Identifier (DOI):
62
15080
IDEL - A simple Instructional Design Tool for E-Learning
Abstract: Today's information and knowledge society has placed
new demands on education, and a new paradigm of education is
required. Learning, facilitated by educational systems and the
pedagogic process, is globally undergoing dramatic changes. The
aim of this paper is the development of a simple instructional design
tool for e-learning, named IDEL (Instructional Design for Electronic
Learning), that provides educators with facilities to create their own
courses with the essential educational material and to manage
communication with students. It offers flexibility in the way of
learning and provides ease of use and reusability of resources. IDEL
is a web-based instructional system designed to facilitate the course
design process in accordance with the ADDIE model and
instructional design principles, with emphasis placed on the use of
technology-enhanced learning. An example case of using the ADDIE
model to systematically develop a course and its implementation
with the aid of IDEL is given, and some results from student
evaluation of the tool and the course are reported.
Digital Article Identifier (DOI):
61
2658
Database Development and Discrimination Algorithms for Membrane Protein Functions
Abstract: We have developed a database of membrane protein functions, which contains more than 3000 experimental data points on functionally important amino acid residues in membrane proteins, along with sequence, structure and literature information. Further, we have proposed different methods for identifying membrane proteins based on their functions: (i) discriminating membrane transport proteins from other globular and membrane proteins and classifying them into channels/pores, electrochemical transporters and active transporters, and (ii) identifying the β-signal for the insertion of mitochondrial β-barrel outer membrane proteins and potential targets. Our method showed an accuracy of 82% in discriminating transport proteins and 68% in classifying them into the three different transporter classes. In addition, we have identified a motif for the targeting β-signal and potential candidates for mitochondrial β-barrel membrane proteins. Our methods can be used as effective tools for genome-wide annotation.
Digital Article Identifier (DOI):
60
9103
Optimization by Hybrid Ant Colony for the Bin-Packing Problem
Abstract: The two-dimensional bin-packing problem (2BP) consists in placing a given set of rectangular items into a minimum number of rectangular, identical containers, called bins. This article treats the case of objects with a free 90° orientation. We propose a resolution approach combining ant colony optimization (ACO) and the heuristic method IMA to solve this NP-hard problem.
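As a point of reference for 2BP with free 90° rotation, the sketch below implements a simple shelf-based first-fit heuristic on made-up item data. It is neither the IMA heuristic nor the ACO hybrid proposed in the article.

```python
# Shelf-based first-fit for two-dimensional bin packing with free 90-degree
# rotation. Illustrative baseline only; not the IMA heuristic or ACO hybrid.

def shelf_pack(items, bin_w, bin_h):
    """items: list of (w, h) rectangles; returns the number of bins used."""
    bins = []  # each bin: {"shelves": [{"h": shelf_height, "used": width}], "free_h": ...}
    for w, h in sorted(items, key=min, reverse=True):
        if h > w:
            w, h = h, w  # rotate so each item is wider than tall
        placed = False
        for b in bins:
            for shelf in b["shelves"]:
                if h <= shelf["h"] and shelf["used"] + w <= bin_w:
                    shelf["used"] += w  # fits on an existing shelf
                    placed = True
                    break
            if placed:
                break
            if h <= b["free_h"] and w <= bin_w:
                b["shelves"].append({"h": h, "used": w})  # open a new shelf
                b["free_h"] -= h
                placed = True
                break
        if not placed:
            bins.append({"shelves": [{"h": h, "used": w}], "free_h": bin_h - h})
    return len(bins)

items = [(9, 3), (3, 9), (5, 5), (4, 4), (2, 8), (6, 2), (3, 3)]
print(shelf_pack(items, bin_w=10, bin_h=10))
```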
Digital Article Identifier (DOI):
59
13604
BIDENS: Iterative Density Based Biclustering Algorithm With Application to Gene Expression Analysis
Abstract: Biclustering is a very useful data mining technique for
identifying patterns in which different genes are correlated over a
subset of conditions in gene expression analysis. Association rule
mining is an efficient approach to biclustering, as in the BIMODULE
algorithm, but it is sensitive to the values of its input parameters and
to the discretization procedure used in the preprocessing step;
moreover, when noise is present, classical association rule miners
discover multiple small fragments of the true bicluster but miss the
true bicluster itself. This paper formally presents a generalized
noise-tolerant bicluster model, termed μBicluster. An iterative
algorithm based on the proposed model, termed BIDENS, is
introduced that can discover a set of k possibly overlapping
biclusters simultaneously. Our model uses a more flexible method of
partitioning the dimensions to preserve meaningful and significant
biclusters. The proposed algorithm allows discovering biclusters that
are hard to discover with BIMODULE. An experimental study on
yeast and human gene expression data and several artificial datasets
shows that our algorithm offers substantial improvements over
several previously proposed biclustering algorithms.
Digital Article Identifier (DOI):
58
410
Fuzzy Relatives of the CLARANS Algorithm With Application to Text Clustering
Abstract: This paper introduces new algorithms (a fuzzy relative
of the CLARANS algorithm, FCLARANS, and fuzzy c-medoids
based on randomized search, FCMRANS) for fuzzy clustering of
relational data. Unlike the existing fuzzy c-medoids algorithm
(FCMdd), in which the within-cluster dissimilarity of each cluster is
minimized in each iteration by recomputing new medoids given
current memberships, FCLARANS minimizes the same objective
function as FCMdd by changing the current medoids in such a way
that the sum of the within-cluster dissimilarities is minimized.
Computing new medoids may be affected by noise, because outliers
may join the computation of medoids, while the choice of medoids in
FCLARANS is dictated by the location of a predominant fraction of
points inside a cluster and is, therefore, less sensitive to the presence
of outliers. In FCMRANS, the step of computing new medoids in
FCMdd is modified to be based on randomized search. Furthermore,
a new initialization procedure is developed that adds randomness to
the initialization procedure used with FCMdd. Both FCLARANS
and FCMRANS are compared with the robust and linearized version
of fuzzy c-medoids (RFCMdd). Experimental results with different
samples of the Reuters-21578 and Newsgroups (20NG) corpora and
generated datasets with noise show that FCLARANS is more robust
than both RFCMdd and FCMRANS. Finally, both FCMRANS and
FCLARANS are more efficient, and their outputs are almost the
same as those of RFCMdd in terms of classification rate.
Digital Article Identifier (DOI):
57
5892
A Metametadata Architecture for Pedagogic Data Description
Abstract: This paper focuses on a novel method for the semantic
searching and retrieval of information about learning materials.
Metametadata encapsulate metadata instances by using the properties
and attributes provided by ontologies rather than describing learning
objects. A novel metametadata taxonomy has been developed which
provides the basis for a semantic search engine to extract, match and
map queries to retrieve relevant results. The use of ontological views
is a foundation for viewing the pedagogical content of metadata
extracted from learning objects, using the pedagogical attributes
from the metametadata taxonomy. Using the ontological approach
and metametadata (based on the metametadata taxonomy), we
present a novel semantic searching mechanism. These three strands –
the taxonomy, the ontological views, and the search algorithm – are
incorporated into a novel architecture (OMESCOD) which has been
implemented.
Digital Article Identifier (DOI):
56
6539
Construction and Performance Characterization of the Looped-Tube Travelling-Wave Thermoacoustic Engine with Ceramic Regenerator
Abstract: In a travelling wave thermoacoustic device, the
regenerator sandwiched between a pair of (hot and cold) heat
exchangers constitutes the so-called thermoacoustic core, where the
thermoacoustic energy conversion from heat to acoustic power takes
place. The temperature gradient along the regenerator caused by the
two heat exchangers excites and maintains the acoustic wave in the
resonator. The devices are called travelling wave thermoacoustic
systems because the phase angle difference between the pressure and
velocity oscillation is close to zero in the regenerator. This paper
presents the construction and testing of a thermoacoustic engine
equipped with a ceramic regenerator, made from a ceramic material
usually used as a catalyst substrate in vehicles' exhaust systems, with
fine square channels (900 cells per square inch). The testing includes
the onset temperature difference (the minimum temperature
difference required to start the acoustic oscillation in an engine), the
acoustic power output, the thermal efficiency and the temperature
profile along the regenerator.
Digital Article Identifier (DOI):
55
10635
Heat Transfer and Frictional Characteristics in Rectangular Channel with Inclined Perforated Baffles
Abstract: A numerical study on the turbulent flow and heat
transfer characteristics in the rectangular channel with different types
of baffles is carried out. The inclined baffles have a width of 19.8
cm, a square diamond-shaped hole with a side length of 2.55 cm,
and an inclination angle of 5°. The Reynolds number is varied between
23,000 and 57,000. The SST turbulence model is applied in the
calculation. The validity of the numerical results is examined by the
experimental data. The numerical results of the flow field show that
the flow patterns around the different baffle types are entirely different,
and these significantly affect the local heat transfer characteristics.
The heat transfer and friction factor characteristics are significantly
affected by the perforation density of the baffle plate. It is found that
baffle type II (the 3-hole baffle) gives the best heat transfer
enhancement.
Digital Article Identifier (DOI):
54
10193
Bitrate Reduction Using FMO for Video Streaming over Packet Networks
Abstract: Flexible macroblock ordering (FMO), adopted in the
H.264 standard, allows all macroblocks (MBs) in a frame to be partitioned
into separate groups of MBs called Slice Groups (SGs). FMO can not
only support error-resilience, but also control the size of video packets
for different network types. However, it is well-known that the number
of bits required for encoding the frame is increased by adopting FMO.
In this paper, we propose a novel algorithm that can reduce the bitrate
overhead caused by utilizing FMO. In the proposed algorithm, all MBs
are grouped in SGs based on the similarity of the transform
coefficients. Experimental results show that our algorithm can reduce
the bitrate as compared with conventional FMO.
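The grouping step described above can be sketched as clustering macroblocks by the similarity of their transform coefficients. The snippet below is a minimal illustration, not the paper's actual algorithm: it partitions hypothetical DCT-coefficient vectors into slice groups with plain k-means (farthest-first initialisation).

```python
import numpy as np

def group_mbs_into_slice_groups(coeffs, num_groups, iters=20):
    """Cluster macroblocks into slice groups by the similarity of their
    transform-coefficient vectors (plain k-means; illustrative only)."""
    # farthest-first initialisation keeps the starting centroids spread out
    centers = [coeffs[0]]
    for _ in range(num_groups - 1):
        d = np.min([np.linalg.norm(coeffs - c, axis=1) for c in centers], axis=0)
        centers.append(coeffs[int(d.argmax())])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        # assign every MB to the nearest slice-group centroid
        d = np.linalg.norm(coeffs[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for g in range(num_groups):
            if np.any(labels == g):
                centers[g] = coeffs[labels == g].mean(axis=0)
    return labels

# two clearly separated populations of hypothetical 4x4 "DCT" vectors
flat = np.zeros((8, 16)); flat[:, 0] = 1.0     # low-detail macroblocks
busy = np.full((8, 16), 5.0)                   # high-detail macroblocks
labels = group_mbs_into_slice_groups(np.vstack([flat, busy]), num_groups=2)
```

MBs with similar coefficient statistics end up in the same slice group, which is the property the bitrate reduction relies on.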
Digital Article Identifier (DOI):
53
2077
A New Method for Multiobjective Optimization Based on Learning Automata
Abstract: The need to solve complicated multidimensional scientific
problems, together with the need to optimize several objective
functions at once, is a principal motivation behind artificial
intelligence and heuristic methods.
In this paper, we introduce a new method for multiobjective
optimization based on learning automata. In the proposed method,
the search space is divided into separate hyper-cubes, and each cube is
treated as an action. After combining all objective functions with
separate weights, the cumulative function is taken as the fitness
function. By applying all the cubes to the cumulative function, we
calculate the reinforcement of each action, and the algorithm proceeds
to find the best solutions. In this method, a lateral memory gathers
the significant points of each iteration of the algorithm. Finally, by
considering the domination factor, the Pareto front is estimated.
Results of several experiments show the effectiveness of this method
in comparison with a genetic-algorithm-based method.
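As a rough illustration of the ideas above (hyper-cube actions, a weighted cumulative fitness, a lateral memory, and domination-based Pareto estimation), here is a minimal one-dimensional sketch; the reward rule, weights and test functions are our own simplifications, not the paper's scheme.

```python
import numpy as np

def dominates(a, b):
    """Pareto dominance for minimisation: a no worse everywhere, better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def la_optimize(objectives, weights, lo, hi, cubes=10, steps=200, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    p = np.full(cubes, 1.0 / cubes)          # action probabilities, one per cube
    edges = np.linspace(lo, hi, cubes + 1)   # the search space split into cubes
    memory = []                              # "lateral memory" of sampled points
    fit = lambda x: sum(w * f(x) for w, f in zip(weights, objectives))
    for _ in range(steps):
        a = rng.choice(cubes, p=p)
        x = rng.uniform(edges[a], edges[a + 1])     # sample inside the chosen cube
        memory.append((x, tuple(f(x) for f in objectives)))
        # reinforce the action when its sample beats the median cumulative fitness
        if fit(x) < np.median([fit(m[0]) for m in memory]):
            p[a] += lr * (1.0 - p[a])
            p /= p.sum()
    # estimate the Pareto front from the lateral memory via domination
    return [m for m in memory if not any(dominates(o[1], m[1]) for o in memory)]

f1 = lambda x: (x - 1.0) ** 2
f2 = lambda x: (x + 1.0) ** 2
front = la_optimize([f1, f2], [0.5, 0.5], -2.0, 2.0)
```

For these two objectives the true Pareto-optimal set is the interval [-1, 1], and the estimated front concentrates there.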
Digital Article Identifier (DOI):
52
6146
Data Mining Using Learning Automata
Abstract: In this paper a data miner based on learning automata,
called the LA-miner, is proposed. The LA-miner extracts
classification rules from data sets automatically. The proposed
algorithm is built on function optimization using learning
automata. The experimental results on three benchmarks
indicate that the performance of the proposed LA-miner is
comparable with (and sometimes better than) that of the Ant-miner (a
data mining algorithm based on the Ant Colony Optimization algorithm)
and CN2 (a well-known classification rule induction algorithm).
Digital Article Identifier (DOI):
51
8436
Position Based Routing Protocol with More Reliability in Mobile Ad Hoc Network
Abstract: Position-based routing protocols are the kinds of
routing protocols that use nodes' location information, instead of
link information, for routing. In position-based routing protocols,
it is assumed that the packet's source node knows the positions of
itself, its neighbors, and the packet's destination node.
Greedy is a very important position-based routing protocol. In one of
its variants, named MFR (Most Forward within Radius), the source node
or packet-forwarding node sends the packet to the neighbor with the
most forward progress towards the destination node (the neighbor
closest to the destination). Using distance as the sole deciding
metric in Greedy to choose the forwarding neighbor is not suitable
for all conditions. If the neighbor closest to the destination moves
at high speed in comparison with the source node or the intermediate
forwarding node, or has very low remaining battery power, then the
packet-loss probability is increased. The proposed strategy uses a
combination of three metrics (distance, velocity similarity, and
remaining power) to decide which neighbor receives the packet.
Simulation results show that the proposed strategy has a lower
average of lost packets than Greedy, and hence greater reliability.
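The neighbor-selection rule described above can be sketched as a weighted score. The weights, radio range and record fields below are hypothetical, purely to illustrate how distance progress, velocity similarity and remaining power might be combined.

```python
import math

def progress_toward(src, dst, nb_pos):
    """MFR-style forward progress of a neighbour toward the destination."""
    return math.dist(src, dst) - math.dist(nb_pos, dst)

def velocity_similarity(v_a, v_b):
    """1.0 for identical velocity vectors, decaying as they diverge."""
    return 1.0 / (1.0 + math.dist(v_a, v_b))

def choose_next_hop(src_pos, src_vel, dst, neighbours, radio_range=10.0,
                    weights=(0.4, 0.3, 0.3)):
    """Pick the neighbour maximising a weighted mix of normalised progress,
    velocity similarity and remaining battery (hypothetical weights)."""
    wd, wv, wp = weights
    def score(nb):
        return (wd * progress_toward(src_pos, dst, nb["pos"]) / radio_range
                + wv * velocity_similarity(src_vel, nb["vel"])
                + wp * nb["battery"])
    return max(neighbours, key=score)

neighbours = [
    {"id": "A", "pos": (9.0, 0.0), "vel": (30.0, 0.0), "battery": 0.05},  # fast, nearly drained
    {"id": "B", "pos": (7.0, 0.0), "vel": (1.0, 0.0), "battery": 0.90},   # similar speed, charged
]
best = choose_next_hop((0.0, 0.0), (1.0, 0.0), (10.0, 0.0), neighbours)
mfr = choose_next_hop((0.0, 0.0), (1.0, 0.0), (10.0, 0.0), neighbours,
                      weights=(1.0, 0.0, 0.0))   # pure MFR: distance only
```

Pure MFR picks the fast, low-battery neighbor A, while the combined score prefers the steadier, well-charged neighbor B, which is the behavior the abstract argues reduces packet loss.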
Digital Article Identifier (DOI):
50
11934
Anomaly Detection and Characterization to Classify Traffic Anomalies Case Study: TOT Public Company Limited Network
Abstract: This paper presents four unsupervised clustering algorithms, namely sIB, RandomFlatClustering, FarthestFirst, and FilteredClusterer, that previous works have not used for network traffic classification. The methodology, the results, the clusters produced, and an evaluation of these algorithms in terms of accuracy are shown. The efficiency of the algorithms is also considered in terms of the time they take to generate the clusters quickly and correctly. Our work studies and tests for the best algorithm by classifying traffic anomalies in network traffic with attributes that have not been used before. We analyze which algorithm has the best efficiency, or the best learning, and compare it to the previously used K-Means. Our research can be used to develop anomaly detection systems that are more efficient and better meet future requirements.
Digital Article Identifier (DOI):
49
10482
A Novel Deinterlacing Algorithm Based on Adaptive Polynomial Interpolation
Abstract: In this paper, a novel deinterlacing algorithm is
proposed. The proposed algorithm approximates the luminance
distribution with a polynomial function. Instead of using one
polynomial function for all pixels, different polynomial functions are
used for the uniform, texture, and directional edge regions. The
function coefficients for each region are computed by matrix
multiplications. Experimental results demonstrate that the proposed
method performs better than the conventional algorithms.
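The idea of computing polynomial coefficients by matrix multiplications can be sketched with an ordinary least-squares fit of the vertical luminance profile; the sample positions, degree and data below are illustrative, not the paper's region-adaptive scheme.

```python
import numpy as np

def poly_coeffs(xs, ys, degree):
    """Least-squares polynomial coefficients via matrix multiplications
    (normal equations): c = (A^T A)^{-1} A^T y."""
    A = np.vander(xs, degree + 1, increasing=True)   # rows [1, x, x^2, ...]
    return np.linalg.solve(A.T @ A, A.T @ ys)

def interpolate_missing_pixel(samples, positions=(-3.0, -1.0, 1.0, 3.0), degree=2):
    """Fit the luminance of existing lines (at 'positions' relative to the
    missing line) and evaluate the polynomial at 0, the missing row."""
    c = poly_coeffs(np.asarray(positions), np.asarray(samples, float), degree)
    return float(c[0])   # the value at x = 0 is just the constant coefficient

ramp = [4.0, 8.0, 12.0, 16.0]     # luminance 10 + 2x: a uniform/linear region
curve = [9.0, 1.0, 1.0, 9.0]      # luminance x^2: a curved profile
```

Choosing a different degree per region (uniform, texture, edge) is what would make the interpolation adaptive in the spirit of the abstract.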
Digital Article Identifier (DOI):
48
6149
Surface Topography Assessment Techniques based on an In-process Monitoring Approach of Tool Wear and Cutting Force Signature
Abstract: The quality of a machined surface is becoming more and more important to justify the increasing demands of sophisticated component performance, longevity, and reliability. Usually, any machining operation leaves its own characteristic evidence on the machined surface in the form of finely spaced micro-irregularities (surface roughness) left by the indeterministic characteristics of the different elements of the system: tool, machine, workpart and cutting parameters. However, one of the most influential sources affecting surface roughness in machining is the instantaneous state of the tool edge. The main objective of the current work is to relate the in-process immeasurable cutting-edge deformation and surface roughness to more reliable, easy-to-measure force signals using robust non-linear time-dependent regression modeling techniques. Time-dependent modeling is beneficial when modern machining systems, such as adaptive control techniques, are considered, where the state of the machined surface and the health of the cutting edge are monitored, assessed and controlled online using real-time information provided by the variability encountered in the measured force signals. Correlation between wear propagation and roughness variation is developed throughout the different edge lifetimes. The surface roughness is further evaluated in the light of the variation in both the static and the dynamic force signals. Consistent correlation is found between surface roughness variation and tool wear progress within its initial and constant regions. In the first few seconds of cutting, the expected and well-known trend of the effect of the cutting parameters is observed. Surface roughness is positively influenced by the level of the feed rate and negatively by the cutting speed. As cutting continues, roughness is affected, to different extents, by the rather localized wear modes on either the tool nose or its flank areas.
Moreover, roughness seems to vary as the wear attitude transfers from one mode to another and, in general, it is shown to improve as wear increases, though with possible corresponding workpart dimensional inaccuracy. The dynamic force signals are found to be reasonably sensitive to both the progressive and the random modes of tool-edge deformation. While the frictional force components, feeding and radial, are found informative regarding progressive wear modes, the vertical (power) component is found to be a more representative carrier of system instability resulting from the edge's random deformation.
Digital Article Identifier (DOI):
47
12566
Using Linear Quadratic Gaussian Optimal Control for Lateral Motion of Aircraft
Abstract: The purpose of this paper is to provide a practical
example to the Linear Quadratic Gaussian (LQG) controller. This
method includes a description and some discussion of the discrete
Kalman state estimator. One aspect of this optimality is that the
estimator incorporates all information that can be provided to it. It
processes all available measurements, regardless of their precision, to
estimate the current value of the variables of interest, with use of
knowledge of the system and measurement device dynamics, the
statistical description of the system noises, measurement errors, and
uncertainty in the dynamics models.
Since the time of its introduction, the Kalman filter has been the
subject of extensive research and application, particularly in the area
of autonomous or assisted navigation. For example, to determine the
velocity or sideslip angle of an aircraft, one could use a Doppler
radar, the velocity indications of an inertial navigation system, or the
relative wind information in the air data system. Rather than ignore
any of these outputs, a Kalman filter could be built to combine all of
this data with knowledge of the various systems' dynamics to
generate an overall best estimate of velocity and sideslip angle.
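The sensor-blending idea described above reduces, in the scalar case, to the Kalman measurement update, where each reading is weighted by its precision. The numbers below (prior, radar and INS variances) are made up for illustration.

```python
def kalman_update(x, P, z, R):
    """One scalar Kalman measurement update: blend estimate x (variance P)
    with measurement z (variance R) in proportion to their precisions."""
    K = P / (P + R)                      # Kalman gain
    return x + K * (z - x), (1.0 - K) * P

# hypothetical aircraft-velocity fusion: a prior, then two sensors
x, P = 100.0, 25.0                       # prior estimate: 100 m/s, variance 25
x, P = kalman_update(x, P, 104.0, 4.0)   # precise Doppler-radar reading
x, P = kalman_update(x, P, 90.0, 100.0)  # noisy INS-derived reading
```

Note that the noisy INS reading is not ignored: it still nudges the estimate slightly, and the posterior variance ends up below that of the best single sensor, which is exactly the "processes all available measurements, regardless of their precision" point in the abstract.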
Digital Article Identifier (DOI):
46
8085
Grid Learning: Computer Grid Joins e-Learning
Abstract: With the development of communications and
web-based technologies in recent years, e-Learning has become very
important for everyone and is seen as one of the most dynamic teaching
methods.
Grid computing is a pattern for increasing the computing power
and storage capacity of a system, based on hardware and
software resources in a network with a common purpose. In this article
we study grid architecture and describe its different layers. In this
way, we will analyze grid layered architecture. Then we will
introduce a new suitable architecture for e-Learning which is based
on grid network, and for this reason we call it Grid Learning
Architecture. Various sections and layers of the suggested architecture
will be analyzed, especially the grid middleware layer, which plays a
key role. This layer is the heart of the grid learning architecture; in
fact, without this layer, e-Learning based on grid architecture is not
feasible.
Digital Article Identifier (DOI):
45
2520
A Matlab / Simulink Based Tool for Power Electronic Circuits
Abstract: Transient simulation of power electronic circuits is of
considerable interest to the designer. The switching nature of the
devices used permits development of specialized algorithms which
allow a considerable reduction in simulation time compared to
general-purpose simulation algorithms. This paper describes a
method used to simulate power electronic circuits using the
SIMULINK toolbox within MATLAB software. Theoretical results
are presented that provide the basis of transient analysis of power
electronic circuits.
Digital Article Identifier (DOI):
44
13184
Interfacing C and TMS320C6713 Assembly Language (Part-I)
Abstract: This paper describes an interfacing of C and the
TMS320C6713 assembly language which is crucially important for
many real-time applications. Similarly, interfacing of C with the
assembly language of a conventional microprocessor such as
MC68000 is presented for comparison. However, it should be noted
that the way the C compiler passes arguments among various
functions in the TMS320C6713-based environment is totally
different from the way the C compiler passes arguments in a
conventional microprocessor such as MC68000. Therefore, it is very
important for a user of the TMS320C6713-based system to properly
understand and follow the register conventions when interfacing C
with the TMS320C6713 assembly language subroutine. It should be
also noted that in some cases (examples 6-9) the endian-mode of the
board needs to be taken into consideration. In this paper, one method
is presented in great detail. Other methods will be presented in the
future.
Digital Article Identifier (DOI):
43
11786
Cosastudio: A Software Architecture Modeling Tool
Abstract: A key aspect of the design of any software system is
its architecture. An architecture description provides a formal model
of the architecture in terms of components and connectors and how
they are composed together. COSA (Component-Object based
Software Structures), is based on object-oriented modeling and
component-based modeling. The model improves the reusability by
increasing extensibility, evolvability, and compositionality of the
software systems. This paper presents the COSA modelling tool,
which offers architects the possibility to verify the structural coherence
of a given system and to validate its semantics with the COSA approach.
Digital Article Identifier (DOI):
42
9751
Information Resource Management Maturity Model
Abstract: Nowadays there are more than thirty maturity models
in different knowledge areas. A maturity model is an area of interest
that helps organizations find out where they are in a specific
knowledge area and how to improve it. As Information Resource
Management (IRM) is the concept that information is a major
corporate resource and must be managed using the same basic
principles used to manage other assets, assessing the current
IRM status and revealing the improvement points can play a critical
role in developing an appropriate information structure in
organizations. In this paper we propose a framework for an
information resource management maturity model (IRM3) that
includes ten best practices for the maturity assessment of the
organizations' IRM.
Digital Article Identifier (DOI):
41
7735
Analysis of a Secondary Autothermal Reformer Using a Thermodynamic POX Model
Abstract: Partial oxidation (POX) of light hydrocarbons (e.g.
methane) occurs in the first part of the autothermal reformer
(ATR). The results of the detailed modeling of the reformer based on
the thermodynamic model of the POX and 1D heterogeneous
catalytic model for the fixed bed section are considered here.
According to the results, the overall performance of the ATR can be
improved by changing the important feed parameters.
Digital Article Identifier (DOI):
40
15800
Design for Manufacturability and Concurrent Engineering for Product Development
Abstract: In the 1980s, companies began to feel the effect of three major influences on their product development: newer and innovative technologies, increasing product complexity and larger organizations, and they were therefore forced to look for new product development methods. This paper focuses on two of these new product development methods: Design for Manufacturability (DFM) and Concurrent Engineering (CE). The aim of this paper is to examine and analyze different product development methods, specifically DFM and CE. Companies can benefit by minimizing the product life cycle and cost and by meeting the delivery schedule. This paper also presents simplified models that can be modified and used by different companies based on their objectives and requirements. The methodology followed in this research is case studies: two companies were taken and analyzed with respect to their product development processes. Historical data were collected and interviews conducted at these companies; in addition, a survey of the literature and of previous research on similar topics was carried out. This paper also presents an implementation cost-benefit analysis and estimates the implementation time. From this research, it was found that the two companies did not achieve the delivery time to the customer: for some of their most frequently produced products, 50% to 80% were not delivered on time. The companies follow the traditional way of product development, that is, a sequential design-then-production method, which strongly affects time to market. In the case study it was found that, by implementing these new methods and by forming multidisciplinary teams for design and quality inspection, a company can reduce the workflow steps from 40 to 30.
Digital Article Identifier (DOI):
39
6089
Auto Regressive Tree Modeling for Parametric Optimization in Fuzzy Logic Control System
Abstract: The advantage of solving complex nonlinear
problems by utilizing fuzzy logic methodologies is that the
experience or expert's knowledge, described as a fuzzy rule base, can
be directly embedded into the systems for dealing with the problems.
This paper focuses on the current limitations of appropriate,
automated design of fuzzy controllers. The structure discovery
and parameter adjustment of the branched T-S fuzzy model are
addressed by a hybrid technique of type-constrained sparse tree
algorithms. Simulation results for different system models are
evaluated, and the identification error is observed to be minimal.
Digital Article Identifier (DOI):
38
9105
Modeling, Simulation and Monitoring of Nuclear Reactor Using Directed Graph and Bond Graph
Abstract: The main objective of this paper is to develop a
graphic technique for the modeling, simulation and diagnosis of
industrial systems. This is especially important for a complex system
such as a pressurized water nuclear reactor, with its various
non-linearities and time scales. In this case the analytical approach
is heavy and does not give a quick idea of the evolution of the
system. The Bond Graph tool enabled us to transform the analytical
model into a graphic model, and the simulation software SYMBOLS
2000, specific to Bond Graphs, made it possible to validate the
results given by the technical specifications. We introduce an
analysis of the problems involved in fault localization and
identification in complex industrial processes. We propose a fault
detection method applied to diagnosis and to determining the severity
of a detected fault. We show the possibilities of applying the new
diagnosis approaches to complex system control. Industrial systems
have become increasingly complex, and fault diagnosis procedures for
physical systems prove to become very complex as soon as the systems
considered are no longer elementary. Indeed, faced with this
complexity, we chose to resort to the Fault Detection and Isolation
(FDI) method, analyzing the control problem and designing a reliable
diagnosis system able to apprehend spatially distributed complex
dynamic systems, applied to a standard pressurized water nuclear
reactor.
Digital Article Identifier (DOI):
37
14357
An Effective Approach for Distribution System Power Flow Solution
Abstract: An effective approach for unbalanced three-phase
distribution power flow solutions is proposed in this paper. The
special topological characteristics of distribution networks have been
fully utilized to make the direct solution possible. Two matrices – the
bus-injection to branch-current matrix and the branch-current to
bus-voltage matrix – and a simple matrix multiplication are used to
obtain power flow solutions. Due to the distinctive solution
techniques of the proposed method, the time-consuming LU
decomposition and forward/backward substitution of the Jacobian
matrix or admittance matrix required in the traditional power flow
methods are no longer necessary. Therefore, the proposed method is
robust and time-efficient. Test results demonstrate the validity of the
proposed method. The proposed method shows great potential to be
used in distribution automation applications.
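For a small radial feeder the two matrices have a simple closed form, and the whole solution is a repeated matrix multiplication. The feeder data below are invented for illustration, and the matrix construction assumes the buses are numbered down the main feeder.

```python
import numpy as np

# hypothetical 3-bus radial feeder: substation -> bus1 -> bus2 -> bus3
z = np.array([0.02 + 0.04j, 0.03 + 0.06j, 0.04 + 0.08j])   # branch impedances (pu)
s_load = np.array([0.5 + 0.2j, 0.3 + 0.1j, 0.2 + 0.1j])    # complex bus loads (pu)
v0 = 1.0 + 0.0j                                            # substation voltage (pu)

# bus-injection to branch-current matrix: branch k carries injections of buses k..n
BIBC = np.triu(np.ones((3, 3), dtype=complex))
# branch-current to bus-voltage matrix: the drop at bus j sums z over its path
BCBV = np.tril(np.ones((3, 3), dtype=complex)) * z

v = np.full(3, v0)
for _ in range(20):
    i_inj = np.conj(s_load / v)            # constant-power load currents
    v = v0 - BCBV @ (BIBC @ i_inj)         # direct solution: no Jacobian, no LU
```

Each sweep is only two matrix-vector products, which is the time saving over Newton-style methods that the abstract points to.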
Digital Article Identifier (DOI):
36
917
The Potential Use of Nanofilters to Supply Potable Water in Persian Gulf and Oman Sea Watershed Basin
Abstract: In a world worried about water resources with the
shadow of drought and famine looming all around, the quality of
water is as important as its quantity. The source of all concerns is the
constant reduction of per capita quality water for different uses.
With an average annual precipitation of 250 mm, compared to
the 800 mm world average, Iran is considered a water-scarce country,
and the disparity in rainfall distribution, the limitations of
renewable resources and the population concentration in the margins
of desert and water-scarce areas have intensified the problem.
The shortage of per capita renewable freshwater and its poor
quality in large areas of the country, which have saline, brackish or
hard water resources, and the profusion of natural and artificial
pollutants have caused the deterioration of water quality.
Among methods of treatment and use of these waters one can refer
to the application of membrane technologies, which have come into
focus in recent years due to their great advantages. This process is
quite efficient in eliminating multi-capacity ions; and due to the
possibilities of production at different capacities, application as
treatment process in points of use, and the need for less energy in
comparison to Reverse Osmosis processes, it can revolutionize the
water and wastewater sector in years to come. This article studies the
different capacities of water resources in the Persian Gulf and Oman
Sea watershed basins, and assesses the possibility of using the
nanofiltration process to treat brackish and non-conventional waters
in these basins.
Digital Article Identifier (DOI):
35
7399
Modeling the Fischer-Tropsch Reaction In a Slurry Bubble Column Reactor
Abstract: Fischer-Tropsch synthesis is one of the most
important catalytic reactions that convert the synthetic gas to light
and heavy hydrocarbons. One of the main issues is selecting the type
of reactor. The slurry bubble reactor is a suitable choice for
Fischer-Tropsch synthesis because of its good heat and mass transfer,
high catalyst durability, and low-cost maintenance and repair. The
more common catalysts for Fischer-Tropsch synthesis are iron-based
and cobalt-based catalysts; the advantage of one over the other
depends on which type of hydrocarbons we desire to produce. In this
study, Fischer-Tropsch synthesis is modeled
with Iron and Cobalt catalysts in a slurry bubble reactor considering
mass and momentum balance and the hydrodynamic relations effect
on the reactor behavior. Profiles of reactant conversion and reactant
concentration in gas and liquid phases were determined as the
functions of residence time in the reactor. The effects of temperature,
pressure, liquid velocity, reactor diameter, catalyst diameter, gasliquid
and liquid-solid mass transfer coefficients and kinetic
coefficients on the reactant conversion have been studied. With a 5%
increase in liquid velocity (with the iron catalyst), H2 conversion
increases by about 6% and CO conversion by about 4%; with an 8%
increase in liquid velocity (with the cobalt catalyst), H2 conversion
increases by about 26% and CO conversion by about 4%. With a 20%
increase in the gas-liquid mass transfer coefficient (with the iron
catalyst), H2 conversion increases by about 12% and CO conversion by
about 10%; with the cobalt catalyst, H2 conversion increases by about
10% and CO conversion by about 6%. Results show that the process is
sensitive to the gas-liquid mass transfer coefficient, and the optimum
operating condition occurs at the maximum possible liquid velocity.
This velocity must be more than the minimum fluidization velocity and
less than the terminal velocity, so as to keep the catalyst particles
from leaving the fluidized bed.
Digital Article Identifier (DOI):
34
13892
Reactive Absorption of Hydrogen Sulfide in Aqueous Ferric Sulfate Solution
Abstract: Many commercial processes are available for the
removal of H2S from gaseous streams. The desulfurization of gas
streams using aqueous ferric sulfate solution as washing liquor is
studied. Apart from sulfur, only H2O is generated in the process, and
consequently, no waste treatment facilities are required. A distinct
advantage of the process is that the reaction of H2S with the ferric
ion is so rapid and complete that there remains no danger of
discharging toxic waste gas. In this study, the reactive absorption of
hydrogen sulfide into
aqueous ferric sulfate solution has been studied, design calculations
for the equipment have been made, and the effective operating
parameters of the process have been considered. Results show that
high temperature and low pressure are suitable for the absorption
reaction. The variation of the hydrogen sulfide and Fe3+
concentrations with time in the absorption reaction showed that the
reaction of ferric sulfate with hydrogen sulfide is first order with
respect to both reactants. At low Fe2(SO4)3 concentrations the
absorption rate of H2S increases with increasing Fe2(SO4)3
concentration; at higher concentrations a decrease in the absorption
rate was found, because the ionic strength and viscosity of the
solution increase remarkably, resulting in a decrease of solubility,
diffusivity and hence absorption rate.
Digital Article Identifier (DOI):
33
11186
Twin-Screw Extruder and Effective Parameters on the HDPE Extrusion Process
Abstract: In the polyethylene extrusion process, polymer
material in powder or granule form undergoes compression, melting
and conveying operations and, on the basis of a special die form, an
extrudate is produced. Twin-screw extruders are used in industry
because of their high capacity. The powder is mixed with chemical
additives and melted with thermal and mechanical energy in three
zones (feed, compression and metering zones) and, through the
pressure of the gear pump and the screws, is converted to the final
product at the last plate. Extruders with twin screws and a short
distance between the screws are better than other types because of
their high capacity and good thermal and mechanical stress behavior.
In this paper, the polyethylene extrusion process and various types
of extruders are studied. Exact process control is necessary to
produce high-quality products with safe operation and optimum energy
consumption.
The granule size depends on the granulator motor speed. Results
show that, at a constant feed rate, the granule size decreases as the
motor speed increases. The relationships between the HDPE feed flow
rate and the speeds of the granulator motor, main motor and gear pump
were fitted as follows, with x = HDPE feed flow rate:
Main motor speed:
yM = (-3.6076e-3)x^4 + (0.24597)x^3 + (-5.49003)x^2 + (64.22092)x + 61.66786 (1)
Gear pump speed:
yG = (-2.4996e-3)x^4 + (0.18018)x^3 + (-4.22794)x^2 + (48.45536)x + 18.78880 (2)
Granulator motor speed (10th-degree polynomial fit):
y = a + bx + cx^2 + dx^3 + ... + kx^10 (3)
a = 1.2751, b = 282.4655, c = -165.2098, d = 48.3106,
e = -8.18715, f = 0.84997, g = -0.056094, h = 0.002358,
i = -6.11816e-5, j = 8.919726e-7, k = -5.59050e-9
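Fits like (1)-(3) are evaluated most stably with Horner's rule rather than explicit powers. The sketch below encodes the coefficients of fit (1) (main motor speed vs. HDPE feed flow rate) from lowest to highest degree.

```python
def horner(coeffs, x):
    """Evaluate a polynomial with coefficients a0, a1, ..., an (Horner's rule)."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result

# fit (1): main motor speed as a function of HDPE feed flow rate x
main_motor = [61.66786, 64.22092, -5.49003, 0.24597, -3.6076e-3]
speed_at_zero_feed = horner(main_motor, 0.0)
```

The same evaluator handles the 10th-degree granulator fit (3): just list its eleven coefficients a through k in the same lowest-first order.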
Digital Article Identifier (DOI):
32
15675
Removal of CO2 and H2S Using Aqueous Alkanolamine Solutions
Abstract: This work presents a theoretical investigation of the
simultaneous absorption of CO2 and H2S into aqueous solutions of
MDEA and DEA. In this process the acid components react
with the basic alkanolamine solution via an exothermic,
reversible reaction in a gas/liquid absorber. The use of amine
solvents for gas sweetening has been investigated using the
process simulation programs HYSYS and ASPEN. We use the
Electrolyte NRTL model, the Amine Package, and the Amines
(experimental) equation of state. The effects of temperature,
circulation rate, amine concentration, packed column height and
Murphree efficiency on the rate of absorption were studied.
As the lean amine flow and concentration increase, CO2 and H2S
absorption increases too. As the inlet amine temperature in the
absorber rises, CO2 and H2S penetrate to the upper stages of the
absorber and the absorption of acid gases decreases. The CO2
concentration in the clean gas can be greatly influenced by the
packing height, whereas for the H2S concentration in the clean gas
the packing height plays a minor role. HYSYS cannot estimate the
Murphree efficiency correctly and applies the same contribution in
all stages. As the Murphree efficiency improves, the maximum
absorber temperature decreases, the reaction zone moves toward the
bottom stages of the absorber, and the absorption of acid gases
increases.
Digital Article Identifier (DOI):
31
13364
Thermodynamic Modeling of the High Temperature Shift Converter Reactor Using Minimization of Gibbs Free Energy
Abstract: The equilibrium chemical reactions taking place in a converter reactor of the Khorasan Petrochemical Ammonia plant were studied using the minimization of Gibbs free energy. In the minimization of the Gibbs free energy function, the Davidon-Fletcher-Powell (DFP) optimization procedure was used, with penalty terms in a well-defined objective function. It should be noted that in the DFP procedure, along with the corresponding penalty terms, the Hessian matrices for the composition of constituents in the converter reactor can be excluded; this can, in fact, be considered the main advantage of the DFP optimization procedure. The effect of temperature and pressure on the equilibrium composition of the constituents was also investigated. The results obtained in this work were compared with data collected from the converter reactor of the Khorasan Petrochemical Ammonia plant, and it was concluded that they are in good agreement with the industrial data. Notably, the algorithm developed in this work, in spite of its simplicity, has the advantage of short computation and convergence times.
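The DFP procedure maintains an approximation H of the inverse Hessian, so no Hessian matrix is ever formed, which is the advantage the abstract highlights. Below is a generic sketch on a toy quadratic "Gibbs-like" objective with a penalty term enforcing a mole balance; the chemical potentials, penalty weight and starting point are invented, not the plant model.

```python
import numpy as np

def dfp_minimize(f, x0, iters=100, h=1e-6, tol=1e-8):
    """Davidon-Fletcher-Powell quasi-Newton minimiser with central-difference
    gradients; H approximates the inverse Hessian, so none is ever formed."""
    def grad(x):
        g = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x); e[i] = h
            g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
        return g
    x = np.asarray(x0, dtype=float)
    H = np.eye(len(x))
    g = grad(x)
    for _ in range(iters):
        d = -H @ g
        t = 1.0                               # crude backtracking line search
        while f(x + t * d) > f(x) and t > 1e-12:
            t *= 0.5
        s = t * d
        x_new, g_new = x + s, grad(x + s)
        y = g_new - g
        if abs(s @ y) > 1e-12:                # DFP rank-two inverse-Hessian update
            H += np.outer(s, s) / (s @ y) - (H @ np.outer(y, y) @ H) / (y @ H @ y)
        x, g = x_new, g_new
        if np.linalg.norm(g) < tol:
            break
    return x

mu = np.array([-2.0, -1.0])                   # toy "chemical potentials"
def gibbs_like(n):
    penalty = 10.0 * (n.sum() - 1.0) ** 2     # penalty term enforcing n1 + n2 = 1
    return float(mu @ n + n @ n + penalty)

n_eq = dfp_minimize(gibbs_like, [0.5, 0.5])
```

For this toy objective the exact minimizer is approximately (0.7619, 0.2619), with the penalty holding the mole balance near 1; the rank-two update is where the Hessian-free property lives.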
Digital Article Identifier (DOI):
30
2478
Low Temperature Ethanol Gas Sensor based on SnO2/MWNTs Nanocomposite
Abstract: A composite made of plasma-functionalized multiwall
carbon nanotubes (MWNTs) coated with SnO2 was synthesized by a
sonochemical precipitation method. A thick layer of this
nanocomposite material was used as an ethanol sensor at low
temperatures. The composite's sensitivity to ethanol increased by
a factor of 2 at room temperature and by a factor of 13 at 250°C in
comparison with that of pure SnO2. SEM images of the nanocomposite
material showed that the MWNTs were embedded in the SnO2 matrix, and
a higher surface area was observed in the presence of the
functionalized MWNTs. The greatly improved sensitivity of the
composite material to ethanol can be attributed to new gas-access
paths through the MWNTs and a higher specific surface area.
Digital Article Identifier (DOI):
29
1033
Sonochemically Prepared SnO2 Quantum Dots as a Selective and Low Temperature CO Sensor
Abstract: In this study, a low-temperature sensor highly selective to CO in the presence of methane was fabricated using 4 nm SnO2 quantum dots (QDs) prepared by sonication-assisted precipitation. An SnCl4 aqueous solution was precipitated by ammonia under sonication, which continued for 2 h. Part of the sample was then dried, calcined at 400°C for 1.5 h, and characterized by XRD and BET. The average particle size and specific surface area of the SnO2 QDs, as well as their sensing properties, were compared with those of SnO2 nanoparticles prepared by a conventional sol-gel method. The BET surface areas of the sonochemically as-prepared product and of the sample calcined at 400°C for 1.5 h are 257 m2/g and 212 m2/g respectively, while the specific surface area of the SnO2 nanoparticles prepared by the conventional sol-gel method is about 80 m2/g. XRD spectra revealed that a pure crystalline phase of SnO2 is formed in both the as-prepared and calcined samples of SnO2 QDs; however, in the sample prepared by the sol-gel method and calcined at 400°C, SnO crystals are detected along with those of SnO2. The SnO2 quantum dots show exceedingly high sensitivity to CO at concentrations of 100, 300 and 1000 ppm over the whole temperature range (25-350°C). At 50°C a sensitivity of 27 was obtained for 1000 ppm CO, which increases to a maximum of 147 when the temperature rises to 225°C and then drops off, while the maximum sensitivity of the SnO2 sample prepared by the sol-gel method, 47.2, was obtained at 300°C. At the same time, no sensitivity to methane is observed for the SnO2 QDs over the whole temperature range. The response and recovery times of the sensor decrease sharply with temperature, while the high selectivity to CO does not deteriorate.
Digital Article Identifier (DOI):
28
5775
Functionalization of Carbon Nanotubes Using Nitric Acid Oxidation and DBD Plasma
Abstract: In this study, multiwall carbon nanotubes (MWNTs) were modified chemically with nitric acid and by dielectric barrier discharge (DBD) plasma in an oxygen-based atmosphere. The carbon nanotubes (CNTs) used were prepared by the chemical vapour deposition (CVD) floating-catalyst method. To remove amorphous carbon and metal catalyst, the MWNTs were exposed to dry air and washed with hydrochloric acid. Heating the purified CNTs under a helium atmosphere eliminated acidic functional groups. Fourier-transform infrared spectroscopy (FTIR) shows the formation of oxygen-containing groups such as C=O and COOH. Brunauer-Emmett-Teller (BET) analysis revealed that functionalization generates defects on the sidewalls and opens the ends of the CNTs. Results of temperature-programmed desorption (TPD) and gas chromatography (GC) indicate that the nitric acid treatment creates more acidic groups than the plasma treatment.
Digital Article Identifier (DOI):
27
12524
Mass Transfer Modeling in a Packed Bed of Palm Kernels under Supercritical Conditions
Abstract: Gas-solid mass transfer using supercritical CO2 (SC-CO2) in a packed bed of palm kernels was investigated at temperatures of 50 °C and 70 °C and pressures of 27.6, 34.5, 41.4 and 48.3 MPa. The development of mass transfer models requires knowledge of three properties: the diffusion coefficient of the solute, and the viscosity and density of the supercritical fluid (SCF). A mathematical model in terms of the dimensionless Sherwood (Sh), Schmidt (Sc) and Reynolds (Re) numbers was developed and found to be in good agreement with the experimental data within the system studied.
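Sh-Sc-Re mass-transfer models of the kind described here typically take a Frossling/Ranz-Marshall-type form, Sh = 2 + a·Re^b·Sc^(1/3). The sketch below uses that generic form with assumed coefficients and rough SC-CO2 property estimates; neither the coefficients nor the property values are the fitted results of this paper:

```python
def sherwood(Re, Sc, a=0.6, b=0.5):
    """Illustrative packed-bed correlation, Sh = 2 + a*Re^b*Sc^(1/3).
    The coefficients a and b are assumed values, not fitted ones."""
    return 2.0 + a * Re**b * Sc**(1.0 / 3.0)

def mass_transfer_coeff(D, d_p, rho, mu, u):
    """Film mass-transfer coefficient kf = Sh*D/d_p computed from the
    fluid properties: diffusivity D (m2/s), particle diameter d_p (m),
    density rho (kg/m3), viscosity mu (Pa s), superficial velocity u (m/s)."""
    Re = rho * u * d_p / mu          # Reynolds number
    Sc = mu / (rho * D)              # Schmidt number
    Sh = sherwood(Re, Sc)
    return Sh * D / d_p

# Rough, assumed SC-CO2 property estimates near 50 C and 30 MPa
kf = mass_transfer_coeff(D=5e-9, d_p=5e-3, rho=800.0, mu=7e-5, u=1e-3)
```

The additive constant 2 is the stagnant-film limit: as Re approaches zero the correlation reduces to pure diffusion around a sphere.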
Digital Article Identifier (DOI):
26
10570
Mass Transfer of Palm Kernel Oil under Supercritical Conditions
Abstract: The purpose of the study was to determine the amount of palm kernel oil (PKO) extracted from a packed bed of palm kernels in a supercritical fluid extractor using supercritical carbon dioxide (SC-CO2) as an environmentally friendly solvent. Further, the study sought to ascertain the values of the overall mass transfer coefficient (K) of PKO through a mass transfer model, at constant temperatures of 50 °C, 60 °C and 70 °C and pressures of 27.6, 34.5, 41.4 and 48.3 MPa, and to demonstrate how the overall mass transfer coefficient varies with temperature and pressure. The overall mass transfer coefficient was found to depend on pressure at each constant temperature. For PKO in a packed bed of palm kernels it ranged from 1.21 × 10-4 m min-1 to 1.72 × 10-4 m min-1 at a constant temperature of 50 °C and from 2.02 × 10-4 m min-1 to 2.43 × 10-4 m min-1 at 60 °C. A similar increasing trend, from 1.77 × 10-4 m min-1 to 3.64 × 10-4 m min-1, was observed at a constant temperature of 70 °C over the same pressure range of 27.6 MPa to 48.3 MPa.
Digital Article Identifier (DOI):
25
7145
The Development of Decision Support System for Waste Management; a Review
Abstract: Most decision support systems (DSS) constructed for waste management (WM) are not widely marketed and lack practical application. This is due to the number of variables and the complexity of the mathematical models, which include the assumptions and constraints required in decision making. The approach taken by many researchers in DSS modelling is to isolate a few key factors that have a significant influence on the DSS. This segmented approach does not provide a thorough understanding of the complex relationships among the many elements involved. The various elements in constructing a DSS must be integrated and optimized in order to produce a viable model that is marketable and has practical application. A DSS model used to assist decision makers should be integrated with GIS, able to give robust predictions despite the inherent uncertainties of waste generation and the plethora of waste characteristics, and able to give an optimal allocation of the waste stream among recycling, incineration, landfill and composting.
Digital Article Identifier (DOI):
24
7221
Adhesion Properties of Bifidobacterium Pseudocatenulatum G4 and Bifidobacterium Longum BB536 on HT-29 Human Epithelium Cell Line at Different Times and pH
Abstract: Adhesion to human intestinal cells is considered one of the main selection criteria of lactic acid bacteria for probiotic use. The adhesion ability of two Bifidobacterium strains, Bifidobacterium longum BB536 and Bifidobacterium pseudocatenulatum G4, was assessed in vitro using the HT-29 human epithelium cell line. Four pH levels (5.6, 5.7, 6.6 and 6.8) were tested at four different times (15, 30, 60 and 120 min). Adhesion was quantified by counting the adhering bacteria after Gram staining. The adhesion of B. longum BB536 was higher than that of B. pseudocatenulatum G4. Both species showed significant differences in adhesion properties across the factors tested. The highest adhesion for both bifidobacteria was observed at 120 min and the lowest at 15 min. The findings of this study will contribute to the introduction of new effective probiotic strains for future utilization.
Digital Article Identifier (DOI):
23
7903
Are PEG Molecules a Universal Protein Repellent?
Abstract: Poly(ethylene glycol) (PEG) molecules attached to surfaces have shown high potential as protein repellents due to their flexibility and high water solubility. A quartz crystal microbalance recording frequency and dissipation changes (QCM-D) was used to study the adsorption, from aqueous solutions, of lysozyme and α-lactalbumin (the latter with and without calcium) onto modified stainless steel surfaces. Surfaces were coated with poly(ethylene imine) (PEI) and silicate before PEG molecules were grafted on. Protein adsorption was also performed on the bare stainless steel surface as a control. All adsorptions were conducted at 23°C and pH 7.2. The results showed that the presence of PEG molecules significantly reduced the adsorption of lysozyme and α-lactalbumin (with calcium) onto the stainless steel surface. By contrast, and unexpectedly, PEG molecules enhanced the adsorption of α-lactalbumin (without calcium). It is suggested that, for this latter observation, PEG/α-lactalbumin hydrophobic interaction plays a dominant role and leads to protein aggregation at the surface. The findings also lead to the general conclusion that PEG molecules are not a universal protein repellent. PEG-on-PEI surfaces were better at inhibiting the adsorption of lysozyme and α-lactalbumin (with calcium) than PEG-on-silicate surfaces.
Digital Article Identifier (DOI):
22
1318
Bioethanol Production from Enzymatically Saccharified Sunflower Stalks Using Steam Explosion as Pretreatment
Abstract: Sunflower stalks were analysed for chemical composition: pentosan 15.84%, holocellulose 70.69%, alpha-cellulose 45.74%, glucose 27.10% and xylose 7.69%, based on the dry weight of 100 g of raw material. The optimum condition for steam explosion pretreatment was as follows. Sunflower stalks were cut into small pieces and soaked in 0.02 M H2SO4 overnight. They were then steam-exploded at 207 °C and 21 kg/cm2 for 3 minutes to fractionate cellulose, hemicellulose and lignin. The resulting hemicellulose-containing hydrolysate and the cellulose pulp contained xylose at 2.53% and 7.00%, respectively. The pulp was further subjected to enzymatic saccharification at 50 °C and pH 4.8 (citrate buffer), with pulp/buffer 6% (w/w) and Celluclast 1.5L/pulp 2.67% (w/w), to obtain glucose with a maximum yield of 11.97%. Fixed-bed fermentation under optimum conditions using conventional yeast mixtures gave a maximum bioethanol yield of 0.028 g/100 g sunflower stalk.
Digital Article Identifier (DOI):
21
4874
Removal of Cationic Heavy Metal and HOC from Soil-Washed Water Using Activated Carbon
Abstract: Soil washing with a surfactant solution is a potential technology for the rapid removal of hydrophobic organic compounds (HOC) from soil. However, a large amount of washed water is produced during operation, and this should be treated effectively by proper methods. The washed water from a site with combined HOC and heavy-metal contamination may contain high amounts of pollutants, both HOC and heavy metals, as well as the spent surfactant. The heavy metals in the washed water are toxic to microbial activity and should therefore be removed before the water proceeds to a biological wastewater treatment system. Moreover, the spent surfactant solutions need to be recovered to reduce the cost of the soil washing operation. To remove heavy metals and HOC from soil-washed water simultaneously, activated carbon (AC) was used in the present study. In an anionic-nonionic mixed surfactant solution, Cd(II) and phenanthrene (PHE) were effectively removed by adsorption on activated carbon. The removal efficiency for Cd(II) increased from 0.027 mmol-Cd/g-AC to 0.142 mmol-Cd/g-AC as the mole ratio of SDS increased in the presence of PHE. The adsorptive capacity for PHE also increased with the SDS mole ratio, owing to the decrease in the molar solubilization ratio (MSR) of PHE in an anionic-nonionic surfactant mixture. The simultaneous adsorption of HOC and cationic heavy metals on activated carbon could be a useful method for surfactant recovery and for reducing heavy-metal toxicity in a surfactant-enhanced soil washing process.
Digital Article Identifier (DOI):
20
5404
Decolorization of Reactive Black 5 and Reactive Red 198 using Nanoscale Zerovalent Iron
Abstract: Residual dye in textile dyeing wastewater has complex aromatic structures that resist degradation in biological wastewater treatment. The objectives of this study were to determine the effectiveness of nanoscale zerovalent iron (NZVI) in decolorizing Reactive Black 5 (RB5) and Reactive Red 198 (RR198) in synthesized wastewater, and to investigate the effects of iron particle size, iron dosage and solution pH on the destruction of RB5 and RR198. The synthesized NZVI was characterized by transmission electron microscopy (TEM), X-ray diffraction (XRD) and X-ray photoelectron spectroscopy (XPS). The removal rate constants (kobs) of RB5 (0.0109 min-1) and RR198 (0.0111 min-1) with 0.5% NZVI were many times higher than those with microscale zerovalent iron (ZVI) (0.0007 min-1 and 0.0008 min-1, respectively). Increasing the iron dosage exponentially increased the removal efficiencies of both RB5 and RR198. Additionally, lowering the pH from 9 to 5 increased the decolorization rate constants of both dyes with NZVI. The destruction of the azo bond (N=N) in the chromophore of both reactive dyes led to decolorization of the dye solutions.
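The reported kobs values imply pseudo-first-order kinetics, C/C0 = exp(-kobs·t), so they translate directly into treatment times. A small sketch using the RB5 rate constants quoted in the abstract:

```python
import math

def residual_fraction(k_obs, t):
    """Pseudo-first-order decay: fraction of dye remaining,
    C/C0 = exp(-k_obs * t), with k_obs in min^-1 and t in min."""
    return math.exp(-k_obs * t)

def time_for_removal(k_obs, removal):
    """Time needed to reach a given fractional removal,
    t = -ln(1 - removal) / k_obs."""
    return -math.log(1.0 - removal) / k_obs

k_rb5_nzvi = 0.0109   # min^-1, RB5 with 0.5% NZVI (from the abstract)
k_rb5_zvi = 0.0007    # min^-1, RB5 with microscale ZVI

t90_nzvi = time_for_removal(k_rb5_nzvi, 0.90)   # ~211 min
t90_zvi = time_for_removal(k_rb5_zvi, 0.90)     # ~3290 min
```

On these rate constants, NZVI reaches 90% decolorization of RB5 roughly 15 times faster than microscale ZVI, which is the practical meaning of the kobs ratio reported above.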
Digital Article Identifier (DOI):
19
9960
Some Biological and Molecular Characterization of Bean Common Mosaic Necrosis Virus Isolated from Soybean in Tehran Province, Iran
Abstract: Bean common mosaic necrosis virus (BCMNV) is a potyvirus with a worldwide distribution. This virus causes serious economic losses in many legumes in Iran. During 2008, samples were collected from soybean fields in Tehran Province. Isolates (S1, S2 and S3) were inoculated onto 15 species of Cucurbitaceae, Chenopodiaceae, Solanaceae and Leguminosae. Chenopodium quinoa and C. amaranticolor did not develop any symptoms. All isolates caused mosaic symptoms on Phaseolus vulgaris cv. Red Kidney and P. vulgaris cv. Bountiful. The molecular weight of the coat protein was estimated at 33 kDa using SDS-PAGE and western blotting. Reverse transcription polymerase chain reaction (RT-PCR) was performed using a primer pair designed by L. Xu et al., and a fragment of approximately 920 bp was amplified with the specific primer.
Digital Article Identifier (DOI):
18
11585
Molecular Characterization of Free Radicals Decomposing Genes on Plant Developmental Stages
Abstract: Biochemical and molecular analysis of several antioxidant enzyme genes revealed different levels of gene expression in oilseed rape (Brassica napus). For molecular and biochemical analysis, leaf tissues were harvested from plants at eight developmental stages, from young to senescent. The levels of total protein and chlorophyll increased during the maturity stages of the plant and decreased during the last stages of plant growth. Structural analysis (nucleotide and deduced amino acid sequence, and phylogenetic tree) of a complementary DNA revealed a high level of similarity to a family of catalase genes. The expression of the different catalase isoforms was assessed during the different plant growth phases. When catalase activity was statistically analyzed at the different developmental stages, no significant difference between samples was observed. EST analysis exhibited different transcript levels for a number of other relevant antioxidant genes (different isoforms of SOD and glutathione). The high level of transcription of these genes at the senescence stages indicated that they are senescence-induced genes.
Digital Article Identifier (DOI):
17
4422
Study on the Effect of Sulphur, Glucose, Nitrogen and Plant Residues on the Immobilization of Sulphate-S in Soil
Abstract: In order to evaluate the relationship between sulphur (S), glucose (G), nitrogen (N) and plant residues (wheat straw), sulphur immobilization and microbial transformation were monitored in five soil samples taken from 0-30 cm of farmers' fields in the Bastam area of Shahrood. Eleven treatments with different levels of S, G, N and plant residues were applied in a randomized block design with three replications and incubated for 20, 45 and 60 days. The immobilization of SO4²⁻-S, expressed as a percentage of that added, was inversely related to its addition rate. Immobilization increased with the C-to-S ratio of the added amendments, irrespective of the C source (glucose or plant residues). In the presence of a C source, N significantly increased the immobilization of SO4²⁻-S, whereas the effect of N was insignificant in the absence of a C amendment. In the first few days, the amounts of added SO4²⁻-S immobilized were linearly correlated with the amounts of added S recovered in the soil microbial biomass. With further incubation, the proportion of immobilized SO4²⁻-S remaining as biomass-S decreased, which was thought to be due to the conversion of biomass-S into soil organic S. Glucose addition increased the immobilization (microbial utilization and incorporation into soil organic matter) of native soil SO4²⁻-S, whereas N addition enhanced the mineralization of soil organic S, increasing the concentration of SO4²⁻-S in the soil.
Digital Article Identifier (DOI):
16
1080
Effects of Skim Milk Powder Supplementation to Soy Yogurts on Biotransformation of Isoflavone Glycosides to Biologically Active Forms during Storage
Abstract: Three batches of yogurt were made from soy protein isolate (SPI) supplemented with 2% (S2), 4% (S4) or 6% (S6) skim milk powder (SMP). A fourth batch (control; S0) was prepared from SPI without SMP supplementation. Lactobacillus delbrueckii ssp. bulgaricus ATCC 11842 (Lb 11842) and Streptococcus thermophilus ST 1342 (ST 1342) were used as the starter culture. The biotransformation of the inactive isoflavone glycosides (IG) to the biologically active isoflavone aglycones (IA) was determined during 28 d of storage. The viability of both microorganisms was significantly higher (P < 0.05) in S2, S4 and S6 than in S0. The lactic acid/acetic acid ratio in S0 was in the range of 15.53-22.31, compared to 7.24-12.81 in S2, S4 and S6. The biotransformation of IG to IA in S2, S4 and S6 was also enhanced by 9.9-13.3% compared to S0.
Digital Article Identifier (DOI):
15
7978
Evaluating the Response of Rainfed-Chickpea to Population Density in Iran, Using Simulation
Abstract: The response of the growth and yield of rainfed chickpea to population density should be evaluated on the basis of long-term experiments that capture climate variability; this is achievable only by simulation. In this simulation study, the evaluation was done by running the CYRUS model on long-term daily weather data for five locations in Iran. The tested population densities were 7 to 59 stands per square meter (in steps of 2). Various functions, including quadratic, segmented, beta, broken-linear and dent-like functions, were tested. Considering the root mean square of deviations and the linear regression statistics [intercept (a), slope (b) and correlation coefficient (r)] for predicted versus observed variables, the quadratic and broken-linear functions appeared appropriate for describing the changes in biomass and grain yield, and in harvest index, respectively. Results indicated that in all locations grain yield tends to increase with population density but subsequently decreases; the same was true for biomass in the five locations. The harvest index appeared to plateau across low population densities but to decrease as density increased further. The turning point (optimum population density) for grain yield was 30.68 stands per square meter in Isfahan, 30.54 in Shiraz, 31.47 in Kermanshah, 34.85 in Tabriz and 32.00 in Mashhad. The optimum population density for biomass ranged from 24.6 (Tabriz) to 35.3 stands per square meter (Mashhad); for harvest index it varied between 35.87 and 40.12 stands per square meter.
Digital Article Identifier (DOI):
14
3194
Effects of Some Natural Antioxidants Mixtures on Margarine Stability
Abstract: The application of synthetic antioxidants such as tert-butylhydroquinone (TBHQ), in spite of their efficiency, is questioned because of their possible carcinogenic effects. The purpose of this study was to find mixtures of natural antioxidants that provide the best oxidative stability for margarine. The antioxidant treatments comprised 10 mixtures (F1-F10) containing 100-500 ppm tocopherol mixture (Toc), 100-200 ppm ascorbyl palmitate (AP), 100-200 ppm rosemary extract (Ros) and 1000 ppm lecithin (Lec), along with a control, F0 (no antioxidant), and F11 with 120 ppm TBHQ. The effect of the antioxidant mixtures on the stability of margarine samples was evaluated in an oven test (60°C), a Rancimat test at 110°C and storage at 4°C. The final ranking of the natural antioxidant mixtures was: F2, F10 > F5, F9 > F8 > F1, F3, F4 > F6, F7. Considering the results of this research and the ranking criteria, F2 (200 ppm AP + 200 ppm Ros) and F10 (200 ppm Ros + 200 ppm Toc + 1000 ppm Lec) are recommended as substitutes for TBHQ to maintain the quality and increase the shelf life of margarine.
Digital Article Identifier (DOI):
13
12844
Effect of Phosphate Solubilization Microorganisms (PSM) and Plant Growth Promoting Rhizobacteria (PGPR) on Yield and Yield Components of Corn (Zea mays L.)
Abstract: To study the effect of phosphate-solubilizing microorganisms (PSM) and plant-growth-promoting rhizobacteria (PGPR) on the yield and yield components of corn (Zea mays L. cv. SC604), an experiment was conducted at the research farm of Sari Agricultural Sciences and Natural Resources University, Iran, during 2007. The experiment was laid out as a split plot based on a randomized complete block design with three replications. The treatments were three levels of manure as main plots (20 Mg ha-1 farmyard manure, 15 Mg ha-1 green manure, and a check without any manure) and eight levels of biofertilizer as subplots (1: NPK or conventional fertilizer application; 2: NPK+PSM+PGPR; 3: NP50%K+PSM+PGPR; 4: N50%PK+PSM+PGPR; 5: N50%P50%K+PSM+PGPR; 6: PK+PGPR; 7: NK+PSM; and 8: PSM+PGPR). Results showed that farmyard manure application increased row number, ear weight, grain number per ear, grain yield, biological yield and harvest index compared to the check. Furthermore, use of PSM and PGPR in addition to conventional fertilizer application (NPK) improved ear weight, row number and grain number per row, and ultimately increased grain yield, in the green manure and check plots. According to the results, in all fertilizer treatments the application of PSM and PGPR together could reduce P application by 50% without any significant reduction in grain yield; however, this treatment could not compensate for a 50% reduction in N application.
Digital Article Identifier (DOI):
12
2471
Effects of Discharge Fan on the Drying Efficiency in Flat-bed type Dryer
Abstract: The study of the interaction among the grain, moisture and the surrounding air is key to understanding the grain-drying process. In Iran, rice (mostly Indica type) is dried in flat-bed dryers until the final moisture content (MC) reaches 6 to 8%. Experiments were conducted to examine the effect of a discharge fan, with different paddy bed heights, on drying efficiency. The experiments were designed around two configurations of the drying method, with and without a discharge fan, and three paddy bed heights: 5, 10 and 15 cm. With the fan, the humid heated air is removed immediately by suction. The drying time was defined as the time for the average final MC to reach about 8%. To save energy and reduce drying time, the temperature distribution between layers should become uniform quickly, with minimal differences; otherwise the MC gradient between layers will be high and will induce grain breakage. The difference in final MC between layers in the two methods was 48-73%. The steadier temperature distribution saved 10-20% of the drying time, and the efficiency of temperature distribution increased by 17-26% with the use of the discharge fan.
Digital Article Identifier (DOI):
11
7052
Economic effects and Energy Use Efficiency of Incorporating Alfalfa and Fertilizer into Grass- Based Pasture Systems
Abstract: A ten-year grazing study was conducted at the
Agriculture and Agri-Food Canada Brandon Research Centre in
Manitoba to study the effect of alfalfa inclusion and fertilizer (N, P,
K, and S) addition on economics and efficiency of non-renewable
energy use in meadow brome grass-based pasture systems for beef
production. Fertilizing grass-only or alfalfa-grass pastures to full soil
test recommendations improved pasture productivity, but did not
improve profitability compared to unfertilized pastures. Fertilizing
grass-only pastures resulted in the highest net loss of any pasture
management strategy in this study. Adding alfalfa at the time of
seeding, with no added fertilizer, was economically the best pasture
improvement strategy in this study. Because of moisture limitations,
adding commercial fertilizer to full soil test recommendations is
probably not economically justifiable in most years, especially with
the rising cost of fertilizer. Improving grass-only pastures by adding
fertilizer and/or alfalfa required additional non-renewable energy
inputs; however, the additional energy required for unfertilized
alfalfa-grass pastures was minimal compared to the fertilized
pastures. Of the four pasture management strategies, adding alfalfa
to grass pastures without adding fertilizer had the highest efficiency
of energy use. Based on energy use and economic performance, the
unfertilized alfalfa-grass pasture was the most efficient and
sustainable pasture system.
Digital Article Identifier (DOI):
10
5453
Efficiency of Floristic and Molecular Markers to Determine Diversity in Iranian Populations of T. boeoticum
Abstract: In order to study the floristic and molecular classification of common wild wheat (Triticum boeoticum Boiss.), an analysis was conducted on populations of T. boeoticum collected from different regions of Iran. Considering all floristic compositions of the habitats, six floristic groups (syntaxa) were identified within the populations. A high level of variation in T. boeoticum was also detected using SSR markers. Our results showed that the molecular method confirmed the grouping obtained by the floristic method. In other words, the results of our study indicate that floristic classification is still a useful, efficient and economical tool for characterizing the amount and distribution of genetic variation in natural populations of T. boeoticum. Nevertheless, molecular markers appear to be useful and complementary techniques for identification and for the evaluation of genetic diversity in the studied populations.
Digital Article Identifier (DOI):
9
32
Genetic Variation of Durum Wheat Landraces and Cultivars Using Morphological and Protein Markers
Abstract: Knowledge of patterns of genetic diversity enhances
the efficiency of germplasm conservation and improvement. In this
study 96 Iranian landraces of Triticum turgidum originating from
different geographical areas of Iran, along with 18 durum cultivars
from ten countries were evaluated for variation in morphological and
high molecular weight glutenin subunit (HMW-GS) composition.
The first two principal components clearly separated the Iranian
landraces from cultivars. Three alleles were present at the Glu-A1
locus and 11 alleles at Glu-B1. In both cultivars and landraces of
durum wheat, the null allele (Glu-A1c) was observed more
frequently than the Glu-A1a and Glu-A1b alleles. Two alleles,
Glu-B1a (subunit 7) and Glu-B1e (subunit 20), were the most
frequent at the Glu-B1 locus. The results showed that
the evaluated Iranian landraces formed an interesting source of
favourable glutenin subunits that might be very desirable in breeding
activities for improving pasta-making quality.
Digital Article Identifier (DOI):
8
12018
Estimation of Critical Period for Weed Control in Corn in Iran
Abstract: The critical period for weed control (CPWC) is the period in the crop growth cycle during which weeds must be controlled to prevent unacceptable yield losses. Field studies were conducted in 2005 and 2006 at the University of Birjand, in the southeast of Iran, to determine the CPWC of corn using a randomized complete block design with 14 treatments and four replications. The treatments, consisting of two series of weed interference periods (a critical weed-free period and a critical time of weed removal), were imposed at V3, V6, V9, V12, V15 and R1 (based on phenological stages of corn development), together with a weedy check and a weed-free check. The CPWC was determined at acceptable yield loss levels of 2.5, 5, 10, 15 and 20% by nonlinear regression, fitting logistic and Gompertz equations to the relative yield data. The CPWC of corn was the 5- to 15-leaf stage (19-55 DAE) to prevent yield losses of 5%; to prevent yield losses of 2.5, 10 and 20%, it was the 4- to 17-leaf stage (14-59 DAE), 6- to 12-leaf stage (25-47 DAE) and 8- to 9-leaf stage (31-36 DAE), respectively. The height and leaf area index of corn were significantly decreased by weed competition in the weed-infested treatments compared with the weed-free treatments (P<0.01). Results also showed a significant positive correlation between yield and the LAI of corn at the silk stage when competing with weeds (r = 0.97).
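In the CPWC approach, the fitted Gompertz curve (relative yield versus length of the weed-free period) and logistic curve (relative yield versus time of weed removal) are inverted at the acceptable yield-loss level to obtain the two ends of the critical period. A sketch with purely hypothetical parameter values (not the fitted values of this study):

```python
import math

def gompertz(t, a, b, c):
    """Relative yield (%) after a weed-free period of t days
    (increasing curve); a, b, c are fitted parameters."""
    return a * math.exp(-b * math.exp(-c * t))

def logistic(t, a, b, c):
    """Relative yield (%) when weeds are removed at t days
    (decreasing curve)."""
    return a / (1.0 + math.exp(b * (t - c)))

def gompertz_inverse(y, a, b, c):
    """Day at which the weed-free curve reaches relative yield y:
    the end of the critical period."""
    return -math.log(-math.log(y / a) / b) / c

def logistic_inverse(y, a, b, c):
    """Day at which the weed-removal curve falls to relative yield y:
    the start of the critical period."""
    return c + math.log(a / y - 1.0) / b

# 5% acceptable yield loss -> invert both curves at 95% relative yield.
# The parameter values below are hypothetical, for illustration only.
t_start = logistic_inverse(95.0, 100.0, 0.15, 40.0)
t_end = gompertz_inverse(95.0, 100.0, 2.0, 0.08)
```

Repeating the inversion at 97.5, 90 and 80% relative yield gives the progressively narrower critical periods reported for the higher acceptable-loss levels.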
Digital Article Identifier (DOI):
7
2683
Multi-Criteria Decision-Making Selection Model with Application to Chemical Engineering Management Decisions
Abstract: Chemical industry project management involves complex decision-making situations that require discerning abilities and sound methods. Project managers are faced with complex decision environments and problems in their projects. The case study in this work is research and development (R&D) project selection. R&D is an ongoing process for forward-thinking, technology-based chemical industries, and R&D project selection is an important task for organizations engaged in R&D project management. It is a multi-criteria problem that includes both tangible and intangible factors, and the ability to make sound decisions is very important to the success of R&D projects. Multiple-criteria decision-making (MCDM) approaches are a major part of decision theory and analysis. This paper surveys MCDM approaches for use in R&D project selection, in the hope of providing a ready reference on MCDM and encouraging its application by chemical engineering management.
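As one concrete example of the MCDM family discussed here, simple additive weighting (SAW) ranks alternatives by a weighted sum of min-max normalized criterion scores. The projects, criteria, scores and weights below are hypothetical, chosen only to illustrate the mechanics:

```python
def weighted_score(scores, weights):
    """Simple additive weighting (SAW): min-max normalize each
    criterion column to [0, 1] (all criteria treated as benefits),
    then take the weighted sum per alternative."""
    n_crit = len(weights)
    cols = list(zip(*scores))                       # one tuple per criterion
    lo = [min(col) for col in cols]
    hi = [max(col) for col in cols]
    norm = [[(row[j] - lo[j]) / (hi[j] - lo[j]) if hi[j] > lo[j] else 1.0
             for j in range(n_crit)] for row in scores]
    return [sum(w * v for w, v in zip(weights, row)) for row in norm]

# Three hypothetical R&D projects scored on cost-adjusted benefit,
# technical feasibility and strategic fit (higher is better).
projects = ["A", "B", "C"]
scores = [[7.0, 6.0, 8.0],
          [9.0, 5.0, 6.0],
          [6.0, 9.0, 7.0]]
weights = [0.5, 0.3, 0.2]                           # must sum to 1
ranking = sorted(zip(projects, weighted_score(scores, weights)),
                 key=lambda p: -p[1])
```

Cost-type criteria (lower is better) would be normalized with the inverted formula, (hi - x) / (hi - lo); more elaborate members of the family (AHP, TOPSIS, ELECTRE) differ mainly in how weights and distances are derived.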
Digital Article Identifier (DOI):
6
9727
Optimization of Some Process Parameters to Produce Raisin Concentrate in Khorasan Region of Iran
Abstract: Raisin concentrate (RC) is among the most important products of the raisin processing industries. RC products are now used to make syrups, drinks and confectionery, and are introduced as a natural substitute for sugar in food applications. Iran is one of the biggest raisin exporters in the world, but despite good raw material, no serious effort had been made in Iran to extract RC. In this paper, we therefore determined and analyzed the parameters affecting the RC extraction process and optimized them for the design of the process for two types of raisin (round and long) produced in the Khorasan region. The treatments were two solvent ratios (1:1 and 2:1), three extraction temperatures (60°C, 70°C and 80°C) and three concentration temperatures (50°C, 60°C and 70°C). Physicochemical characteristics of the obtained concentrate, such as color, viscosity, percentage of reducing sugar and acidity, were measured, and microbial tests (mould and yeast) were performed. The analysis was based on a factorial arrangement in a completely randomized design (CRD), and Duncan's multiple range test (DMRT) was used for the comparison of the means. Statistical analysis of the results showed that the optimal conditions for producing concentrate from round raisins were a solvent ratio of 2:1, an extraction temperature of 60°C and a concentration temperature of 50°C. Round raisins are cheaper than long ones and more economical for concentrate production; furthermore, round raisins retain more aroma and develop less color as the concentration and extraction temperatures increase. According to these factors, concentrate from round raisins is recommended.
Digital Article Identifier (DOI):
5
5877
Wheat Yield Prediction through Agro Meteorological Indices for Ardebil District
Abstract: Wheat yield prediction was carried out using different meteorological variables together with agro-meteorological indices in Ardebil district for the years 2004-2005 and 2005-2006. On the basis of correlation coefficients, standard error of estimate, and relative deviation of predicted yield from actual yield under different statistical models, the best subset of agro-meteorological indices was selected, including daily minimum temperature (Tmin), accumulated difference of maximum and minimum temperatures (TD), growing degree days (GDD), accumulated water vapor pressure deficit (VPD), sunshine hours (SH) and potential evapotranspiration (PET). Yield prediction was done two months in advance of harvest, coinciding with the commencement of the reproductive stage of wheat (5th of June). In the final statistical models, 83% of wheat yield variability was accounted for by variation in the above agro-meteorological indices.
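Of the indices listed, growing degree days (GDD) has a standard closed-form accumulation that can be sketched directly: sum the excess of the daily mean temperature over a base temperature, floored at zero. The base temperature and the sample daily series below are illustrative assumptions, not values from the study.

```python
# Growing degree days (GDD): accumulate max(0, (Tmax + Tmin)/2 - Tbase)
# over a paired daily temperature series. Base temperature is assumed.
BASE_TEMP = 5.0  # °C, a commonly used base for wheat; an assumption here

def growing_degree_days(tmax, tmin, base=BASE_TEMP):
    """Sum daily mean-temperature excess above `base` (never negative)."""
    total = 0.0
    for hi, lo in zip(tmax, tmin):
        total += max(0.0, (hi + lo) / 2.0 - base)
    return total

# Three hypothetical spring days (°C): means 12, 14 and 5
gdd = growing_degree_days([18.0, 20.0, 9.0], [6.0, 8.0, 1.0])
```

Indices like TD and VPD accumulate in the same daily-sum fashion, with the per-day term swapped out.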
Digital Article Identifier (DOI):
4
4212
Disinfestation of Wheat Using Liquid Nitrogen Aeration
Abstract: A study was undertaken to investigate the effect of
liquid nitrogen aeration on mortality of adult Cryptolestes
ferrugineus, the rusty grain beetle, in a prototype cardboard grain
bin equipped with an aeration system. The grain bin was filled with
Hard Red Spring wheat and liquid nitrogen was introduced from the
bottom of the bin. The survival of both cold-acclimated and
unacclimated C. ferrugineus was tested. The study reveals that
cold-acclimated insects had higher survival than unacclimated
insects under similar cooling conditions. In most trials, mortality
as high as 100% was achieved in the bottom 100 cm of the grain bin
for unacclimated insects. Insect survival increased with distance
from the bottom of the grain bin. There was no adverse effect of
liquid nitrogen aeration on wheat germination.
Digital Article Identifier (DOI):
3
14657
The Effect of Plant Growth Promoting Rhizobacteria (PGPR) on Germination, Seedling Growth and Yield of Maize
Abstract: The effect of plant growth-promoting rhizobacteria
(PGPR) on seed germination, seedling growth and yield of
field-grown maize was evaluated in three experiments. In these
experiments six bacterial strains, including P. putida strain
R-168, P. fluorescens strain R-93, P. fluorescens DSM 50090,
P. putida DSM 291, A. lipoferum DSM 1691 and A. brasilense DSM
1690, were used. Results of the first study showed that seed
inoculation significantly enhanced seed germination and seedling
vigour of maize. In the second experiment, leaf and shoot dry
weight as well as leaf surface area were significantly increased by
bacterial inoculation in both sterile and non-sterile soil. The
results showed that inoculation with bacterial treatments had a
more stimulating effect on growth and development of plants in
non-sterile than in sterile soil. In the third experiment,
inoculation of maize seeds with all bacterial strains significantly
increased plant height, 100-seed weight, number of seeds per ear
and leaf area. The results also showed a significant increase in
ear and shoot dry weight of maize.
Digital Article Identifier (DOI):
2
4334
Cold Hardiness in Near Isogenic Lines of Bread Wheat (Triticum Aestivum L. em. Thell.)
Abstract: Low temperature (LT) is one of the most important
abiotic stresses causing loss of yield in wheat (T. aestivum). Four
major genes in wheat (Triticum aestivum L.) with the dominant
alleles designated Vrn-A1, Vrn-B1, Vrn-D1 and Vrn4 are known to
have large effects on the vernalization response, but their effects
on cold hardiness are ambiguous. Poor cold tolerance has restricted
winter wheat production in regions of high winter stress [9]. It is
known that nearly all wheat chromosomes [5], or at least 10 of the
21 chromosome pairs, are important in winter hardiness [15]. The
objective of the present study was to clarify the role of each
chromosome in cold tolerance. For this purpose we used 20 isogenic
lines of wheat; in each of these lines only one chromosome from the
'Bezostaya' variety (a winter-habit cultivar) was substituted into
the 'Cappelle Desprez' variety. The plant materials were grown
under controlled conditions at 20 °C with a 16 h day length in
moderately cold areas of Iran at Karaj Agricultural Research
Station in 2006-07, and the acclimation period was completed over
about 4 weeks in a cold room at 4 °C. The cold hardiness of these
isogenic lines was measured by LT50 (the temperature at which 50%
of the plants are killed by freezing stress). The experimental
design was a randomized complete block design (RCBD) with three
replicates. The results showed that chromosome 5A had a major
effect on freezing tolerance, while chromosomes 1A and 4A had
lesser effects on this trait. Further studies are essential to
understand the importance of each chromosome in controlling cold
hardiness in wheat.
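In practice, LT50 is read off a mortality-versus-temperature curve. A minimal sketch, assuming hypothetical freeze-test data for one isogenic line, is linear interpolation between the two test temperatures that bracket 50% kill; probit or logistic regression is the more rigorous alternative.

```python
# LT50: interpolate the temperature at which mortality crosses 50%.
# Temperature/mortality pairs below are hypothetical, not study data.
def lt50(temps, mortality):
    """temps: test temperatures (°C, decreasing); mortality: fraction killed."""
    for i in range(len(temps) - 1):
        m0, m1 = mortality[i], mortality[i + 1]
        if m0 <= 0.5 <= m1:
            # linear interpolation within the bracketing interval
            frac = (0.5 - m0) / (m1 - m0)
            return temps[i] + frac * (temps[i + 1] - temps[i])
    raise ValueError("50% mortality not bracketed by the data")

# Hypothetical freeze-test results for one isogenic line
temps     = [-6.0, -9.0, -12.0, -15.0]
mortality = [0.10, 0.30,  0.70,  1.00]
estimate = lt50(temps, mortality)
```

Here mortality crosses 50% halfway between -9 °C and -12 °C, so the estimate falls at -10.5 °C; a lower (more negative) LT50 indicates greater cold hardiness.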
Digital Article Identifier (DOI):
1
14606
Blood Lymphocyte and Neutrophil Response of Cultured Rainbow Trout, Oncorhynchus mykiss, Administered Varying Dosages of an Oral Immunomodulator – ‘Fin-Immune™’
Abstract: In a 10-week (May - August, 2008) Phase I trial, 840 age-1+ rainbow trout, Oncorhynchus mykiss, received a commercial oral immunomodulator, Fin-Immune™, at four different dosages (0, 10, 20 and 30 mg g-1) to evaluate immune response and growth. The overall objective was to determine an optimal dosage of this product for rainbow trout that provides enhanced immunity with maximal growth and health. Biweekly blood samples were taken from 10 randomly selected fish in each tank (30 samples per treatment) to evaluate the duration of enhanced immunity conferred by Fin-Immune™. The immunological assessment included serum white blood cell (lymphocyte, neutrophil) densities and blood hematocrit (packed cell volume %). Of these three variables, only lymphocyte density increased significantly, among trout fed Fin-Immune™ at 20 and 30 mg g-1, peaking at week 6. At week 7, all trout were switched to regular feed (lacking Fin-Immune™), and by week 10 lymphocyte levels had decreased at all dosage levels but were still greater than at week 0. There was growth impairment at the highest dose of Fin-Immune™ tested (30 mg g-1), which may be associated with a physiological compensatory mechanism due to a dose-specific threshold level. Thus, the main objective of this Phase I study was achieved: the 20 mg g-1 dose of Fin-Immune™ should be the most efficacious (of those we tested) for a Phase II disease challenge trial.
Digital Article Identifier (DOI):