Open Science Research Excellence

Open Science Index

Commenced in January 2007. Frequency: Monthly. Edition: International. Abstract Count: 60776.

On q-Non-extensive Statistics with Non-Tsallisian Entropy
We combine Rényi's axiomatics with the q-deformed version of the Khinchin axioms to obtain a measure of information (i.e., entropy) that accounts both for systems with embedded self-similarity and for non-extensivity. We show that the resulting entropy is uniquely determined as a one-parameter family of information measures. The ensuing maximal-entropy distribution is phrased in terms of a special function known as the Lambert W-function. We analyze the corresponding 'high-' and 'low-temperature' asymptotics and reveal a non-trivial structure of the parameter space.
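The Lambert W-function that appears in the maximal-entropy distribution above is the inverse of w ↦ w·e^w. A minimal sketch of evaluating its principal branch by Newton iteration (the paper's actual distribution is not reproduced here):

```python
import math

def lambert_w(x, tol=1e-12):
    """Principal branch W0 of the Lambert W-function for x >= 0,
    solved by Newton iteration on w * exp(w) = x."""
    w = math.log1p(x)  # reasonable starting guess for x >= 0
    for _ in range(100):
        ew = math.exp(w)
        f = w * ew - x
        w_next = w - f / (ew * (w + 1.0))
        if abs(w_next - w) < tol:
            return w_next
        w = w_next
    return w
```

The returned value satisfies the defining identity W(x)·e^(W(x)) = x, which makes the routine easy to self-check.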
Entropy Measures on Neutrosophic Soft Sets and Its Application in Multi Attribute Decision Making
The focus of this paper is to furnish entropy measures for neutrosophic sets and neutrosophic soft sets, which quantify the uncertainty that permeates discourse and systems. Various characterizations of the entropy measures are derived. Further, we exemplify this concept by applying entropy to several real-time decision-making problems.
On the Topological Entropy of Nonlinear Dynamical Systems
The topological entropy plays a key role in linear dynamical systems, allowing one to establish the existence of stabilizing feedback controllers for linear systems in the presence of communications constraints. This paper addresses the determination of a robust value of the topological entropy in nonlinear dynamical systems, specifically the largest value of the topological entropy over all linearized models in a region of interest of the state space. It is shown that a sufficient condition for establishing upper bounds of the sought robust value of the topological entropy can be given in terms of a semidefinite program (SDP), which belongs to the class of convex optimization problems.
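For the linear case the abstract builds on, the topological entropy has a well-known closed form: the sum of log|λ| over the unstable eigenvalues of the system matrix. A small sketch of that baseline quantity (the paper's SDP bound for the nonlinear case is not reproduced):

```python
import numpy as np

def topological_entropy_linear(A, base=np.e):
    """Topological entropy of the linear system x_{k+1} = A x_k:
    the sum of log|lambda| over eigenvalues with |lambda| > 1.
    This is the standard formula for linear systems only."""
    eigs = np.linalg.eigvals(np.asarray(A, dtype=float))
    mags = np.abs(eigs)
    return float(np.sum(np.log(mags[mags > 1.0])) / np.log(base))

# Example: diag(2, 0.5) has entropy log(2) -- only the unstable
# mode contributes; a stable matrix has entropy 0.
```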
Linear Study of Electrostatic Ion Temperature Gradient Mode with Entropy Gradient Drift and Sheared Ion Flows
The history of plasma research shows that the continued efforts of experimentalists and theorists have not yet yielded satisfactory confinement. A change of approach is needed, and entropy offers one: nearly all the relevant quantities, such as number density, temperature, and electrostatic potential, are connected to entropy. In the ion temperature gradient mode, with the help of the Braginskii model and Boltzmannian electrons, the effect of velocity shear is studied by incorporating entropy into the magnetoplasma. A new dispersion relation is derived for the ion temperature gradient mode, and its dependence on the entropy gradient drift is examined. It is also seen that velocity shear enhances the instability, but its role in anomalous transport is not significant, whereas that of entropy is. This work should be helpful for the next steps in tokamak and space plasma research.
Reasons for Non-Applicability of Software Entropy Metrics for Bug Prediction in Android
Software entropy metrics for bug prediction have been validated on various software systems by different researchers. In our previous research, we validated that software entropy metrics calculated for Mozilla subsystems predict future bugs reasonably well. In this study, the software entropy metrics are calculated for a subsystem of Android, and it is noticed that these metrics are not suitable for bug prediction there. The results are compared with those for a subsystem of Mozilla, and a comparison is made between the two software systems to determine the reasons why software entropy metrics are not applicable to Android.
Analysis of EEG Signals Using Wavelet Entropy and Approximate Entropy: A Case Study on Depression Patients
Analyzing the brain signals of patients suffering from depression may reveal signal parameters that differ markedly from those of normal controls. The present study adopts two different methods for the analysis of EEG signals acquired from depression patients and age- and sex-matched normal controls: a time-frequency domain method and a nonlinear method. The time-frequency domain analysis is realized using wavelet entropy, and approximate entropy is employed for the nonlinear analysis. The ability of these two measures to differentiate the physiological states of the brain is thereby revealed.
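The approximate entropy used in the nonlinear analysis above is the standard Pincus statistic. A minimal sketch (the embedding dimension m and tolerance r are illustrative defaults; in practice r is often set to 0.2 times the signal's standard deviation):

```python
import math

def approximate_entropy(x, m=2, r=0.2):
    """Approximate entropy (Pincus): phi(m) - phi(m+1), where phi(m)
    averages the log of the fraction of length-m templates lying
    within Chebyshev distance r of each template."""
    n = len(x)

    def phi(m):
        templates = [x[i:i + m] for i in range(n - m + 1)]
        log_sum = 0.0
        for t1 in templates:
            c = sum(
                1 for t2 in templates
                if max(abs(a - b) for a, b in zip(t1, t2)) <= r
            )
            log_sum += math.log(c / (n - m + 1))
        return log_sum / (n - m + 1)

    return phi(m) - phi(m + 1)
```

A constant signal yields 0, and a perfectly regular alternating signal stays close to 0, matching the intuition that low entropy signals higher regularity.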
Entropy Analysis of a Thermo-Acoustic Stack
The inherent irreversibility of thermo-acoustics, primarily in the stack region, causes the poor efficiency of thermo-acoustic engines, which is the major weakness of these devices. In view of the above, this study examines entropy generation in the stack of a thermo-acoustic system. For this purpose, two parallel plates representative of the stack are considered. A general equation for entropy generation is derived based on the second law of thermodynamics. Assumptions such as Rott's linear thermo-acoustic approximation and boundary-layer-type flow are made to simplify the governing continuity, momentum and energy equations and to achieve analytical solutions for velocity and temperature. The entropy generation equation is simplified under the same assumptions and then converted to dimensionless form using a characteristic entropy generation. A time-averaged entropy generation rate, followed by a global entropy generation rate, is calculated and graphically represented, allowing the effect of different parameters on the entropy generation to be inspected.
Econophysics: The Use of Entropy Measures in Finance
Concepts of econophysics are usually used to solve problems related to uncertainty and nonlinear dynamics. In the theory of option pricing, the risk-neutral probabilities play a very important role. The application of entropy in finance can be regarded as an extension of both information entropy and probability entropy, and it can be an important tool in various financial methods such as risk measurement, portfolio selection, option pricing and asset pricing. Gulko applied Entropy Pricing Theory (EPT) to the pricing of stock options and introduced an alternative to the Black-Scholes framework for pricing European stock options. In this article, we present solutions to maximum entropy problems based on the Tsallis, Weighted-Tsallis, Kaniadakis, and Weighted-Kaniadakis entropies to obtain risk-neutral densities. We also obtain the values of European calls and puts in this framework.
Closed-Form Sharma-Mittal Entropy Rate for Gaussian Processes
The entropy rate of a stochastic process is a fundamental concept in information theory. It provides a limit to the amount of information that can be transmitted reliably over a communication channel, as stated by Shannon's coding theorems. Recently, researchers have focused on developing new measures of information that generalize Shannon's classical theory. The aim is to design more efficient information encoding and transmission schemes. This paper continues the study of generalized entropy rates, by deriving a closed-form solution to the Sharma-Mittal entropy rate for Gaussian processes. Using the squeeze theorem, we solve the limit in the definition of the entropy rate, for different values of alpha and beta, which are the parameters of the Sharma-Mittal entropy. In the end, we compare it with Shannon and Rényi's entropy rates for Gaussian processes.
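The Sharma-Mittal family referred to above is usually defined, for a discrete distribution, by two parameters α and β; Rényi (β → 1), Tsallis (β → α) and Shannon (both → 1) arise as limits. A sketch of the discrete definition only (the paper's closed-form Gaussian-process entropy rate is not reproduced):

```python
import math

def sharma_mittal(p, alpha, beta):
    """Sharma-Mittal entropy of a discrete distribution p:
    H = [ (sum_i p_i^alpha)^((1-beta)/(1-alpha)) - 1 ] / (1 - beta),
    for alpha > 0, alpha != 1, beta != 1."""
    s = sum(pi ** alpha for pi in p if pi > 0)
    return (s ** ((1.0 - beta) / (1.0 - alpha)) - 1.0) / (1.0 - beta)

def shannon(p):
    """Shannon entropy (nats), the alpha, beta -> 1 limit."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)
```

Taking α and β near 1 recovers the Shannon value numerically, which is a convenient sanity check on the definition.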
Entropy Risk Factor Model of Exchange Rate Prediction
We investigate the predictability of the USD/ZAR (South African Rand) exchange rate with sample entropy analytics for the period 2004-2015. We calculate sample entropy based on the daily data of the exchange rate and conduct an empirical implementation of several market-timing rules based on these entropy signals. The dynamic investment portfolio based on entropy signals produces better risk-adjusted performance than a buy-and-hold strategy. The returns are estimated on the portfolio values in U.S. dollars. These results are preliminary and do not yet account for reasonable transaction costs, although these are very small in currency markets.
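The sample entropy statistic used above is the standard Richman-Moorman measure: −ln(A/B), where B counts template matches of length m and A of length m+1, excluding self-matches. A minimal sketch (m and r are illustrative defaults, not the study's settings):

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy: -ln(A/B), with B the number of pairs of
    length-m templates within Chebyshev tolerance r (self-matches
    excluded) and A the same count for length m+1."""
    n = len(x)

    def count_matches(m):
        templates = [x[i:i + m] for i in range(n - m + 1)]
        c = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in
                       zip(templates[i], templates[j])) <= r:
                    c += 1
        return c

    b = count_matches(m)
    a = count_matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")
```

Low values indicate a regular, self-similar series; in the trading-rule context above, shifts in this value are what the timing signals react to.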
An Alternative Proof for the Topological Entropy of the Motzkin Shift
A Motzkin shift is a mathematical model for constraints on genetic sequences. In terms of the theory of symbolic dynamics, the Motzkin shift is nonsofic, and therefore we cannot use the Perron-Frobenius theory to calculate its topological entropy. The Motzkin shift M(M,N), which comes from language theory, is defined to be the shift system over an alphabet A that consists of N negative symbols, N positive symbols and M neutral symbols. For an x in the full shift A^Z, x is in M(M,N) if and only if every finite block appearing in x has a non-zero reduced form; therefore, the constraint on x cannot be bounded in length. K. Inoue has shown that the entropy of the Motzkin shift M(M,N) is log(M + N + 1). In this paper, we find a new method of calculating the topological entropy of the Motzkin shift M(M,N) without any measure-theoretical discussion.
Maximum Entropy Based Image Segmentation of Human Skin Lesion
Image segmentation plays an important role in medical imaging applications. Therefore, accurate methods are needed for the successful segmentation of medical images for the diagnosis and detection of various diseases. In this paper, we have used maximum entropy to achieve image segmentation, calculated using the Shannon, Rényi, and Tsallis entropies. This work is novel in its detection of skin lesions caused by the bite of a parasite called the sand fly, which causes the disease Cutaneous Leishmaniasis.
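Maximum-entropy thresholding in the Shannon form is usually Kapur's method: choose the grey level that maximizes the summed entropies of the normalized background and foreground histogram parts. A sketch of that form only (the Rényi and Tsallis variants mentioned above replace the entropy function):

```python
import math

def kapur_threshold(hist):
    """Kapur maximum-entropy threshold on a grey-level histogram:
    return the t maximizing H(background [0..t]) + H(foreground
    [t+1..]), each computed on the renormalized sub-histogram."""
    total = sum(hist)
    p = [h / total for h in hist]

    def entropy(probs, mass):
        if mass <= 0:
            return 0.0
        return -sum((q / mass) * math.log(q / mass)
                    for q in probs if q > 0)

    best_t, best_h = 0, -1.0
    for t in range(len(p) - 1):
        w0 = sum(p[:t + 1])
        h = entropy(p[:t + 1], w0) + entropy(p[t + 1:], 1.0 - w0)
        if h > best_h:
            best_h, best_t = h, t
    return best_t
```

On a bimodal histogram the selected threshold falls in the valley between the two modes, which is what separates lesion from background in the segmentation setting.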
Entropy Generation of Natural Convection Heat Transfer in a Square Cavity Using Al2O3-Water Nanofluid
Entropy generation of an Al2O3-water nanofluid due to heat transfer and fluid friction irreversibility has been investigated in a square cavity subject to different side wall temperatures for natural convection flow. This study has been carried out for the pertinent parameters in the following ranges: Rayleigh number between 10^4 and 10^7 and volume fraction between 0 and 0.05. Based on the obtained dimensionless velocity and temperature values, the distributions of local entropy generation, average entropy generation and average Bejan number are determined. The results are compared for a pure fluid and a nanofluid. It is found that the heat transfer and entropy generation of the nanofluid exceed those of the pure fluid, and that the minimum entropy generation and Nusselt number occur in the pure fluid at any Rayleigh number. The results show that adding nanoparticles to the pure fluid has a greater effect on the entropy generation as the Rayleigh number increases.
Religion: The Human Entropy
Death is not a terminal; it is just a junction. From the Agamas to the Vedas, from Buddhism to Judaism, all the major scriptures and religions of the world converge on this hypothesis of death. Death is the ultimate catastrophe of life, and it is the genesis of every religion on this Earth. Several hundred thousand years ago, Homo sapiens of the Paleolithic age introduced the notion of religion in its most primitive form, simply to escape from death and natural catastrophes through belief in supernatural things; this created a sense of superstition that has only grown over time. This sense of superstition and belief in the supernatural are the building blocks of religion. Religion is like entropy, a degree of disorder. The entropy of an irreversible system like our own Universe always increases, and the same is happening to human civilization, where disorder has been increasing over time. The measure of this disorder of human civilization is religion, which divides and conquers the human civilization of Earth. Religion is the human entropy that has governed us and will continue to govern us. Just like entropy, religion is also an essential intrinsic property of the system, one that makes the system evolve. We have to optimize this ambivalence of the human entropy to make our civilization an inclusive and sustainable one.
Identification of the Main Transition Velocities in a Bubble Column Based on a Modified Shannon Entropy
The gas holdup fluctuations in a bubble column (0.15 m in ID) were recorded by means of a conductivity wire-mesh sensor in order to extract information about the main transition velocities. These parameters are very important for bubble column design, operation and scale-up. For this purpose, the classical definition of the Shannon entropy was modified and used to identify both the onset (at UG = 0.034 m/s) of the transition flow regime and the beginning (at UG = 0.089 m/s) of the churn-turbulent flow regime. The results were compared with Kolmogorov entropy (KE) results. A slight discrepancy was found: the transition velocities identified by means of the KE were shifted to somewhat higher superficial gas velocities UG (0.045 and 0.101 m/s).
Entropy Analysis in a Bubble Column Based on Ultrafast X-Ray Tomography Data
By means of the ultrafast X-ray tomography facility, data were obtained at different superficial gas velocities UG in a bubble column (0.1 m in ID) operated with an air-deionized water system at ambient conditions. Raw reconstructed images were treated by both the information entropy (IE) and the reconstruction entropy (RE) algorithms in order to identify the main transition velocities in a bubble column. The IE values exhibited two well-pronounced minima at UG=0.025 m/s and UG=0.085 m/s identifying the boundaries of the homogeneous, transition and heterogeneous regimes. The RE extracted from the central region of the column’s cross-section exhibited only one characteristic peak at UG=0.03 m/s, which was attributed to the transition from the homogeneous to the heterogeneous flow regime. This result implies that the transition regime is non-existent in the core of the column.
Entropy Production in Mixed Convection in a Horizontal Porous Channel Using Darcy-Brinkman Formulation
The paper reports a numerical investigation of entropy generation due to mixed convection in laminar flow through a channel filled with porous media. The second law of thermodynamics is applied to investigate the entropy generation rate, and the Darcy-Brinkman model is employed. The entropy generation due to heat transfer and friction dissipation has been determined in mixed convection by solving the continuity, momentum and energy equations numerically, using a control volume finite element method. The effects of the Darcy number, the modified Brinkman number and the Rayleigh number on the averaged entropy generation and the averaged Nusselt number are investigated. The Rayleigh number varied between 10^3 ≤ Ra ≤ 10^5 and the modified Brinkman number between 10^-5 ≤ Br ≤ 10^-1, with the porosity and Reynolds number fixed at 0.5 and 10, respectively. The Darcy number varied between 10^-6 ≤ Da ≤ 10.
A Modified Shannon Entropy Measure for Improved Image Segmentation
The Shannon entropy measure has been widely used for measuring uncertainty. In practical settings, however, a histogram is used to estimate the underlying distribution, and the histogram depends on the number of bins used. In this paper, a modification is proposed that makes the histogram-based Shannon entropy consistent. To demonstrate the benefits, two applications in medical image processing are considered, and simulations are carried out to show the superiority of the modified measure for the image segmentation problem. The improvement may be attributed to its robustness to uneven backgrounds in images.
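One standard way to remove the bin-count dependence described above is to add log(bin width) to the histogram entropy, turning it into a differential-entropy estimate; this is a textbook construction and not necessarily the paper's specific modification:

```python
import math

def histogram_entropy(data, bins):
    """Return (discrete entropy, bin-width-corrected entropy) of an
    equal-width histogram of data. The corrected value approximates
    the differential entropy and is nearly independent of `bins`.
    Assumes the data are not all identical."""
    lo, hi = min(data), max(data)
    width = (hi - lo) / bins
    counts = [0] * bins
    for v in data:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    n = len(data)
    h = -sum((c / n) * math.log(c / n) for c in counts if c > 0)
    return h, h + math.log(width)
```

For uniform data the discrete entropy grows like log(bins) while the corrected estimate stays put, illustrating the consistency the abstract is after.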
Complete Enumeration Approach for Calculation of Residual Entropy for Diluted Spin Ice
We consider antiferromagnetic systems of Ising spins located at the sites of hexagonal, triangular and pyrochlore lattices. Such systems can be diluted to a certain concentration level by randomly replacing the magnetic spins with nonmagnetic ones. Quite recently, we calculated the density of states (DOS) with the Wang-Landau method and, based on the obtained data, computed the dependence of the residual entropy (the entropy at a temperature tending to zero) on the dilution concentration for quite large systems (more than 2000 spins). In the current study, we obtained the same data for small systems (fewer than 20 spins) by a complete search of all possible magnetic configurations and compared the result with that for large systems. The shape of the curve remains unchanged in both cases, but the specific values of the residual entropy differ because of the finite-size effect.
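The complete-enumeration step above can be sketched directly: for a small cluster, visit all 2^N spin configurations, count the degenerate ground states, and take the log per spin. The bond list below is an illustrative frustrated triangle, not one of the paper's lattices:

```python
import math
from itertools import product

def residual_entropy(n_spins, bonds):
    """Residual entropy per spin of an antiferromagnetic Ising
    cluster (J = +1) by complete enumeration of all 2^N states --
    feasible only for small systems, as in the paper's < 20-spin
    check."""
    def energy(s):
        return sum(s[i] * s[j] for i, j in bonds)

    degeneracy = {}
    for s in product((-1, 1), repeat=n_spins):
        e = energy(s)
        degeneracy[e] = degeneracy.get(e, 0) + 1
    return math.log(degeneracy[min(degeneracy)]) / n_spins

# A single antiferromagnetic triangle is frustrated: 6 of its 8
# configurations share the minimum energy.
triangle = [(0, 1), (1, 2), (2, 0)]
```

An unfrustrated bond (two spins, one coupling) gives ln(2)/2 from its two antiparallel ground states, while the triangle gives ln(6)/3, showing how frustration inflates the residual entropy.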
Automatic Seizure Detection Using Weighted Permutation Entropy and Support Vector Machine
The automated epileptic seizure detection research field has emerged in recent years; it involves analyzing electroencephalogram (EEG) signals instead of the traditional visual inspection performed by expert neurologists. In this study, a Support Vector Machine (SVM) that uses Weighted Permutation Entropy (WPE) as the input feature is proposed for classifying normal and seizure EEG records. WPE is a modified statistical parameter of the permutation entropy (PE) that measures the complexity and irregularity of a time series: it incorporates both the mapped ordinal pattern of the time series and the information contained in the amplitudes of its sample points. The proposed system exploits the fact that entropy-based measures of EEG segments during an epileptic seizure are lower than those of normal EEG.
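A common form of the WPE feature above weights each ordinal pattern by the variance of its window, so amplitude enters alongside ordinal structure; this sketch follows that standard (Fadlallah-style) variant, which may differ in detail from the study's implementation:

```python
import math

def weighted_permutation_entropy(x, m=3, tau=1):
    """Weighted permutation entropy: ordinal patterns of length m
    (delay tau), each weighted by its window's variance; result is
    normalized by log(m!) so it lies in [0, 1]."""
    weights, total = {}, 0.0
    for i in range(len(x) - (m - 1) * tau):
        window = [x[i + k * tau] for k in range(m)]
        pattern = tuple(sorted(range(m), key=lambda k: window[k]))
        mean = sum(window) / m
        w = sum((v - mean) ** 2 for v in window) / m
        weights[pattern] = weights.get(pattern, 0.0) + w
        total += w
    if total == 0:
        return 0.0
    h = -sum((w / total) * math.log(w / total)
             for w in weights.values() if w > 0)
    return h / math.log(math.factorial(m))
```

A monotone ramp produces a single ordinal pattern and hence zero entropy; irregular data push the normalized value toward 1, which is the direction the seizure/normal contrast exploits.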
Numerical Prediction of Entropy Generation in Heat Exchangers
Second-law analysis is important for optimizing the energy losses in heat exchangers. The present study is devoted to the numerical prediction of entropy generation due to heat transfer and friction in a double-tube heat exchanger partly or fully filled with a porous medium. The goal of this work is to find the optimal conditions that minimize entropy generation. For this purpose, numerical modeling based on the control volume method is used to describe the flow and heat transfer phenomena in the fluid and the porous medium. The effects of the porous layer thickness, its permeability, and the effective thermal conductivity have been investigated. Unexpectedly, the fully porous heat exchanger yields lower entropy generation than the partly porous or purely fluid cases, even though friction increases the entropy generation.
Non-linear Analysis of Spontaneous EEG After Spinal Cord Injury: An Experimental Study
Spinal cord injury (SCI) has a great negative influence on patients and society, and the resulting neurological loss in humans is a major challenge in clinical practice. In animals, by contrast, neural regeneration can be seen after SCI, and this regeneration can be retarded by blocking neural plasticity pathways, showing the importance of neural plasticity in functional recovery. Here, we used sample entropy as an indicator of nonlinear dynamics in the brain to quantify plasticity changes in spontaneous EEG recordings of rats before and after SCI. The results showed that the entropy values increased after the injury during the first week of recovery. The increasing tendency of the sample entropy values is consistent with that of the behavioral evaluation scores, indicating the potential application of sample entropy analysis for the evaluation of neural plasticity in the SCI rat model.
Method of Estimating Absolute Entropy of Municipal Solid Waste
Entropy, as an outcome of the second law of thermodynamics, measures the level of irreversibility associated with any process. The identification and reduction of irreversibility in the energy conversion process helps to improve the efficiency of the system. The entropy of a pure substance, known as its absolute entropy, is determined at an absolute reference point and is useful in the thermodynamic analysis of chemical reactions; however, municipal solid waste (MSW) is a structurally complicated material with unknown absolute entropy. In this work, an empirical model to calculate the absolute entropy of MSW based on the content of carbon, hydrogen, oxygen, nitrogen, sulphur, and chlorine on a dry-ash-free (daf) basis is presented. The proposed model was derived by statistical analysis from 117 relevant organic substances with known standard entropies, which represent the main constituents of MSW. The substances were divided into different waste fractions, namely food, wood/paper, textiles/rubber and plastics waste, and the standard entropies of each waste fraction and of the complete mixture were calculated. The correlation derived for the standard entropy of the complete waste mixture is s°_msw = 0.0101C + 0.0630H + 0.0106O + 0.0108N + 0.0155S + 0.0084Cl, and the present correlation can be used for estimating the absolute entropy of MSW from the elemental composition of the fuel within the ranges 10.3% ≤ C ≤ 95.1%, 0.0% ≤ H ≤ 14.3%, 0.0% ≤ O ≤ 71.1%, 0.0% ≤ N ≤ 66.7%, 0.0% ≤ S ≤ 42.1%, 0.0% ≤ Cl ≤ 89.7%. The model is also applicable to the efficient modelling of a combustion system in a waste-to-energy plant.
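The correlation above translates directly into code. A minimal sketch that evaluates it and enforces the stated validity ranges (the abstract does not state the units of the result, so none are assumed here):

```python
def msw_absolute_entropy(C, H, O, N, S, Cl):
    """Absolute entropy of municipal solid waste from its elemental
    composition (wt%, dry-ash-free basis), using the correlation
    s_msw = 0.0101C + 0.0630H + 0.0106O + 0.0108N + 0.0155S
    + 0.0084Cl from the abstract. Units of the result are not
    stated in the abstract."""
    # validity ranges given in the abstract (wt%, daf)
    assert 10.3 <= C <= 95.1 and 0.0 <= H <= 14.3
    assert 0.0 <= O <= 71.1 and 0.0 <= N <= 66.7
    assert 0.0 <= S <= 42.1 and 0.0 <= Cl <= 89.7
    return (0.0101 * C + 0.0630 * H + 0.0106 * O
            + 0.0108 * N + 0.0155 * S + 0.0084 * Cl)
```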
Entropy Minimization Applied to Rotary Dryers to Reduce Energy Consumption
The drying process is an important operation in the chemical industry and is widely used in the food, grain, and fertilizer industries. However, because it demands considerable energy, the process requires a thorough energetic analysis in order to reduce operating costs. This paper deals with thermodynamic optimization applied to rotary dryers based on entropy production minimization, aiming to reduce energy consumption. To this end, the mass, energy and entropy balances were used to develop a relationship representing the rate of entropy production. The use of the second law of thermodynamics is essential because it takes the constraints of nature into account. Once the entropy production rate is minimized, optimal operating conditions can be established and the process can achieve substantial energy savings. The minimization was carried out using classical methods such as Lagrange multipliers and implemented on the MATLAB platform. As expected, the preliminary results reveal a significant energy saving under the optimal parameters found by the entropy minimization procedure. It is worth noting that this method is easy to implement and low in cost.
Numerical and Analytical Approach for Film Condensation on Different Forms of Surfaces
This paper solves the problem of film condensation around a flat plate and around circular and elliptical tubes by numerical and analytical methods, and also calculates the entropy production rates. The problem was first solved using a dynamic mesh and rational assumptions; the result was then compared with the numerical solution and showed acceptable errors. An additional supporting relation, based on a characteristic of the condensation phenomenon, was applied for the condensing elements. As shown here, elliptical tubes, owing to their higher rates of heat transfer, have higher entropy production rates than circular ones. The findings show that both methods are efficient; furthermore, the analytical method can be used to optimize the problem and reduce the entropy production rate.
Max-Entropy Feed-Forward Clustering Neural Network
The outputs of a non-linear feed-forward neural network are positive and can be treated as probabilities when normalized to sum to one. Under the entropy-based principle, the outputs for each sample can then be represented as the distribution of that sample over the different clusters. The entropy-based principle allows us to estimate an unknown distribution under limited conditions. This paper defines two processes in the feed-forward neural network: the limited condition is the abstracted features of the samples, which are worked out in the abstraction process, and the final outputs are the probability distribution over the different clusters in the clustering process. Incorporating the entropy-based principle into the feed-forward neural network thus yields a clustering method. We conducted experiments on six open UCI data sets, comparing against several baselines and using purity as the measurement. The results illustrate that our method outperforms all the baselines, which are among the most popular clustering methods.
Entropy Generation of Unsteady Reactive Hydromagnetic Generalized Couette Fluid Flow of a Two-Step Exothermic Chemical Reaction Through a Channel
In this study, the entropy generation of an unsteady reactive hydromagnetic generalized Couette fluid flow of a two-step exothermic chemical reaction through a channel with isothermal wall temperature was analyzed under the influence of different chemical kinetics, namely Sensitized, Arrhenius and Bimolecular kinetics. The modelled nonlinear dimensionless equations governing the fluid flow were simplified and solved using the combined Laplace Differential Transform Method (LDTM). The effects of the fluid parameters on the fluid temperature, entropy generation rate and Bejan number are discussed and presented through graphs.
Heat Transfer and Entropy Generation in a Partial Porous Channel Using LTNE and Exothermicity/Endothermicity Features
This work aims to provide a comprehensive study on the heat transfer and entropy generation rates of a horizontal channel partially filled with a porous medium which experiences internal heat generation or consumption due to exothermic or endothermic chemical reaction. The focus has been given to the local thermal non-equilibrium (LTNE) model. The LTNE approach helps us to deliver more accurate data regarding temperature distribution within the system and accordingly to provide more accurate Nusselt number and entropy generation rates. Darcy-Brinkman model is used for the momentum equations, and constant heat flux is assumed for boundary conditions for both upper and lower surfaces. Analytical solutions have been provided for both velocity and temperature fields. By incorporating the investigated velocity and temperature formulas into the provided fundamental equations for the entropy generation, both local and total entropy generation rates are plotted for a number of cases. Bifurcation phenomena regarding temperature distribution and interface heat flux ratio are observed. It has been found that the exothermicity or endothermicity characteristic of the channel does have a considerable impact on the temperature fields and entropy generation rates.
A Paradigm Shift in Energy Policy and Use: Exergy and Hybrid Renewable Energy Technologies
Sustainable energy use means exploiting energy resources within acceptable levels of global resource depletion without destroying the ecological balance of an area. In the context of sustainability, the rush to quell the fossil-fuel energy crisis of the 1970s by embarking on nuclear energy technology has come to be seen as a disaster. In these circumstances, the policy suggested in this study to avoid future occurrences is exergy maximization/entropy generation minimization, and the recommended practice is hybrid-based renewable energy technologies. Thirty-two (32) selected hybrid renewable energy technologies were assessed with respect to their energetic efficiencies and entropy generation. The results indicated that determining which of the hybrid technologies is the most efficient and sustainable is a matter of defining efficiency and knowing which of them possesses the minimum entropy generation.
Fuzzy Logic Modeling of Evaluation the Urban Skylines by the Entropy Approach
When evaluating the aesthetics of cities, the development of the urban form is analyzed in terms of its design properties and a variety of factors, together with a study of the effects of this appearance on human beings. Different methods are used when making an aesthetic evaluation of a city. Entropy, in its original meaning, is the mathematical representation of thermodynamic results; measuring entropy relates to the distribution of the positional figures of a message or information from the standpoint of probabilities. In this study, the entropy-based evaluation of urban skylines was modelled with the Rule-Based Mamdani-Type Fuzzy (RBMTF) modelling technique. Input-output parameters were described by RBMTF if-then rules. Numerical values of the input and output variables were fuzzified into the linguistic classes Very Very Low (L1), Very Low (L2), Low (L3), Negative Medium (L4), Medium (L5), Positive Medium (L6), High (L7), Very High (L8) and Very Very High (L9). The comparison between the application data and RBMTF is made using the absolute fraction of variance (R2). The actual values and the RBMTF results indicated that RBMTF can be successfully used for the entropy-based analysis of urban skylines. The RBMTF model showed a satisfying relation with the experimental results, suggesting it as an alternative method for the entropy-based evaluation of urban skylines.
Design and Implementation of Pseudorandom Number Generator Using Android Sensors
A smartphone or tablet requires strong randomness to establish secure encrypted communication, encrypt files, etc. Therefore, random number generation is one of the main keys to providing secrecy. Android devices are equipped with hardware-based sensors, such as the accelerometer and gyroscope. Each of these sensors provides a stochastic process that has the potential to be used as an extra randomness source, in addition to the /dev/random and /dev/urandom pseudorandom number generators, and Android sensors can provide this randomness automatically. To obtain randomness from them, each Android sensor is used to construct an entropy source. After all entropy sources are constructed, their outputs are combined to provide more entropy, and a deterministic process is then used to produce a sequence of random bits from the combined output. All of these processes are carried out in accordance with NIST SP 800-22 and the NIST SP 800-90 series. The operating conditions are: 1) the process runs in Android user space, and 2) the Android device is placed motionless on a desk.
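The combine-then-condition step described above can be illustrated with a minimal pooling sketch: pack the floating-point sensor readings into bytes and compress them into a fixed-size seed with a cryptographic hash. This is only an illustration; a real design would follow the full NIST SP 800-90 construction (entropy assessment plus a vetted DRBG), not this bare step:

```python
import hashlib
import struct

def condition_sensor_samples(samples):
    """Pool floating-point sensor readings (e.g., accelerometer and
    gyroscope axes) and compress them into a 256-bit seed with
    SHA-256. Illustrative conditioning step only -- not a complete
    SP 800-90 DRBG."""
    pool = b"".join(struct.pack("<d", v) for v in samples)
    return hashlib.sha256(pool).digest()

# The 32-byte digest can then seed a deterministic bit generator.
```

Any change in the pooled readings, however small, yields an unrelated digest, which is the property the conditioning step relies on.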
Decision Making Approach through Generalized Fuzzy Entropy Measure
Uncertainty is found everywhere, and its understanding is central to decision making. Uncertainty emerges when one has less information than the total information required to describe a system and its environment. Uncertainty and information are so closely associated that the information provided by an experiment, for example, is equal to the amount of uncertainty removed. It may be pertinent to point out that uncertainty manifests itself in several forms, and various kinds of uncertainty may arise from random fluctuations, incomplete information, imprecise perception, vagueness, etc. For instance, one encounters uncertainty due to vagueness in communication through natural language. Uncertainty in this sense is represented by fuzziness resulting from the imprecision of the meaning of a concept expressed by linguistic terms. The fuzzy set concept provides an appropriate mathematical framework for dealing with such vagueness. Both information theory, proposed by Shannon (1948), and fuzzy set theory, given by Zadeh (1965), play an important role in human intelligence and in various practical problems such as image segmentation and medical diagnosis. Numerous approaches and theories dealing with inaccuracy and uncertainty have been proposed by different researchers. In the present communication, we generalize the fuzzy entropy proposed by De Luca and Termini (1972), which corresponds to the Shannon entropy (1948). Further, some of the basic properties of the proposed measure are examined. We also apply the proposed measure to a real-life decision-making problem.
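The De Luca-Termini fuzzy entropy that the abstract generalizes applies the Shannon function to each membership value. A sketch of the classical (non-generalized) measure, normalized so it ranges over [0, 1]:

```python
import math

def fuzzy_entropy(mu):
    """De Luca-Termini fuzzy entropy of membership values mu,
    normalized by n * ln(2): 0 for a crisp set (all memberships
    0 or 1), 1 when every membership equals 0.5."""
    def s(u):
        if u <= 0.0 or u >= 1.0:
            return 0.0
        return -(u * math.log(u) + (1 - u) * math.log(1 - u))
    return sum(s(u) for u in mu) / (len(mu) * math.log(2))
```

The two extremes make the interpretation concrete: a crisp set carries no fuzziness, while memberships of 0.5 are maximally vague.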
Using Maximization Entropy in Developing a Filipino Phonetically Balanced Wordlist for a Phoneme-Level Speech Recognition System
In this paper, a Filipino phonetically balanced word list consisting of 250 words (PBW250) was constructed for a phoneme-level ASR system for the Filipino language. Entropy maximization is used to obtain phonological balance in the list: the entropy of the phonemes in a word is maximized, providing an optimal balance in each word's phonological distribution. The Add-Delete Method (the PBW algorithm) is compared to a modified PBW algorithm implemented in a dynamic algorithm approach to obtain this optimization. The PBW and modified algorithms attained entropy scores of 4.2791 and 4.2902, respectively. The PBW250 list was recorded by 40 respondents, each providing two sets of data. Recordings from 30 respondents were used to train an acoustic model that was tested on recordings from the remaining 10 respondents using the HMM Toolkit (HTK). The tests gave maximum accuracy rates of 97.77% for the speaker-dependent test and 89.36% for the speaker-independent test.
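The balance criterion being maximized is the Shannon entropy of the phoneme distribution of the list. A minimal sketch, using toy transcriptions rather than the actual PBW250 inventory:

```python
import math
from collections import Counter

def phoneme_entropy(words):
    """Shannon entropy (bits) of the phoneme distribution of a word
    list; higher entropy means a more balanced phoneme inventory.
    Words are given as lists of phoneme symbols."""
    counts = Counter(p for w in words for p in w)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Toy word list (illustrative transcriptions only)
words = [["b", "a", "t", "a"], ["k", "a", "i", "n"], ["p", "u", "s", "o"]]
h = phoneme_entropy(words)
```

An add-delete search would repeatedly swap candidate words in and out of the list, keeping changes that raise this score.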
Using Surface Entropy Reduction to Improve the Crystallization Properties of a Recombinant Antibody Fragment RNA Crystallization Chaperone
Phage-displayed synthetic Fab libraries have been used to obtain Fabs that bind specific RNA targets with high affinity and specificity, and these Fabs have been shown to facilitate RNA crystallization. However, the antibody framework used in the construction of these phage display libraries contains numerous bulky, flexible, and charged residues, which promote solubility and hinder aggregation. These residues can interfere with crystallization because of the entropic cost of burying them within crystal contacts. To systematically reduce the surface entropy of the Fabs and improve their crystallization properties, a protein engineering strategy termed surface entropy reduction (SER) is being applied to the Fab framework. In this approach, high-entropy residues are mutated to smaller ones such as alanine or serine. Focusing initially on Fab BL3-6, which binds an RNA AAACA pentaloop with 20 nM affinity, the SERp server was used together with analysis of existing RNA-Fab BL3-6 co-crystal structures. From this analysis, twelve surface-entropy-reduced mutants were designed. These SER mutants were expressed and are now being evaluated for their crystallization and diffraction performance with various RNA targets. So far, one mutant has yielded 3.02 Å diffraction with the yjdF riboswitch RNA. Ultimately, the most productive mutations will be combined into a new Fab framework to be used in an optimized phage-displayed Fab library.
Optimized and Secured Digital Watermarking Using Entropy, Chaotic Grid Map and Its Performance Analysis
This paper presents an optimized, robust, and secured watermarking technique based on the combination of entropy and a chaotic grid map. The proposed methodology applies the Discrete Cosine Transform (DCT) to the host image. To improve imperceptibility, the DCT blocks of the host image where the watermark is to be embedded are selected by considering the entropy of the blocks. A chaotic grid is used as a key to reorder the DCT blocks, which further increases security in selecting the watermark embedding locations and their sequence; without the key, one cannot recover the exact watermark from the watermarked image. The proposed method is implemented on four different images and gives good imperceptibility, with PSNR values above 50 dB. To prove the effectiveness of the method, performance analysis is carried out after applying different attacks to the watermarked images. The methodology is found to be very robust against JPEG compression, even with the quality parameter reduced to 15. The experimental results confirm that the combination of entropy and chaotic grid map is robust and secure against different image processing attacks.
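The entropy-based block selection can be illustrated as below. This is a simplified sketch that scores raw pixel blocks rather than DCT blocks, and it omits the chaotic reordering; the selection idea is the same: high-entropy (textured) blocks hide a watermark with less visible distortion.

```python
import numpy as np

def block_entropies(img, bs=8):
    """Shannon entropy (bits) of each non-overlapping bs x bs block
    of an 8-bit grayscale image, keyed by the block's top-left
    (row, col) corner."""
    h, w = img.shape
    out = {}
    for r in range(0, h - bs + 1, bs):
        for c in range(0, w - bs + 1, bs):
            block = img[r:r + bs, c:c + bs]
            hist = np.bincount(block.ravel(), minlength=256)
            p = hist[hist > 0] / block.size
            out[(r, c)] = float(-(p * np.log2(p)).sum())
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
ents = block_entropies(img)
```

Embedding locations would then be drawn from the highest-entropy blocks, in an order permuted by the chaotic key.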
Performance Complexity Measurement of Tightening Equipment Based on Kolmogorov Entropy
The performance of tightening equipment declines over its working life in a manufacturing system, manifesting mainly as increasing randomness and discretization of the tightening performance. To evaluate the degradation tendency of the tightening performance accurately, a complexity measurement approach based on Kolmogorov entropy is presented. First, the states of the performance index are divided in order to calibrate the degree of discretization. Then a complexity measurement model based on Kolmogorov entropy is built; the model describes the performance degradation tendency of the tightening equipment quantitatively. Finally, a case study is used to verify the efficiency and validity of the approach. The results show that the presented complexity measurement can effectively evaluate the degradation tendency of the tightening equipment and can provide a theoretical basis for preventive maintenance and equipment life prediction.
Entropy Generation Analysis of Cylindrical Heat Pipe Using Nanofluid
In this study, the second law of thermodynamics is employed to evaluate heat pipe thermal performance. Specifically, the potential of nanofluids to decrease the entropy generation of cylindrical heat pipes is studied, and the results are compared with experimental data. Cylindrical copper heat pipes of 200 mm length and 6.35 mm outer diameter were fabricated and tested with distilled water and water-based Al2O3 nanofluids with volume concentrations of 1-5% as working fluids. Nanofluids are nanotechnology-based colloidal suspensions fabricated by suspending nanoparticles in a base liquid; such fluids have shown the potential to enhance the heat transfer properties of the base liquids used in heat transfer applications. As the working fluid passes between the different states of the heat pipe cycle, entropy is generated. The different sources of irreversibility in the heat pipe thermodynamic cycle are investigated, and the effect of the nanofluid on each of these sources is studied. Both the experimental and theoretical studies reveal that a nanofluid is a good choice for minimizing entropy generation in the heat pipe thermodynamic cycle, which results in higher thermal performance and efficiency of the system.
Optimal ECG Sampling Frequency for Multiscale Entropy-Based HRV
Multiscale entropy (MSE) is an extensively used index that provides a general understanding of the complexity of the physiologic mechanisms of heart rate variability (HRV), which operate over a wide range of time scales. Accurate selection of the electrocardiogram (ECG) sampling frequency is an essential concern for clinically significant HRV quantification: a high ECG sampling rate increases memory requirements and processing time, whereas a low sampling rate degrades signal quality and results in clinically misinterpreted HRV. In this work, the impact of the ECG sampling frequency on MSE-based HRV is quantified. The MSE measures are found to be sensitive to the ECG sampling frequency, and the effect of the sampling frequency is a function of the time scale.
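The MSE procedure itself can be sketched as below. This follows the standard Costa et al. recipe (coarse-grain by non-overlapping means, then compute sample entropy at each scale) and omits all of the paper's HRV-specific preprocessing; the tolerance convention used here (r times the coarse-grained series' SD) is one common choice, not necessarily the study's.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of a series, with tolerance r * std(x)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def matches(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.abs(t[:, None, :] - t[None, :, :]).max(axis=2)
        return ((d <= tol).sum() - len(t)) / 2   # exclude self-matches
    b, a = matches(m), matches(m + 1)
    return float(-np.log(a / b)) if a > 0 and b > 0 else float("inf")

def multiscale_entropy(x, scales=(1, 2, 3), m=2, r=0.2):
    """MSE: coarse-grain by non-overlapping means at each scale,
    then compute the sample entropy of each coarse-grained series."""
    x = np.asarray(x, dtype=float)
    out = []
    for s in scales:
        n = len(x) // s
        out.append(sample_entropy(x[:n * s].reshape(n, s).mean(axis=1), m, r))
    return out

rng = np.random.default_rng(1)
mse = multiscale_entropy(rng.standard_normal(600))
```

Sampling-frequency effects enter through the RR-interval series fed into this procedure: a coarser ECG grid quantizes the RR values and perturbs the match counts differently at each scale.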
Effect of Carbon Additions on FeCrNiMnTi High Entropy Alloy
Recently, high entropy alloys (HEAs) have become the focus of attention in metallurgy and materials science due to their desirable properties, superior to those of conventional alloys. The HEA field has promoted the exploration of many compositions, including the addition of non-metallic elements such as carbon, which in traditional metallurgy is mainly used in the steel industry. The aim of this work was the synthesis of equiatomic FeCrNiMnTi high entropy alloys, with minor carbon content, by mechanical alloying and sintering. The effects of adding carbon nanotubes and graphite were evaluated by X-ray diffraction, scanning electron microscopy, and microhardness testing. The structural and microstructural characteristics of the equiatomic alloys, as well as their hardness, were compared with those of an austenitic AISI 321 stainless steel processed under the same conditions. The results showed that the porosity of the bulk samples decreases with carbon nanotube addition, while the equiatomic composition favors the formation of titanium carbide and yields a hardness more than three times that of the AISI 321 steel.
Relative Entropy Used to Determine the Divergence of Cells in Single Cell RNA Sequence Data Analysis
Single cell RNA sequencing (scRNA-seq) is one of the most effective tools for studying the transcriptomics of biological processes. At present, the similarity between cells is usually measured with Euclidean distance or its derivatives. However, because the process of scRNA-seq follows a multivariate Bernoulli event model, we hypothesized that it would be more efficient to measure the divergence between cells with relative entropy than with Euclidean distance. In this study, we compared the performance of Euclidean distance, Spearman correlation distance, and relative entropy using scRNA-seq data from the early, medial, and late stages of limb development generated in our lab. Relative entropy outperformed the other methods according to a cluster potential test. Furthermore, we developed KL-SNE, an algorithm that modifies t-SNE by replacing its Euclidean-distance measure of divergence between cells with the Kullback-Leibler divergence. The results showed that KL-SNE was more effective than t-SNE at dissecting cell heterogeneity, indicating the better performance of relative entropy over Euclidean distance. Specifically, the chondrocytes expressing Comp clustered together with KL-SNE but not with t-SNE. Notably, cells of the early stage were surrounded by cells of the medial stage when processed with KL-SNE, whereas medial cells neighbored late-stage cells when processed with t-SNE. These results parallel the heatmap, which showed that cells of the medial stage were more heterogeneous than cells of the other stages. In addition, we found that the results of KL-SNE tend to follow a Gaussian distribution compared with those of t-SNE, which was also verified with scRNA-seq data from another study on human embryo development; KL-SNE is thus also an effective way to convert a non-Gaussian distribution to a Gaussian one and facilitate subsequent statistical processing. In summary, relative entropy is potentially a better way to determine the divergence between cells in scRNA-seq data analysis.
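The divergence being substituted for Euclidean distance can be sketched as follows. Expression profiles are normalized to probability vectors; the small-epsilon smoothing is a common implementation choice (an assumption here, not necessarily the paper's).

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between two expression
    profiles normalized to probability vectors; eps avoids log(0)."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float((p * np.log(p / q)).sum())

cell_a = [5, 0, 3, 2]   # toy read counts for 4 genes
cell_b = [4, 1, 3, 2]
d_ab = kl_divergence(cell_a, cell_b)
```

Unlike Euclidean distance, D(p || q) is asymmetric, so a KL-SNE-style method must fix a convention for which cell plays the role of p.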
Evaluation of Soil Thermal-Entropy Properties with a Single-Probe Heat-Pulse Technique
Although soil thermal properties are required in many areas, including improved oil recovery, they are seldom measured on a routine basis. The reasons for this are unclear but may be related to a lack of suitable instrumentation and entropy theory. We integrate the single-probe thermal gradient for the radial conduction of a short-duration heat pulse away from a single electrode source and compare it with the theory for an instantaneously heated line source. By measuring the temperature response at a short distance from the line source and applying short-duration heat-pulse theory, we can extract all the entropy-related thermal properties, namely the thermal diffusivity, heat capacity, and conductivity, from a single heat-pulse measurement. Results of initial experiments carried out on air-dry sand and clay materials indicate that this heat-pulse method yields soil thermal properties that compare well with those measured by the single-electrode method.
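For the idealized instantaneous line source, the temperature rise at radial distance r peaks at time t_m = r^2 / (4 * alpha), which gives the diffusivity directly from the observed peak time. This is the textbook relation only; a finite-duration pulse requires a correction, and the numbers below are hypothetical.

```python
def diffusivity_from_peak(r, t_m):
    """Thermal diffusivity (m^2/s) from the peak time t_m (s) of the
    temperature response at distance r (m) from an instantaneous
    line heat source: alpha = r^2 / (4 * t_m)."""
    return r * r / (4.0 * t_m)

# Hypothetical numbers: sensor 6 mm from the heater, peak at 60 s
alpha = diffusivity_from_peak(0.006, 60.0)
```

With alpha and the pulse energy known, the volumetric heat capacity follows from the peak amplitude, and conductivity is their product.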
Specific Emitter Identification Based on Refined Composite Multiscale Dispersion Entropy
Wireless communication networks are developing rapidly, so wireless security is becoming more and more important. Specific emitter identification (SEI), a technique for identifying unique transmitters, is a vital part of wireless communication security. In this paper, an SEI method based on multiscale dispersion entropy (MDE) and refined composite multiscale dispersion entropy (RCMDE) is proposed. The MDE and RCMDE algorithms are used to extract features for the identification of five wireless devices, and a cross-validated support vector machine (CV-SVM) is used as the classifier. The experimental results show a total identification accuracy of 99.3%, even at a low signal-to-noise ratio (SNR) of 5 dB, which proves that MDE and RCMDE describe the communication signal series well. In addition, compared with other methods, the proposed method is effective and provides better accuracy and stability for SEI.
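The single-scale dispersion entropy underlying MDE can be sketched as below; MDE applies it to coarse-grained copies of the signal, and RCMDE additionally averages pattern counts over shifted coarse-grainings (neither extension is shown here).

```python
import math
from collections import Counter
from statistics import NormalDist

def dispersion_entropy(x, c=4, m=2):
    """Normalized dispersion entropy: map samples to c classes via
    the normal CDF, form overlapping m-length dispersion patterns,
    and take the Shannon entropy of the pattern distribution,
    normalized by its maximum log(c**m)."""
    mu = sum(x) / len(x)
    sd = (sum((v - mu) ** 2 for v in x) / len(x)) ** 0.5 or 1.0
    nd = NormalDist(mu, sd)
    z = [min(c, max(1, math.ceil(c * nd.cdf(v)))) for v in x]
    patterns = Counter(tuple(z[i:i + m]) for i in range(len(z) - m + 1))
    n = sum(patterns.values())
    h = -sum((k / n) * math.log(k / n) for k in patterns.values())
    return h / math.log(c ** m)

x = [0.1, 0.9, 0.2, 0.8, 0.15, 0.85, 0.05, 0.95]
de = dispersion_entropy(x)
```

For SEI, the entropy values at each scale form the feature vector handed to the SVM classifier.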
Three-Dimensional Unsteady Natural Convection and Entropy Generation in an Inclined Cubical Trapezoidal Cavity Subjected to Uniformly Heated Bottom Wall
Numerical computation of unsteady laminar three-dimensional natural convection and entropy generation in an inclined cubical trapezoidal air-filled cavity is performed for the first time in this work. The vertical right and left sidewalls of the cavity are maintained at constant cold temperatures. The lower wall is subjected to a constant hot temperature, while the upper one is considered insulated. Computations are performed for Rayleigh numbers in the range 10³ ≤ Ra ≤ 10⁵, while the trapezoidal cavity inclination angle is varied as 0° ≤ ϕ ≤ 180°. The Prandtl number is kept constant at Pr = 0.71. The second law of thermodynamics is applied to obtain the thermodynamic losses inside the cavity due to both heat transfer and fluid friction irreversibilities. The variations of the local and average Nusselt numbers are presented and discussed, while streamlines, isotherms, and entropy contours are presented in both two- and three-dimensional patterns. The results show that when the Rayleigh number increases, the flow patterns change, especially in the three-dimensional results, and the flow circulation increases. The effect of the inclination angle on the total entropy generation becomes insignificant when the Rayleigh number is low, and the average Nusselt number increases with the Rayleigh number.
Thermodynamic Analyses of Information Dissipation along the Passive Dendritic Trees and Active Action Potential
Brain information transmission in the neuronal network occurs in the form of electrical signals. Neurons transmit information between one another, or between neurons and target cells, by moving charged particles in a voltage field; a fraction of the energy utilized in this process is dissipated via entropy generation. Exergy loss and entropy generation models demonstrate the inefficiencies of communication along the dendritic trees. In this study, neurons of four different animals were analyzed with a one-dimensional cable model with N = 6 identical dendritic trees and M = 3 orders of symmetrical branching. Each branch bifurcates symmetrically in accordance with the 3/2 power law into an infinitely long cylinder with the usual core-conductor assumptions, where the membrane potential is conserved in the core conductor at all branching points. In the model, exergy loss and entropy generation rates are calculated for each branch of equivalent cylinders of electrotonic length (L) ranging from 0.1 to 1.5, for four different dendritic branches: the input branch (BI), the sister branch (BS), and two cousin branches (BC-1 and BC-2). Thermodynamic analysis with data from two different cat motoneuron studies shows that in both experiments nearly the same amount of exergy is lost while nearly the same amount of entropy is generated. The guinea pig vagal motoneuron loses twice as much exergy as the cat models, and the exergy loss and entropy generation of the squid are nearly tenfold those of the guinea pig vagal motoneuron model. The thermodynamic analysis shows that the energy dissipated in the dendritic trees is directly proportional to the electrotonic length, the exergy loss, and the entropy generation. Entropy generation and exergy loss show variability not only between vertebrates and invertebrates but also within the same class.
Concurrently, the Na+ ion load, metabolic energy utilization, and thermodynamic cost of a single action potential are evaluated for the squid giant axon and a mammalian motoneuron model. Energy is supplied to the neurons in the form of adenosine triphosphate (ATP), and the exergy destruction and entropy generation upon ATP hydrolysis are calculated. ATP utilization, exergy destruction, and entropy generation differ in each model depending on the variations in ion transport along the channels.
Evaluation of Heat Transfer and Entropy Generation by Al2O3-Water Nanofluid
In this numerical work, the natural convection and entropy generation of an Al2O3-water nanofluid in a square cavity are studied. A two-dimensional, steady, laminar natural convection flow in a differentially heated square cavity of length L, filled with a nanofluid, is investigated numerically. The horizontal walls are considered adiabatic, while the vertical walls at x = 0 and x = L are maintained at the hot temperature Th and the cold temperature Tc, respectively. The computations are performed with the CFD code FLUENT in combination with GAMBIT as the mesh generator. The simulations cover Rayleigh numbers in the range 10³ ≤ Ra ≤ 10⁶, solid volume fractions from 1% to 5%, a fixed particle size of dp = 33 nm, and temperatures from 20 to 70 °C. Models of the thermophysical properties of nanofluids based on experimental measurements, such as models of thermal conductivity and dynamic viscosity that depend on the solid volume fraction, particle size, and temperature, were used to study the effect of adding solid particles to water on natural convection heat transfer and entropy generation. The average Nusselt number is calculated at the hot wall of the cavity for different solid volume fractions. The most important result is that at low temperatures (less than 40 °C) the addition of Al2O3 nanosolids to water leads to a decrease in heat transfer and entropy generation instead of the expected increase, whereas at high temperature heat transfer and entropy generation increase with the addition of nanosolids. This behavior is due to the opposing effects of the viscosity and thermal conductivity of the nanofluid, which are discussed in this work.
Factory Communication System for Customer-Based Production Execution: An Empirical Study on the Manufacturing System Entropy
The manufacturing industry is currently experiencing a paradigm shift into the Fourth Industrial Revolution, in which customers are increasingly at the epicentre of production. The high degree of production customization and personalization requires a flexible manufacturing system that rapidly responds to the dynamic and volatile changes driven by the market. There is a gap in technology that allows for the optimal flow of information and optimal manufacturing operations on the shop floor regardless of rapid changes in fixture and part demands. Information is the reduction of uncertainty; it gives meaning and context to the state of each cell. The amount of information needed to describe cellular manufacturing systems is investigated through two measures: the structural entropy and the operational entropy. Structural entropy is the expected amount of information needed to describe the scheduled states of a manufacturing system, while operational entropy is the amount of information that describes the states which actually occur during the manufacturing operation. Using the AnyLogic simulator, a typical manufacturing job shop was set up with a cellular manufacturing configuration comprising a material handling cell, a 3D printer cell, an assembly cell, a manufacturing cell, and a quality control cell. The shop provides manufactured parts to a number of clients; there are substantial variations in the part configurations, and new part designs are continually introduced to the system. Based on the normal expected production schedule, the schedule adherence was calculated from the structural entropy and the operational entropy by varying the amount of information communicated in simulated runs. The structural entropy denotes a system that is in control: the necessary real-time information is readily available to the decision maker at any point in time.
For contrastive analysis, different out-of-control scenarios were run, in which changes in the manufacturing environment were not effectively communicated, resulting in deviations from the original predetermined schedule. The operational entropy was calculated from the actual operations. The results of the empirical study show that increasing the efficiency of the factory communication system increases the degree of adherence of a job to the expected schedule, and the performance of the downstream production flow fed by the parallel upstream flow of information on the factory state was increased.
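Both entropies reduce to a Shannon entropy over observed cell states. A minimal sketch with toy state labels (not the simulated shop's actual state space): applied to the scheduled sequence it plays the role of the structural entropy, applied to the executed sequence the operational entropy.

```python
import math
from collections import Counter

def state_entropy(states):
    """Shannon entropy (bits) of a sequence of observed cell
    states, estimated from their relative frequencies."""
    counts = Counter(states)
    n = len(states)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

scheduled = ["idle", "setup", "run", "run", "run", "inspect"]
executed  = ["idle", "setup", "run", "wait", "run", "rework"]
```

A gap between the two values (here the executed sequence carries more entropy than the scheduled one) signals unplanned states, i.e. loss of schedule adherence.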
On the Optimality Assessment of Nano-Particle Size Spectrometry and Its Association to the Entropy Concept
The particle size distribution, the most important characteristic of an aerosol, is obtained through electrical characterization techniques. The dynamics of charged nano-particles under the influence of the electric field in an electrical mobility spectrometer (EMS) reveals the size distribution of these particles. The accuracy of this measurement is influenced by the flow conditions, geometry, electric field, and particle charging process, and therefore by the transfer function (transfer matrix) of the instrument. In this work, a wire-cylinder corona charger was designed, and the combined field-diffusion charging process of injected polydisperse aerosol particles was numerically simulated as a prerequisite for the study of a multi-channel EMS. The result, a cloud of particles with a non-uniform charge distribution, was introduced to the EMS. The flow pattern and electric field in the EMS were simulated using computational fluid dynamics (CFD) to obtain the particle trajectories in the device and thus to calculate the signal reported by each electrometer. Based on the output signals (produced by the bombardment of particles transferring their charges as currents), we propose a modification to the size of the detecting rings (which are connected to the electrometers) in order to evaluate particle size distributions more accurately. Based on the capability of the system to transfer information about the size distribution of the injected particles, we propose a benchmark for assessing the optimality of the design. This method applies the concept of Von Neumann entropy and borrows the definition of entropy from information theory (Shannon entropy) to measure optimality. Entropy, in Shannon's sense, is the "average amount of information contained in an event, sample or character extracted from a data stream".
Evaluating the responses (signals) obtained with the various configurations of detecting rings, the configuration that gave the best predictions of the size distributions of the injected particles was the modified configuration, and it was also the one with the maximum amount of entropy. A reasonable consistency was also observed between the accuracy of the predictions and the entropy content of each configuration. In this method, the entropy is extracted from the transfer matrix of the instrument for each configuration. Finally, various clouds of particles were introduced to the simulations, and the predicted size distributions were compared with the exact ones.
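One plausible reading of "entropy extracted from the transfer matrix" is the Von Neumann entropy of a density-like matrix built from it, sketched below; the paper's exact construction may differ, so treat the normalization choice as an assumption.

```python
import numpy as np

def von_neumann_entropy(T):
    """Von Neumann entropy of a transfer matrix T: form T @ T.T,
    normalize it to unit trace (a density-like matrix), and take
    -sum(lam * log(lam)) over its eigenvalues."""
    rho = T @ T.T
    rho = rho / np.trace(rho)
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-(lam * np.log(lam)).sum())
```

Intuitively, a diagonal (perfectly channel-separating) transfer matrix carries the maximal entropy ln(n), while a rank-1 matrix, whose channels are completely redundant, carries zero; higher entropy thus indicates a design whose rings convey more independent size information.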
Electroencephalography (EEG) Analysis of Alcoholic and Control Subjects Using Multiscale Permutation Entropy
Brain electrical activity, as reflected in the electroencephalogram (EEG), has been analyzed and diagnosed using various techniques. Among them, measures of complexity, nonlinearity, disorder, and unpredictability play a vital role, owing to the nonlinear interconnection between the functional and anatomical subsystems that emerge in the brain in the healthy state and during disease. Alcohol abuse has many social and economic consequences; it not only damages the white and gray brain matter but is also associated with emotional, behavioral, and cognitive impairments such as weakened memory, impaired decision making, and reduced concentration. A recently developed signal analysis method, multiscale permutation entropy (MPE), is proposed to estimate the complexity of the long-range temporally correlated EEG time series of alcoholic and control subjects acquired from the University of California machine learning repository, and the results are compared with MSE. In MPE, a coarse-grained series is first generated, and the permutation entropy (PE) is computed for each coarse-grained time series for the electrodes O1, O2, C3, C4, F2, F3, F4, F7, F8, Fp1, Fp2, P3, P4, T7, and T8. For each electrode, MPE gives more significant values and larger mean-rank differences than MSE. Likewise, the ROC curve and the area under the ROC also give better separation for each electrode using MPE in comparison to MSE.
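The permutation entropy computed at each scale can be sketched as follows (the standard Bandt-Pompe measure, with ties broken by index order; the coarse-graining helper shows how the multiscale copies are formed).

```python
import math
from collections import Counter

def permutation_entropy(x, m=3):
    """Normalized permutation entropy: Shannon entropy of the
    ordinal patterns of length m, divided by its maximum log(m!)."""
    pats = Counter(
        tuple(sorted(range(m), key=lambda k: x[i + k]))
        for i in range(len(x) - m + 1)
    )
    n = sum(pats.values())
    h = -sum((c / n) * math.log(c / n) for c in pats.values())
    return h / math.log(math.factorial(m))

def coarse_grain(x, scale):
    """Non-overlapping means at the given scale, as used by MPE."""
    n = len(x) // scale
    return [sum(x[i * scale:(i + 1) * scale]) / scale for i in range(n)]
```

A monotone series has a single ordinal pattern and hence zero entropy, while irregular EEG segments populate many patterns and score close to 1.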
Entropy Generation Analysis of Heat Recovery Vapor Generator for Ammonia-Water Mixture
This paper carries out a performance analysis, based on the first and second laws of thermodynamics, of a heat recovery vapor generator (HRVG) for an ammonia-water mixture when the heat source is low-temperature energy in the form of sensible heat. In the analysis, the effects of the ammonia mass concentration and the mass flow ratio of the binary mixture on the system performance are investigated, including the heat transfer effectiveness, entropy generation, and exergy efficiency. The results show that the ammonia concentration and the mass flow ratio of the mixture have significant effects on the system performance of the HRVG.
SIP Flooding Attacks Detection and Prevention Using Shannon, Renyi and Tsallis Entropy
Voice over IP (VoIP), also known as Internet telephony, is growing rapidly and has occupied a large part of the communications market. With the growth of any technology, the related security issues become particularly important, and with this technology deployed in different environments and offering numerous features, there is an increasing need to address the security threats. Being IP-based and playing a signaling role in VoIP networks, the Session Initiation Protocol (SIP) lets invaders exploit weaknesses of the protocol to disable VoIP service. One of the most important threats is the denial of service attack, one branch of which, flooding attacks, is discussed in this article. These attacks waste server resources and prevent the server from delivering service to authorized users. Distributed denial of service attacks and low-rate attacks can mislead many attack detection mechanisms. In this paper, we introduce a mechanism which not only detects distributed denial of service attacks and low-rate attacks but can also identify the attackers accurately. We detect and prevent flooding attacks in the SIP protocol using Shannon (FDP-S), Renyi (FDP-R), and Tsallis (FDP-T) entropy. We conducted an experiment to compare the detection percentage and false alarm rate obtained with each of the Shannon, Renyi, and Tsallis entropies as the measure of disorder. The implementation results show that, owing to the parametric nature of the Renyi and Tsallis entropies, changing the parameter yields different detection percentages and false alarm rates, with the possibility of adjusting the sensitivity of the detection mechanism.
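The three entropies differ only in how they weight the distribution of SIP request sources in a sampling window, which is what makes the parametric families tunable. A minimal sketch (the distribution below is illustrative, not measured traffic):

```python
import math

def shannon(p):
    return -sum(x * math.log(x) for x in p if x > 0)

def renyi(p, alpha):
    """Renyi entropy of order alpha != 1; tends to Shannon as
    alpha -> 1."""
    return math.log(sum(x ** alpha for x in p)) / (1 - alpha)

def tsallis(p, alpha):
    """Tsallis entropy of order alpha != 1; also tends to Shannon
    as alpha -> 1."""
    return (1 - sum(x ** alpha for x in p)) / (alpha - 1)

# Toy distribution of request sources in one sampling window
p = [0.5, 0.25, 0.25]
```

Choosing alpha > 1 emphasizes the dominant sources, sharpening the drop in entropy when a flood concentrates traffic on a few attackers; alpha < 1 does the opposite, which is the sensitivity knob the abstract refers to.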
An Entropy Stable Three Dimensional Ideal MHD Solver with Guaranteed Positive Pressure
A high-order numerical magnetohydrodynamics (MHD) solver built upon a non-linear entropy stable numerical flux function that supports eight traveling wave solutions is described. The method is designed to treat the divergence-free constraint on the magnetic field in a fashion similar to a hyperbolic divergence cleaning technique. The solver is especially well suited for flows involving strong discontinuities due to its strong stability, without the need to enforce artificial low density or energy limits. Furthermore, a new formulation of the numerical algorithm that guarantees positivity of the pressure during the simulation is described and presented. By construction, the solver conserves mass, momentum, and energy and is entropy stable. High spatial order is obtained through the use of a third-order limiting technique; high temporal order is achieved by utilizing the family of strong stability preserving (SSP) Runge-Kutta methods. The main attributes of the solver are presented, as well as details of its implementation into the multi-physics, multi-scale simulation code FLASH. Its accuracy, robustness, and computational efficiency are demonstrated with a variety of numerical tests, and comparisons are made between the new solver and the existing methods already present in the FLASH framework.
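A representative member of the SSP Runge-Kutta family is the third-order Shu-Osher scheme, sketched below on a scalar model problem (the MHD solver applies the same stage structure to its full semi-discretization, which is not reproduced here).

```python
import numpy as np

def ssprk3_step(f, u, dt):
    """One step of the third-order strong-stability-preserving
    Runge-Kutta method of Shu and Osher, written as convex
    combinations of forward-Euler stages -- the property that lets
    the time integrator inherit the stability bounds of the
    entropy stable spatial discretization."""
    u1 = u + dt * f(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * f(u2))

# Sanity check on u' = -u, whose exact solution is exp(-t)
u, dt = np.array([1.0]), 0.01
for _ in range(100):
    u = ssprk3_step(lambda v: -v, u, dt)
```

Because every stage is a convex combination of forward-Euler updates, any bound (such as pressure positivity) preserved by a single Euler step under a CFL restriction is preserved by the full step.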
Feature Selection of Personal Authentication Based on EEG Signal for K-Means Cluster Analysis Using Silhouettes Score
Personal authentication based on electroencephalography (EEG) signals is one of the important fields of biometric technology, and more and more researchers use EEG signals as a data source for biometrics, although such biometrics also have some disadvantages. The proposed method employs entropy measures for feature extraction from EEG signals. Four types of entropy measures, sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE), and spectral entropy (PE), were deployed as the feature set. In a silhouette calculation, the distance from each data point in a cluster to all other points within the same cluster, and to all data points in the closest other cluster, are determined. Silhouettes thus provide a measure of how well a data point was classified when it was assigned to a cluster, and of the separation between clusters; this renders silhouettes well suited for assessing cluster quality in personal authentication methods. In this study, silhouette scores were used to assess the cluster quality of the k-means clustering algorithm and to compare the performance of each EEG dataset. The main goals of this study are: (1) to represent each target as a tuple of multiple feature sets, (2) to assign a suitable measure to each feature set, (3) to combine different feature sets, and (4) to determine the optimal feature weighting. Using precision/recall evaluations, the effectiveness of feature weighting in clustering was analyzed. EEG data from 22 subjects were collected. The results showed that: (1) it is possible to use fewer electrodes (3-4) for personal authentication; (2) there were differences between electrodes for personal authentication (p < 0.01); and (3) there is no significant difference in authentication performance among the feature sets (except feature PE).
In conclusion, the combination of the k-means clustering algorithm and the silhouette approach proved to be an accurate method for personal authentication based on EEG signals.
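The silhouette criterion described above can be sketched in plain NumPy (a minimal implementation of the standard definition, applied here to toy points rather than entropy features):

```python
import numpy as np

def silhouette_scores(X, labels):
    """Per-point silhouette s = (b - a) / max(a, b), where a is the
    mean distance to the point's own cluster and b the mean distance
    to the nearest other cluster."""
    X, labels = np.asarray(X, float), np.asarray(labels)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    s = np.empty(len(X))
    for i, li in enumerate(labels):
        own = labels == li
        a = d[i, own].sum() / max(own.sum() - 1, 1)
        b = min(d[i, labels == lj].mean()
                for lj in set(labels) if lj != li)
        s[i] = (b - a) / max(a, b)
    return s

X = [[0, 0], [0, 1], [10, 0], [10, 1]]   # two tight, distant clusters
s = silhouette_scores(X, [0, 0, 1, 1])
```

Scores close to 1 indicate well-separated clusters; averaging them over a k-means partition of the entropy features gives the per-dataset quality score used in the study.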
An Improved C-Means Model for MRI Segmentation
Medical images are important for identifying different diseases; for example, magnetic resonance imaging (MRI) can be used to investigate the brain, spinal cord, bones, joints, breasts, blood vessels, and heart. In medical image analysis, image segmentation is usually the first step, finding regions with similar color, intensity, or texture so that a diagnosis can be further carried out based on these features. This paper introduces an improved C-means model to segment MRI images. The model is based on information entropy, which is used to evaluate the segmentation results while achieving global optimization. The contributions are twofold. Firstly, a genetic algorithm (GA) is used to achieve the global optimization that the fuzzy C-means clustering algorithm (FCMA) alone cannot. Secondly, the information entropy after segmentation is used to measure the effectiveness of the MRI image processing. Experimental results show that the proposed model outperforms traditional approaches.
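One way to turn "information entropy after segmentation" into a fitness value for the GA is the segment-weighted entropy of gray levels, sketched below; this is an interpretation for illustration, not the paper's exact objective.

```python
import numpy as np

def segmentation_entropy(img, labels):
    """Label-weighted Shannon entropy (bits) of the gray levels
    inside each segment of an 8-bit image; lower values mean more
    homogeneous segments."""
    total = 0.0
    for l in np.unique(labels):
        seg = img[labels == l]
        hist = np.bincount(seg, minlength=256)
        p = hist[hist > 0] / seg.size
        total += (seg.size / img.size) * -(p * np.log2(p)).sum()
    return total

img = np.array([[10, 10, 200], [10, 12, 200]], dtype=np.uint8)
good = np.array([[0, 0, 1], [0, 0, 1]])   # splits dark from bright
bad  = np.array([[0, 1, 1], [0, 1, 1]])   # mixes the two regions
```

A segmentation that groups similar intensities scores lower, so the GA can minimize this quantity over candidate cluster assignments.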
Optimized and Secured Digital Watermarking Using Fuzzy Entropy, Bezier Curve and Visual Cryptography
The recent growth of internet usage for different purposes poses a serious threat to the copyright protection of digital images. Digital watermarking can be used to address this problem. This paper presents a detailed review of different watermarking techniques and the latest trends in secure, robust and imperceptible watermarking. It also discusses the optimization techniques used in watermarking to improve the robustness and imperceptibility of a method, as well as the measures used to evaluate the performance of a watermarking algorithm. Finally, the paper proposes a watermarking algorithm that uses (2, 2)-share visual cryptography and a Bezier-curve-based algorithm to improve the security of the watermark. The proposed method uses a fractional transformation to improve the robustness of the copyright protection, and the algorithm is optimized using fuzzy entropy for better results.
Comparison of Entropy Coefficient and Internal Resistance of Two (Used and Fresh) Cylindrical Commercial Lithium-Ion Battery (NCR18650) with Different Capacities
The temperature rise within a battery cell depends on the level of heat generation, the thermal properties and the heat transfer around the cell. Temperature rise is a serious problem for Lithium-Ion batteries, and the internal resistance of the battery is the main cause of this heating, so the heat generation rate of the batteries is an important factor to investigate in battery pack design. The delivered power of a battery is directly related to its capacity; a decrease in battery capacity reflects the growth of the Solid Electrolyte Interface (SEI) layer, formed by deposits of lithium from the electrolyte, which increases the internal resistance of the battery. In this study, two identical cylindrical Lithium-Ion (NCR18650) batteries from the same manufacturer with a noticeable difference in capacity (a fresh and a used battery) were compared, focusing on their heat generation parameters (entropy coefficient and internal resistance) according to the Bernardi model, using the potentiometric method for the entropy coefficient and the EIS method for the internal resistance measurement. The results clarify the effect of the capacity difference on the cell's electrical (R) and thermal (dU/dT) parameters, which can be very important for safety in battery pack design.
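The potentiometric estimate of the entropy coefficient dU/dT, and its role in a Bernardi-type heat generation expression, can be sketched as follows. All numerical values are illustrative assumptions, not measured NCR18650 data:

```python
import numpy as np

# Hypothetical open-circuit-voltage readings at one fixed state of charge,
# taken at several cell temperatures (potentiometric method).
T = np.array([283.15, 293.15, 303.15, 313.15])   # K
U = np.array([3.7010, 3.7013, 3.7016, 3.7019])   # V (illustrative values)

dUdT = np.polyfit(T, U, 1)[0]                    # entropy coefficient, V/K
I, T_cell, R_int = 1.0, 298.15, 0.05             # A, K, ohm (assumed)

# Bernardi-type heat generation rate:
# irreversible Joule term plus reversible entropic term.
q = I**2 * R_int + I * T_cell * dUdT
print(f"dU/dT = {dUdT * 1e6:.1f} uV/K, heat rate = {q:.4f} W")
```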
Nonlinear Analysis in Investigating the Complexity of Neurophysiological Data during Reflex Behavior
Methods of nonlinear signal analysis are based on the finding that random-looking behavior can arise in deterministic nonlinear systems with a few degrees of freedom. For dynamical systems, entropy is usually understood as a rate of information production. Changes in the temporal dynamics of physiological data indicate the evolution of the system in time, and thus the rate at which new signal patterns are generated. During the last decades, many algorithms were introduced to assess patterns of physiological responses to external stimuli. However, reflex responses are usually characterized by short periods of time, which represents a great limitation for the usual methods of nonlinear analysis. To solve the problems of short recordings, the approximate entropy parameter has been introduced as a measure of system complexity. A low value of this parameter reflects regularity and predictability in the analyzed time series. Conversely, an increase in this parameter indicates unpredictability and random-like behavior, hence a higher system complexity. Reduced neurophysiological data complexity has been observed repeatedly when analyzing electroneurogram and electromyogram activities during defence reflex responses. Quantitative phrenic neurogram changes are also evident during severe hypoxia, as well as during airway reflex episodes. In conclusion, the approximate entropy parameter serves as a convenient tool for the analysis of reflex behavior characterized by short-lasting time series.
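A minimal implementation of the approximate entropy parameter (Pincus's ApEn, with the common defaults m = 2 and r = 0.2·SD) illustrates the regular-versus-random contrast described above:

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy (ApEn). Lower values indicate a more regular series.

    m: embedding dimension; r: tolerance (default 0.2 * std of x).
    A simple O(N^2) sketch, suitable for the short recordings discussed above.
    """
    x = np.asarray(x, dtype=float)
    r = 0.2 * x.std() if r is None else r

    def phi(m):
        emb = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
        # Chebyshev distance between all template pairs (self-matches included).
        d = np.abs(emb[:, None] - emb[None, :]).max(axis=2)
        c = (d <= r).mean(axis=1)
        return np.log(c).mean()

    return phi(m) - phi(m + 1)

regular = np.sin(np.linspace(0, 8 * np.pi, 200))          # predictable signal
noisy = np.random.default_rng(1).standard_normal(200)     # random signal
print(approximate_entropy(regular), approximate_entropy(noisy))
```

The regular sine wave yields a much smaller ApEn than the white-noise series, matching the interpretation given in the abstract.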
Mathematical and Numerical Analysis of a Nonlinear Cross Diffusion System
We consider a nonlinear parabolic cross diffusion model arising in applied mathematics. A fully practical piecewise linear finite element approximation of the model is studied. By using entropy-type inequalities and compactness arguments, the existence of a global weak solution is proved. Assuming further regularity of the solution of the model, some uniqueness results and error estimates are established. Finally, some numerical experiments are performed.
Effect of Aging on the Second Law Efficiency, Exergy Destruction and Entropy Generation in the Skeletal Muscles during Exercise
The second law muscle work efficiency is obtained by multiplying the metabolic and mechanical work efficiencies. Thermodynamic analyses are carried out with 19 sets of arm and leg exercise data obtained from healthy young people. These data are used to simulate the changes occurring during aging. The muscle work efficiency decreases with aging as a result of the reduction of metabolic energy generation in the mitochondria. The reduction of the mitochondrial energy efficiency makes it difficult to maintain the muscle tissue, which in turn causes a decline of the muscle work efficiency. When the muscle attempts to produce more work, entropy generation and exergy destruction increase. Increasing exergy destruction may be regarded as the result of the deterioration of the muscles. When the exergetic efficiency is 0.42, the exergy destruction becomes 1.49 times the work performance. This proportionality becomes 2.50 and 5.21 times when the exergetic efficiency decreases to 0.30 and 0.17, respectively.
Entropy in a Field of Emergence in an Aspect of Linguo-Culture
The communicative situation is a basis that designates potential models of 'constructed forms', a motivated basis of a text, for a text can be regarded as a product of the communicative situation. It is within the field of emergence that the models of text which can potentially be prognosticated in a certain communicative situation are designated. Every text can be regarded as a conceptual system structured on the basis of a certain communicative situation. However, in the process of 'structuring' a certain model of 'conceptual system', the consciousness of a recipient is able to act only within the border of the field of emergence, for going beyond this border indicates misunderstanding of the communicative situation. On the basis of the communicative situation we can witness the increment of meaning, where the synergizing of the informative model of communication, formed using the invariant units of the language system, is a result of the verbalization of the communicative situation. The potential of the models of a text prognosticated within the field of emergence also depends on the communicative situation. The conception of 'the field of emergence' is interpreted as a unit of the language system with a poly-directed universal structure, implying the presence of a core, a center and a periphery, and including different levels of means of the functioning system of language, both in terms of linguistic resources and in terms of extra-linguistic factors, the interaction of which results in the increment of a text. The conception of the 'field of emergence' is considered the most promising in the analysis of texts: oral, written, printed and electronic. As a unit of the language system, the field of emergence has several properties that favor its use in the study of a text at different levels. This work attempts an analysis of entropy in a text in the aspect of the linguo-cultural code, prognosticated within the model of the field of emergence.
The article describes the problem of entropy in the field of emergence caused by the influence of extra-linguistic factors. The increase of entropy is caused not only by the intrusion of foreign language resources but by the influence of the alien culture as a whole, and by the appearance in the field of emergence of symbols that are atypical for the culture in question. The borrowing of alien linguo-cultural symbols into the linguo-culture of the author increases the entropy when constructing a text, both at the level of meaning and at the level of structure. It amounts to an artificial formatting of lexical units that violates the stylistic unity of a phrase. It is noted that one of the important factors decreasing the entropy in the field of emergence is a typological similarity of the lexical and semantic resources of the different linguo-cultures with respect to extra-linguistic factors.
Frequent Itemset Mining Using Rough-Sets
Frequent pattern mining is the process of finding a pattern (a set of items, subsequences, substructures, etc.) that occurs frequently in a data set. It was proposed in the context of frequent itemsets and association rule mining. Frequent pattern mining is used to find inherent regularities in data, e.g., which products are often purchased together. Its applications include basket data analysis, cross-marketing, catalog design, sales campaign analysis, Web log (click stream) analysis, and DNA sequence analysis. However, one of the bottlenecks of frequent itemset mining is that as the data grow, the amount of time and resources required to mine them increases at an exponential rate. In this investigation, a new algorithm is proposed which can be used as a pre-processor for frequent itemset mining. FASTER (FeAture SelecTion using Entropy and Rough sets) is a hybrid pre-processing algorithm which utilizes entropy and rough sets to carry out record reduction and feature (attribute) selection, respectively. FASTER can produce a speed-up of 3.1 times for frequent itemset mining compared to the original algorithm, while maintaining an accuracy of 71%.
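The entropy half of the pre-processing can be sketched as an attribute ranking. The toy transaction table below is illustrative, and the rough-set record reduction step is not shown:

```python
import numpy as np

def shannon_entropy(column):
    """Shannon entropy (bits) of the value distribution of one attribute."""
    _, counts = np.unique(column, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Toy transaction table: rows = records, columns = attributes.
data = np.array([
    ["bread", "milk", "yes"],
    ["bread", "eggs", "yes"],
    ["beer",  "milk", "yes"],
    ["beer",  "eggs", "no"],
])

# Rank attributes by entropy; low-entropy (near-constant) attributes carry
# little information and are candidates for removal before itemset mining.
scores = {i: shannon_entropy(data[:, i]) for i in range(data.shape[1])}
print(scores)
```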
Influence of Mass Flow Rate on Forced Convective Heat Transfer through a Nanofluid Filled Direct Absorption Solar Collector
The convective and radiative heat transfer performance and the entropy generation of forced convection through a direct absorption solar collector (DASC) are investigated numerically. Four different fluids, including Cu-water nanofluid, Al2O3-water nanofluid, TiO2-water nanofluid, and pure water, are used as the working fluid. Entropy production is taken into account in addition to the collector efficiency and heat transfer enhancement. A penalty finite element method with Galerkin's weighted residual technique is used to solve the governing nonlinear partial differential equations. Numerical simulations are performed for varying mass flow rate. The outcomes are presented in the form of isotherms, average output temperature, average Nusselt number, collector efficiency, average entropy generation, and Bejan number. The results show that the rate of heat transfer and the collector efficiency are enhanced significantly as the mass flow rate m is raised up to a certain range.
Energy Efficiency Index Applied to Reactive Systems
This paper focuses on the development of an energy efficiency index to be applied to reactive systems, based on the First and Second Laws of Thermodynamics and giving particular consideration to the concept of maximum entropy. Among the requirements of such an energy efficiency index, practical feasibility is essential. To illustrate its performance, the proposed index was used as the decisive evaluation factor in the optimization of an industrial reactor. The results allow the conclusion that the energy efficiency index applied to the reactive system is consistent, because it extracts the information expected of an efficient indicator, and that it is useful as an analytical tool besides being feasible from a practical standpoint. Furthermore, it has proved to be much simpler to use than tools based on traditional methodologies.
The Relationship between Topological Indices and Thermodynamic Properties of Amino Acids
In this study, thermodynamic properties such as specific heat capacity, enthalpy, entropy and Gibbs free energy are computed for 10 different amino acids using Gaussian software with the DFT method and the 6-311G basis set. Then, topological indices such as the Wiener and Schultz indices are calculated for the same molecules. Finally, the relationship between the thermodynamic properties and the above topological indices is examined, and different curves show that there is a good correlation between some of the quantum properties and the topological indices. This instructive example is directed toward the design of a structure-property model for predicting the thermodynamic properties of the amino acids discussed here.
An Entropy Based Novel Algorithm for Internal Attack Detection in Wireless Sensor Network
A Wireless Sensor Network (WSN) consists of low-cost, multifunctional, resource-constrained nodes that communicate over short distances through wireless links. It is an open medium underpinned by an application-driven technology for information gathering and processing, and it can be used for many different applications, ranging from military deployment on the battlefield to environmental monitoring, the health sector, and emergency response and surveillance. Given its nature and application scenarios, the security of WSNs has drawn great attention. WSNs are known to be vulnerable to a variety of attacks owing to the construction of their nodes and their distributed network infrastructure. In order to ensure their functionality, especially in malicious environments, security mechanisms are essential. Malicious or internal attackers have gained prominence and pose the most challenging attacks on WSNs. Much work has been done to secure WSNs from internal attacks, but most of it relies on either training data sets or predefined thresholds. Detecting internal attacks in a WSN without a fixed security infrastructure is a challenge. In this paper, we present an internal attack detection method based on a maximum entropy model. The final experimental work showed that the proposed algorithm performs well at the designed level.
Standard Gibbs Energy of Formation and Entropy of Lanthanide-Iron Oxides of Garnet Crystal Structure
The standard Gibbs energy of formation ΔGfor(298.15) of lanthanide-iron double oxides of garnet-type crystal structure R3Fe5O12 - RIG (R denotes a rare-earth ion) from the initial oxides is evaluated. The calculation is based on the standard entropies S298.15 and standard enthalpies ΔH298.15 of formation of the compounds involved in the synthesis of the garnets. The Gibbs energy of formation is presented as a function of temperature, ΔGfor(T), for the range 300-1600 K. The necessary starting thermodynamic data were obtained from a calorimetric study of the heat capacity and by using a semi-empirical method for the calculation of ΔH298.15 (formation). The thermodynamic functions at standard temperature - enthalpy, entropy and Gibbs energy - are recommended as reference data for technological evaluations. Across the isostructural series of rare earth-iron garnets, the correlation between the thermodynamic properties and the characteristics of the lanthanide ions is elucidated.
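The construction of ΔGfor(T) from the standard enthalpy and entropy follows the usual first approximation ΔG(T) ≈ ΔH298.15 − T·ΔS298.15 (constant ΔH and ΔS). The numbers below are illustrative placeholders, not the paper's measured values:

```python
import numpy as np

# Illustrative (not measured) values for a garnet-formation reaction from oxides:
dH_298 = -60.0e3   # J/mol, assumed enthalpy of formation from oxides
dS_298 = -25.0     # J/(mol*K), assumed entropy change of the reaction

T = np.linspace(300, 1600, 14)   # K, the range quoted in the abstract
dG = dH_298 - T * dS_298         # first-approximation Gibbs energy of formation
print(dG[0], dG[-1])             # J/mol at 300 K and 1600 K
```

With a negative reaction entropy, ΔG becomes less negative as temperature rises, so the sign and slope of ΔG(T) immediately show over which temperature range formation from the oxides remains favorable.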
Tensile Properties of Aluminum Silicon Nickel Iron Vanadium High Entropy Alloys
Pure metals are not used in most structural applications because of their limited properties. High entropy alloys (HEAs) are now emerging, in which comparable proportions of several metals are mixed with the aim of maximizing the entropy, leading to enhanced structural and mechanical properties. An Aluminum Silicon Nickel Iron Vanadium (AlSiNiFeV) alloy was developed using the stir casting technique and analysed. Results obtained show that the alloy grade G0 contains 44 percent by weight (wt%) Al, 32 wt% Si, 9 wt% Ni, 4 wt% Fe, 3 wt% V and 8 wt% minor elements, with a tensile strength of 106 N/mm2 and an elongation of 2.68%. X-ray diffraction confirmed intermetallic compounds having hexagonal close packed (HCP), orthorhombic and cubic structures in a cubic dendritic matrix. This affirmed the transformation from the cubic structures of the elemental constituents of the HEA to the precipitated structures of the intermetallic compounds. A maximum tensile strength of 188 N/mm2 with 4% elongation was observed at 10 wt% silica addition to G0. The increase in tensile strength with increasing silica content could be attributed to the different phases and crystal geometries characterizing each HEA.
Sensor Monitoring of the Concentrations of Different Gases Present in Synthesis of Ammonia Based on Multi-Scale Entropy and Multivariate Statistics
The supervision of chemical processes is the subject of increased development because of the increasing demands on reliability and safety. An important aspect of the safe operation of a chemical process is the earlier detection of process faults or other special events, and the location and removal of the factors causing such events, than is possible by conventional limit and trend checks. With the aid of process models and estimation and decision methods, it is possible to monitor hundreds of variables in a single operating unit, and these variables may be recorded hundreds or thousands of times per day. In the absence of an appropriate processing method, only limited information can be extracted from these data. Hence, a tool is required that can project the high-dimensional process space into a low-dimensional space amenable to direct visualization, and that can also identify key variables and important features of the data. Our contribution is the development of a new monitoring method based on multi-scale entropy (MSE), in order to characterize the behaviour of the concentrations of the different gases present in the synthesis of ammonia, together with a PCA-based soft sensor applied to estimate these variables.
Thermodynamic Study of Homo-Pairs in Molten Cd-Me (Me = Ga, In) Binary Systems
The associative tendency between like atoms in molten Cd-Ga and Cd-In alloy systems has been studied using the Quasi-Chemical Approximation Model (QCAM). The concentration dependence of the microscopic functions (the concentration-concentration fluctuations in the long-wavelength limit, Scc(0), the chemical short-range order (CSRO) parameter α1, and the chemical diffusion) and of the mixing properties, such as the free energy of mixing, GM, the enthalpy of mixing and the entropy of mixing, of the two molten alloys has been determined. The thermodynamic properties of both systems deviate positively from Raoult's law, and the systems are characterized by a positive interaction energy. The role of the atomic size ratio in the alloying properties is discussed.
Magnetocaloric Effect in Ho₂O₃ Nanopowder at Cryogenic Temperature
Magnetic refrigeration provides an attractive alternative cooling technology due to its potential advantages, such as high cooling efficiency, environmental friendliness, low noise, and compactness, over conventional cooling techniques based on gas compression. The magnetocaloric effect (MCE) arises from changes in entropy (ΔS) and temperature (ΔT) under external magnetic fields. We have focused on identifying materials with a large MCE in two temperature regimes: not only room temperature but also cryogenic temperatures for specific technological applications, such as space science and the liquefaction of hydrogen in the fuel industry. To date, the commonly used materials for cryogenic refrigeration are based on hydrated salts. In the present work, we report a giant MCE in rare earth Ho2O3 nanopowder at cryogenic temperature. HoN nanoparticles with an average size of 30 nm were prepared using the plasma arc discharge method with a gas composition of N2/H2 (80%/20%). The prepared HoN was sintered in an air atmosphere at 1200 °C for 24 h to convert it into the oxide. Structural and morphological properties were studied by XRD and SEM. XRD confirms the pure phase and cubic crystal structure of Ho2O3 without any impurity within the error range. It was found that holmium oxide exhibits a giant MCE at low temperature without magnetic hysteresis loss, with a second-order antiferromagnetic phase transition at a Néel temperature of around 2 K. The maximum entropy change was found to be 25.2 J/kgK at an applied field of 6 T.
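The magnetic entropy change behind a reported MCE is conventionally extracted from magnetization isotherms via the Maxwell relation ΔS(T, H) = ∫₀ᴴ (∂M/∂T)_H dH. A numerical sketch with a toy Curie-like magnetization (not the measured Ho2O3 data) is:

```python
import numpy as np

# Hypothetical magnetization isotherms M(T, H) on a (temperature, field) grid.
T = np.linspace(2, 20, 10)          # K
H = np.linspace(0, 6, 61)           # tesla
TT, HH = np.meshgrid(T, H, indexing="ij")
M = 100.0 * HH / (TT + 1.0)         # toy Curie-like law, arbitrary units

dMdT = np.gradient(M, T, axis=0)    # (dM/dT) at constant H, numerically
# Trapezoidal integration over the field axis gives dS(T) for the 0..6 T sweep.
dS = ((dMdT[:, :-1] + dMdT[:, 1:]) / 2 * np.diff(H)).sum(axis=1)
print(T[np.abs(dS).argmax()], np.abs(dS).max())
```

For this toy law the entropy change is negative and largest in magnitude at the lowest temperature, mirroring the low-temperature peak reported for Ho2O3.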
The Analysis of a Reactive Hydromagnetic Internal Heat Generating Poiseuille Fluid Flow through a Channel
In this paper, the analysis of a reactive hydromagnetic Poiseuille fluid flow under sensitized, Arrhenius and bimolecular chemical kinetics through a channel in the presence of a heat source is carried out. An exothermic reaction is assumed, while the concentration of the material is neglected. The Adomian Decomposition Method (ADM) together with Padé approximation is used to obtain the solutions of the governing nonlinear non-dimensional differential equations. The effects of various physical parameters on the velocity and temperature fields of the fluid flow are investigated. The entropy generation analysis and the conditions for thermal criticality are also presented.
Asymmetrical Informative Estimation for Macroeconomic Model: Special Case in the Tourism Sector of Thailand
This paper applies an asymmetric information concept to the estimation of a macroeconomic model of the tourism sector in Thailand. The variables analyzed are Thailand's international and domestic tourism revenues, the expenditures of foreign and domestic tourists, service investments by the private sector, service investments by the government of Thailand, Thailand's service imports and exports, and net service income transfers. All data are time-series indices observed between 2002 and 2015. Empirically, the tourism multiplier and accelerator were estimated by two statistical approaches. The first was the Generalized Method of Moments model (GMM), based on the assumption that the tourism market in Thailand has perfect information (symmetrical data). The second was the Maximum Entropy Bootstrapping approach (MEboot), based on a process that attempts to deal with imperfect information and reduce uncertainty in the data observations (asymmetrical data). In addition, the tourism leakages were investigated with a simple model based on the injections and leakages concept. The empirical findings show that the parameters computed by the Maximum Entropy Bootstrapping approach differ from those of the GMM method. However, both the MEboot estimation and the GMM model suggest that Thailand's tourism sector is in a period capable of stimulating the economy.
Automatic Registration of Rail Profile Based on Local Maximum Curvature Entropy
To address the influence of train vibration and environmental noise on the measurement of track wear, we propose a method for the automatic extraction of the circular arc on the inner or outer side of the rail waist, achieving high-precision registration of the rail profile. Firstly, a polynomial fitting method based on a truncated residual histogram is proposed to find the optimal fitting curve of the profile and reduce the influence of noise on the profile curve fitting. Then, based on the curvature distribution characteristics of the fitted curve, an interval search algorithm based on the maximum curvature entropy of a dynamic window is proposed to realize the automatic segmentation of the small circular arc. Finally, we fit the two circle centers as matching reference points based on the small circular arcs on both sides and realize the alignment of the measured profile to the standard designed profile. The static experimental results show that the mean and standard deviation of the method are controlled within 0.01 mm, with small measurement errors and high repeatability. The dynamic test also verified the repeatability of the method in the train-running environment; the dynamic measurement deviation of rail wear is within 0.2 mm with high repeatability.
5iD Viewer: Observation of Fish School Behaviour in Labyrinths and Use of Semantic and Syntactic Entropy for School Structure Definition
In this article, the construction and some properties of the 5iD viewer, a system that simultaneously records five views of a given experimental object, are reported. The properties of the system are demonstrated on the analysis of fish schooling behavior. A method of instrument calibration that accounts for image distortion is demonstrated, and a method of distance assessment for the case in which only two opposite cameras are available is proposed and partly tested. Finally, we demonstrate how the state trajectory of the behavior of the fish school may be constructed from the entropy of the system.
Phase Stability and Grain Growth Kinetics of Oxide Dispersed CoCrFeMnNi
The present study deals with the phase evolution of oxide-dispersed CoCrFeMnNi high entropy alloy as a function of the amount of added Y2O3 during mechanical alloying, and with the analysis of the grain growth kinetics of CoCrFeMnNi high entropy alloy with and without oxide dispersion. Mechanical alloying of CoCrFeMnNi resulted in a single FCC phase. However, the evolution of chromium carbide was observed after heat treatment between 1073 and 1473 K. A comparison of the grain growth time exponents and activation energy barriers is also reported. Microstructural investigations, using electron microscopy and EBSD techniques, were carried out to confirm the enhanced grain growth resistance, which is attributed to the presence of the oxide dispersoids.
Highly Accurate Target Motion Compensation Using Entropy Function Minimization
One of the defects of stepped frequency radar systems is their sensitivity to target motion. In such systems, target motion causes range cell shift, false peaks, Signal to Noise Ratio (SNR) reduction and range profile spreading, because the power spectrum of each range cell interferes with adjacent range cells, which distorts the High Resolution Range Profile (HRRP) and disrupts the target recognition process. Thus, compensation of Target Motion Parameter (TMP) effects should be employed. In this paper, a method for estimating the TMPs (velocity and acceleration), and consequently eliminating or suppressing their unwanted effects on the HRRP, based on entropy minimization is proposed. The method is carried out in two major steps: in the first step, a discrete search over the whole acceleration-velocity lattice, within a specific interval, finds a coarse minimum point of the entropy function. In the second step, a 1-D search over velocity is performed in the neighborhood of this minimum, for several constant-acceleration lines, in order to refine the minimum point found in the first step. The provided simulation results demonstrate the effectiveness of the proposed method.
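The two-step search can be sketched on a toy entropy surface; the quadratic-bowl `profile_entropy` below is a stand-in for the actual HRRP entropy, which would be computed from the motion-compensated range profile:

```python
import numpy as np

def profile_entropy(v, a):
    """Stand-in for HRRP entropy as a function of velocity v and acceleration a.
    The true minimum here is placed at v = 3.0, a = 1.0 for illustration."""
    return np.log1p((v - 3.0) ** 2 + 0.5 * (a - 1.0) ** 2)

# Step 1: coarse discrete search over the acceleration-velocity lattice.
vs, accs = np.linspace(0, 10, 21), np.linspace(-2, 4, 13)
V, A = np.meshgrid(vs, accs, indexing="ij")
E = profile_entropy(V, A)
i, j = np.unravel_index(E.argmin(), E.shape)
v0, a0 = vs[i], accs[j]

# Step 2: fine 1-D search over velocity along constant-acceleration lines
# in the neighborhood of the coarse minimum.
v_fine = np.linspace(v0 - 0.5, v0 + 0.5, 201)
best = min((profile_entropy(v, a), v, a)
           for a in (a0 - 0.5, a0, a0 + 0.5) for v in v_fine)
print(best)  # (entropy, velocity, acceleration) estimate
```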
Entropy-Based Multichannel Stationary Measure for Characterization of Non-Stationary Patterns
In this work, we propose a novel approach for measuring the stationarity level of a multichannel time-series. The measure is based on a definition of stationarity over the time-varying spectrum, and it aims to quantify the relation between local stationarity (single-channel) and global dynamic behavior (multichannel dynamics). To assess the validity of the proposed approach, we use a well-known EEG-BCI database that was constructed to separate motor/imagery tasks. Based on the premise that imagination of movements implies an increase in EEG dynamics, we use as discriminant features the proposed measure computed over an estimate of the non-stationary components of the input time-series. As a measure of separability we use a Student's t-test, and the obtained results show that the measure is able to accurately detect the brain areas, projected on the scalp, where motor tasks are realized.
Digital Watermarking Based on Visual Cryptography and Histogram
Nowadays, robust and secure watermarking algorithms and their optimization have become the need of the hour. A watermarking algorithm is presented to achieve copyright protection for the owner based on visual cryptography, the histogram shape property and entropy. Both the host image and the watermark are preprocessed: the host image with a Butterworth filter, and the watermark with visual cryptography. Applying visual cryptography to the watermark generates two shares. One share is used for embedding the watermark, and the other is used for resolving any dispute with the aid of a trusted authority. The use of the histogram shape makes the process more robust against geometric and signal processing attacks. The combination of visual cryptography, the Butterworth filter, the histogram, and entropy makes the algorithm more robust and imperceptible, and strengthens the copyright protection of the owner.
An Earth Mover’s Distance Algorithm Based DDoS Detection Mechanism in SDN
Software-defined networking (SDN) provides a solution for a scalable network framework with decoupled control and data planes. However, this architecture is also susceptible to a particular distributed denial-of-service (DDoS) attack that can affect or even overwhelm the SDN network. The DDoS attack detection problem has to date been mostly researched as an entropy comparison problem; however, this approach does not fully exploit SDN, and its results are not accurate. In this paper, we propose a DDoS attack detection method which interprets DDoS detection as a signature matching problem formulated as an Earth Mover's Distance (EMD) model. Considering feasibility and accuracy, we further propose to define the cost function of the EMD to be a generalized Kullback-Leibler divergence. Simulation results show that our proposed method can detect DDoS attacks by comparing EMD values with those computed in the attack-free case. Moreover, our method can significantly increase the true positive rate of detection.
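For two histograms over the same bins, the EMD with the standard unit ground distance reduces to the L1 distance between cumulative distributions. The sketch below uses that classical cost rather than the paper's generalized Kullback-Leibler cost, and the traffic histograms are hypothetical:

```python
import numpy as np

def emd_1d(p, q):
    """Earth Mover's Distance between two 1-D histograms on the same bins
    (equal to the L1 distance between their CDFs for unit bin width)."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()   # normalize to probability mass
    return float(np.abs(np.cumsum(p - q)).sum())

# Hypothetical histograms of per-destination packet counts, bin by bin.
normal = np.array([30, 25, 20, 15, 10])   # baseline traffic signature
attack = np.array([ 2,  3,  5, 10, 80])   # concentrated, DDoS-like signature
sample = np.array([28, 26, 19, 16, 11])   # traffic under test

print(emd_1d(normal, sample), emd_1d(normal, attack))
```

A sample whose EMD to the baseline signature exceeds a chosen threshold would be flagged, which is the comparison step the abstract describes.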
Multiscale Entropy Analysis of Electroencephalogram (EEG) of Alcoholic and Control Subjects
Multiscale entropy (MSE) analysis is a useful technique recently developed to quantify the dynamics of physiological signals at different time scales. This study investigates electroencephalogram (EEG) signals to analyze the background activity of alcoholic and control subjects by inspecting the coarse-grained sequences formed at different time scales. EEG recordings of alcoholic and control subjects, acquired using 64 electrodes, were taken from the publicly available machine learning repository of the University of California (UCI). The MSE analysis was performed on the EEG data acquired from all electrodes of the alcoholic and control subjects. The Mann-Whitney rank test was used to find significant differences between the groups, and results were considered statistically significant for p-values < 0.05. The area under the receiver operating curve was computed to quantify the degree of separation between the groups. The mean ranks of the MSE values at all time scales, for all electrodes, were higher for control subjects than for alcoholic subjects; higher mean ranks represent higher complexity and vice versa. The findings indicated that EEG signals acquired through electrodes C3, C4, F3, F7, F8, O1, O2, P3 and T7 showed significant differences between alcoholic and control subjects at time scales 1 to 5. Moreover, all electrodes exhibit significance at some time scales. The highest accuracy and separation were obtained at the central region (C3 and C4) and at P3, O1, F3, F7, F8 and T8, while other electrodes such as Fp1, Fp2, P4 and F4 showed no significant results.
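The coarse-graining at the heart of MSE, followed by sample entropy at each scale, can be sketched as follows; white noise stands in for an EEG channel, and m = 2, r = 0.2·SD are the common defaults:

```python
import numpy as np

def coarse_grain(x, scale):
    """Average consecutive, non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy with tolerance r * std(x); self-matches excluded."""
    r *= x.std()

    def count(m):
        emb = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
        d = np.abs(emb[:, None] - emb[None, :]).max(axis=2)
        return (d <= r).sum() - len(emb)   # exclude self-matches

    return -np.log(count(m + 1) / count(m))

rng = np.random.default_rng(0)
noise = rng.standard_normal(1000)   # stand-in for one EEG channel
mse = [sample_entropy(coarse_grain(noise, s)) for s in range(1, 6)]
print(np.round(mse, 2))             # one entropy value per time scale 1..5
```

The resulting curve of entropy versus scale is the per-electrode feature that the Mann-Whitney test then compares between the alcoholic and control groups.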
Real-Time Episodic Memory Construction for Optimal Action Selection in Cognitive Robotics
The three most important components in the cognitive architecture for cognitive robotics are memory representation, memory recall, and the action selection performed by the executive. In this paper, action selection, performed by the executive, is defined as a memory quantification and optimization process. The methodology describes the real-time construction of episodic memory through semantic memory optimization. The optimization is performed by set-based particle swarm optimization, using an adaptive entropy memory quantification approach for fitness evaluation. The performance of the approach is experimentally evaluated by simulation, in which a UAV is tasked with the collection and delivery of a medical package. The experiments show that the UAV dynamically uses the episodic memory to autonomously control its velocity while successfully completing its mission.
The Bayesian Premium Under Entropy Loss
Credibility theory is an experience rating technique in actuarial science, one of the quantitative tools that allow insurers to perform experience rating, that is, to adjust future premiums based on past experience. It is commonly used in automobile insurance, workers' compensation premiums, and IBNR (incurred but not reported) claims, where credibility theory can be used to estimate the claim size. In this study, we focus on a popular tool in credibility theory, the Bayesian premium estimator, considering the Lindley distribution as the claim distribution. We derive this estimator under entropy loss, which is asymmetric, and under squared-error loss, which is symmetric, with informative and non-informative priors. In a purely Bayesian setting, the prior distribution represents the insurer's belief about the insured's risk level, which is updated after collection of the insured's data at the end of the period. However, the explicit form of the Bayesian premium when the prior is not a member of the exponential family can be quite difficult to obtain, as it involves a number of integrations that are not analytically solvable. The paper solves this problem by deriving the estimator using a numerical approximation (the Lindley approximation), one of the methods suited to such problems: it approximates the ratio of the integrals as a whole and produces a single numerical result. A simulation study using the Monte Carlo method is then performed to evaluate the estimator, and the mean squared error is used to compare the Bayesian premium estimator under the above loss functions.
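As a toy illustration of the two Bayes estimators (not the paper's Lindley-distribution and Lindley-approximation derivation), consider an exponential likelihood with a conjugate gamma prior, where both estimators have closed forms: under squared-error loss the Bayes estimator is the posterior mean, while under entropy loss L(d, θ) = d/θ − log(d/θ) − 1 it is the inverse of the posterior mean of 1/θ. All numbers below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = 0.5
claims = rng.exponential(1 / theta_true, size=50)     # exponential claim sizes, rate theta

a, b = 2.0, 2.0                                       # Gamma(a, b) prior on theta (assumed)
a_post, b_post = a + len(claims), b + claims.sum()    # conjugate Gamma posterior

# Bayes estimator under squared-error loss: posterior mean E[theta | data]
theta_se = a_post / b_post
# Bayes estimator under entropy loss: (E[1/theta | data])^-1 = (a_post - 1) / b_post
theta_ent = (a_post - 1) / b_post

# check the analytic forms against Monte Carlo posterior draws
draws = rng.gamma(a_post, 1 / b_post, size=200_000)
assert abs(draws.mean() - theta_se) < 0.01
assert abs(1 / np.mean(1 / draws) - theta_ent) < 0.01
```

The entropy-loss estimate is always slightly smaller than the squared-error estimate, reflecting the asymmetry of the loss; the paper's contribution is obtaining such estimators when these integrals have no closed form.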
A Relative Entropy Regularization Approach for Fuzzy C-Means Clustering Problem
Clustering is an unsupervised machine learning technique whose aim is to extract data structures in which similar data objects are grouped in the same cluster, whereas dissimilar objects are grouped in different clusters. Clustering methods are widely utilized in different fields, such as image processing, computer vision, and pattern recognition. Fuzzy c-means (FCM) clustering is one of the best known fuzzy clustering methods. It is based on solving an optimization problem in which a given cost function is minimized. This minimization aims to decrease the dissimilarity inside clusters, where dissimilarity is measured by the distances between data objects and cluster centers. The degree of belonging of a data point to a cluster is measured by a membership value lying in the interval [0, 1]. In FCM clustering, the membership degrees are constrained so that the sum of a data object's memberships over all clusters equals one. This constraint can cause several problems, especially when the data objects lie in a noisy space. Regularization approaches have been incorporated into the fuzzy c-means technique; they introduce additional information in order to solve an ill-posed optimization problem. In this study, we focus on regularization by a relative entropy approach, where the optimization problem aims to minimize the dissimilarity inside clusters. Our objective is to find an appropriate membership degree for each data object, because appropriate membership degrees lead to accurate clustering results. Our clustering results on synthetic data sets, Gaussian-based data sets, and real-world data sets show that the proposed model achieves good accuracy.
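One common entropy-regularized variant of FCM (sketched here under that assumption; the paper's relative-entropy model may differ in detail) replaces the usual fuzzifier with an entropy penalty, which makes the membership update a closed-form softmax of the distances:

```python
import numpy as np

def entropy_regularized_fcm(X, c, lam=1.0, n_iter=100):
    """Fuzzy clustering minimizing  sum_ij u_ij * ||x_i - v_j||^2
    + lam * sum_ij u_ij * log(u_ij),  subject to sum_j u_ij = 1."""
    # deterministic farthest-point initialization of the c centers
    V = [X[0]]
    for _ in range(1, c):
        d = np.min([((X - v) ** 2).sum(axis=1) for v in V], axis=0)
        V.append(X[int(d.argmax())])
    V = np.array(V)

    for _ in range(n_iter):
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)   # squared distances
        # closed-form membership update: row-stabilized softmax of -d2 / lam
        U = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / lam)
        U /= U.sum(axis=1, keepdims=True)
        V = (U.T @ X) / U.sum(axis=0)[:, None]                # weighted center update
    return U, V
```

The regularization weight `lam` plays the role of a temperature: larger values give softer memberships, which is how the entropy term controls the sensitivity of the memberships to noise.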
Influence of Sintering Temperature on Microhardness and Tribological Properties of Equi-Atomic Ti-Al-Mo-Si-W Multicomponent Alloy
Tribological failure of materials during application can lead to catastrophic events which also carry economic penalties. High entropy alloys (HEAs) have shown outstanding tribological properties in applications, such as moving mechanical parts, where high friction is encountered. This work aims to investigate the effect of sintering temperature on the microhardness and tribological properties of a novel equiatomic TiAlMoSiW HEA fabricated via spark plasma sintering. The effect of the spark plasma sintering temperature on morphological evolution and phase formation was also investigated. The microstructure and the phases formed in the developed HEAs were examined using scanning electron microscopy (SEM) and X-ray diffractometry (XRD), respectively. The microhardness and tribological properties were studied using a diamond-based microhardness tester and an Rtec tribometer. The developed HEAs showed improved mechanical properties as the sintering temperature increased.
Utilizing Waste Heat from Thermal Power Plants to Generate Power by Modelling an Atmospheric Vortex Engine
Convective vortices are normal features of the atmosphere, which absorbs lower-entropy energy at higher temperatures than those at which it rejects higher-entropy energy to space. From thermodynamic efficiency considerations, it has been predicted that the intensity of convective vortices depends on the depth of the convective layer. The atmospheric vortex engine is proposed as a device for producing mechanical energy by means of an artificially generated vortex. The operation of the engine rests on the fact that the atmosphere is heated from the bottom and cooled from the top. By generating an artificial vortex, it is intended to eliminate the physical solar updraft tower and thereby reduce the capital cost of solar chimney power plants. The study presents the essentials of the atmospheric vortex engine and reviews the state of the art on the subject. Moreover, the study discusses the idea of using solar energy as the heat source to drive the system. Overall, the system is feasible and promising for electrical power production.
Entropy Generation Analysis Due to Steady Natural Convection of a Newtonian Fluid in a Square Enclosure
Thermal control in many systems is widely accomplished by applying a mixed convection process, owing to its low cost, reliability, and easy maintenance. Typical applications include aircraft electronic equipment, rotating-disc heat exchangers, turbomachinery, and nuclear reactors. Natural convection in an inclined square enclosure heated via a wall heater has been studied numerically. The finite volume method is used to solve the momentum and energy equations in the stream function-vorticity formulation. The right and left walls are kept at a constant temperature, while the other walls are adiabatic. The range of the inclination angle covers a whole revolution. The method is validated for a vertical cavity. A general power-law dependence of the Nusselt number on the Rayleigh number, with the coefficient and exponent as functions of the inclination angle, is presented. For a fixed Rayleigh number, the heat transfer is found to increase or decrease as the inclination angle varies.
Application of Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and Multipoint Optimal Minimum Entropy Deconvolution in Railway Bearings Fault Diagnosis
Although the measured vibration signal contains rich information on machine health conditions, the white-noise interference and the discrete harmonics coming from blades, shafts and gear meshing make the fault diagnosis of rolling element bearings difficult. In order to overcome the interference of useless signals, a new fault diagnosis method combining Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and Multipoint Optimal Minimum Entropy Deconvolution (MOMED) is proposed for the fault diagnosis of high-speed train bearings. Firstly, the CEEMDAN technique is applied to adaptively decompose the raw vibration signal into a series of finite intrinsic mode functions (IMFs) and a residue. Compared with Ensemble Empirical Mode Decomposition (EEMD), CEEMDAN provides an exact reconstruction of the original signal and a better spectral separation of the modes, which improves the accuracy of fault diagnosis. An effective sensitivity index based on the Pearson correlation coefficients between the IMFs and the raw signal is adopted to select the sensitive IMFs that contain bearing fault information. The composite signal of the sensitive IMFs is used for further fault identification. Next, for the purpose of identifying the fault information precisely, MOMED is utilized to enhance the periodic impulses in the composite signal. As a non-iterative method, MOMED has better deconvolution performance than classical deconvolution methods such as Minimum Entropy Deconvolution (MED) and Maximum Correlated Kurtosis Deconvolution (MCKD). Third, envelope spectrum analysis is applied to detect the existence of a bearing fault. Simulated bearing fault signals with white noise and discrete harmonic interference are used to validate the effectiveness of the proposed method. Finally, the superiority of the proposed method is further demonstrated on high-speed train bearing fault datasets measured on a test rig.
The analysis results indicate that the proposed method is highly practicable.
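The sensitivity-index selection step can be sketched on synthetic components (the rule below, keeping IMFs whose correlation with the raw signal exceeds the mean correlation, is one simple heuristic, not necessarily the paper's exact index):

```python
import numpy as np

def sensitive_imf_composite(imfs, raw):
    """Rank IMFs by |Pearson correlation| with the raw signal and sum those
    above the mean correlation into a composite signal."""
    corr = np.array([abs(np.corrcoef(imf, raw)[0, 1]) for imf in imfs])
    keep = corr >= corr.mean()
    return imfs[keep].sum(axis=0), corr

# toy demo: a periodic 'fault' tone buried in noise, with fake IMFs = [tone, noise]
t = np.linspace(0, 1, 2000, endpoint=False)
tone = np.sin(2 * np.pi * 90 * t)                    # surrogate for the fault component
noise = 0.3 * np.random.default_rng(0).normal(size=t.size)
raw = tone + noise
imfs = np.stack([tone, noise])                        # stand-ins for CEEMDAN output
composite, corr = sensitive_imf_composite(imfs, raw)
```

In the actual method the IMFs would come from CEEMDAN and the composite would then be passed to MOMED and envelope spectrum analysis.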
Frank Norris’ McTeague: An Entropic Melodrama
According to Naturalistic principles, human destiny, in the form of blind chance and determinism, entraps the individual, so man is a defenceless creature unable to escape from the ruthless paws of a stoical universe. In Naturalism, nonetheless, melodrama mirrors a conscious alternative with a peculiar function. A typical American Naturalistic character thus cannot be a subject for social criticism of American society, since such characters are not victims of ongoing virtual slavery, the capitalist system, or a ruined milieu, but of their own volition and, more importantly, their character frailty. Through a Postmodern viewpoint, each Naturalistic work can encompass entropic trends and changes culminating in entire failure and devastation. Frank Norris in McTeague displays the futile struggles of ordinary men and how they end up brutes. McTeague encompasses intoxication, abuse, violation, and ruthless homicides. Norris' depiction of the falling individual as a demon represents the entropic dimension of Naturalistic novels. McTeague's defeat is somewhat his own fault, the result of his own blunders and resolution, not the result of sheer accident. Throughout the novel, each character is a kind of insane quester indicating McTeague's decadence and, by inference, the decadence of Western civilisation. McTeague seems to designate Norris' solicitude for a community fabricated by the elements of negative human demeanours and conduct, carrying acute symptoms of infectious dehumanisation. The aim of this article is to illustrate how one specific negative human disposition can gradually, like a running fire, spread everywhere and burn everything in its path. The author applies the concept of entropy metaphorically to describe the individual devolutions that together comprise community entropy in McTeague, a dying universe.
The Relationship between Rhythmic Complexity and Listening Engagement as a Proxy for Perceptual Interest
Although it has been confirmed by multiple studies, the inverted-U relationship between stimulus complexity and preference (liking) remains contentious. Research aimed at substantiating the model is largely reliant upon anecdotal self-assessments of subjects and basic measures of complexity, leaving potential confounds unresolved. This study attempts to address the topic by assessing listening time as a behavioral correlate of liking (on the assumption that engagement prolongs listening time) and by looking for latent factors underlying several measures of rhythmic complexity. Participants listened to groups of rhythms, stopping each one when they started to lose interest, and were asked to rate each rhythm in each group in terms of interest, complexity, and preference. Subjects were not informed that the time spent listening to each rhythm was the primary measure of interest. The hypothesis that listening time demonstrates the same inverted-U relationship with complexity as verbal reports of liking was confirmed using a variety of metrics for rhythmic complexity, including meter-dependent measures of syncopation and meter-independent measures of entropy.
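A meter-independent entropy measure of the kind mentioned can be illustrated by the Shannon entropy of the inter-onset-interval distribution (an example metric, not necessarily one of those the study used):

```python
import numpy as np
from collections import Counter

def ioi_entropy(onsets):
    """Shannon entropy (bits) of the inter-onset-interval distribution:
    a simple meter-independent rhythmic-complexity measure."""
    iois = np.diff(sorted(onsets))                       # intervals between onsets
    counts = np.array(list(Counter(np.round(iois, 3)).values()), float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# an isochronous pulse has zero entropy; a mixed rhythm has more
print(ioi_entropy([0, 1, 2, 3, 4]))        # all intervals equal
print(ioi_entropy([0, 1, 1.5, 3, 3.25]))   # four distinct intervals
```

Such a measure depends only on the interval distribution, not on any assumed meter, which is what distinguishes it from syncopation-based metrics.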
The Analysis of Changes in Urban Hierarchy of Isfahan Province in the Fifty-Year Period (1956-2006)
The appearance of cities and urbanism is one of the important processes that have affected social communities. Industrialization and urbanism developed alongside each other through history, and they had a simple relationship for more than six thousand years, that is, since the appearance of the first cities. In the 18th century, with the emergence of industrial capitalism, progressive development took place in urbanism worldwide. In Iran, the city of each region made its decisions by itself, and the regional capital was the only central city, which, without any hierarchy, controlled its realm. However, over the last three decades, changes in political, social, and economic conditions have altered rural-urban relationships, the variety of functions performed by cities, and the systematic urban network in Iran. Today, the urban system displays a very wide spatial and functional imbalance. In Isfahan, the trend of urbanism is like that of the rest of Iran, and the urban hierarchy is neither suitable nor normal. This article is quantitative and analytical. The statistical population consists of the cities of Isfahan Province, and the changes in the urban network and its hierarchy during a fifty-year period (1956-2006) are surveyed. The data have been analyzed using the rank-size model and the entropy index, and the entropy index of the primate city and the urban hierarchy of Isfahan Province are presented. The urban share of the province's residents rose from 55% to 83% (2006). The analysis reflects a mismatch and imbalance between the cities: the entropy index was 0.91 in 1956 and decreased to 0.63 in 2006. Isfahan city is the primate city throughout the whole period. Moreover, the second and third cities show a population gap relative to the other cities and, finally, the cities do not follow the rank-size system.
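The two instruments of the analysis can be sketched as follows (the city populations below are hypothetical; normalizing the Shannon entropy by ln n, so that 1 means a perfectly even hierarchy, is one common convention):

```python
import math

def entropy_index(populations):
    """Normalized Shannon entropy of city-size shares: values near 1 indicate
    an even hierarchy, values near 0 indicate extreme primacy."""
    total = sum(populations)
    p = [x / total for x in populations]
    h = -sum(pi * math.log(pi) for pi in p if pi > 0)
    return h / math.log(len(populations))

def rank_size_expected(p1, n):
    """Zipf rank-size rule: the city of rank r is expected to hold P1 / r people."""
    return [p1 / r for r in range(1, n + 1)]

even = [100, 90, 80, 70]        # balanced system
primate = [1000, 60, 50, 40]    # one dominant city
print(entropy_index(even), entropy_index(primate))
```

A falling entropy index over time, as reported for Isfahan Province (0.91 to 0.63), corresponds to population concentrating in the primate city.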
Image Encryption Using Eureqa to Generate an Automated Mathematical Key
Applying traditional symmetric cryptography algorithms for encryption and decryption provides immunity of the secret keys against different attacks. One popular technique for generating automated secret keys is evolutionary computing using the Eureqa API tool, which gained attention in 2013. In this paper, we generate automated secret keys for image encryption and decryption using the Eureqa API (a tool used in the evolutionary computing technique). The Eureqa API models pseudo-random input data obtained from a suitable source to generate secret keys. The validity of the generated secret keys is investigated by performing various statistical tests (histogram, chi-square, correlation of two adjacent pixels, correlation between original and encrypted images, entropy, and key sensitivity). Experimental results obtained from these methods, including histogram analysis, correlation coefficients, entropy, and key sensitivity, show that the proposed image encryption algorithms are secure and reliable, with the potential to be adapted for secure image communication applications.
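Two of the statistical tests mentioned, histogram entropy and adjacent-pixel correlation, can be sketched as follows (the random array here merely stands in for a ciphertext image):

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy (bits) of an 8-bit image's gray-level histogram;
    a well-encrypted image should approach the maximum of 8 bits."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def adjacent_correlation(img):
    """Correlation of horizontally adjacent pixels; near 0 after encryption."""
    a = img[:, :-1].ravel().astype(float)
    b = img[:, 1:].ravel().astype(float)
    return float(np.corrcoef(a, b)[0, 1])

rng = np.random.default_rng(0)
cipher_like = rng.integers(0, 256, (256, 256), dtype=np.uint8)  # stand-in ciphertext
print(image_entropy(cipher_like), adjacent_correlation(cipher_like))
```

Natural images score well below 8 bits and show adjacent-pixel correlations close to 1, so both metrics moving toward their ideal values is evidence that the encryption has destroyed the original structure.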
Gray Level Image Encryption
The aim of this paper is image encryption using a Genetic Algorithm (GA). The proposed encryption method consists of two phases. In the modification phase, pixel locations are altered to reduce the correlation among adjacent pixels. Then, pixel values are changed in the diffusion phase to encrypt the input image. Both phases are performed by a GA with binary chromosomes. For the modification phase, these binary patterns are generated by the Local Binary Pattern (LBP) operator, while for the diffusion phase the binary chromosomes are obtained by Bit Plane Slicing (BPS). The initial population in the GA comprises rows and columns of the input image. Instead of a subjective selection of parents from this initial population, a random generator with a predefined key is utilized; this key is needed to decrypt the coded image and reconstruct the initial input image. The fitness function is defined as the average number of 0-to-1 transitions in the LBP image and the histogram uniformity, in the modification and diffusion phases, respectively. The randomness of the encrypted image is measured by entropy, correlation coefficients, and histogram analysis. Experimental results show that the proposed method is fast enough and can be used effectively for image encryption.
Genetic Algorithm for In-Theatre Military Logistics Search-and-Delivery Path Planning
Discrete search path planning in a time-constrained, uncertain environment relying upon imperfect sensors is known to be hard, and the problem-solving techniques proposed so far to compute near real-time efficient path plans are mainly limited to providing solutions a few moves ahead. A new information-theoretic, open-loop decision model explicitly incorporating false-alarm sensor readings is presented to solve a single-agent military logistics search-and-delivery path planning problem with anticipated feedback. The decision model consists in minimizing expected entropy considering anticipated possible observation outcomes over a given time horizon. The model captures the uncertainty associated with observation events for all possible scenarios. Entropy represents a measure of uncertainty about the searched target location. Feedback information resulting from possible sensor observation outcomes along the projected path plan is exploited to update anticipated unit target-occupancy beliefs. For the first time, a compact belief update formulation is generalized to explicitly include false-positive observation events that may occur during plan execution. A novel genetic algorithm is then proposed to efficiently solve search path planning, providing near-optimal solutions for practical, realistic problem instances. Given the run-time performance of the algorithm, a natural extension to a closed-loop environment, progressively integrating real visit outcomes on a rolling time horizon, can easily be envisioned. Computational results show the value of the approach in comparison to alternate heuristics.
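The single-cell form of a belief update with false positives can be sketched as follows (pd and pf are illustrative values; the paper's compact formulation generalizes this over anticipated observation sequences along a whole path):

```python
def update_belief(b, cell, observed, pd=0.9, pf=0.1):
    """Bayes update of target-occupancy beliefs after searching `cell`.
    pd = detection probability, pf = false-alarm probability."""
    # likelihood of the observation under each target-location hypothesis
    like = []
    for j in range(len(b)):
        if observed:
            like.append(pd if j == cell else pf)
        else:
            like.append(1 - pd if j == cell else 1 - pf)
    z = sum(l * p for l, p in zip(like, b))   # normalizing constant
    return [l * p / z for l, p in zip(like, b)]

b = [0.25, 0.25, 0.25, 0.25]                  # uniform prior over 4 cells
b = update_belief(b, cell=0, observed=False)  # a miss: belief in cell 0 drops
print(b)
```

The entropy of the updated belief vector is then the uncertainty measure the planner seeks to minimize in expectation over such outcomes.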
Determination of the Cooling Rate Dependency of High Entropy Alloys Using a High-Temperature Drop-on-Demand Droplet Generator
High entropy alloys (HEAs), having adjustable properties and enhanced stability compared with intermetallic compounds, are solid-solution alloys that contain five or more principal elements in almost equal atomic percentages. The concept of producing such alloys paves the way for developing advanced materials with unique properties. However, the synthesis of such alloys may require advanced processes with high cooling rates, depending on which alloying elements are used. In this study, microspheres of different diameters of HEAs were generated via a drop-on-demand droplet generator and subsequently solidified during free fall in an argon atmosphere. Such droplet generators can generate individual droplets with high reproducibility regarding droplet diameter, trajectory and cooling, while avoiding any interparticle momentum or thermal coupling. Metallographic as well as X-ray diffraction investigations for each diameter of the generated metallic droplets were then carried out to obtain information about the microstructural state. To calculate the cooling rate of the droplets, a droplet cooling model was developed and validated using model alloys such as CuSn6 and AlCu4.5, for which the correlation of secondary dendrite arm spacing (SDAS) and cooling rate is well known. Droplets were generated from these alloys and their SDAS was determined using quantitative metallography. The cooling rate was then determined from the SDAS and used to validate the cooling rates obtained from the droplet cooling model. The application of this model to the HEA then yields the cooling-rate dependency and hence the identification of process windows for the synthesis of these alloys. These process windows were then compared with cooling rates obtained in processes such as powder production, spray forming, selective laser melting and casting, to predict whether synthesis is possible with these processes.
Conjugate Mixed Convection Heat Transfer and Entropy Generation of Cu-Water Nanofluid in an Enclosure with Thick Wavy Bottom Wall
Mixed convection of Cu-water nanofluid in an enclosure with a thick wavy bottom wall has been investigated numerically. A co-ordinate transformation method is used to transform the computational domain into an orthogonal co-ordinate system. The governing equations in the computational domain are solved through a pressure-correction based iterative algorithm. The fluid flow and heat transfer characteristics are analyzed for a wide range of Richardson number (0.1 ≤ Ri ≤ 5), nanoparticle volume concentration (0.0 ≤ ϕ ≤ 0.2), amplitude (0.0 ≤ α ≤ 0.1) of the wavy thick bottom wall and the wave number (ω) at a fixed Reynolds number. The obtained results show that the heat transfer rate increases remarkably on adding the nanoparticles. The heat transfer rate depends on the wavy wall amplitude and wave number, and decreases with increasing Richardson number for fixed amplitude and wave number. The Bejan number and the entropy generation are determined to analyze the thermodynamic optimization of the mixed convection.
Thermodynamic Approach of Lanthanide-Iron Double Oxides Formation
The standard Gibbs energy of formation ΔGfor(298.15) of lanthanide-iron double oxides with garnet-type crystal structure, R3Fe5O12 (RIG, where R is a rare-earth ion), from the initial oxides is evaluated. The calculation is based on data for the standard entropies S298.15 and standard enthalpies of formation ΔH298.15 of the compounds involved in garnet synthesis. The Gibbs energy of formation is presented as a temperature function ΔGfor(T) for the range 300-1600 K. The necessary starting thermodynamic data were obtained from calorimetric studies of the heat capacity-temperature functions and by using a semi-empirical method for the calculation of ΔH298.15 of formation. The thermodynamic functions at standard temperature, enthalpy, entropy and Gibbs energy, are recommended as reference data for technological evaluations. Across the isostructural series of rare-earth iron garnets, the correlation between thermodynamic properties and characteristics of the lanthanide ions is elucidated.
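Schematically, the temperature function combines the 298.15 K values with the heat-capacity integrals in the standard way (symbols as in the abstract, written for the formation reaction from the oxides):

```latex
\Delta G_{\mathrm{for}}(T) \;=\; \Delta H_{298.15} \;-\; T\,\Delta S_{298.15}
\;+\; \int_{298.15}^{T} \Delta C_p \,\mathrm{d}T
\;-\; T \int_{298.15}^{T} \frac{\Delta C_p}{T}\,\mathrm{d}T ,
```

which reduces to the familiar ΔG = ΔH − TΔS when the heat-capacity change of the reaction is neglected; this is why the calorimetric Cp(T) functions are part of the required input data.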
Content-Based Image Retrieval Using HSV Color Space Features
In this paper, a method is provided for content-based image retrieval. A content-based image retrieval system searches a query image, based on its visual content, in an image database to retrieve similar images. In this paper, with the aim of simulating the human visual system's sensitivity to image edges and color features, the concept of the color difference histogram (CDH) is used. The CDH captures the perceptual color difference between two neighboring pixels with regard to colors and edge orientations. Since the HSV color space is close to the human visual system, the CDH is calculated in this color space. In addition, to improve the color features, the color histogram in HSV color space is also used as a feature. Among the extracted features, efficient features are selected using entropy and correlation criteria. The final features extract the content of images most efficiently. The proposed method has been evaluated on three standard databases: Corel 5k, Corel 10k and UKBench. Experimental results show that the accuracy of the proposed image retrieval method is significantly improved compared to recently developed methods.
A Study on the Assessment of Prosthetic Infection after Total Knee Replacement Surgery
In this study, patients who had undergone total knee replacement surgery, drawn from the 2010 National Health Insurance database, were adopted as the study participants. The important factors were screened and selected through literature collection and interviews with physicians. Through the Cross-Entropy Method (CE), Genetic Algorithm Logistic Regression (GALR), and Particle Swarm Optimization (PSO), the weights of the factors were obtained. In addition, the weights from the respective algorithms, coupled with Excel VBA, were used to construct a Case-Based Reasoning (CBR) system. Statistical tests show that GALR and PSO produced no significant differences, and the accuracy of both models was above 97%. Moreover, the area under the ROC curve for these two models also exceeded 0.87. This study shall serve as a reference for medical staff and an aid in the clinical assessment of infections, in order to effectively enhance medical service quality and efficiency, avoid unnecessary medical waste, and substantially contribute to resource allocation in medical institutions.
Structural, Magnetic and Magnetocaloric Properties of Iron-Doped Nd₀.₆Sr₀.₄MnO₃ Perovskite
The influence of Fe doping on the structural, magnetic and magnetocaloric properties of Nd₀.₆Sr₀.₄FeₓMn₁₋ₓO₃ (0 ≤ x ≤ 0.5) was investigated. The samples were synthesized by the auto-combustion sol-gel method. The phase purity, crystallinity, and structural properties of all prepared samples were examined by X-ray diffraction. XRD refinement indicates that the samples crystallize in a single orthorhombic phase with the Pnma space group. Temperature-dependent magnetization measurements under an applied magnetic field of 0.02 T reveal that the samples with x = 0.0, 0.1, 0.2 and 0.3 exhibit a paramagnetic (PM) to ferromagnetic (FM) transition with decreasing temperature. The Curie temperature decreases with increasing Fe content, from 256 K for x = 0.0 to 80 K for x = 0.3, due to the strengthening of antiferromagnetic superexchange (SE) interaction coupling. Moreover, magnetization as a function of applied magnetic field (M-H curves) was measured at 2 K and 300 K; the results of these measurements confirm the temperature-dependent magnetization measurements. The magnetic entropy change |ΔSM| was evaluated using Maxwell's relation. The maximum values of the magnetic entropy change |ΔSM,max| for x = 0.0, 0.1, 0.2 and 0.3 are found to be 15.35, 5.13, 3.36 and 1.08 J/(kg·K), respectively, for an applied magnetic field of 9 T. Our results on the magnetocaloric properties suggest that the parent sample Nd₀.₆Sr₀.₄MnO₃ could be a good refrigerant for low-temperature magnetic refrigeration.
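The Maxwell-relation evaluation of the entropy change can be sketched numerically (the magnetization surface below is a toy mean-field-like form, not the measured data):

```python
import numpy as np

def delta_SM(T, H, M):
    """Magnetic entropy change from the Maxwell relation,
    dS_M(T) = integral_0^Hmax (dM/dT)_H dH, evaluated on a grid
    M[i, j] = M(T[i], H[j]) by finite differences."""
    dMdT = np.gradient(M, T, axis=0)                          # (dM/dT) at constant H
    # trapezoid rule over the field axis
    return ((dMdT[:, 1:] + dMdT[:, :-1]) / 2 * np.diff(H)).sum(axis=1)

# toy M(T, H): ferromagnetic-like drop around a transition at 200 K
T = np.linspace(100, 300, 41)        # temperatures, K
H = np.linspace(0, 9, 46)            # applied fields, T
M = (1 - np.tanh((T[:, None] - 200) / 30)) * np.tanh(H[None, :])
dS = delta_SM(T, H, M)
```

Because M drops with T near the transition, (∂M/∂T) is negative there, so |ΔS_M| peaks near the Curie temperature, which is the behavior exploited for magnetic refrigeration.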
Multi-Criteria Test Case Selection Using Ant Colony Optimization
Test case selection means selecting the subset of only the fit test cases and removing the unfit, ambiguous, redundant, and unnecessary test cases, which in turn improves the quality and reduces the cost of software testing. Test case optimization is the problem of finding the best subset of test cases from a pool of test cases to be audited, meeting all objectives of testing concurrently. However, most research has evaluated the fitness of test cases only on the single parameter of fault-detection capability and has optimized the test cases using a single objective. In the proposed approach, nine parameters are considered for test case selection, and the best subset of parameters for test case selection is obtained using an Interval Type-2 Fuzzy Rough Set. Test case selection is done in two stages. The first stage is a fuzzy entropy-based filtration technique, used for estimating and reducing the ambiguity in test case fitness evaluation and selection. The second stage is an ant colony optimization-based wrapper technique with a forward search strategy, employed to select test cases from the reduced test suite of the first stage. The results are evaluated using the coverage parameters Precision, Recall, F-Measure, APSC, APDC, and SSR. The experimental evaluation demonstrates that considerable computational effort can be avoided by this approach.
Asymmetrical Informative Estimation for Macroeconomic Model: Special Case in the Tourism Sector of Thailand
This paper applies an asymmetric-information concept to the estimation of a macroeconomic model of the tourism sector in Thailand. The variables analyzed statistically are Thailand's international and domestic tourism revenues, the expenditures of foreign and domestic tourists, service investments by the private sector, service investments by the government of Thailand, Thailand's service imports and exports, and net service income transfers. All data are time-series indices observed between 2002 and 2015. Empirically, the tourism multiplier and accelerator were estimated by two statistical approaches. The first was the Generalized Method of Moments (GMM) model, based on the assumption that the tourism market in Thailand had perfect information (symmetrical data). The second was the Maximum Entropy Bootstrapping approach (MEboot), based on a process that attempts to deal with imperfect information and to reduce uncertainty in data observations (asymmetrical data). In addition, the tourism leakages were investigated with a simple model based on the injections-and-leakages concept. The empirical findings show that the parameters computed by the MEboot approach differ from those of the GMM method. However, both the MEboot estimation and the GMM model suggest that Thailand's tourism sector is in a period capable of stimulating the economy.
Accelerated Molecular Simulation: A Convolution Approach
Computational drug design is often based on molecular dynamics simulations of molecular systems. Molecular dynamics can be used to simulate, e.g., the binding and unbinding event of a small drug-like molecule at the active site of an enzyme or a receptor. However, the time scale of the overall binding event is many orders of magnitude longer than the time scale of the simulation; thus, there is a need to speed up molecular simulations. In order to do so, the molecular dynamics trajectories have to be steered out of the local minimizers of the potential energy surface, the so-called metastabilities, of the molecular system. Increasing the kinetic energy (temperature) is one possibility for accelerating the simulated processes. However, with temperature the entropy of the molecular system increases, too, and this kind of steering is not directed enough to drive the molecule out of the minimum toward the saddle point. In this article, we present a new mathematical idea of how a potential energy surface can be changed in such a way that entropy is kept under control while the trajectories are still steered out of the metastabilities. In order to compute the unsteered transition behaviour based on a steered simulation, we propose to use extrapolation methods. In the end, we show mathematically that our method accelerates the simulations along the direction in which the curvature of the potential energy surface changes the most, i.e., from local minimizers toward saddle points.
Energy Conservation in Heat Exchangers
Energy conservation is one of the major concerns of the modern high-tech era due to the limited amount of energy resources and the increasing cost of energy. Predicting an efficient use of energy in thermal systems like heat exchangers can only be achieved if the second law of thermodynamics is accounted for. The performance of heat exchangers can be substantially improved by many passive heat transfer augmentation techniques. These techniques improve the heat transfer rate and increase the exchange surface, but on the other side they also increase the friction factor associated with the flow. This raises the question of how to employ these passive techniques so as to minimize the loss of useful energy. The objective of the present study is to use a porous substrate attached to the walls as a passive enhancement technique in heat exchangers and to find the compromise between the hydrodynamic and thermal performances under turbulent flow conditions, using a second-law approach. A modified k-ε model is used to simulate the turbulent flow in the porous medium, and the turbulent shear flow is accounted for in the entropy generation equation. A numerical model based on the finite volume method is employed to discretize the governing equations. The effects of several parameters are investigated, such as the porous substrate properties and the flow conditions. Results show that, for certain values of the porous layer thickness, permeability, and effective thermal conductivity, the rate of entropy production is minimized.
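For reference, the local (volumetric) entropy generation rate that such second-law analyses minimize is commonly written in the Bejan form below. This is a standard textbook expression, not necessarily the paper's exact turbulent formulation; in the turbulent case the effective properties fold in the k-ε contributions.

```latex
\dot{S}'''_{\mathrm{gen}}
  = \underbrace{\frac{k_{\mathrm{eff}}}{T^{2}}\left(\nabla T\right)^{2}}_{\text{heat-transfer irreversibility}}
  + \underbrace{\frac{\mu_{\mathrm{eff}}}{T}\,\Phi}_{\text{fluid-friction irreversibility}}
```

Here $\Phi$ is the viscous dissipation function, and $k_{\mathrm{eff}}$, $\mu_{\mathrm{eff}}$ denote effective (molecular plus turbulent) conductivity and viscosity.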
Investigation of the EEG Signal Parameters during Epileptic Seizure Phases in Consequence to the Application of External Healing Therapy on Subjects
Epileptic seizure is a disease in which electrical charge in the brain flows abruptly, resulting in abnormal activity by the subject. One percent of the total world population suffers epileptic seizure attacks. Due to the abrupt flow of charge, EEG (electroencephalogram) waveforms change, and numerous spikes and sharp waves appear in the EEG signals. Detection of epileptic seizure by conventional methods is time-consuming, and many methods have evolved to detect it automatically. The initial part of this paper reviews the techniques used to detect epileptic seizure automatically. Automatic detection is based on feature extraction and classification patterns; for better accuracy, decomposition of the signal is required before feature extraction. A number of parameters are calculated by researchers using different techniques, e.g., approximate entropy, sample entropy, fuzzy approximate entropy, intrinsic mode functions, cross-correlation, etc., to discriminate between a normal signal and an epileptic seizure signal. The main objective of this review is to present the variations in the EEG signals at both stages: (i) interictal (recording between epileptic seizure attacks) and (ii) ictal (recording during the epileptic seizure), using the most appropriate methods of analysis to provide better healthcare diagnosis. The paper then investigates the effects of a noninvasive healing therapy on the subjects by studying the EEG signals using recent signal processing techniques. The study was conducted with Reiki as the healing technique, beneficial for restoring balance in cases of body-mind alterations associated with an epileptic seizure. Reiki is practiced around the world and is recommended for different health services as a treatment approach. Reiki is an energy medicine, specifically a biofield therapy developed in Japan in the early 20th century.
It is a system involving the laying on of hands to stimulate the body's natural energetic system. Earlier studies have shown an apparent connection between Reiki and the autonomic nervous system. The Reiki sessions are applied by an experienced therapist, and EEG signals are measured at baseline, during the session, and post-intervention to bring about effective epileptic seizure control or its elimination altogether.
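Among the discriminating features listed above, sample entropy is straightforward to sketch in code. The following is a simplified illustration, not the authors' implementation; the tolerance convention r = 0.2 times the signal's standard deviation and the template length m = 2 are common defaults assumed here.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Simplified sample entropy: -log(A/B), where B counts pairs of
    length-m templates within tolerance (Chebyshev distance) and A the
    same for length m+1. Tolerance is r * std(x), a common convention."""
    x = np.asarray(x, float)
    tol = r * np.std(x)

    def match_count(mm):
        # All overlapping template vectors of length mm.
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        # Chebyshev distance between every pair of templates.
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        # Count matching pairs, excluding trivial self-matches.
        return np.sum(d <= tol) - len(templ)

    B = match_count(m)
    A = match_count(m + 1)
    return -np.log(A / B)
```

A regular (predictable) signal yields a lower sample entropy than white noise, which is the property exploited when discriminating normal from seizure EEG segments.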
Key Parameters Analysis of the Stirring Systems in the Optimization Procedures
The inclusion of stirring systems in calculation and optimization procedures has received little attention, which can affect the results, because such systems provide additional energy to the process and promote a better distribution of mass and energy. This is meaningful for reactive systems, particularly the Continuous Stirred Tank Reactor (CSTR), for which the key variables and parameters, as well as the operating conditions of the stirring system, can play a pivotal role; it has been shown in the literature that neglecting these factors can lead to sub-optimal results. It is also well known that the sole use of the First Law of Thermodynamics as an optimization tool cannot yield satisfactory results, whereas the joint use of the First and Second Laws, condensed into the procedure called entropy generation minimization (EGM), has shown itself able to drive the system towards better results. Therefore, the main objective of this paper is to determine the effects of key parameters of the stirring system in optimization procedures by means of EGM applied to reactive systems. Such considerations have been made possible by dimensional analysis according to the Rayleigh and Buckingham method, which takes into account the physical and geometric parameters and the variables of the reactive system. For a simulation based on the production of propylene glycol, the results show a significant increase in the conversion rate from 36% (non-optimized system) to 95% (optimized system) with a consequent reduction of by-products. In addition, it has been possible to establish the influence of the work of the stirrer in the optimization procedure, which can be described as a function of the fluid viscosity and consequently of the temperature.
The conclusions to be drawn also indicate that entropic analysis as an optimization tool is simple, easy to apply, and requires low computational effort.
Catalytic Thermodynamics of Nanocluster Adsorbates from Informational Statistical Mechanics
We use an informational statistical mechanics approach to study the catalytic thermodynamics of platinum and palladium cuboctahedral nanoclusters. Nanoclusters and their adatoms are viewed as chemical graphs with a nearest-neighbor adjacency matrix. We use the Morse potential to determine bond energies between cluster atoms in a coordination-type calculation. We use adsorbate energies calculated from density functional theory (DFT) to study the adatom effects on the thermodynamic quantities, which are derived from a Hamiltonian. Oxygen radical and molecular adsorbates are studied on platinum clusters, and hydrogen on palladium clusters. We calculate the entropy, free energy, and total energy as the coverage of adsorbates on bridge and hollow surface sites increases. The thermodynamic behavior versus adatom coverage is related to the structural distribution of adatoms on the nanocluster surfaces. The thermodynamic functions are characterized using a simple adsorption model, with linear trends as the coverage of adatoms increases. The data exhibit size effects for the measured thermodynamic properties with cluster diameters between 2 and 5 nm. Entropy and enthalpy calculations of Pt-O2 compare well with previous theoretical data for Pt(111)-O2, and our Pd-H results show trends similar to experimental measurements for Pd-H2 nanoclusters. Our methods are general and may be applied to a wide variety of nanocluster adsorbate systems.
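The coordination-type bond-energy calculation described above can be sketched as follows. The Morse parameters (D_e, a, r_e) and the distance cutoff used here are illustrative placeholders, not the fitted Pt/Pd values from the paper.

```python
import numpy as np

def morse(r, D_e=1.0, a=1.0, r_e=1.0):
    """Morse pair potential, shifted so the bond minimum is -D_e at r = r_e."""
    return D_e * ((1.0 - np.exp(-a * (r - r_e))) ** 2 - 1.0)

def cluster_bond_energy(coords, cutoff=1.5, D_e=1.0, a=1.0, r_e=1.0):
    """Sum Morse bond energies over nearest-neighbor pairs.

    The adjacency (chemical graph) is built from a distance cutoff.
    All parameter values are illustrative, not fitted Pt/Pd constants.
    """
    coords = np.asarray(coords, float)
    n = len(coords)
    E = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(coords[i] - coords[j])
            if d <= cutoff:  # nearest-neighbor adjacency
                E += morse(d, D_e, a, r_e)
    return E
```

For a dimer at the equilibrium separation the total energy is simply -D_e, and the potential decays to zero at large separation, as expected of a bond-energy model.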
Extended Intuitionistic Fuzzy VIKOR Method in Group Decision Making: The Case of Vendor Selection Decision
Vendor (supplier) selection is a group decision-making (GDM) process in which, based on some predetermined criteria, the experts' preferences are provided in order to rank and choose the most desirable suppliers. In a real business environment, attitudes or choices made in uncertain and indecisive situations cannot be expressed in a crisp framework; intuitionistic fuzzy sets (IFSs) can handle such situations in the best way. The VIKOR method was developed to solve multi-criteria decision-making (MCDM) problems. This method, which is used to determine the compromise feasible solution with respect to conflicting criteria, introduces a multi-criteria ranking index based on a particular measure of 'closeness' to the 'ideal solution'. Until now, there has been little investigation of VIKOR with IFSs; therefore, we extend the intuitionistic fuzzy (IF) VIKOR to solve the vendor selection problem in an IF GDM environment. The present study develops an IF VIKOR method for the GDM situation: a model is presented to calculate the criterion weights based on an entropy measure, and the interval-valued intuitionistic fuzzy weighted geometric (IFWG) operator is utilized to obtain the total decision matrix. In the next stage, an approach based on the positive ideal intuitionistic fuzzy number (PIIFN) and the negative ideal intuitionistic fuzzy number (NIIFN) is developed. Finally, the application of the proposed method to a vendor selection problem is illustrated.
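The entropy-based criterion-weighting step can be illustrated for ordinary (crisp) decision matrices; the paper's intuitionistic fuzzy version is more involved, so this is only a sketch of the underlying idea.

```python
import numpy as np

def entropy_weights(X):
    """Shannon-entropy criterion weights from a decision matrix
    (rows = alternatives, columns = criteria). A standard construction;
    the paper's exact normalization may differ."""
    X = np.asarray(X, float)
    P = X / X.sum(axis=0)                        # column-wise normalization
    logP = np.where(P > 0, np.log(P), 0.0)       # define 0 * log 0 = 0
    E = -(P * logP).sum(axis=0) / np.log(len(X)) # entropy per criterion, in [0, 1]
    d = 1.0 - E                                  # degree of diversification
    return d / d.sum()                           # normalized weights
```

A criterion on which all alternatives score identically carries no discriminating information and receives weight zero, which is the intuition behind entropy weighting.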
Comprehensive Analysis of Electrohysterography Signal Features in Term and Preterm Labor
Premature birth, defined as birth before 37 completed weeks of gestation, is a leading cause of neonatal morbidity and mortality and has long-term adverse consequences for health. It has recently been reported that the worldwide preterm birth rate is around 10%. The existing measurement techniques for diagnosing preterm delivery include the tocodynamometer, ultrasound, and fetal fibronectin. However, they are subjective or suffer from high measurement variability and inaccurate diagnosis and prediction of preterm labor. Electrohysterography (EHG), a method based on recording uterine electrical activity with electrodes attached to the maternal abdomen, is a promising method to assess uterine activity and diagnose preterm labor. The purpose of this study is to analyze the differences in EHG signal features between term labor and preterm labor. A free-access database was used with 300 signals acquired in two groups of pregnant women who delivered at term (262 cases) and preterm (38 cases). Among them, EHG signals from 38 term labors and 38 preterm labors were preprocessed with band-pass Butterworth filters of 0.08–4 Hz. Then, EHG signal features were extracted, comprising classical time-domain descriptors including root mean square and zero-crossing number, spectral parameters including peak frequency, mean frequency, and median frequency, wavelet packet coefficients, autoregressive (AR) model coefficients, and nonlinear measures including the maximal Lyapunov exponent, sample entropy, and correlation dimension. Their statistical significance for recognition of the two groups of recordings was assessed. The results showed that the mean frequency of preterm labor was significantly smaller than that of term labor (p < 0.05). Five coefficients of the AR model showed significant differences between term labor and preterm labor. The maximal Lyapunov exponent of early preterm (time of recording < the 26th week of gestation) was significantly smaller than that of early term.
The sample entropy of late preterm (time of recording > the 26th week of gestation) was significantly smaller than that of late term. There was no significant difference in the other features between the term labor and preterm labor groups. Any future work regarding classification should therefore focus on using multiple techniques, with the mean frequency, AR coefficients, maximal Lyapunov exponent, and sample entropy among the prime candidates. Even if these methods are not yet useful for clinical practice, they provide the most promising indicators of preterm labor.
Analyzing the Results of Buildings Energy Audit by Using Grey Set Theory
Grey set theory has the advantage of using fewer data to analyze many factors, and it is therefore more appropriate for system study than traditional statistical regression, which requires massive data, a normal distribution in the data, and few variant factors. In this paper, grey clustering and the entropy of the coefficient vector of grey evaluations are used to analyze energy consumption in buildings of the Oil Ministry in Tehran. This article analyzes the results of energy audit reports and defines the most favorable characteristic of the system, which is the energy consumption of buildings, and the most favorable factors affecting this characteristic, in order to modify and improve them. According to the results of the model, 'the real Building Load Coefficient' has been selected as the most important system characteristic and 'uncontrolled area of the building' has been diagnosed as the most favorable factor with the greatest effect on a building's energy consumption. Grey clustering in this study has been used for two purposes: first, all building variables related to the energy audit are clustered into two main groups of indicators, reducing the number of variables; second, grey clustering with variable weights has been used to classify all buildings into three categories named 'no standard deviation', 'low standard deviation', and 'non-standard'. The entropy of the coefficient vector of grey evaluations is calculated to investigate the greyness of the results. It shows that among the 38 buildings surveyed in terms of energy consumption, 3 cases are in the standard group, 24 cases are in the 'low standard deviation' group, and 11 buildings are completely non-standard. In addition, the clustering greyness of 13 buildings is less than 0.5, and the average uncertainty of the clustering results is 66%.
Numerical Investigation of the Transverse Instability in Radiation Pressure Acceleration
The Radiation Pressure Acceleration (RPA) mechanism is very promising for laser-driven ion acceleration because of its high laser-ion energy conversion efficiency. Although some experiments have shown the characteristics of RPA, the energy of the ions is quite limited: the ion energy obtained in experiments is only several MeV/u, much lower than theoretical predictions. One possible limiting factor is the transverse instability incited in the RPA process. The transverse instability is basically considered to be the Rayleigh-Taylor (RT) instability, a kind of interfacial instability that occurs when a light fluid pushes against a heavy fluid. Multi-dimensional particle-in-cell (PIC) simulations show that the onset of the transverse instability will destroy the acceleration process and broaden the energy spectrum of fast ions during RPA-dominant ion acceleration. Evidence of the RT instability driven by radiation pressure has been observed in a laser-foil interaction experiment in a typical RPA regime, and the dominant scale of the RT instability is close to the laser wavelength. The development of the transverse instability in radiation-pressure-acceleration-dominant laser-foil interaction is numerically examined by two-dimensional particle-in-cell simulations. When a laser interacts with a foil with a modulated surface, the instability is quickly incited and develops. The linear growth and saturation of the transverse instability are observed, and the growth rate is numerically diagnosed. In order to optimize the interaction parameters, a method based on information entropy is put forward to describe the chaotic degree of the transverse instability. With moderate modulation, the transverse instability shows a low chaotic degree and a quasi-monoenergetic proton beam is produced.
Efficacy of Conservation Strategies for Endangered Garcinia gummi gutta under Climate Change in Western Ghats
Climate change is continuously affecting ecosystems, species distributions, and global biodiversity. The assessment of a species' potential distribution and the spatial changes under various climate change scenarios is a significant step towards conservation and the mitigation of habitat shifts, species loss, and vulnerability. In this context, the present study aimed to predict the influence of current and future climate on an ecologically vulnerable medicinal species of the southern Western Ghats, Garcinia gummi-gutta, using Maximum Entropy (MaxEnt) modeling. The future projections were made for 2050 and 2070 with RCP (Representative Concentration Pathways) scenarios 4.5 and 8.5, using 84 species occurrence records and climatic variables from three different models of the Intergovernmental Panel on Climate Change (IPCC) fifth assessment. The contributions of the climatic variables were assessed using the jackknife test, and an AUC value of 0.888 indicates that the model performs with high accuracy. The major influencing variables are annual precipitation, precipitation of the coldest quarter, precipitation seasonality, and precipitation of the driest quarter. The model results show that the current high-potential distribution of the species covers around 1.90% of the study area, 7.78% is good potential, and about 90.32% is moderate to very low potential for species suitability. Finally, the results of all models indicate that there will be a drastic decline in the suitable habitat distribution by 2050 and 2070 for all RCP scenarios. The study signifies that the MaxEnt model can be an efficient tool for ecosystem management, biodiversity protection, and species re-habitation planning under climate change.
Biophysical Study of the Interaction of Harmalol with Nucleic Acids of Different Motifs: Spectroscopic and Calorimetric Approaches
Binding of small molecules to DNA, and more recently to RNA, continues to attract considerable attention for developing effective therapeutic agents for the control of gene expression. This work focuses on understanding the interaction of harmalol, a dihydro beta-carboline alkaloid, with nucleic acids of different motifs, viz. double-stranded CT DNA, single-stranded A-form poly(A), double-stranded A-form poly(C)·poly(G), and clover-leaf tRNAphe, by different spectroscopic, calorimetric, and molecular modeling techniques. The results of this study converge to suggest that (i) the binding constant varied in the order CT DNA > poly(C)·poly(G) > tRNAphe > poly(A); (ii) harmalol binds non-cooperatively to poly(C)·poly(G) and poly(A) and cooperatively to CT DNA and tRNAphe; (iii) there are significant structural changes of CT DNA, poly(C)·poly(G), and tRNAphe with concomitant induction of optical activity in the bound achiral alkaloid molecules, while with poly(A) no intrinsic CD perturbation was observed; (iv) the binding was predominantly exothermic, enthalpy driven, and entropy favoured with CT DNA and poly(C)·poly(G), while it was entropy driven with tRNAphe and poly(A); (v) there is a hydrophobic contribution and a comparatively large role of non-polyelectrolytic forces in the Gibbs energy changes with CT DNA, poly(C)·poly(G), and tRNAphe; and (vi) harmalol adopts an intercalated state with the CT DNA and poly(C)·poly(G) structures, as revealed by molecular docking and supported by the viscometric data. Furthermore, a competition dialysis assay showed that harmalol prefers hetero GC sequences. All these findings unequivocally point out that harmalol prefers binding with ds CT DNA, followed by ds poly(C)·poly(G) and clover-leaf tRNAphe, and least with ss poly(A). The results highlight the importance of structural elements of these natural beta-carboline alkaloids in stabilizing DNA and RNA of various motifs for developing better nucleic acid based therapeutic agents.
Bioclimatic Niches of Endangered Garcinia indica Species on the Western Ghats: Predicting Habitat Suitability under Current and Future Climate
In recent years, climate change has become a major threat and has been widely documented in the geographic distribution of many plant species. However, the impacts of climate change on the distribution of ecologically vulnerable medicinal species remain largely unknown. The identification of a suitable habitat for a species under a climate change scenario is a significant step towards the mitigation of biodiversity decline. The study therefore aims to predict the impact of current and future climatic scenarios on the distribution of the threatened Garcinia indica across the northern Western Ghats using Maximum Entropy (MaxEnt) modelling. The future projections were made for the years 2050 and 2070 with all Representative Concentration Pathways (RCPs) scenarios (2.6, 4.5, 6.0, and 8.5) using 56 species occurrence records and 19 bioclimatic predictors from the BCC-CSM1.1 model of the Intergovernmental Panel on Climate Change (IPCC) 5th assessment. The bioclimatic variables were reduced to a smaller set after a multicollinearity test, and their contributions were assessed using the jackknife test. The AUC value of 0.956 ± 0.023 indicates that the model performs with excellent accuracy. The study identified that temperature seasonality (39.5 ± 3.1%), isothermality (19.2 ± 1.6%), and annual precipitation (12.7 ± 1.7%) would be the major influencing variables in the current and future distribution. The model predicted 10.5% (19318.7 sq. km) of the study area as moderately to very highly suitable, while 82.60% (151904 sq. km) of the study area was identified as unsuitable or of very low suitability. Our predictions of climate change impact on habitat suitability suggest that there will be a drastic reduction in suitability, by 5.29% and 5.69% under RCP 8.5 for 2050 and 2070, respectively.
Finally, the results signify that the model might be an effective tool for biodiversity protection, ecosystem management, and species re-habitation planning under future climate change scenarios.
Instructional Information Resources
This article discusses instructional information resources. Information, in its most restricted technical sense, is a sequence of symbols that can be interpreted as a message; information can be recorded as signs or transmitted as signals. Information is any kind of event that affects the state of a dynamic system. Conceptually, information is the message being conveyed, although the concept has numerous other meanings in different contexts. Moreover, the concept of information is closely related to notions of constraint, communication, control, data, form, instruction, knowledge, meaning, mental stimulus, pattern, perception, representation, and especially entropy.
Properties of Magnesium-Based Hydrogen Storage Alloy Added with Palladium and Titanium Hydride
Hydrogen storage alloys, which store hydrogen by physical and chemical absorption, are widely believed to have great potential. However, hydrogen storage alloys are limited by high operating temperatures. Scientists have found that adding transition elements can improve the properties of hydrogen storage alloys. In this research, outstanding improvements of kinetic and thermal properties are obtained by adding palladium and titanium hydride to a magnesium-based hydrogen storage alloy. The magnesium-based alloy is the main material, into which TiH2 and Pd are added separately. Following that, the materials are milled in a planetary ball mill at 650 rpm. TGA/DSC and PCT measure the capacity, time, and temperature of absorption/desorption, while SEM and XRD analyze the structures and components of the material. It is clearly shown that Pd is beneficial to the kinetic properties. 2MgH2-0.1Pd has the highest capacity of all the alloys listed, approximately 5.5 wt%. Secondly, no new Ti-related compounds are found by XRD analysis. Thus TiH2, considered as the catalyst, enables 2MgH2-TiH2 and 2MgH2-TiH2-0.1Pd to absorb hydrogen efficiently at low temperature. 2MgH2-TiH2 can reach roughly 3.0 wt% in 82.4 minutes at 50°C and 8 minutes at 100°C, while 2MgH2-TiH2-0.1Pd can reach 2.0 wt% in 400 minutes at 50°C and in 48 minutes at 100°C. The lowest temperature of 2MgH2-0.1Pd and 2MgH2-TiH2 is similar (320°C), whereas the lowest temperature of 2MgH2-TiH2-0.1Pd decreases by 20°C. From XRD, it can be observed that PdTi2 and Pd3Ti are produced by mechanical alloying when adding both Pd and TiH2 to MgH2. Due to the synergistic effects between Pd and TiH2, 2MgH2-TiH2-0.1Pd has the lowest dehydrogenation temperature. Furthermore, the Pressure-Composition-Temperature (PCT) curves of 2MgH2-TiH2-0.1Pd are measured at different temperatures: 370°C, 350°C, 320°C, and 300°C. The plateau pressure is obtained from the PCT curves.
From the different plateau pressures, the enthalpy and entropy in the Van't Hoff equation can be solved. For 2MgH2-TiH2-0.1Pd, the enthalpy is 74.9 kJ/mol and the entropy is 122.9 J/(mol·K). Activation means that the hydrogen storage alloy undergoes repeated absorption/desorption processes; it plays an important role in absorption/desorption, shortening the absorption/desorption time because of the increase in surface area. From SEM, it is clear that the grains become smaller and the surface becomes rougher.
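The Van't Hoff step, recovering the enthalpy and entropy from plateau pressures at several temperatures, amounts to a linear fit of ln P against 1/T. The sketch below synthesizes plateau pressures from the reported values (74.9 kJ/mol, 122.9 J/(mol·K)) purely for illustration, since the measured pressures themselves are not given in the abstract.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

# Plateau temperatures from the abstract (370, 350, 320, 300 degC), in kelvin.
T = np.array([643.15, 623.15, 593.15, 573.15])

# Synthesize plateau pressures (relative to a reference pressure) from the
# reported enthalpy and entropy, for illustration only.
dH, dS = 74.9e3, 122.9
P = np.exp(-dH / (R * T) + dS / R)  # Van't Hoff: ln P = -dH/(R T) + dS/R

# A linear fit of ln(P) against 1/T recovers -dH/R (slope) and dS/R (intercept).
slope, intercept = np.polyfit(1.0 / T, np.log(P), 1)
dH_fit = -slope * R      # recovered enthalpy, J/mol
dS_fit = intercept * R   # recovered entropy, J/(mol*K)
```

With real PCT data, the same fit applied to measured plateau pressures yields the reported thermodynamic quantities.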
Effects of Subsidy Reform on Consumption and Income Inequalities in Iran
In this paper, we use data from the Household Income and Expenditure Survey of the Statistics Centre of Iran, conducted from 2005 to 2014, to calculate several inequality measures and to estimate the effects of Iran's targeted subsidy reform act on consumption and income inequality. We first calculate Gini coefficients for income and consumption in order to study the relation between the two and the effects of the subsidy reform. Results show that consumption inequality has not always mirrored changes in income inequality. However, both Gini coefficients indicate that the subsidy reform brought an improvement in inequality. We then calculate the Generalized Entropy index based on consumption and income for years before and after the Subsidy Reform Act of 2010, in order to have a closer look at the changes in the internal structure of inequality after the subsidy reforms. We find that the improvement in income inequality is mostly caused by the decrease in inequality among lower-income individuals. At the same time, consumption inequality decreased as a result of more equal consumption in both lower- and higher-income groups. Moreover, the increase in the Engel coefficient after the subsidy reform shows that a bigger portion of income is allocated to food consumption, which is a sign of a lower living standard in general. This increase in the Engel coefficient is due to the rise in the inflation rate and the relative increase in the price of food, which is partly another consequence of the subsidy reform. We have conducted some experiments on the effect of subsidy payments, and of possible changes in the distribution pattern and amount of cash subsidy payments, on income inequality. The results show that cash payments lead to a definite decrease in income inequality and contributed more to the improvement of rural areas than of urban households. We also examine the possible effect of constant payments on the increasing income inequality in the years after 2011.
We conclude that the reduction in the value of payments as a result of inflation plays an important role, although there may be other reasons. We finally experiment with alternative allocations of transfers, keeping the total amount of cash transfers constant or making it smaller by eliminating the three highest deciles from the cash payment program; the results show that income inequality would be improved significantly.
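The two headline measures used in the analysis, the Gini coefficient and the Generalized Entropy index, can be sketched as follows; this is a generic illustration on toy income vectors, not the survey computation itself.

```python
import numpy as np

def gini(x):
    """Gini coefficient via the sorted-income formula."""
    x = np.sort(np.asarray(x, float))
    n = len(x)
    return 2.0 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

def gen_entropy(x, alpha=1.0):
    """Generalized Entropy index; alpha = 1 gives the Theil index,
    alpha = 0 the mean log deviation."""
    x = np.asarray(x, float)
    s = x / x.mean()
    if alpha == 0:
        return float(np.mean(-np.log(s)))
    if alpha == 1:
        return float(np.mean(s * np.log(s)))
    return float(np.mean(s ** alpha - 1.0) / (alpha * (alpha - 1.0)))
```

Both measures are zero for perfectly equal incomes and strictly positive otherwise; varying alpha in the GE family shifts sensitivity between the lower and upper tails of the distribution, which is what allows the "internal structure" of inequality to be examined.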
A New Distribution and Application on the Lifetime Data
We introduce a new model called the Marshall-Olkin Rayleigh distribution, which extends the Rayleigh distribution using the Marshall-Olkin transformation and has increasing and decreasing shapes for the hazard rate function. Various structural properties of the new distribution are derived, including explicit expressions for the moments, generating and quantile functions, some entropy measures, and order statistics. The model parameters are estimated by the method of maximum likelihood, and the observed information matrix is determined. The potential of the new model is illustrated by means of a real-life data set.
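The Marshall-Olkin transformation applied to the Rayleigh survival function can be sketched as below. The abstract does not state the parameterization, so the tilt parameter alpha and scale sigma here follow the common Marshall-Olkin convention and are assumptions.

```python
import numpy as np

def rayleigh_sf(x, sigma=1.0):
    """Baseline Rayleigh survival function, exp(-x^2 / (2 sigma^2))."""
    return np.exp(-np.asarray(x, float) ** 2 / (2.0 * sigma ** 2))

def mo_rayleigh_sf(x, alpha=2.0, sigma=1.0):
    """Marshall-Olkin transform of the Rayleigh survival function:
    G_bar = alpha * F_bar / (1 - (1 - alpha) * F_bar).
    alpha = 1 recovers the baseline Rayleigh distribution."""
    Fbar = rayleigh_sf(x, sigma)
    return alpha * Fbar / (1.0 - (1.0 - alpha) * Fbar)
```

The transform preserves a valid survival function (equal to 1 at the origin, monotonically decreasing to 0) while the extra parameter alpha tilts the tail, which is what produces the additional hazard-rate shapes.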
Exergy Model for a Solar Water Heater with Flat Plate Collector
The objective of this paper is to derive an exergy model for a solar water heater with a honeycomb structure in order to identify the element with the largest irreversibility in the system. This will help in finding means to reduce the wasted work potential, so that the overall efficiency of the system can be improved.
Issues in Travel Demand Forecasting
Travel demand forecasting, including the four travel choices, i.e., trip generation, trip distribution, modal split, and traffic assignment, constitutes the core of transportation planning. In its current application, travel demand forecasting is associated with three important issues: interface inconsistencies among the four travel choices, inefficiency of commonly used solution algorithms, and undesirable multiple-path solutions. In this paper, each of the three issues is extensively elaborated. An ideal unified framework for a combined model consisting of the four travel choices and variable demand functions is also suggested. A few remarks are provided at the end of the paper.
A Novel Combination Method for Computing the Importance Map of Image
The importance map is an image-based measure and is a core part of the resizing algorithm. Importance measures include image gradients, saliency, and entropy, as well as high-level cues such as face detectors, motion detectors, and more. In this work, we propose a new method to calculate the importance map: it is generated automatically using a novel combination of image edge density and the Harel saliency measurement. Experiments on different types of images demonstrate that our method effectively detects prominent areas and can be used in image resizing applications to preserve important areas while maintaining image quality.
The Normal-Generalized Hyperbolic Secant Distribution: Properties and Applications
In this paper, a new four-parameter univariate continuous distribution called the Normal-Generalized Hyperbolic Secant distribution (NGHS) is defined and studied. Some general and structural distributional properties are investigated and discussed, including central and non-central n-th moments and incomplete moments, quantile and generating functions, the hazard function, Rényi and Shannon entropies, shapes (skewed right, skewed left, and symmetric), modality regions (unimodal and bimodal), and maximum likelihood (ML) estimators for the parameters. Finally, two real data sets are used to demonstrate empirically its flexibility and the strength of the new distribution.
Consumer Load Profile Determination with Entropy-Based K-Means Algorithm
With the continuous increase of smart meter installations across the globe, the need for processing the load data is evident. Clustering-based load profiling is built upon unsupervised machine learning tools for the purpose of formulating the typical load curves or load profiles. The most commonly used algorithm in the load profiling literature is K-means. While the algorithm has been successfully tested in a variety of applications, its drawback is its strong dependence on the initialization phase. This paper proposes a novel modified form of K-means that addresses the aforementioned problem. Simulation results indicate the superiority of the proposed algorithm compared to the standard K-means.
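The abstract does not describe the exact modification, so the sketch below shows one plausible entropy-guided variant: run several random initializations of plain K-means and keep the partition whose cluster-size distribution has maximal Shannon entropy, i.e., the most balanced set of load profiles. This is an illustration, not the authors' algorithm.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm with random seeding."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, float)
    C = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(axis=0)
    return C, labels

def partition_entropy(labels, k):
    """Shannon entropy of the cluster-size distribution."""
    p = np.bincount(labels, minlength=k) / len(labels)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def entropy_kmeans(X, k, restarts=10):
    """Keep the restart whose partition has maximal entropy (most balanced),
    mitigating sensitivity to the initialization phase."""
    best = None
    for s in range(restarts):
        C, labels = kmeans(X, k, seed=s)
        H = partition_entropy(labels, k)
        if best is None or H > best[0]:
            best = (H, C, labels)
    return best[1], best[2]
```

For smart-meter data, X would hold one daily load curve per row; the returned centroids are the typical load profiles.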
Makhraj Recognition Using Convolutional Neural Network
This paper focuses on machine learning of the correct pronunciation of Makhraj Huroofs. Usually, people need to find an expert to learn to pronounce the Huroof accurately. In this study, the researchers developed a system that is able to learn the selected Huroofs, namely ha, tsa, zho, and dza, using a Convolutional Neural Network (CNN). The researchers present the chosen CNN architecture, which lets the system learn the data (Huroofs) as quickly as possible and produces high accuracy during prediction. The researchers experimented with the system to measure the accuracy and the cross entropy in the training process.
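The cross entropy monitored during training can be illustrated for a single example; the max-shifted softmax below is the standard numerically stable form, with the Huroof classes represented by hypothetical integer labels.

```python
import numpy as np

def cross_entropy(logits, label):
    """Softmax cross-entropy for a single example, computed with the
    usual max-shift for numerical stability."""
    z = np.asarray(logits, float)
    z = z - z.max()                          # shift so exp() cannot overflow
    log_probs = z - np.log(np.exp(z).sum())  # log-softmax
    return -log_probs[label]
```

During CNN training, this quantity averaged over the batch is the loss being minimized: it is large for confident wrong predictions and approaches zero as the network assigns high probability to the correct Huroof class.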
Analysis of Creative City Indicators in Isfahan City, Iran
This paper investigates the indices of a creative city in Isfahan. Its main aim is to evaluate the quantitative status of the creative city indices in Isfahan and to analyze their dispersion and distribution across the city. To this end, the study analyzes the creative city indices in the fifteen areas of Isfahan through secondary data, a questionnaire, the TOPSIS model, Shannon entropy and SPSS. On this basis, the fifteen areas of Isfahan city have been ranked with respect to 12 factors of creative city indices. The results show that the fifteen areas of Isfahan do not benefit equally from the creative indices and that there is much difference between the areas of the city.
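The Shannon entropy step that commonly accompanies TOPSIS in such analyses derives objective criterion weights from the dispersion of the data. A generic sketch of the standard entropy-weight method (not necessarily the paper's exact computation):

```python
import numpy as np

def entropy_weights(D):
    """Objective criterion weights via the Shannon entropy method, as
    commonly paired with TOPSIS. Rows = alternatives (areas), columns =
    criteria (indices); a criterion with little dispersion carries
    little information and so gets a small weight."""
    D = np.asarray(D, dtype=float)
    m = D.shape[0]
    P = D / D.sum(axis=0, keepdims=True)          # share of each area in a criterion
    P = np.clip(P, 1e-12, 1.0)
    e = -(P * np.log(P)).sum(axis=0) / np.log(m)  # entropy per criterion, in [0, 1]
    d = 1.0 - e                                   # degree of diversification
    return d / d.sum()                            # normalized weights
```

A criterion on which all areas score identically thus receives (near-)zero weight, while a highly dispersed criterion dominates the ranking.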
The Relationship between Ruins and Vegetation: Different Approaches during the Centuries and within the Various Disciplinary Fields, Investigation of Writings and Projects
The charm of a ruin colonised by wild plants and flowers is part of Western culture. The relationship between ruins and vegetation involves a wide range of different fields of research. During the first phase of the research, the most important writings and projects on this subject were investigated, to understand how the perception of the co-existence of ruins and vegetation has changed over time and to investigate the various approaches that the different fields have adopted when tackling this issue. The paper presents some practical examples of projects carried out from the early 1900s on. The major result is that, specifically as regards conservation, the best attitude is the management of change, an inevitable process when it comes to the co-existence of ruins and nature and, particularly, ruins and vegetation. Limiting ourselves to adopting measures designed to stop, or rather slow down, the increasing level of entropy (and therefore disorder) may not be enough.
Phase Transitions of Cerium and Neodymium
Phase transitions of cerium and neodymium are investigated using a high-temperature scanning calorimeter (HT-1500 Setaram). For cerium, two transformations are detected: a hexagonal close-packed (hcp) to face-centered cubic (fcc) transition at 350-372 K, and an fcc to body-centered cubic (bcc) transformation at 880-960 K. For neodymium, an hcp to bcc transformation is detected at 1093-1113 K. The thermal characteristics of the transitions (enthalpy, entropy, temperature domains) are reported.
Image Steganography Using Predictive Coding for Secure Transmission
In this paper, a steganographic strategy is used to hide a text file inside an image. To increase the storage limit, predictive coding is utilized to embed the information. In the proposed scheme, one can exchange secure information by means of the predictive coding methodology, which produces a high-quality stego-image. The pixels of the image are utilized to embed the secret information. The proposed data-hiding scheme is powerful compared with existing methodologies and helps users conceal information efficiently. Entropy, standard deviation, mean square error and peak signal-to-noise ratio are the parameters used to evaluate the proposed methodology. The results of the proposed approach are quite promising.
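As a minimal illustration of hiding text in pixel values and scoring the stego-image with the metrics named above, here is a plain LSB sketch. Note that the paper's actual scheme uses predictive coding, which this sketch does not implement; it only shows the embedding-and-evaluation workflow.

```python
import numpy as np

def text_to_bits(s):
    """UTF-8 text to a flat list of bits."""
    return [int(b) for c in s.encode() for b in format(c, '08b')]

def embed_lsb(cover, payload_bits):
    """Overwrite the least-significant bit of the first len(payload_bits)
    pixels (simplest possible embedding, for illustration only)."""
    flat = cover.flatten().astype(np.uint8)
    if len(payload_bits) > flat.size:
        raise ValueError("payload too large for cover image")
    for i, b in enumerate(payload_bits):
        flat[i] = (flat[i] & 0xFE) | b
    return flat.reshape(cover.shape)

def psnr(cover, stego):
    """Peak signal-to-noise ratio in dB between cover and stego images."""
    mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
```

Because only the least-significant bits change, the PSNR of the stego-image stays high, which is the behaviour the evaluation metrics are meant to quantify.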
Mechanical Properties of Recycled Plasticized PVB/PVC Blends
The mechanical properties of blends consisting of plasticized poly(vinyl butyral) (PVB) and plasticized poly(vinyl chloride) (PVC) are studied, in order to evaluate the possibility of using recycled PVB waste derived from windshields. PVC was plasticized with 38% of diisononyl phthalate (DINP), while PVB was plasticized with 28% of triethylene glycol bis(2-ethylhexanoate) (3GO). The optimal process conditions for the PVB/PVC blend in a 1:1 ratio were determined. Entropy was used to theoretically predict the blends' miscibility. The PVB content of the blend compositions ranged from zero to 100%. Tensile strength and strain were tested. In addition, a comparison between recycled and original PVB, used as constituents of the blend, was performed.
Multimodal Integration of EEG, fMRI and Positron Emission Tomography Data Using Principal Component Analysis for Prognosis in Coma Patients
Introduction: So far, clinical assessments that rely on behavioral responses to differentiate coma states or even predict outcome in coma patients are unreliable, e.g. because of some patients' motor disabilities. The present study aimed to provide prognosis in coma patients using markers from electroencephalography (EEG), blood oxygen level dependent (BOLD) functional magnetic resonance imaging (fMRI) and [18F]-fluorodeoxyglucose (FDG) positron emission tomography (PET). Unsupervised principal component analysis (PCA) was used for multimodal integration of the markers. Methods: Approved by the local ethics committee of the Technical University of Munich (Germany), 20 patients (aged 18-89) with severe brain damage were acquired through intensive care units at the Klinikum rechts der Isar in Munich and at the Therapiezentrum Burgau (Germany). On the day of EEG/fMRI/PET measurement (date I) patients (
Size-Reduction Strategies for Iris Codes
Iris codes contain bits of differing entropy. This work investigates different strategies to reduce the size of iris code templates with the aim of reducing storage requirements and computational demand in the matching process. Besides simple sub-sampling schemes, a binary multi-resolution representation as used in the JBIG hierarchical coding mode is also assessed. We find that iris code template size can be reduced significantly while maintaining recognition accuracy. Moreover, we propose a two-stage identification approach, using small-sized iris code templates in a pre-selection stage and full-resolution templates for final identification, which shows promising recognition behaviour.
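The sub-sampling and two-stage identification ideas can be sketched as follows. This is a hedged illustration: the function names, the sub-sampling factor, and the shortlist size are assumptions for the sketch, not the paper's parameters.

```python
import numpy as np

def subsample_code(code, factor):
    """Keep every `factor`-th bit of an iris code (simple sub-sampling,
    one family of size-reduction strategies assessed in the paper)."""
    return code[::factor]

def hamming_distance(a, b):
    """Fractional Hamming distance, the standard iris-matching score."""
    return np.count_nonzero(a != b) / a.size

def two_stage_identify(probe, gallery, factor=4, shortlist=2):
    """Two-stage scheme: pre-select candidates with the small templates,
    then decide among them with the full-resolution templates."""
    small_probe = subsample_code(probe, factor)
    scores = [(hamming_distance(small_probe, subsample_code(g, factor)), i)
              for i, g in enumerate(gallery)]
    candidates = [i for _, i in sorted(scores)[:shortlist]]
    return min(candidates, key=lambda i: hamming_distance(probe, gallery[i]))
```

The cheap first stage touches every gallery entry but only 1/factor of the bits; the expensive full-resolution comparison runs only on the shortlist.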
Thermal Effects of Phase Transitions of Cerium and Neodymium
Phase transitions of cerium and neodymium are investigated using a high-temperature scanning calorimeter (HT-1500 Setaram). For cerium, two transformations are detected: a hexagonal close-packed (hcp) to face-centered cubic (fcc) transition at 350-372 K, and an fcc to body-centered cubic (bcc) transformation at 880-960 K. For neodymium, an hcp to bcc transformation is detected at 1093-1113 K. The thermal characteristics of the transitions (enthalpy, entropy, temperature domains) are reported.
Removal of Tartrazine Dye From Aqueous Solutions by Adsorption on the Surface of Polyaniline/Iron Oxide Composite
In this work, a polyaniline/iron oxide (PANI/Fe2O3) composite was chemically prepared by oxidative polymerization of aniline in acid medium, in the presence of ammonium persulphate as an oxidant and an amount of Fe2O3. The composite was characterized by scanning electron microscopy (SEM). The prepared composite has been used as an adsorbent to remove Tartrazine dye from aqueous solutions. The effects of initial dye concentration and temperature on the adsorption capacity of PANI/Fe2O3 for Tartrazine dye have been studied. The Langmuir and Freundlich adsorption models have been used for the mathematical description of the adsorption equilibrium data. The best fit is obtained using the Freundlich isotherm, with an R² value of 0.998. The changes of Gibbs energy, enthalpy, and entropy of adsorption have also been evaluated for the adsorption of Tartrazine onto PANI/Fe2O3. The results prove that the adsorption process is endothermic in nature.
Facial Recognition on the Basis of Facial Fragments
There are many articles that attempt to establish the role of different facial fragments in face recognition. Various approaches are used to estimate this role. Frequently, authors calculate the entropy corresponding to the fragment. This approach can only give an approximate estimation. In this paper, we propose to use a more direct measure of the importance of different fragments for face recognition. We propose to select a recognition method and a face database and to experimentally investigate the recognition rate using different fragments of faces. We present two such experiments in the paper. We selected the PCNC neural classifier as the face recognition method and parts of the LFW (Labeled Faces in the Wild) face database as training and testing sets. The recognition rate of the best experiment is comparable with the recognition rate obtained using the whole face.
Suitability of Black Box Approaches for the Reliability Assessment of Component-Based Software
Although reliability is an important attribute of quality, especially for mission-critical systems, there does not exist any versatile model even today for the reliability assessment of component-based software. The existing black box models are found to make various assumptions which may not always be realistic and may be quite contrary to the actual behaviour of software. They focus on observing the manner in which the system behaves without considering the structure of the system, the components composing the system, their interconnections, dependencies, usage frequencies, etc. As a result, the entropy (uncertainty) in assessment using these models is very high. Though there are some models based on the operational profile, sometimes it becomes extremely difficult to obtain the exact operational profile concerned with a given operation. This paper discusses the drawbacks, deficiencies and limitations of black box approaches from the perspective of various authors and finally proposes a conceptual model for the reliability assessment of software.
Tuning Cubic Equations of State for Supercritical Water Applications
Cubic equations of state (EoS), popular due to their simple mathematical form, ease of use, semi-theoretical nature and reasonable accuracy, are normally fitted to vapor-liquid equilibrium P-v-T data. As a result, they often show poor accuracy in the region near and above the critical point. In this study, the performance of the renowned Peng-Robinson (PR) and Patel-Teja (PT) EoSs around the critical area has been examined against the P-v-T data of water. Both of them display large deviations at the critical point. For instance, the PR EoS exhibits discrepancies as high as 47% for the specific volume, 28% for the enthalpy departure and 43% for the entropy departure at the critical point. It is shown that incorporating P-v-T data of the supercritical region into the retuning of a cubic EoS can dramatically improve its performance above the critical point. Adopting a retuned acentric factor of 0.5491 instead of its genuine value of 0.344 for water in the PR EoS, and a new F of 0.8854 instead of its original value of 0.6898 for water in the PT EoS, reduces the discrepancies to about one third or less.
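The textbook Peng-Robinson form being retuned can be written down directly; in this formulation the study's modification for water amounts to swapping the acentric factor (0.344 to 0.5491). A sketch:

```python
import math

R = 8.314462618  # universal gas constant, J mol^-1 K^-1

def pr_pressure(T, v, Tc, Pc, omega):
    """Peng-Robinson pressure (Pa) at temperature T (K) and molar volume
    v (m^3/mol), in the standard textbook form. Retuning as in the paper
    only changes `omega` (for water: 0.344 -> 0.5491)."""
    a = 0.45724 * R**2 * Tc**2 / Pc
    b = 0.07780 * R * Tc / Pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - math.sqrt(T / Tc)))**2
    return R * T / (v - b) - a * alpha / (v**2 + 2*b*v - b**2)
```

At the critical point the PR EoS has a fixed critical compressibility Zc of about 0.307, so feeding it its own critical volume reproduces Pc almost exactly; the large deviations reported in the paper arise when the EoS is evaluated against the real fluid's properties near and above criticality.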
Patient-Specific Modeling Algorithm for Medical Data Based on AUC
Patient-specific models are instance-based learning algorithms that take advantage of the particular features of the patient case at hand to predict an outcome. We introduce two patient-specific algorithms based on the decision tree paradigm that use the AUC as the metric for attribute selection. We apply the patient-specific algorithms to predict outcomes in several datasets, including medical datasets. Compared to the entropy-based patient-specific decision path (PSDP) and CART methods, the AUC-based patient-specific decision path models performed equivalently on area under the ROC curve (AUC). Our results provide support for patient-specific methods being a promising approach for making clinical predictions.
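The AUC-as-selection-criterion idea can be sketched as follows, using the rank-sum (Mann-Whitney) identity for the AUC. This is a hedged illustration of the criterion only; the paper's exact tree-construction procedure is not reproduced.

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank-sum identity: the fraction
    of positive/negative pairs ranked correctly (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def best_attribute_by_auc(X, y):
    """Pick the attribute whose values best rank the binary outcome,
    i.e. whose AUC deviates most from the chance value 0.5 (a sketch of
    AUC-based attribute selection; X is a list of rows)."""
    n_attrs = len(X[0])
    return max(range(n_attrs),
               key=lambda j: abs(auc([row[j] for row in X], y) - 0.5))
```

Unlike entropy, this criterion is insensitive to class imbalance in the node, which is one common motivation for ranking-based splitting.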
Kinetic and Thermodynamics of Sorption of 5-Fluorouracil (5-Fl) on Carbon Nanotubes
The aim of this study was to understand the interaction between multi-walled carbon nanotubes (MCNTs) and anticancer agents and to evaluate the drug-loading ability of MCNTs. Batch adsorption experiments were carried out for the adsorption of 5-fluorouracil (5-FL) using MCNTs. The effects of various operating variables, viz., adsorbent dosage, pH, contact time and temperature, on the adsorption of 5-FL have been studied. The Freundlich adsorption model was successfully employed to describe the adsorption process. It was found that the pseudo-second-order mechanism is predominant and that the overall rate of the 5-FL adsorption process appears to be controlled by more than one step. Thermodynamic parameters such as the free energy change (ΔG°), enthalpy change (ΔH°) and entropy change (ΔS°) have been calculated and reveal the spontaneous, endothermic and feasible nature of the adsorption process. The results showed that the carbon nanotubes were able to form supramolecular complexes with 5-FL by π-π stacking and possessed favorable loading properties as drug carriers.
A Similarity Measure for Classification and Clustering in Image Based Medical and Text Based Banking Applications
Text processing plays an important role in information retrieval, data mining, and web search. Measuring the similarity between documents is an important operation in the text processing field. In this project, a new similarity measure is proposed. To compute the similarity between two documents with respect to a feature, the proposed measure takes the following three cases into account: (1) the feature appears in both documents; (2) the feature appears in only one document; and (3) the feature appears in neither document. The proposed measure is extended to gauge the similarity between two sets of documents. The effectiveness of our measure is evaluated on several real-world data sets for text classification and clustering problems, especially in the banking and health sectors. The results show that the performance obtained by the proposed measure is better than that achieved by the other measures.
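One way a three-case measure of this kind can be realized is sketched below. The exact per-case scores are assumptions chosen for illustration (inspired by measures of this family), not the paper's proposed formula; the point is only how the three cases enter the computation.

```python
import math

def three_case_similarity(d1, d2, lam=0.5):
    """Similarity of two term-weight vectors honoring the three cases:
    both present (reward by closeness of the weights), exactly one
    present (penalize), both absent (ignore). Result is in [0, 1]."""
    total, n = 0.0, 0
    for a, b in zip(d1, d2):
        if a > 0 and b > 0:        # case 1: feature in both documents
            total += math.exp(-((a - b) / (a + b))**2)
        elif a > 0 or b > 0:       # case 2: feature in only one
            total += -lam
        else:                      # case 3: feature in neither document
            continue               # contributes nothing at all
        n += 1
    return (total + lam * n) / ((1 + lam) * n) if n else 0.0
```

Identical documents score 1, documents with completely disjoint vocabularies score 0, and features absent from both documents never influence the result, which is the distinctive third case.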
Homomorphic Conceptual Framework for Effective Supply Chain Strategy (HCEFSC) within Operational Research (OR) with Sustainability and Phenomenology
Supply chain (SC) is an operational research (OR) approach and technique which acts as a catalyst within the central nervous system of business today. Without SC, any type of business is at a standstill and tends toward entropy. SC is the lifeblood of business today because it is the pivotal hub which provides an imperative competitive advantage. The paper presents a conceptual framework dubbed the Homomorphic Conceptual Framework for Effective Supply Chain Strategy (HCEFSC). The term homomorphic is derived from the abstract-algebraic term homomorphism (same shape), which also embeds the following related notions: monomorphism, isomorphism, automorphism, and endomorphism. The HCEFSC is intertwined and integrated with wide and broad sets of elements.
A Survey on Lossless Compression of Bayer Color Filter Array Images
Although most digital cameras acquire images in a raw format, based on a Color Filter Array that arranges RGB color filters on a square grid of photosensors, most image compression techniques do not use the raw data; instead, they use the RGB result of an interpolation algorithm applied to the raw data. This approach is inefficient: by performing a lossless compression of the raw data, followed by pixel interpolation, digital cameras could be more power efficient and provide images with increased resolution, given that the interpolation step could be shifted to an external processing unit. In this paper, we conduct a survey on the use of lossless compression algorithms with raw Bayer images. Moreover, in order to reduce the effect of the transitions between colors, which increase the entropy of the raw Bayer image, we split the image into three new images corresponding to each channel (red, green and blue) and study the same compression algorithms applied to each one individually. This simple pre-processing stage allows an improvement of more than 15% in prediction-based methods.
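The channel-splitting pre-processing step can be sketched as follows, assuming an RGGB mosaic (the actual CFA layout varies by camera), together with the first-order entropy that motivates it:

```python
import numpy as np

def split_bayer_rggb(raw):
    """Split an RGGB Bayer mosaic into its colour planes so each plane
    can be compressed separately (sketch; pattern assumed RGGB)."""
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return r, np.hstack([g1, g2]), b   # both green planes kept together

def image_entropy(img, levels=256):
    """First-order Shannon entropy (bits/pixel) of an 8-bit image."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    p = hist[hist > 0] / img.size
    return float(-(p * np.log2(p)).sum())
```

On a mosaic, neighbouring pixels belong to different colour channels, so the interleaved image mixes three amplitude distributions; each extracted plane is statistically more homogeneous, which is exactly what predictive compressors exploit.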
Performance Study of Cascade Refrigeration System Using Alternative Refrigerants
Cascade refrigeration systems employ a series of single-stage vapor compression units which are thermally coupled through evaporator/condenser cascades. A different refrigerant is used in each circuit, depending on the optimum characteristics shown by the refrigerant for a particular application. In the present research study, a steady-state thermodynamic model is developed which simulates the working of an actual cascade system. The model provides the COP and all other system parameters, like total compressor work, temperature, pressure, enthalpy and entropy at different state points. The working fluid in the Low Temperature Circuit (LTC) is CO2 (R744), while ammonia (R717), propane (R290), propylene (R1270), R404A and R12 are the refrigerants in the High Temperature Circuit (HTC). The performance curves of ammonia, propane, propylene, and R404A are compared with R12 to find its nearest substitute. Results show that ammonia is the best substitute for R12.
Quantitative Comparisons of Different Approaches for Rotor Identification
Atrial fibrillation (AF) is the most common sustained cardiac arrhythmia that is a known prognostic marker for stroke, heart failure and death. Reentrant mechanisms of rotor formation, which are stable electrical sources of cardiac excitation, are believed to cause AF. No existing commercial mapping systems have been demonstrated to consistently and accurately predict rotor locations outside of the pulmonary veins in patients with persistent AF. There is a clear need for robust spatio-temporal techniques that can consistently identify rotors using unique characteristics of the electrical recordings at the pivot point that can be applied to clinical intracardiac mapping. Recently, we have developed four new signal analysis approaches – Shannon entropy (SE), Kurtosis (Kt), multi-scale frequency (MSF), and multi-scale entropy (MSE) – to identify the pivot points of rotors. These proposed techniques utilize different cardiac signal characteristics (other than local activation) to uncover the intrinsic complexity of the electrical activity in the rotors, which are not taken into account in current mapping methods. We validated these techniques using high-resolution optical mapping experiments in which direct visualization and identification of rotors in ex-vivo Langendorff-perfused hearts were possible. Episodes of ventricular tachycardia (VT) were induced using burst pacing, and two examples of rotors were used showing 3-sec episodes of a single stationary rotor and figure-8 reentry with one rotor being stationary and one meandering. Movies were captured at a rate of 600 frames per second for 3 sec. with 64x64 pixel resolution. These optical mapping movies were used to evaluate the performance and robustness of SE, Kt, MSF and MSE techniques with respect to the following clinical limitations: different time of recordings, different spatial resolution, and the presence of meandering rotors. 
To quantitatively compare the results, the SE, Kt, MSF and MSE techniques were compared to the “true” rotor(s) identified using the phase map. Accuracy was calculated for each approach as the duration of the time series and the spatial resolution were reduced. The time series duration was decreased from its original length of 3 sec down to 2, 1, and 0.5 sec. The spatial resolution of the original VT episodes was decreased from 64x64 pixels to 32x32, 16x16, and 8x8 pixels by uniformly removing pixels from the optical mapping video. Our results demonstrate that Kt, MSF and MSE were able to accurately identify the pivot point of the rotor under all three clinical limitations. The MSE approach demonstrated the best overall performance, but Kt was the best at identifying the pivot point of the meandering rotor. Artifacts mildly affected the performance of the Kt, MSF and MSE techniques, but had a strong negative impact on the performance of SE. The results of our study motivate further validation of the SE, Kt, MSF and MSE techniques using intra-atrial electrograms from paroxysmal and persistent AF patients to see if these approaches can identify pivot points in a clinical setting. More accurate rotor localization could significantly increase the efficacy of catheter ablation to treat AF, resulting in a higher success rate for single procedures.
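The SE approach, in its simplest per-pixel form, can be sketched as follows. This is a hedged illustration of the idea (amplitude-histogram entropy per pixel, maximum taken as the pivot candidate); the bin count and the absence of windowing are assumptions, not the study's exact settings.

```python
import numpy as np

def entropy_map(movie, n_bins=16):
    """Per-pixel Shannon entropy (bits) of an optical-mapping movie with
    shape (frames, rows, cols): histogram each pixel's time series and
    compute the entropy of the resulting amplitude distribution."""
    T, H, W = movie.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            hist, _ = np.histogram(movie[:, i, j], bins=n_bins)
            p = hist[hist > 0] / T
            out[i, j] = -(p * np.log2(p)).sum()
    return out

def pivot_candidate(movie):
    """(row, col) of the maximal-entropy pixel: a proxy for the rotor
    pivot, where the electrical signal is least organized."""
    se = entropy_map(movie)
    return np.unravel_index(np.argmax(se), se.shape)
```

The other three metrics (Kt, MSF, MSE) follow the same per-pixel mapping pattern with a different statistic of the time series at each site.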
Metric Suite for Schema Evolution of a Relational Database
Stakeholders' requirements for adding more details to the database are the main cause of schema evolution in relational databases, and this schema evolution causes instability in the database. Hence, this work aims to define a metric suite for the schema evolution of a relational database. The metric suite calculates metrics based on the features of the database, analyses the queries on the database and measures the coupling, cohesion and component dependencies of the schema for existing and evolved versions of the database. The metric suite also provides an indicator for problems related to the stability and usability of the evolved database. The degree of change in the schema of a database is presented in the form of graphs that act as indicators and also provide the relations between various parameters (metrics) related to the database architecture. The acquired information is used to defend and improve the stability of the database architecture. The challenges that arise in incorporating these metrics, with varying parameters, into a suitable metric suite are discussed. To validate the proposed metric suite, experiments have been performed on publicly available datasets.
On Generalized Cumulative Past Inaccuracy Measure for Marginal and Conditional Lifetimes
Recently, the notion of the cumulative past inaccuracy (CPI) measure has been proposed in the literature as a generalization of cumulative past entropy (CPE) in the univariate as well as bivariate setup. In this paper, we introduce the notion of CPI of order α and study the proposed measure for conditionally specified models of two components failed at different time instants, called generalized conditional CPI (GCCPI). We provide some bounds using the usual stochastic order and investigate several properties of GCCPI. The effect of monotone transformations on this proposed measure has also been examined. Furthermore, we characterize some bivariate distributions under the assumption of the conditional proportional reversed hazard rate model. Moreover, the role of GCCPI in reliability modeling has also been investigated for a real-life problem.
Eco-Index for Assessing Ecological Disturbances at Downstream of a Hydropower Project
In the North Eastern part of India, several hydropower projects are being proposed, and execution of some of them has already been initiated. There are controversies surrounding these constructions. The impact of these dams on the downstream part of the rivers needs to be assessed so that the ecosystem and the people living downstream are protected, by redesigning the projects if it becomes necessary. This may reduce the stresses on the affected ecosystem and the people living downstream. At present, many index-based ecological methods exist to assess the impact on ecology. However, none of these methods is capable of assessing the effect resulting from dam-induced diurnal variation of flow in the downstream reach. We need an environmental flow methodology based on a hydrological index which can address the effect resulting from dam-induced diurnal variation of flow, play an important role in riverine ecosystem management, and provide a qualitative idea about changes in the habitat for aquatic and riparian species.
Implementation and Comparative Analysis of PET and CT Image Fusion Algorithms
Medical imaging modalities are becoming life-saving components. These modalities are essential to doctors for proper diagnosis, treatment planning and follow-up. Some modalities provide anatomical information, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI) and X-rays, and some provide only functional information, such as Positron Emission Tomography (PET). Therefore, a single-modality image does not give complete information. This paper presents the fusion of the structural information in CT and the functional information present in PET images. The fused image is essential in detecting the stages and locations of abnormalities, and is particularly needed in oncology for improved diagnosis and treatment. We have implemented and compared image fusion techniques like pyramid, wavelet, and principal component fusion methods, along with a hybrid method of DWT and PCA. The performance of the algorithms is evaluated quantitatively and qualitatively. The system is implemented and tested using MATLAB software. Based on the MSE, PSNR and entropy analysis, the PCA and DWT-PCA methods showed the best results over all experiments.
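The PCA fusion rule can be sketched as follows, in the standard eigenvector-weighting formulation (the paper's DWT-PCA hybrid, which applies this rule to wavelet subbands, is not reproduced here):

```python
import numpy as np

def pca_fusion(img1, img2):
    """PCA-weighted fusion of two co-registered images (e.g. CT and
    PET): the fusion weights are the normalized components of the
    leading eigenvector of the 2x2 covariance between the two images."""
    data = np.vstack([img1.ravel(), img2.ravel()]).astype(float)
    cov = np.cov(data)                    # 2x2 inter-image covariance
    vals, vecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    v = np.abs(vecs[:, -1])               # leading eigenvector
    w = v / v.sum()                       # weights summing to one
    return w[0] * img1 + w[1] * img2
```

The source image carrying more variance (more structure) automatically receives the larger weight, which is why this rule needs no hand-tuned parameters.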
Synthesis and Characterization of Thiourea-Formaldehyde Coated Fe3O4 (Fe3O4@TUF) and Its Application for Adsorption of Methylene Blue
Thiourea-formaldehyde pre-polymer (TUF) was prepared by the reaction of thiourea and formaldehyde in basic medium and used as a coating material for magnetite Fe3O4. The synthesized polymer-coated microspheres (Fe3O4@TUF) were characterized using FTIR, TGA, SEM and TEM. The BET surface area was up to 1680 m² g⁻¹. The adsorption capacity of the product was evaluated through its adsorption of methylene blue (MB) in water under different pH values and different temperatures. We found that the adsorption process was well described by both the Langmuir and Freundlich isotherm models. The kinetic processes of MB adsorption onto Fe3O4@TUF were described in order to provide a clearer interpretation of the adsorption rate and uptake mechanism. The overall kinetic data were acceptably explained by a pseudo-second-order rate model. The evaluated ΔG° and ΔH° specify the spontaneous and exothermic nature of the reaction, and the adsorption takes place with a decrease in entropy (ΔS° is negative). The monolayer capacity for MB was up to 450 mg g⁻¹, one of the highest among similar polymeric products, owing to the large BET surface area.
Towards Establishing a Universal Theory of Project Management
Project management (PM) as a concept has evolved from the early 20th century into a recognized academic and professional discipline, and indications are that it has come to stay in the 21st century as a world-wide paradigm shift for managing successful construction projects. However, notwithstanding the strong inroads that PM has made in legitimizing its academic and professional status in construction management practice, the underlying philosophies are still based on cases and conventional practices. An important theoretical issue yet to be addressed is the lack of a universal theory that offers philosophical legitimacy for the PM concept as a uniquely specialized management concept. Here, it is hypothesized that the law of entropy, the theory of uncertainties and the theory of risk management offer plausible explanations for addressing the lacuna of what constitutes PM theory. The theoretical bases of these plausible underlying theories are argued, and attempts are made to establish the functional relationships that exist between these theories and the PM concept. The paper then draws on data related to the success and/or failure of a number of construction projects to validate the theory.
Removal of Lead from Aqueous Solutions by Biosorption on Pomegranate Skin: Kinetics, Equilibrium and Thermodynamics
In this study, pomegranate skin, a material suitable for the conditions in Algeria, was chosen as the adsorbent material for the removal of lead from an aqueous solution. Biosorption studies were carried out under various parameters such as adsorbent mass, pH, contact time, initial metal concentration, and temperature. The experimental results show that the percentage of biosorption increases with an increase in the biosorbent mass (0.25 g, 0.035 mg/g; 1.25 g, 0.096 mg/g). The maximum biosorption occurred at a pH value of 8 for lead. The equilibrium uptake increased with an increase in the initial metal concentration in solution (Co = 4 mg/L, qt = 1.2 mg/g). The biosorption kinetic data were properly fitted with the pseudo-second-order kinetic model. The best fit was obtained by the Langmuir model, with high correlation coefficients (R² > 0.995) and a maximum monolayer adsorption capacity of 0.85 mg/g for lead. The adsorption of lead was exothermic in nature (ΔH° = -17.833 kJ/mol for Pb(II)), and the reaction was accompanied by a decrease in entropy (ΔS° = -0.056 kJ/(K·mol)). The Gibbs energy (ΔG°) increased from -1.458 to -0.305 kJ/mol for Pb(II) when the temperature was increased from 293 to 313 K.
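The reported thermodynamic quantities are tied together by the relation ΔG° = ΔH° - T·ΔS°, and the numbers in this abstract can be checked directly against it:

```python
def gibbs_free_energy(dH, dS, T):
    """Delta G = Delta H - T * Delta S, with Delta H and the result in
    kJ/mol, Delta S in kJ/(K mol), and T in K; this is the relation
    behind the temperature trend of the reported Gibbs energies."""
    return dH - T * dS
```

With ΔH° = -17.833 kJ/mol and ΔS° = -0.056 kJ/(K·mol), the relation reproduces the reported ΔG° values at 293 K and 313 K to within rounding, and it also explains why ΔG° becomes less negative as the temperature rises for this exothermic, entropy-decreasing process.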
Adsorption Isotherm, Kinetic and Mechanism Studies of Some Substituted Phenols from Aqueous Solution by Jujuba Seeds Activated Carbon
Activated carbon was prepared from jujube seeds by chemical activation with potassium hydroxide (KOH), followed by pyrolysis at 800°C. Batch studies were conducted for kinetic, thermodynamic and equilibrium studies on the adsorption of phenol (P) and 2,4-dichlorophenol (2,4-DCP) from aqueous solution; the adsorption capacities followed the order 2,4-dichlorophenol > phenol. The operating variables studied were initial phenol concentration, contact time, temperature and solution pH. Results show that a pH value of 7 is favorable for the adsorption of phenols. The sorption data have been analyzed using the Langmuir and Freundlich isotherms; the isotherm data followed the Langmuir model. The adsorption processes conformed to pseudo-second-order rate kinetics. Thermodynamic parameters such as the enthalpy, entropy and Gibbs free energy changes were also calculated, and it was found that the sorption of phenols by jujube seeds activated carbon is a spontaneous process. The maximum adsorption capacities for phenol and 2,4-dichlorophenol were 142.85 mg g⁻¹ and 250 mg g⁻¹, respectively.
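The Langmuir analysis used across these sorption studies can be sketched via the standard linearization Ce/qe = Ce/qmax + 1/(KL·qmax). This is a generic illustration of the isotherm fit, not the paper's exact data treatment:

```python
import numpy as np

def fit_langmuir(Ce, qe):
    """Least-squares fit of the linearized Langmuir isotherm: a plot of
    Ce/qe against Ce is a straight line with slope 1/qmax and intercept
    1/(KL*qmax). Returns (qmax, KL)."""
    Ce = np.asarray(Ce, dtype=float)
    y = Ce / np.asarray(qe, dtype=float)
    slope, intercept = np.polyfit(Ce, y, 1)
    return 1.0 / slope, slope / intercept
```

The fitted qmax is the maximum monolayer capacity reported in these abstracts (e.g. 250 mg g⁻¹ for 2,4-dichlorophenol here), and KL enters the thermodynamic analysis via the equilibrium constant.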
Dye Removal from Aqueous Solution by Regenerated Spent Bleaching Earth
The recycling of spent bleaching earth (SBE) and its utilization as an adsorbent to eliminate dyes from aqueous solution was studied. Organic solvent extraction and subsequent thermal treatment were carried out to recover and reactivate the SBE. The effects of pH, temperature, the dye's initial concentration, and contact time on dye removal using recycled spent bleaching earth (RSBE) were investigated. Recycled SBE showed a better removal affinity for cationic than for anionic dyes. The maximum removal was achieved at pH 2 and 8 for anionic and cationic dyes, respectively. The kinetic data matched the pseudo-second-order model. The adsorption phenomenon governing this process was identified by the Langmuir and Freundlich isotherms for the anionic dye, while the Freundlich model represented the sorption process for the cationic dye. The changes of Gibbs free energy (ΔG°), enthalpy (ΔH°), and entropy (ΔS°) were computed and compared through a thermodynamic study for both dyes.
Adsorption of Cd2+ from Aqueous Solutions Using Chitosan Obtained from a Mixture of Littorina littorea and Achatinoidea Shells
Adsorption of Cd²⁺ ions from aqueous solution by chitosan, a natural polymer obtained from a mixture of the exoskeletons of Littorina littorea (periwinkle) and Achatinoidea (snail), was studied at varying adsorbent dose, contact time, metal ion concentration, temperature and pH using the batch adsorption method. The equilibrium adsorption isotherms were determined between 298 K and 345 K. The adsorption data were fitted to the Langmuir, Freundlich and pseudo-second-order kinetic models. The Langmuir isotherm model fitted the experimental data best, with a maximum monolayer adsorption of 35.1 mg kg⁻¹ at 308 K. The entropy and enthalpy of adsorption were -0.1121 kJ mol⁻¹ K⁻¹ and -11.43 kJ mol⁻¹, respectively. The Freundlich adsorption model gave Kf and n values consistent with good adsorption. The pseudo-second-order model gave a straight-line plot with a rate constant of 1.291 × 10⁻³ kg mg⁻¹ min⁻¹ and a qe value of 21.98 mg kg⁻¹, indicating that the adsorption of cadmium ions by the chitosan composite followed pseudo-second-order kinetics.
Cobalt Ions Adsorption by Quartz and Illite and Calcite from Waste Water
Adsorption of cobalt ions on quartz, illite, and calcite from waste water was investigated, and the effect of pH on the adsorption of cobalt ions was studied. The maximum cobalt uptake of all three adsorbents increases with increasing solution temperature; the maximum capacities were 4.66 mg/g for quartz, 3.94 mg/g for illite, and 3.44 mg/g for calcite. The enthalpy, Gibbs free energy, and entropy for adsorption of cobalt ions on the three adsorbents were calculated. The adsorption of cobalt ions was found to be an endothermic process; consequently, increasing the temperature increases the cobalt uptake of the adsorbents, so the adsorption process is preferred at higher temperatures. The equilibrium adsorption data were correlated using the Langmuir and Freundlich models, and the experimental data for the adsorbents correlated well with the Freundlich model.
Production Line Layout Planning Based on Complexity Measurement
Mass-customized production increases the difficulty of production line layout planning. The material distribution process for a variety of parts is very complex, which greatly increases material handling and logistics costs. In response to this problem, this paper presents an approach to production line layout planning based on complexity measurement. Firstly, by analyzing the factors influencing equipment layout, a complexity model of the production line is established using information entropy theory. Then, the part logistics cost is derived for different part varieties. Furthermore, an optimization function with two objectives, the lowest cost and the least configuration complexity, is built. Finally, the validity of the function is verified in a case study. The results show that the proposed approach can find the layout scheme with the lowest logistics cost and the least complexity. Optimized production line layout planning can effectively improve production efficiency and equipment utilization at the lowest cost and complexity.
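The information-entropy complexity measure referred to above can be sketched as the Shannon entropy of the material-flow distribution over part variants; the exact model of the paper is not given, so the flow figures and scoring below are a hypothetical illustration:

```python
import math

# Minimal sketch: score configuration complexity by the Shannon entropy
# H = -sum(p * log2(p)) of material-flow shares per part variant.
def layout_complexity(flows):
    total = sum(flows)
    probs = [f / total for f in flows if f > 0]
    return -sum(p * math.log2(p) for p in probs)

# Hypothetical material-flow volumes of four part variants on one line
uniform = layout_complexity([25, 25, 25, 25])   # maximally mixed flows
skewed = layout_complexity([85, 5, 5, 5])       # dominated by one part
print(uniform, skewed)
```

A line whose flow is spread evenly over many variants scores higher (more complex) than one dominated by a single part, which is the intuition behind minimizing configuration complexity alongside logistics cost.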
Design of Advanced Materials for Alternative Cooling Devices
More efficient cooling systems are needed to reduce building energy consumption and environmental impact. At present, researchers focus mainly on environmentally friendly magnetic materials and their potential application in cooling devices. The magnetic materials presented in this project belong to a group known as Heusler alloys. These compounds are characterized by a strong coupling between their structure and magnetic properties; usually, a change in one of them alters the other, which implies changes in other electronic or structural properties, such as the magnetic shape-memory response or the magnetocaloric effect. Those properties and their dependence on external fields make these materials interesting, both from a fundamental point of view and for their different possible applications. In this work, first-principles and Monte Carlo simulations have been used to calculate exchange couplings and magnetic properties as a function of an applied magnetic field in Heusler alloys. As a result, we found a large dependence of the magnetic susceptibility, entropy and heat capacity on the applied field, indicating that the magnetic field can be used in experiments to trigger particular magnetic properties in materials, which is necessary to develop solid-state refrigeration devices.
Digital Image Steganography with Multilayer Security
In this paper, a new method is developed for hiding an image in a digital image with multilayer security. In the proposed method, the secret image is first encrypted using a flexible matrix-based symmetric key to add the first layer of security. Another layer of security is then added by encrypting the ciphered data using a Pythagorean-theorem-based method. The ciphered data bits (4 bits) produced after double encryption are then embedded within the cover image in the spatial domain using Least Significant Bit (LSB) substitution. To improve the image quality of the stego-image, an improved form of the pixel adjustment process is proposed. To evaluate the effectiveness of the proposed method, image quality metrics including Peak Signal-to-Noise Ratio (PSNR), Mean Square Error (MSE), entropy, correlation, mean value and the Universal Image Quality Index (UIQI) are measured. It has been found experimentally that the proposed method provides high security as well as robustness; the results of this study are quite promising.
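The final embedding stage described above, LSB substitution in the spatial domain, can be sketched as follows. The pixel values and the 4-bit cipher payload are hypothetical, and the two encryption layers and pixel adjustment step are omitted:

```python
# Minimal LSB-substitution sketch: each payload bit replaces the
# least significant bit of one cover pixel byte.
def embed_lsb(pixels, bits):
    """Return stego pixels with payload bits in the LSBs."""
    return [(p & 0xFE) | b for p, b in zip(pixels, bits)]

def extract_lsb(stego, n):
    """Recover the first n embedded bits from the stego pixels."""
    return [p & 1 for p in stego[:n]]

cover = [120, 121, 200, 55]   # hypothetical grayscale pixel values
payload = [1, 0, 1, 1]        # 4 ciphered bits after double encryption
stego = embed_lsb(cover, payload)
print(stego, extract_lsb(stego, 4))
```

Because only the lowest bit of each byte changes, the per-pixel distortion is at most 1, which is why LSB embedding keeps PSNR high before any further pixel adjustment.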
The Co-Simulation Interface SystemC/MATLAB Applied to JPEG and SDR Applications
Functional verification is a major part of today's system design task. Several approaches are available for verification at a high abstraction level, where designs are often modeled using MATLAB/Simulink; however, the diversity of approaches is a barrier to a unified verification flow. In this paper, we propose a co-simulation interface between SystemC and MATLAB/Simulink to enable functional verification of multi-abstraction-level designs. The resulting verification flow is tested on the JPEG compression algorithm; the required synchronization of both simulation environments, as well as data type conversion, is solved by the proposed co-simulation flow. The JPEG encoder is divided into two parts: the DCT, implemented in SystemC, represents the hardware part, while quantization and entropy encoding, implemented in MATLAB, form the software part. Communication and synchronization between these two parts use an S-Function and the MATLAB engine in Simulink. On this premise, the study introduces a new hardware SystemC implementation of the DCT. Compared with a pure software simulation, we observe a reduction in simulation time of 88.15% for JPEG, and the design efficiency is 90% for SDR.
Improvement of Bone Scintigraphy Images Using Image Texture Analysis
Image enhancement allows the observer to see details that may not be immediately visible in the original image; it is the transformation or mapping of one image to another. The enhancement of certain features in images can be accompanied by undesirable effects. To achieve maximum image quality after denoising, a new low-order, locally adaptive Gaussian scale mixture model and a median filter are presented, which account for the nonlinearities arising from scattering, together with a new nonlinear approach for contrast enhancement of bones in bone scan images using both gamma correction and negative transform methods. The usual assumption of gamma and Poisson statistics leads to overestimation of the noise variance in regions of low intensity and underestimation in regions of high intensity, and therefore to non-optimal results. The contrast enhancement results were obtained and evaluated using MATLAB on nuclear medicine images of the bones. The optimal number of bins, in particular the number of gray levels, is chosen automatically using entropy and the average distance between the histogram of the original gray-level distribution and the contrast enhancement function's curve.
Feature Extractions of EMG Signals during a Constant Workload Pedaling Exercise
Electromyography (EMG) is an important indicator during exercise, as it is closely related to the level of muscle activation. This work quantifies the muscle conditions of the lower limbs in a constant workload exercise. Surface EMG signals of the vastus lateralis (VL), vastus medialis (VM), rectus femoris (RF), gastrocnemius medialis (GM), gastrocnemius lateralis (GL) and soleus (SOL) were recorded from fourteen healthy males. The EMG signals were segmented into two phases: the activation segment (AS) and the relaxation segment (RS). Period entropy (PE), peak count (PC), zero crossing (ZC), wave length (WL), mean power frequency (MPF), median frequency (MDF) and root mean square (RMS) were calculated to provide quantitative information on the measured EMG segments. The outcomes reveal that PE, PC, ZC and RMS changed significantly (p < .001), WL changed moderately (p < .01), and MPF and MDF showed no change (p > .05) during exercise. The results also suggest that the RS is preferred for performance evaluation, since the features extracted in the AS are usually affected directly by the amplitudes. It was further found that the VL exhibits the most significant changes among the six muscles during the pedaling exercise. The proposed work could be applied to quantify stamina and to predict the instantaneous muscle status of athletes.
Numerical Simulations of Electronic Cooling with In-Line and Staggered Pin Fin Heat Sinks
Three-dimensional incompressible turbulent fluid flow and heat transfer in pin fin heat sinks using air as the cooling fluid are studied numerically. Two kinds of pin fins, with circular and square cross sections, are compared in terms of thermal performance, in both in-line and staggered arrangements. The turbulent governing equations are solved using a control-volume-based finite-difference method. Numerical computations are performed with the realizable k–ε turbulence model for the parameters studied: the fin height H, fin diameter D, and Reynolds number Re in the ranges 7 ≤ H ≤ 10, 0.75 ≤ D ≤ 2, and 2000 ≤ Re ≤ 126000, respectively. The numerical results are validated against available experimental data in the literature, with good agreement. The results indicate that circular pin fins are more streamlined than square pin fins: their pressure drop is smaller, but their heat transfer is not as good. The thermal performance of the staggered pin fins is better than that of the in-line pin fins because the staggered arrangement produces larger flow disturbance. Both arrangements show the same behavior for thermal resistance, pressure drop, and entropy generation.
Nonlinear Analysis of Postural Sway in Multiple Sclerosis
Multiple sclerosis (MS) is a disease that affects the central nervous system and causes balance problems. In the clinic, this disorder is usually evaluated using static posturography. Linear and nonlinear measures extracted from the posturographic data (i.e., center of pressure, COP) recorded during a balance test have been used to analyze the postural control of MS patients. In this study, the trend (TREND) and the sample entropy (SampEn), two nonlinear parameters, were chosen to investigate their relationships with the expanded disability status scale (EDSS) score. Forty volunteers with different EDSS scores participated in our experiments with eyes open (EO) and eyes closed (EC). TREND and two types of SampEn (SampEn1 and SampEn2) were calculated for each combined COP position signal. The results show that TREND had a weak negative correlation with EDSS, while SampEn2 had a strong positive correlation with EDSS. Compared to TREND and SampEn1, SampEn2 showed a more significant correlation with EDSS and an ability to discriminate the MS patients in the EC case. In addition, the outcome of the study suggests that multi-dimensional nonlinear analysis could provide information about the impact of disability progression in MS on the dynamics of the COP data.
Adsorption of Malachite Green Dye on Graphene Oxide Nanosheets from Aqueous Solution: Kinetics and Thermodynamics Studies
In this study, graphene oxide (GO) nanosheets were synthesized and characterized using different tools such as X-ray diffraction, Fourier transform infrared (FT-IR) spectroscopy, BET specific surface area analysis and transmission electron microscopy (TEM). The prepared GO was investigated for the removal of malachite green, a cationic dye, from aqueous solution by adsorption, since GO nanosheets can be expected to be a good adsorbent for cationic species. The adsorption of malachite green onto the GO nanosheets was carried out under different experimental conditions, varying the contact time, adsorbate concentration, pH, and temperature. The kinetic data were analyzed using four models, the pseudo-first-order, pseudo-second-order, intraparticle diffusion, and Boyd models, to understand the adsorption behavior of malachite green onto the GO nanosheets and the adsorption mechanism. The adsorption isotherm was investigated at 25, 35 and 45 °C, and the equilibrium data were fitted well by the Langmuir model. Various thermodynamic parameters such as the Gibbs free energy (ΔG°), enthalpy (ΔH°), and entropy (ΔS°) changes were also evaluated. The interaction of malachite green with the GO nanosheets was investigated by FT-IR spectroscopy.
Gender Based Variability Time Series Complexity Analysis
Nonlinear methods of heart rate variability (HRV) analysis are becoming more popular. It has been observed that complexity measures quantify the regularity and uncertainty of cardiovascular RR-interval time series. In the present work, SampEn was evaluated in healthy normal sinus rhythm (NSR) male and female subjects for different data lengths and tolerance levels r. It is demonstrated that SampEn is small for higher values of the tolerance r. The SampEn value of the healthy female group is higher than that of the healthy male group for short data lengths; as the data length increases, the two groups overlap and become difficult to distinguish. SampEn thus gives inaccurate results by assigning the higher value to the female group, because male subjects have a more complex HRV pattern than female subjects. This traditional algorithm therefore exhibits higher complexity for healthy female subjects than for healthy male subjects, which is a misleading observation. This may be because SampEn does not account for the multiple time scales inherent in physiologic time series, leaving the hidden spatial and temporal fluctuations unexplored.
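The SampEn algorithm discussed in the two abstracts above can be sketched compactly: count template matches of length m and m+1 within tolerance r (excluding self-matches) and take the negative log of their ratio. The RR-interval series (in ms) and parameter choices below are hypothetical:

```python
import math

def sampen(x, m, r):
    """Sample entropy of series x with template length m, tolerance r."""
    n = len(x)

    def count(mm):
        # pairs of length-mm templates within Chebyshev distance r
        c = 0
        for i in range(n - mm):
            for j in range(i + 1, n - mm):
                if max(abs(x[i + k] - x[j + k]) for k in range(mm)) <= r:
                    c += 1
        return c

    b, a = count(m), count(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

# Hypothetical RR intervals in milliseconds; r is an absolute tolerance
# here (often chosen as a fraction of the series standard deviation)
rr = [800, 820, 810, 850, 790, 830, 800, 840, 810, 820]
print(sampen(rr, m=2, r=20))
```

Lower values indicate more regular series; the dependence on r and on the series length visible in this formula is exactly what the abstract reports as a limitation for short recordings.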
Identifying and Prioritizing the Factors Affecting Employment in Economic Sector Jobs: An MADM Approach Using SAW, TOPSIS and POSET (Ministry of Cooperatives, Labour and Social Welfare, Varamin City)
The negative consequences of unemployment, including increasing age at marriage, addiction, depression, drug trafficking, divorce, elite emigration, frustration, delinquency, theft and murder, have made employment an important issue for economic planners, public authorities and chief executives in different countries and at different times; all countries face the problem of unemployment. By identifying the factors influencing employment and building on strengths, basic steps can be taken to reduce unemployment. In this study, based on interviews, 12 variables were identified as the most significant factors affecting employment in three main business sectors. A questionnaire was then distributed to 8 ministry experts; Shannon entropy was used to weight the criteria, and the SAW and TOPSIS methods were used for ranking. Since the results of these methods were not consistent with each other, the POSET integration technique (involving the average, Borda and Copeland methods) was used to reach a general consensus on the ranking. Ultimately, no difference was found among the economic sector jobs in terms of increased employment.
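The weighting/ranking pipeline described above can be sketched as Shannon-entropy criterion weights followed by Simple Additive Weighting (SAW); the TOPSIS and POSET stages are omitted, and the 3×3 decision matrix is a hypothetical illustration, not the study's data:

```python
import math

def entropy_weights(matrix):
    """Shannon-entropy objective weights for the criteria (columns)."""
    m = len(matrix)
    weights = []
    for col in zip(*matrix):
        s = sum(col)
        p = [v / s for v in col]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        weights.append(1 - e)              # degree of divergence
    total = sum(weights)
    return [w / total for w in weights]

def saw_scores(matrix, weights):
    """Simple Additive Weighting; all criteria treated as benefits."""
    maxes = [max(col) for col in zip(*matrix)]
    return [sum(w * v / mx for w, v, mx in zip(weights, row, maxes))
            for row in matrix]

# Rows: three hypothetical economic sectors; columns: employment criteria
decision = [[70, 3, 40],
            [50, 9, 60],
            [90, 5, 30]]
w = entropy_weights(decision)
scores = saw_scores(decision, w)
print(w, scores)
```

Entropy weighting favors criteria whose values vary most across alternatives, which is why the second (most dispersed) criterion dominates the ranking in this toy matrix.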
Sorption of Crystal Violet from Aqueous Solution Using Chitosan−Charcoal Composite
The study investigated the removal efficiency of crystal violet from aqueous solution using a chitosan-charcoal composite as adsorbent. Deproteination was carried out by placing 200 g of powdered snail shell in 4% w/v NaOH for 2 hours. The sample was then placed in 1% HCl for 24 hours to remove CaCO₃. Deacetylation was done by boiling in 50% NaOH for 2 hours. 10% oxalic acid was used to dissolve the chitosan before mixing with charcoal at 55 °C to form the composite. The composite was characterized by Fourier transform infrared spectroscopy and scanning electron microscopy. The adsorption efficiency was evaluated by varying the solution pH, contact time, initial concentration and adsorbent dose. Maximum removal of crystal violet by the composite and by activated charcoal was attained at pH 10, while maximum removal by chitosan was achieved at pH 8. The results showed that adsorption followed the pseudo-second-order rate equation and fitted the Langmuir and Freundlich isotherms. The data showed that the composite was best suited for crystal violet removal and also performed relatively well in the removal of alizarin red. Thermodynamic parameters, namely the enthalpy change (ΔH°), free energy change (ΔG°) and entropy change (ΔS°), indicate that the adsorption of crystal violet was endothermic, spontaneous and feasible.
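The Langmuir isotherm fits recurring in these adsorption abstracts follow the linearized form Ce/qe = 1/(qm·KL) + Ce/qm, which can be fitted by linear regression. The equilibrium concentrations and uptakes below are hypothetical, not this study's data:

```python
def fit_langmuir(ce, qe):
    """Linear regression of Ce/qe on Ce; returns (qm, KL)."""
    y = [c / q for c, q in zip(ce, qe)]
    n = len(ce)
    mx, my = sum(ce) / n, sum(y) / n
    slope = sum((c - mx) * (yi - my) for c, yi in zip(ce, y)) / \
            sum((c - mx) ** 2 for c in ce)
    intercept = my - slope * mx
    qm = 1.0 / slope          # monolayer capacity, mg/g
    kl = slope / intercept    # Langmuir constant, L/mg
    return qm, kl

# Hypothetical equilibrium concentrations (mg/L) and uptakes (mg/g),
# generated from the model so the fit recovers the parameters exactly
ce = [2.0, 5.0, 10.0, 20.0, 40.0]
qm_true, kl_true = 50.0, 0.2
qe = [qm_true * kl_true * c / (1 + kl_true * c) for c in ce]

qm, kl = fit_langmuir(ce, qe)
print(qm, kl)
```

The recovered qm is the "maximum monolayer adsorption" figure these abstracts report, and the linearity of Ce/qe versus Ce is the usual test of Langmuir behavior.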
Enhanced Biosorption of Copper Ions by Luffa Cylindrica: Biosorbent Characterization and Batch Experiments
The adsorption ability of powdered activated carbon (PAC) derived from Luffa cylindrica was investigated in an attempt to produce more economic and effective sorbents for the control of Cu(II) ions in industrial liquid streams. Carbonaceous sorbents derived from local Luffa cylindrica were prepared by chemical activation using ZnCl₂ as the activating reagent, and the adsorption of Cu(II) from aqueous solutions was investigated. The effects of pH, initial adsorbent concentration, particle size, initial metal ion concentration and temperature were studied in batch experiments. The maximum adsorption capacity of copper onto the grafted Luffa cylindrica fiber was found to be 14.23 mg/g, with the best fit given by the Langmuir adsorption isotherm. The thermodynamic parameters, the enthalpy change ΔH (-0.823 kJ/mol), entropy change ΔS (-9.35 J/mol·K) and free energy change ΔG (-1.56 kJ/mol), were also calculated; the adsorption process was found to be spontaneous and exothermic in nature. Finally, the Luffa cylindrica was evaluated by FTIR, optical microscopy (MO) and X-ray diffraction in order to determine whether the biosorption process modifies its chemical structure and morphology. Luffa cylindrica proved to be an efficient biomaterial for heavy metal separation purposes that is not altered by the process.
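The thermodynamic parameters quoted in abstracts like this one are typically obtained from a van't Hoff analysis, ln K = -ΔH/(R·T) + ΔS/R, fitted against 1/T, with ΔG = ΔH - TΔS. The equilibrium constants below are hypothetical illustration values, not the paper's data:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def vant_hoff(temps, ks):
    """Fit ln K = -dH/(R*T) + dS/R; returns (dH in J/mol, dS in J/(mol*K))."""
    x = [1.0 / t for t in temps]
    y = [math.log(k) for k in ks]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    intercept = my - slope * mx
    return -slope * R, intercept * R

# Hypothetical exothermic, ordering process (illustrative values only)
temps = [298.0, 308.0, 318.0]
dh_true, ds_true = -12000.0, -20.0
ks = [math.exp(-dh_true / (R * t) + ds_true / R) for t in temps]

dh, ds = vant_hoff(temps, ks)
dg_298 = dh - 298.0 * ds   # Gibbs free energy change at 298 K
print(dh, ds, dg_298)
```

A negative ΔH signals an exothermic process and a negative ΔG a spontaneous one, which is how such sign patterns are read throughout these adsorption studies.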
Hierarchical Filtering Method of Threat Alerts Based on Correlation Analysis
Nowadays, internet threats are enormous and increasing, yet the classification of the huge volume of alert messages generated in this environment is relatively monotonous. This affects the accuracy of network situation assessment and makes it inconvenient for security managers to deal with emergencies. In order to deal with potential network threats effectively and provide better data for network situation awareness, it is essential to build a hierarchical filtering method. This paper establishes a model for data monitoring, which filters the original data systematically to obtain a threat grade that can be stored for reuse. Firstly, the vulnerable resources, open ports and services of host devices are filtered. Then entropy theory is used to calculate the performance changes of the host devices at the time a threat occurs, and the data are filtered again. Finally, the changes in performance value at the time of the threat are sorted. Alerts and performance data collected in a real network environment are used for evaluation and analysis. The comparative experimental analysis shows that the method can filter threat alerts effectively.
Methyltrioctylammonium Chloride as a Separation Solvent for Binary Mixtures: Evaluation Based on Experimental Activity Coefficients
An ammonium-based ionic liquid, methyltrioctylammonium chloride [N₈₈₈₁][Cl], was investigated as a potential extraction solvent for volatile organic compounds (the solutes), including alkenes, alkanes, ketones, alkynes, aromatic hydrocarbons, THF, alcohols, thiophene, water and acetonitrile, on the basis of experimental activity coefficients at infinite dilution. The measurements were conducted by gas-liquid chromatography at four temperatures (313.15 to 343.15) K. The experimental activity coefficients at infinite dilution obtained across these temperatures were used to calculate the corresponding physicochemical properties at infinite dilution, such as the partial molar excess enthalpy, Gibbs free energy, and entropy term. Capacity and selectivity data for selected petrochemical extraction problems (heptane/thiophene, heptane/benzene, cyclohexane/cyclohexene, hexane/toluene, hexane/hexene) were computed from the activity coefficient data and compared to literature values for other ionic liquids. Evaluation of activity coefficients at infinite dilution expands knowledge and provides a good understanding of the interactions between the ionic liquid and the investigated compounds.
Geomorphology Evidence of Climate Change in Gavkhouni Lagoon, South East Isfahan, Iran
Gavkhouni lagoon, in the southeast of Isfahan (Iran), is a pluvial lake and legacy of the Quaternary era which emerged during periods of higher precipitation and lower evaporation. Climate change, shrinking water resources and the drying of the Zayandehrood freshwater input have increased entropy and activated a dynamic that is converting the lagoon into a playa. The morphometry of 61 polygonal clay microforms in wet-zone soil, 52 polygonal clay microforms in pediplain-zone soil and 63 microforms in sulfate soil was evaluated with a fractal model. After calculating the microforms' area-perimeter fractal dimension, their turbulence level was analyzed. The fractal dimensions (DAP) obtained from the analysis of the pediplain-zone, wet-zone, and sulfate soils are 1.21-1.39, 1.27-1.44 and 1.29-1.41, respectively, which is indicative of turbulence in these zones. Logarithmic graphs drawn for each region also show a linear relationship between the logarithms of the microforms' area and perimeter, with correlation coefficients (R²) larger than 0.96 for the wet zone, larger than 0.99 for the pediplain zone, and 0.9 for the sulfate zone. The increased turbulence in this region suggests a morphological transformation of the system and the lagoon's conversion to a new ecosystem, which can be accompanied by serious risks.
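The area-perimeter fractal dimension used above follows from the scaling relation P ∝ A^(DAP/2): regressing log P on log A gives DAP as twice the slope. The polygon measurements below are hypothetical, not the field data:

```python
import math

def fractal_dimension_ap(areas, perims):
    """Area-perimeter fractal dimension: DAP = 2 * slope of log P vs log A."""
    x = [math.log(a) for a in areas]
    y = [math.log(p) for p in perims]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return 2.0 * slope

# Hypothetical microform areas (cm^2) and perimeters (cm), generated
# with P = c * A**(D/2) for D = 1.30 so the estimate is exact
areas = [4.0, 9.0, 16.0, 25.0, 36.0]
perims = [2.5 * a ** 0.65 for a in areas]
print(fractal_dimension_ap(areas, perims))
```

A smooth boundary gives DAP near 1, while increasingly irregular (turbulent) polygon boundaries push DAP toward 2, which is how the 1.2-1.4 range in the abstract is interpreted.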
Removal of Basic Yellow 28 Dye from Aqueous Solutions Using Plastic Wastes
The removal of Basic Yellow 28 (BY28) from aqueous solutions by PMMA plastic wastes was investigated. The characteristics of the PMMA plastic wastes were determined by SEM, FTIR and chemical composition analysis. The effects of solution pH, initial BY28 concentration C, solid/liquid ratio R, and temperature T were studied in batch experiments. The Freundlich and Langmuir models were applied to the adsorption process, and the equilibrium was found to follow the Langmuir adsorption isotherm well. The kinetics of the adsorption of BY28 on the PMMA were evaluated with the pseudo-first-order and pseudo-second-order models, both of which correlated with the experimental data; the intraparticle diffusion model was also used in these experiments. The thermodynamic parameters, namely the enthalpy ΔH°, entropy ΔS° and free energy ΔG° of adsorption of BY28 on PMMA, were determined. The negative values of the Gibbs free energy ΔG° indicate the spontaneity of the adsorption of BY28 by PMMA, the negative values of ΔH° reveal the exothermic nature of the process, and the negative values of ΔS° suggest the stability of BY28 on the surface of the waste PMMA.
Energy and Exergy Analyses of Thin-Layer Drying of Pineapple Slices
Energy and exergy analyses of thin-layer drying of pineapple slices (Ananas comosus L.) were conducted in a laboratory tunnel dryer. Drying experiments were carried out at three temperatures (100, 115 and 130 °C) and an air velocity of 1.45 m/s. The effects of the drying variables on energy utilisation, energy utilisation ratio, exergy loss and exergy efficiency were studied. The enthalpy difference of the gas increased as the inlet gas temperature increased. At 75 minutes into the drying process, the outlet gas enthalpy reaches a maximum value very close to the inlet value and remains constant until the end of the drying process. This behaviour is due to the reduction of the total enthalpy within the system, in other words, the reduction of the effective heat transfer from the hot gas flow to the vegetable being dried. Further, the outlet entropy exhibits a significant increase that is due not only to the temperature variation, but also to the increase of the water vapour phase contained in the hot gas flow. The maximum of the exergy efficiency curve corresponds to the maximum observed in the drying rate curves; this maximum represents the stage when the available energy is used most efficiently in removing moisture from the solid. As the drying rate decreases, the available energy is employed less fully. The exergetic efficiency was directly dependent on the evaporation flux, and since convective drying is less efficient than other types of dryers, the exergetic efficiency is likely to take relatively low values.
Investigation of Complexity Dynamics in a DC Glow Discharge Magnetized Plasma Using Recurrence Quantification Analysis
Recurrence is a ubiquitous feature of any real dynamical system: the states along the phase space trajectory of a system have an inherent tendency to return to the same state, or a close one, after a certain time lapse. The recurrence quantification analysis (RQA) technique, based on this fundamental feature, detects the evolution of states under variation of a control parameter of the system. The paper investigates the nonlinear dynamical behavior of plasma floating potential fluctuations, obtained using a Langmuir probe at different magnetic fields under varying discharge voltages. The main RQA measures considered are determinism (DET), linemax and entropy. The increase of the DET and linemax variables indicates that the predictability and periodicity of the system are increasing. The linemax variable indicates that the chaoticity diminishes as the magnetic field decreases, while increasing the magnetic field enhances the chaotic behavior. The fractal property of the plasma time series, estimated by detrended fluctuation analysis (DFA), shows that the long-range correlation of the plasma fluctuations decreases while the fractal dimension increases with increasing magnetic field, which corroborates the RQA analysis.
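Two of the RQA measures named above can be sketched directly: determinism (DET) is the fraction of recurrence points lying on diagonal lines, and the RQA entropy is the Shannon entropy of the diagonal line-length distribution. The scalar series, threshold, and omission of embedding below are simplifying assumptions for illustration:

```python
import math

def recurrence_matrix(x, eps):
    """Thresholded pairwise-distance (recurrence) matrix."""
    n = len(x)
    return [[1 if abs(x[i] - x[j]) <= eps else 0 for j in range(n)]
            for i in range(n)]

def diagonal_lines(r, lmin=2):
    """Lengths (>= lmin) of diagonal line segments above the main diagonal."""
    n = len(r)
    lengths = []
    for d in range(1, n):
        run = 0
        for i in range(n - d):
            if r[i][i + d]:
                run += 1
            else:
                if run >= lmin:
                    lengths.append(run)
                run = 0
        if run >= lmin:
            lengths.append(run)
    return lengths

def det_and_entropy(r, lmin=2):
    """DET and Shannon entropy of the diagonal line-length distribution."""
    lines = diagonal_lines(r, lmin)
    recurrent = sum(r[i][j] for i in range(len(r))
                    for j in range(i + 1, len(r)))
    det = sum(lines) / recurrent if recurrent else 0.0
    counts = {}
    for length in lines:
        counts[length] = counts.get(length, 0) + 1
    ps = [c / len(lines) for c in counts.values()]
    ent = -sum(p * math.log(p) for p in ps)
    return det, ent

x = [0.1, 0.5, 0.2, 0.1, 0.5, 0.2, 0.1, 0.5, 0.2]  # period-3 signal
r = recurrence_matrix(x, eps=0.05)
det, ent = det_and_entropy(r)
print(det, ent)
```

A perfectly periodic signal puts every recurrence point on a diagonal line (DET = 1), which matches the abstract's reading of rising DET and linemax as growing predictability and periodicity.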
Preparation of Activated Carbon from Lignocellulosic Precursor for Dyes Adsorption
The synthesis and characterization of activated carbon from a local lignocellulosic precursor (Algerian alfa) were carried out for the removal of cationic dyes from aqueous solutions. The effects of the production variables, such as the impregnating chemical agent, impregnation ratio, activation temperature and activation time, were investigated. The carbon obtained under the optimum conditions (CaCl₂, impregnation ratio 1:1, 500 °C, 2 h) was characterized by various analytical techniques: scanning electron microscopy (SEM), infrared spectroscopy (FTIR) and the point of zero charge (pHpzc). Adsorption tests of methylene blue (MB) on the optimal activated carbon were conducted, studying the effects of contact time, adsorbent amount, initial dye concentration and pH. The adsorption equilibrium, examined using the Langmuir, Freundlich, Temkin and Redlich-Peterson models, reveals that the Langmuir model is the most appropriate to describe the adsorption process. The kinetics of MB sorption onto the activated carbon follow the pseudo-second-order rate expression. The thermodynamic analysis indicates that the adsorption process is spontaneous (ΔG° < 0) and endothermic (ΔH° > 0); the positive value of the standard entropy shows the affinity between the activated carbon and the dye. The present study showed that the optimal activated carbon prepared from Algerian alfa is an effective low-cost adsorbent that can be employed as an alternative to commercial activated carbon for the removal of MB dye from aqueous solution.
Determination of Biomolecular Interactions Using Microscale Thermophoresis
Characterization of biomolecular interactions, such as protein-protein, protein-nucleic acid or protein-small molecule, provides critical insights into cellular processes and is essential for the development of drug diagnostics and therapeutics. Here we present a novel, label-free, tether-free technology to analyze picomolar-to-millimolar affinities of biomolecular interactions: Microscale Thermophoresis (MST). The entropy of the hydration shell surrounding molecules determines their thermophoretic movement; MST exploits this principle by measuring interactions using optically generated temperature gradients. MST detects changes in the size, charge and hydration shell of molecules and measures biomolecular interactions under close-to-native conditions: immobilization-free and in the bioliquid of choice, including cell lysates and blood serum, without laborious sample purification. We demonstrate how MST determines the picomolar affinities of antibody::antigen interactions, and protein::protein interactions measured directly from cell lysates. MST assays are highly adaptable to the diverse requirements of different and complex biomolecules. NanoTemper's unique technology is ideal for studies requiring flexibility and sensitivity at the experimental scale, making MST suitable for basic research investigations and pharmaceutical applications.
Major Depressive Disorder: Diagnosis Based on Electroencephalogram Analysis
In this paper, a technique based on electroencephalogram (EEG) analysis is presented, aiming to diagnose major depressive disorder (MDD) in a potential population of MDD patients and healthy controls. EEG is an established clinical modality for applications such as seizure diagnosis, anesthesia monitoring, and detection of brain death or stroke, but its usability for psychiatric illnesses such as MDD is less studied. Therefore, for diagnostic purposes, two groups of study participants were recruited: 1) MDD patients and 2) healthy people as controls. The EEG data acquired from both groups were analyzed for inter-hemispheric asymmetry and the composite permutation entropy index (CPEI). To automate the process, the quantities derived from the EEG were used as inputs to classifiers such as logistic regression (LR) and support vector machines (SVM). The learning of these classification models was tested on a test dataset, and their efficiency is reported as the accuracy of classifying MDD patients versus controls, together with the corresponding sensitivities and specificities (LR = 81.7% and SVM = 81.5%). Based on the results, it is concluded that the derived measures are indicators for diagnosing MDD against a potential population of normal controls. In addition, the results motivate further exploration of other measures for the same purpose.
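The permutation entropy underlying the CPEI feature above can be sketched as follows: each window of the signal is mapped to the ordinal pattern of its samples, and the Shannon entropy of the pattern distribution is normalized by its maximum. The sample series, order and delay below are hypothetical choices for illustration, not the paper's EEG settings:

```python
import math

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy in [0, 1] of series x."""
    counts = {}
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = [x[i + k * delay] for k in range(order)]
        # ordinal pattern: ranks of the samples within the window
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        counts[pattern] = counts.get(pattern, 0) + 1
    ps = [c / n for c in counts.values()]
    h = -sum(p * math.log(p) for p in ps)
    return h / math.log(math.factorial(order))

regular = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]       # monotone: one pattern
irregular = [4, 7, 9, 10, 6, 11, 3, 5, 8, 2]   # mixed ordinal patterns
print(permutation_entropy(regular), permutation_entropy(irregular))
```

A monotone series uses a single ordinal pattern (entropy 0), while an irregular one spreads over many patterns, which is what makes permutation entropy a robust complexity feature for noisy signals such as EEG.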
Exergy Based Analysis of Parabolic Trough Collector Using Twisted-Tape Inserts
In this paper, an analytical investigation based on energy and exergy analysis of a parabolic trough collector (PTC) with alternate clockwise and counter-clockwise twisted-tape inserts in the absorber tube is presented. For fully developed flow under quasi-steady-state conditions, energy equations have been developed to analyze the rise in fluid temperature, thermal efficiency, entropy generation and exergy efficiency. The effect of system and operating parameters on performance has also been studied. A computer program based on the mathematical models was developed in C++ to estimate the temperature rise of the fluid and to evaluate performance under specified conditions. For the numerical simulations, four twist ratios, x = 2, 3, 4, 5, and mass flow rates of 0.06 kg/s to 0.16 kg/s, covering a Reynolds number range of 3000-9000, were considered. This study shows that twisted-tape inserts hold great promise for enhancing the performance of a PTC. Results show that, for x = 1, the Nusselt number/heat transfer coefficient is 3.528 and 3.008 times that of the plain absorber of the PTC at mass flow rates of 0.06 kg/s and 0.16 kg/s respectively, while the corresponding enhancement in thermal efficiency is 12.57% and 5.065% respectively. The exergy efficiency is found to be 10.61% and 10.97%, and the enhancement factor 1.135 and 1.048, for the same set of conditions.
Correlations between Wear Rate and Energy Dissipation Mechanisms in a Ti6Al4V–WC/Co Sliding Pair
The prediction of the wear rate of rubbing pairs has attracted the interest of many researchers for years. It has recently been proposed that the sliding wear rate can be inferred from the energy rate dissipated by the tribological pair. In this paper, some of the dissipative mechanisms present in a pin-on-disc configuration are discussed, and both analytical and numerical calculations are carried out. Three dissipative mechanisms were studied: first, the energy release due to temperature gradients within the solid; second, the heat flow from the solid to the environment; and third, the energy loss due to abrasive damage of the surface. The finite element method (FEM) was used to calculate the dynamics of heat transfer within the solid, with the aid of commercial software. Validation of the FEM model was assisted by virtual and laboratory experimentation at different operating points (sliding velocity and contact geometry). The materials for the experiments were Ti6Al4V alloy and tungsten carbide (WC-Co). The results showed that the sliding wear rate has a linear relationship with the energy dissipation flow. It was also found that energy loss due to micro-cutting is relevant for the system. This mechanism changes if the sliding velocity and pin geometry are modified, though the degradation coefficient continues to exhibit linear behavior. The least relevant dissipation mechanism for all the cases studied was found to be the energy release by temperature gradients in the solid.
Typology of Customers in Fitness Centres
The main purpose of our study is to identify the basic types of fitness customers. This paper aims to create a specific customer typology for today's fitness centres in the region of Prague. Our suggested typology of Prague fitness centre customers is based on answers to the questions: What are the customers like, what are their preferences, and what kinds of services do they use most often in Prague fitness centres? These are the main aspects of the presented typology. A survey was conducted in May 2012 on a sample of 1004 respondents from 48 fitness centres. We used questionnaires and latent class analysis (LCA) for the assessment and interpretation of the data. Gender was the main filtering criterion; the sample comprised 522 males and 482 females. We identified six segments of typical customers, of which three are male and three are female. Each segment is shaped primarily by the age of the customers, from which further characteristics, such as education, income and marital status, can be derived. Male segments use the main workout area above all, whilst female segments use a much wider range of the services offered, for example group exercises, personal training, and cardio theatres. The LCA method was found to be the most suitable tool, because cluster analysis is very limited in the forms and numbers of variables and indicators. Models of three latent classes for each gender are optimal, as demonstrated by the entropy indices and the matrices of the likelihood of class membership. A possible weak point of the survey is the selection of fitness centres, because the market in Prague is quite specific.
Species Distribution Modelling for Assessing the Effect of Land Use Changes on the Habitat of Endangered Proboscis Monkey (Nasalis larvatus) in Kalimantan, Indonesia
The proboscis monkey is endemic to the island of Borneo and is classified as endangered by the IUCN (International Union for Conservation of Nature). The monkey occupies a specific habitat and is sensitive to habitat disturbances. As a consequence of increasing rates of land-use change over the last four decades, its population has reportedly decreased significantly. We quantified the effect of land-use change on the proboscis monkey's habitat through a species distribution modelling (SDM) approach using the Maxent software. We collected presence data and environmental variables, i.e., land cover, topography, bioclimate, distance to the river, distance to the road, and distance to anthropogenic disturbance, to generate predictive distribution maps of the monkeys. We compared the prediction maps for 2000 and 2015 to represent the current habitat of the monkey. We then overlaid the monkey's predictive distribution map with the existing protected areas to investigate whether the habitat of the monkey is covered by the protected-area network. The results showed that almost 50% of the monkey's habitat was lost as an effect of land-use change, and that only 9% of the current proboscis monkey habitat lies within protected areas. These results are important for a conservation master plan for the endangered proboscis monkey and provide scientific guidance for future development that incorporates biodiversity concerns.
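The habitat-loss and protected-area figures come from overlaying thresholded suitability maps. A minimal sketch of that overlay step, using toy binary masks in place of the actual Maxent predictions (the grids and cell counts here are invented for illustration), could look like:

```python
import numpy as np

# toy binary habitat-suitability masks (True = suitable), standing in for
# thresholded Maxent predictions for 2000 and 2015, plus a protected-area mask
h2000 = np.zeros((10, 10), dtype=bool); h2000[:, :6] = True   # 60 suitable cells
h2015 = np.zeros((10, 10), dtype=bool); h2015[:, :3] = True   # 30 suitable cells
pa    = np.zeros((10, 10), dtype=bool); pa[:3, :1]  = True    # 3 protected cells

# fraction of the 2000 habitat no longer suitable in 2015
habitat_loss = (h2000 & ~h2015).sum() / h2000.sum()
# fraction of the current (2015) habitat inside protected areas
protected_share = (h2015 & pa).sum() / h2015.sum()
print(habitat_loss, protected_share)  # → 0.5 0.1
```

With real rasters the same boolean algebra applies cell by cell, after reprojecting all layers to a common grid.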
Reentrant Spin-Glass State Formation in Polycrystalline Er₂NiSi₃
Magnetically frustrated systems are of great interest and one of the most attractive topics for researchers in condensed matter physics, due to their various interesting properties, viz. ground-state degeneracy, finite entropy at zero temperature, lowering of the ordering temperature, etc. Ternary intermetallics with the composition RE₂TX₃ (RE = rare-earth element, T = d-electron transition metal and X = p-electron element) crystallize in the hexagonal AlB₂-type crystal structure (space group P6/mmm). In a hexagonal crystal structure with antiferromagnetic interaction between the moments, the central moment is geometrically frustrated. Magnetic frustration, along with the disordered arrangement of non-magnetic ions, is the building block for metastable spin-glass ground-state formation in most compounds of this stoichiometry. The newly synthesized compound Er₂NiSi₃ forms in single phase in the AlB₂-type structure with space group P6/mmm. The compound orders antiferromagnetically below 5.4 K, and spin freezing of the frustrated magnetic moments occurs below 3 K. The compound shows magnetic relaxation behavior and a magnetic memory effect below its freezing temperature. Neutron diffraction patterns for temperatures below the spin-freezing temperature have been analyzed using the FULLPROF software package. Diffuse magnetic scattering at low temperatures confirms spin-glass state formation in the compound.
Pressure-Controlled Dynamic Equations of the PFC Model: A Mathematical Formulation
The phase-field-crystal (PFC) approach is a density-functional-type material model with atomic resolution on a diffusive timescale. Spatially, the model incorporates the periodic nature of crystal lattices and can naturally exhibit elasticity, plasticity and crystal defects such as grain boundaries and dislocations. Temporally, the model operates on a diffusive timescale, which bypasses the need to resolve prohibitively small atomic-vibration time steps. The PFC model has been used to study many material phenomena such as grain growth, elastic and plastic deformations, and solid-solid phase transformations. In this study, the pressure-controlled dynamic equations for the PFC model were developed to simulate a single-component system under externally applied pressure; these coupled equations are important for studies of deformable systems such as those under constant pressure. The formulation is based on non-equilibrium thermodynamics and the thermodynamics of crystalline solids. To obtain the equations, the entropy variation around the equilibrium point was derived; the resulting driving forces and fluxes around equilibrium were then obtained and rewritten in terms of conventional thermodynamic quantities. These dynamic equations differ from the recently proposed ones and should provide a more rigorous description of the system dynamics under externally applied pressure.
Improving Fault Tolerance and Load Balancing in Heterogeneous Grid Computing Using Fractal Transform
The popularity of the Internet and the availability of powerful computers and high-speed networks as low-cost commodity components are changing the way we use computers today. These technical opportunities have led to the possibility of using geographically distributed and multi-owner resources to solve large-scale problems in science, engineering, and commerce. Recent research on these topics has led to the emergence of a new paradigm known as Grid computing. To achieve the promising potential of these tremendous distributed resources, effective and efficient load balancing algorithms are fundamentally important. Unfortunately, load balancing algorithms designed for traditional parallel and distributed systems, which usually run on homogeneous and dedicated resources, do not work well in the new circumstances. In this paper, the concept of a fast fractal transform in heterogeneous grid computing, based on the R-tree and the domain-range entropy, is proposed to improve the fault tolerance and load balancing algorithm by improving connectivity, communication delay, network bandwidth, resource availability, and resource unpredictability. A novel two-dimensional figure of merit is suggested to describe the network effects on load-balance and fault-tolerance estimation. Fault tolerance is enhanced by adaptively decreasing the replication time and message cost, while load balance is enhanced by adaptively decreasing the mean job response time. Experimental results show that the proposed method yields superior performance over other methods.
Project Knowledge Harvesting: The Case of Improving Project Performance through Project Knowledge Sharing Framework
In a project-centric organization like KOC, managing the knowledge of a project is of critical importance to the success of the project and the organization. Due to the very nature and complexity involved, each project engagement generates many 'learnings' that need to be factored in when new projects are initiated, so as to avoid repeating the same mistakes. However, many a time these learnings remain localized as 'tacit knowledge', leading to scope re-work, schedule overruns, adjustment orders, concession requests and claims. As KOC follows an asset-based organization structure, with a multi-cultural and multi-ethnic workforce, and a larger chunk of the work is carried out through complex, long-term project engagements, the diffusion of 'learnings' across assets, while dealing with the natural entropy of the organization, is of great significance. Considering the relatively high number of mega projects, it is important that the issues raised during the project life cycle are centrally harvested and analyzed, and that the 'learnings' from these issues are shared, absorbed and in turn utilized to enhance and refine the existing processes and practices, leading to improved project performance. One of the many factors contributing to the successful completion of a project on time is a reduction in the number of variations or concessions triggered during the project life cycle. The project-process-integrated knowledge sharing framework presented here discusses the knowledge harvesting methodology adopted, the challenges faced, the learnings acquired and their impact on project performance. The framework facilitates the proactive identification of issues that may have an impact on the overall quality of the project and improves performance.
The Analysis of Spatial Development: Malekan City
The leading goal of all planning is to attain sustainable development, regional balance, a suitable distribution of activities, and maximum use of environmental capabilities in the process of regional development. Intensive concentration of population and activities in one or a few limited geographical localities is a main characteristic of most developing countries, especially Iran. Disregard for long-term programs, and reliance on temporary and superficial plans by decision-makers pursuing their own objectives, creates obstacles and results in unbalanced development. The basic reason for these problems is development planning that considers economic aspects only, paying no attention to social and regional feedback, which has ended in social and economic inequality and an unbalanced distribution of development among the regions. In addition to studying the spatial planning and structure of the county of Malekan, this research pursues several other aims: to recognize and introduce approaches for the optimal use of resources, to distribute the population, activities and facilities in an optimum fashion, and to investigate and identify the spatial development potentials of the county. Based on documentary, descriptive, analytical and field studies, this research employs maps to analyze the data, investigates the variables, and applies SPSS, AutoCAD and ArcView software. The results show that natural factors have a significant influence on the spatial layout of settlements; that the distribution of facilities and functions is unequal among the rural districts of the county; and that there is a spatial equivalence in the region between population and the number of settlements.
Comparative Study of the Sorption of Cr Ions and the Dye Bezaktiv Yellow HE-4G from Aqueous Solution Using a Natural Mixture of Olive Stones and Date Pits as Adsorbent
In this paper, a comparative study of the adsorption of chromium and the dye bezaktiv yellow HE-4G from aqueous solution onto mixed biosorbents of olive stones and date pits, at different mixing percentages, was carried out. The study of various parameters (effect of contact time, pH, temperature and initial concentration) shows that these materials possess a high affinity for the adsorption of both chromium and the dye bezaktiv yellow HE-4G. To deepen the comparative study of the adsorption of chromium and the dye on the different blends of olive stones and date pits, the Langmuir, Freundlich and Dubinin-Radushkevich (D-R) isotherm models were used to fit the adsorption equilibrium data. The Langmuir isotherm model was the most suitable for the adsorption of the dye bezaktiv yellow HE-4G, while the D-R model was the most suitable for the adsorption of chromium. The pseudo-first-order, pseudo-second-order and intraparticle diffusion models were used to describe the adsorption kinetics. The apparent activation energy was found to be less than 8 kJ/mol, which is characteristic of a chemically controlled reaction for the adsorption onto both materials. It was noticed that the adsorption of chromium and of the dye bezaktiv yellow HE-4G follows pseudo-second-order kinetics. The effect of temperature was quantified by calculating various thermodynamic parameters such as the Gibbs free energy, enthalpy and entropy changes; the resulting parameters indicate the endothermic nature of the adsorption of Cr (VI) ions and of the dye bezaktiv yellow HE-4G. These materials are also attractive adsorbents because of their low cost. In addition, it was noticed that as the proportion of olive stones in the mixture increases, the adsorption capacity for the dye or chromium increases.
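Fitting the Langmuir model to equilibrium data is usually done through its linearized form, Ce/qe = Ce/q_max + 1/(K_L·q_max). The following sketch recovers q_max and K_L from synthetic data; the numerical values of q_max, K_L and Ce are invented for illustration, not taken from this study:

```python
import numpy as np

# assumed/synthetic equilibrium data generated from a known Langmuir isotherm
q_max, K_L = 40.0, 0.25                       # mg/g, L/mg (illustrative values)
Ce = np.array([2.0, 5.0, 10.0, 20.0, 50.0, 100.0])   # equilibrium conc., mg/L
qe = q_max * K_L * Ce / (1.0 + K_L * Ce)              # adsorbed amount, mg/g

# linearized Langmuir: Ce/qe = Ce/q_max + 1/(K_L*q_max)
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
q_max_fit = 1.0 / slope
K_L_fit = slope / intercept
print(q_max_fit, K_L_fit)  # → 40.0 0.25 (to numerical precision)
```

The quality of the linear fit (R²) is what lets one say the Langmuir model "best describes" the data relative to the Freundlich or D-R alternatives.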
Synthesis and Physiochemical Properties of 3-Propanenitrile Imidazolium - Based Dual Functionalized Ionic Liquids Incorporating Dioctyl Sulfosuccinate Anion
In the present work, a new series of 3-propanenitrile imidazolium-based room-temperature ionic liquids (RTILs) incorporating dioctyl sulfosuccinate (DOSS) was prepared by reacting imidazole with acrylonitrile and then reacting the product with allyl chloride, 2-chloroethanol, and benzyl chloride. After the reaction was complete, a metathesis reaction was carried out using sodium dioctyl sulfosuccinate. The densities and viscosities of the present RTILs were measured at atmospheric pressure from T = 293.15 to 353.15 K, the refractive index was measured from T = 293.15 to 333.15 K, and the onset and decomposition temperatures were determined at a heating rate of 10 °C·min⁻¹. The thermal expansion coefficient, densities over a range of temperatures and pressures, molecular volume, molar refraction, standard entropy and lattice energy of these RTILs were also estimated. The present RTILs showed higher densities, similar refractive indices, and higher viscosities compared with other 1-alkyl-3-propanenitrile imidazolium-based RTILs. The densities of the presently synthesized RTILs are lower than those of other nitrile-functionalized ILs. These RTILs showed a weak temperature dependence of the thermal expansion coefficient, αp = 5.0 × 10⁻⁴ to 7.50 × 10⁻⁴ K⁻¹. Empirical correlations were proposed to represent the present data on the physical properties. The lattice energy of the present RTILs was similar to that of other nitrile-based imidazolium RTILs. The present RTILs showed very high molar refraction compared with similar RTILs incorporating other anions.
Mathematical Analysis of Variation in Inlet Shock Wave Angle on Specific Impulse of Scramjet Engine
The study of shock waves generated in a scramjet engine is typically restricted to the pressure, temperature, density, entropy and Mach number variation across the shock wave. The present work discusses the impact of inlet shock wave angles on the specific impulse of the scramjet engine. A mathematical analysis has been carried out for the isentropic hypersonic flow of air through a scramjet with hydrogen fuel at an altitude of 30 km, in order to obtain the optimum shock wave angle for maximum specific impulse. Since external drag has been excluded from the analysis, losses due to friction are not considered. When the Mach number of the airflow at the entry of the nozzle reaches unity, the flow is choked; this condition limits how far the inlet shock wave angle can be increased. As the inlet shock wave angle increases, the speed of the flow entering the nozzle decreases, which results in an increase in the specific impulse of the engine. Once the speed of the flow at the entry of the nozzle drops below the sonic speed, there is no further increase in the specific impulse. It is concluded that the thrust and specific impulse of a scramjet engine increase gradually with the inlet shock wave angle, up to the condition at which the airflow reaches sonic velocity at the exit of the combustor. In addition, the variation in drag force at the inlet of the scramjet and the variation in hypersonic flow conditions at every stage of the scramjet were also studied, in order to understand the flow characteristics with respect to the flow deflection angle. Essentially, this helps in designing the inlet profile of a scramjet engine to achieve optimum specific impulse.
The Evolution of Spatio-Temporal Patterns of New-Type Urbanization in the Central Plains Economic Region in China
This paper establishes an evaluation index system for the spatio-temporal patterns of urbanization, with the county as the research unit. We use the entropy weight method, the coefficient of variation, the Theil index and ESDA-GIS to analyze the spatial patterns and evolutionary characteristics of New-Type Urbanization in the Central Plains Economic Region (CPER) between 2000 and 2011. Results show that economic benefit, non-agricultural employment level and level of market development are the most important factors influencing the level of New-Type Urbanization in the CPER; overall regional differences in New-Type Urbanization declined while spatial correlations increased from 2000 to 2011. The overall spatial pattern changed little, however; differences between the western and eastern areas of the CPER are clear, and the pattern of a strong west and weak east did not change significantly over the study period. Areas with high levels of New-Type Urbanization were mostly distributed along both sides of the Beijing-Guangzhou and Longhai Railways; a new influx of urbanization was tightly clustered around Zhengzhou in the Central Henan Urban Agglomeration, but this trend was found to be weakening slightly. The level of New-Type Urbanization in municipal districts was found to be much higher than in the counties generally. Provincial border areas experienced a lower rate of growth and a lower level of New-Type Urbanization than any other areas, consistently forming clusters of cold spots and sub-cold spots. The analysis confirms that historical development, location, and the diffusion effects of urban agglomerations are the main drivers of changes in New-Type Urbanization patterns in the CPER.
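The entropy weight method referred to above assigns larger weights to indicators whose values vary more across units (counties), since low-entropy indicators carry more discriminating information. A minimal sketch, assuming a positive indicator matrix with rows as units and columns as indicators:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method.

    X: (n_units, n_indicators) array of positive, benefit-type indicator values.
    Returns a weight per indicator, summing to 1.
    """
    P = X / X.sum(axis=0)                 # each unit's share of an indicator
    k = 1.0 / np.log(X.shape[0])
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    e = -k * (P * logs).sum(axis=0)       # information entropy per indicator
    d = 1.0 - e                           # degree of divergence
    return d / d.sum()

# an indicator that is identical across all units gets (near-)zero weight
w = entropy_weights(np.array([[1.0, 5.0], [1.0, 1.0], [1.0, 3.0]]))
```

Composite urbanization scores are then obtained as the weighted sum of the normalized indicators for each county.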
Conservation Planning of Paris Polyphylla Smith, an Important Medicinal Herb of the Indian Himalayan Region Using Predictive Distribution Modelling
Paris polyphylla Smith (family Liliaceae; English name: love apple; local name: Satuwa) is an important folk medicinal herb of the Indian subcontinent and a source of a number of bioactive compounds for drug formulation. The rhizomes are widely used as an anthelmintic, antispasmodic, digestive stomachic, expectorant and vermifuge, and for antimicrobial, anti-inflammatory, anti-fertility and sedative purposes, as well as for heart and vascular maladies. In view of this, the species is being constantly removed from nature for trade and various pharmaceutical purposes, and as a result its availability in its natural habitat is decreasing. In this context, it is pertinent to conserve this species and reintroduce it into its natural habitat. Predictive distribution modelling of the species was therefore performed in the Western Himalayan Region. One such method is ecological niche modelling, also popularly known as species distribution modelling, which uses computer algorithms to generate predictive maps of species distributions in geographic space by correlating point distribution data with a set of environmental raster data. For P. polyphylla, such modelling helps in understanding its potential distribution zones, in setting up artificial introductions, in selecting conservation sites, and in the conservation and management of its native habitat. Among the different districts of Uttarakhand (28°05ˈ-31°25ˈ N and 77°45ˈ-81°45ˈ E), the 'Maximum Entropy' (Maxent) model predicted a wider potential distribution of P. polyphylla in Uttarkashi, Rudraprayag, Chamoli, Pauri Garhwal and some parts of Bageshwar. The distribution of P. polyphylla is governed mainly by Precipitation of the Driest Quarter and Mean Diurnal Range, with contributions of 27.08% and 18.99% respectively, indicating that moisture availability and the diurnal temperature range might be the key determinants of its growth.
Exergy Analysis of a Vapor Absorption Refrigeration System Using Carbon Dioxide as Refrigerant
Vapor absorption refrigeration systems can replace vapor compression systems in many applications, as they can operate on a low-grade heat source and are environment-friendly. Widely used refrigerants such as CFCs and HFCs cause significant global warming. Natural refrigerants can be an alternative to them, among which carbon dioxide is promising for use in automotive air conditioning systems. Its inherent safety, ability to withstand high pressure and high heat transfer coefficient, coupled with easy availability, make it a likely choice of refrigerant. Various properties of the ionic liquid [bmim][PF₆], such as non-toxicity, stability over a wide temperature range and the ability to dissolve gases like carbon dioxide, make it a suitable absorbent for a vapor absorption refrigeration system. In this paper, an absorption chiller consisting of a generator, condenser, evaporator and absorber was studied at an operating temperature of 70 °C. A thermodynamic model was set up using the Peng-Robinson equation of state to predict the behavior of the refrigerant-absorbent pair at different points in the system. A MATLAB code was used to obtain the values of enthalpy and entropy at selected points. The exergy destruction in each component and the exergetic coefficient of performance (ECOP) of the system were calculated by performing an exergy analysis based on the second law of thermodynamics. Graphs were plotted of the ECOP obtained under varying operating conditions, and the effect of every component on the ECOP was examined. The exergetic coefficient of performance was found to be lower than the coefficient of performance based on the first law of thermodynamics.
Removal of Pb²⁺ from Waste Water Using Nano Silica Spheres Synthesized on CaCO₃ as a Template: Equilibrium and Thermodynamic Studies
The availability of and access to fresh water is today a serious global challenge. This is a direct result of factors such as rapid industrialization and industrial growth, persistent droughts in some parts of the world, especially sub-Saharan Africa, as well as population growth. The growth of the chemical processing industry has also seen an increase in the levels of pollutants in our water bodies, which include heavy metals among others. Heavy metals are dangerous to both human and aquatic life and have been linked to several diseases, mainly because they are highly toxic, bioaccumulative and non-biodegradable. Lead, for example, has been linked to a number of health problems, including damage to vital internal body systems such as the nervous and reproductive systems as well as the kidneys. Against this background, the removal of the toxic heavy metal Pb²⁺ from waste water was investigated using nano silica hollow spheres (NSHS) as the adsorbent. Synthesis of NSHS was done using a three-stage process in which CaCO₃ nanoparticles were initially prepared as a template. This was followed by treatment of the formed particles with Na₂SiO₃ to give a nanocomposite. Finally, the template was destroyed using 2.0 M HCl to give NSHS. Characterization of the nanoparticles was done using analytical techniques such as XRD, SEM, and TGA. For the adsorption process, both thermodynamic and equilibrium studies were carried out. From the thermodynamic studies, the Gibbs free energy, enthalpy and entropy of the adsorption process were determined; the results revealed that the adsorption process was both endothermic and spontaneous. In the equilibrium studies, the Langmuir and Freundlich isotherms were tested, and the results showed that the Langmuir model best described the adsorption equilibrium.
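The conclusion "endothermic and spontaneous" follows from the signs of ΔH and ΔG, conventionally obtained from a van 't Hoff fit of the distribution coefficient against temperature. A sketch with invented, purely illustrative K_d values (not data from this study):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

# assumed distribution coefficients K_d at three temperatures;
# K_d rising with T signals endothermic uptake
T = np.array([298.0, 308.0, 318.0])     # K
Kd = np.array([1.8, 2.6, 3.6])          # dimensionless (illustrative)

# van 't Hoff: ln K_d = -dH/(R*T) + dS/R  ->  fit ln K_d vs 1/T
slope, intercept = np.polyfit(1.0 / T, np.log(Kd), 1)
dH = -slope * R          # J/mol; positive => endothermic
dS = intercept * R       # J/(mol*K)
dG = dH - T * dS         # J/mol at each temperature; negative => spontaneous
```

With these numbers ΔH > 0 and ΔG < 0 at all three temperatures, exactly the sign pattern the abstract reports for Pb²⁺ on NSHS.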
Hedgerow Detection and Characterization Using Very High Spatial Resolution SAR DATA
Hedgerows play an important role in a wide range of ecological habitats, landscape and agricultural management, carbon sequestration, and wood production. Accurate hedgerow detection from satellite imagery is a challenging problem in remote sensing, because spatially a hedge is very similar to a linear object such as a road, while from a spectral viewpoint a hedge is very similar to a forest. Remote sensors with very high spatial resolution (VHR) have recently enabled the automatic detection of hedges through the acquisition of images with sufficient spectral and spatial resolution. Indeed, recent VHR remote sensing data provide the opportunity to detect hedgerows as line features, but difficulties remain in characterizing them at the landscape scale. In this research, TerraSAR-X Spotlight and Staring mode data with 3-5 m resolution, acquired in the wet and dry seasons of 2014-2015 over the test site of Fermoy, Ireland, are used to detect hedgerows. Dual-polarization (HH/VV) Spotlight data are used for the detection. Various SAR image processing techniques, applied in a trial-and-error manner and integrating classification algorithms such as texture analysis, support vector machines, k-means and random forests, are used to detect hedgerows and characterize them. We apply Shannon entropy (ShE) and single- and double-bounce backscattering analysis within a polarimetric analysis to perform object-oriented classification and finally extract the hedgerow network. This work is still in progress, and other methods will also be applied in order to find the best approach for the study area. Here we present preliminary results indicating that polarimetric TSX imagery can potentially detect hedgerows.
Examining Patterns in Ethnoracial Diversity in Los Angeles County Neighborhoods, 2016, Using Geographic Information System Analysis and Entropy Measure of Diversity
This study specifically examines the patterns that define ethnoracially diverse neighborhoods. Ethnoracial diversity is important as it facilitates cross-racial interactions within neighborhoods, which have been theorized to be associated with such outcomes as intergroup harmony, the reduction of racial and ethnic prejudice and discrimination, and increases in racial tolerance. Los Angeles (LA) is an ideal location for studying ethnoracial spatial patterns, as it is one of the most ethnoracially diverse cities in the world. A large influx of Latinos, as well as Asians, has contributed to LA's urban landscape becoming increasingly diverse over several decades. Our dataset contains all census tracts in Los Angeles County in 2016 and incorporates Census and ACS demographic and spatial data. We quantify ethnoracial diversity using a derivative of Simpson's Diversity Index and use this measure to test previous literature suggesting that Latinos are one of the key drivers of changing ethnoracial spatial patterns in Los Angeles. Preliminary results suggest that there has been an overall increase in ethnoracial diversity in Los Angeles neighborhoods over the past sixteen years. Patterns associated with this trend include decreases in predominantly white and black neighborhoods, increases in predominantly Latino and Asian neighborhoods, and a general decrease in the white populations of the most diverse neighborhoods. A similar pattern is seen in neighborhoods with large Latino increases: a decrease in the white population, but with increases in the Asian and black populations. We also found support for previous research suggesting that increases in the Latino and Asian populations act as a buffer, allowing for black population increases without a sizeable decrease in the white population. Future research is needed to understand the underlying causes of many of the patterns and trends highlighted in this study.
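Entropy-based diversity measures of this kind score a tract by how evenly its population is spread across the ethnoracial groups. A minimal sketch of a normalized entropy index (the study's exact Simpson-derived formula may differ; this is the common Shannon-entropy variant):

```python
import math

def diversity_entropy(counts):
    """Normalized entropy diversity score for a tract.

    counts: population counts per ethnoracial group.
    Returns 0 for a single-group tract, 1 for perfectly even shares.
    """
    total = sum(counts)
    shares = [c / total for c in counts if c > 0]  # skip empty groups
    h = -sum(p * math.log(p) for p in shares)      # Shannon entropy
    return h / math.log(len(counts))               # normalize by max entropy

print(diversity_entropy([100, 0, 0, 0]))   # homogeneous tract → 0.0
print(diversity_entropy([25, 25, 25, 25])) # maximally diverse tract → 1.0
```

Computed per tract and mapped in a GIS, such scores are what allow hot-spot comparisons of diversity across neighborhoods and over time.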
Tracking the Effect of Ibutilide on Amplitude and Frequency of Fibrillatory Intracardiac Electrograms Using the Regression Analysis
Background: Catheter ablation is an effective therapy for symptomatic atrial fibrillation (AF). The intracardiac electrogram (IEGM) collected during this procedure contains valuable information that has not yet been explored to its full capacity. Novel processing techniques make it possible to look at these recordings from different perspectives, which can lead to improved therapeutic approaches. In our previous study, we showed that variation in amplitude, measured through Shannon entropy, could be used as an AF recurrence risk stratification factor in patients who received Ibutilide before the electrograms were recorded. The aim of this study is to further investigate the effect of Ibutilide on the characteristics of signals recorded from the left atrium (LA) of patients with persistent AF before and after administration of the drug. Methods: The IEGMs collected from different intra-atrial sites of 12 patients were studied and compared before and after Ibutilide administration. First, the before- and after-Ibutilide IEGMs recorded within a Euclidean distance of 3 mm in the LA were selected as pairs for comparison. For every selected pair of IEGMs, the probability distribution function (PDF) of the amplitude in the time domain and of the magnitude in the frequency domain was estimated using regression analysis. The PDF represents the relative likelihood of a variable falling within a specific range of values. Results: Our observations showed that in the time domain the PDF of the amplitudes was fitted to a Gaussian distribution, while in the frequency domain it was fitted to a Rayleigh distribution. Our observations also revealed that after Ibutilide administration, the IEGMs had significantly narrower, short-tailed PDFs in both the time and frequency domains. Conclusion: This study shows that the PDFs of the IEGMs before and after administration of Ibutilide represent significantly different properties, both in the time and frequency domains. Hence, by fitting the PDF of the IEGMs in the time domain to a Gaussian distribution or in the frequency domain to a Rayleigh distribution, the effect of Ibutilide can easily be tracked through the statistics of the PDF (e.g., its standard deviation), whereas this is difficult from the waveform of the IEGMs itself.
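The distribution-fitting step can be sketched with the closed-form maximum-likelihood estimates for the two named families (the sample values below are hypothetical, and the original work estimated the PDFs by regression rather than by the direct MLE shortcut used here):

```python
import math

def fit_gaussian(samples):
    """MLE for a Gaussian PDF: sample mean and standard deviation."""
    n = len(samples)
    mu = sum(samples) / n
    var = sum((x - mu) ** 2 for x in samples) / n
    return mu, math.sqrt(var)

def fit_rayleigh(magnitudes):
    """MLE for the Rayleigh scale parameter: sigma^2 = sum(x^2) / (2n)."""
    n = len(magnitudes)
    return math.sqrt(sum(x * x for x in magnitudes) / (2 * n))

# Hypothetical time-domain amplitude samples before/after drug administration
before = [-2.1, 1.8, -1.5, 2.4, -2.0, 1.7, -2.2, 1.9]
after = [-0.6, 0.5, -0.4, 0.7, -0.5, 0.4, -0.6, 0.5]
_, sd_before = fit_gaussian(before)
_, sd_after = fit_gaussian(after)
print(sd_after < sd_before)  # narrower amplitude PDF after administration
```

Tracking the fitted standard deviation (or Rayleigh scale) over paired recordings is then a one-number summary of the narrowing the abstract describes.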
Thermal Instability in Solid under Irradiation
Construction materials for nuclear facilities are operated under extreme thermal and radiation conditions. Chief among them are the nuclear fuel, fuel assemblies, and the reactor vessel. This places high demands on the control of their state, the stability of their state, and their operating conditions. An irradiated material is a typical example of an open non-equilibrium system with nonlinear feedbacks between its elements. Fluxes of energy, matter, and entropy maintain states that are far from thermal equilibrium. The couplings that arise under irradiation are inherently nonlinear. They form feedback mechanisms that can lead to instability. Due to this instability, the temperature of the sample, the heat transfer, and the defect density can exceed their steady-state values several times over. This can alter normal operation and lead to an accident. Therefore, thermal instability must be taken into account to avoid emergency situations. The point is that non-thermal energy can be accumulated in a material because irradiation produces defects (primarily vacancies and interstitial atoms), which are metastable. The stored energy is of the order of the defect formation energy. Thus, annealing of the defects is accompanied by the release of the stored non-thermal energy as heat. The temperature of the material grows. The increase in temperature accelerates defect annealing. The defect density drops, and the temperature grows ever more quickly. A positive feedback is formed, and self-reinforcing annealing of radiation defects develops. To describe these phenomena, a theoretical approach to thermal instability is developed via the formalism of complex systems. We consider a system of nonlinear differential equations for the different components of the microstructure and the temperature. A qualitative analysis of this nonlinear dynamical system is carried out. Conditions for the development of the instability have been obtained. Bifurcation points have been found. A convenient way to represent the results is a set of phase portraits. It is shown that different regimes of the material state can develop under irradiation. Thus, degradation of irradiated material can be limited by choosing an appropriate kind of evolution of the material under irradiation.
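The feedback loop described above can be illustrated with a deliberately simplified two-variable model: a defect fraction N produced at a constant rate and annealed with an Arrhenius temperature dependence, and a temperature T heated by the released stored energy and cooled toward a bath. Every parameter value below is an illustrative assumption, not taken from the paper, and the forward-Euler integration is only a sketch of the qualitative analysis, not the authors' equations:

```python
import math

def simulate(T0=600.0, N0=1e-3, steps=20000, dt=1e-3):
    """Toy model of radiation-defect annealing with thermal feedback.
    Annealing releases stored energy, heating the sample, which in turn
    accelerates annealing -- the positive feedback described above.
    All parameters are illustrative, not taken from the paper."""
    K = 1e-4        # defect production rate under irradiation, 1/s
    nu = 1e10       # attempt frequency for annealing, 1/s
    Ea = 1.3        # annealing activation energy, eV
    kB = 8.617e-5   # Boltzmann constant, eV/K
    Ed = 5e4        # heating per unit annealed defect fraction, K
    h = 5.0         # heat-loss coefficient to the bath, 1/s
    Tbath = 600.0   # coolant/bath temperature, K
    N, T = N0, T0
    Tmax = T
    for _ in range(steps):
        ann = nu * math.exp(-Ea / (kB * T)) * N   # Arrhenius annealing rate
        N += (K - ann) * dt                        # production minus annealing
        T += (Ed * ann - h * (T - Tbath)) * dt     # stored-energy release vs. cooling
        N = max(N, 0.0)
        Tmax = max(Tmax, T)
    return N, T, Tmax
```

With these parameters the temperature overshoots the bath value as the defect population anneals and then settles; stronger feedback (larger Ed or lower h) pushes the system toward the runaway regime discussed in the abstract.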
Dynamic Web-Based 2D Medical Image Visualization and Processing Software
In recent decades, medical imaging has been dominated by the use of costly film media for the review and archival of medical investigations. However, owing to developments in network technologies and the common acceptance of the Digital Imaging and Communications in Medicine (DICOM) standard, another approach based on the World Wide Web has been developed. Web technologies have been used successfully in telemedicine applications, and here the combination of web technologies with DICOM is used to design a web-based, open-source DICOM viewer. The web server allows the query and retrieval of images, and the images are viewed and manipulated inside a web browser without the need to pre-install any software. The dynamic page for medical image visualization and processing was created using JavaScript and HTML5. The XAMPP 'Apache' server is used to create a local web server for testing and deployment of the dynamic site. The web-based viewer is connected to multiple devices through a local area network (LAN) to distribute the images inside healthcare facilities. The system offers several advantages over ordinary picture archiving and communication systems (PACS): it is easy to install and maintain, platform-independent, allows images to be displayed and manipulated efficiently, and is user-friendly and easy to integrate with an existing system that already makes use of web technologies. A wavelet-based image compression technique is applied, in which the 2-D discrete wavelet transform is used to decompose the image; the wavelet coefficients are then thresholded and transmitted with entropy encoding to decrease transmission time, storage cost, and capacity. The performance of the compression was estimated using image quality metrics such as the mean square error (MSE), peak signal-to-noise ratio (PSNR), and compression ratio (CR), which reached 83.86% when the 'coif3' wavelet filter was used.
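The compression pipeline (2-D DWT, thresholding, quality metrics) can be sketched with a single-level 2-D Haar transform standing in for the 'coif3' filter; the 4x4 test image, the threshold value, and the use of the zeroed-coefficient fraction as a crude stand-in for CR are all illustrative assumptions:

```python
import math

def haar2d(img):
    """One level of the 2-D Haar transform: rows first, then columns."""
    def fwd(v):
        h = len(v) // 2
        return ([(v[2*i] + v[2*i+1]) / 2 for i in range(h)] +
                [(v[2*i] - v[2*i+1]) / 2 for i in range(h)])
    rows = [fwd(list(r)) for r in img]
    cols = [fwd(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def ihaar2d(c):
    """Inverse of haar2d (columns first, then rows)."""
    def inv(v):
        h = len(v) // 2
        out = []
        for a, d in zip(v[:h], v[h:]):
            out += [a + d, a - d]
        return out
    cols = [inv(list(col)) for col in zip(*c)]
    return [inv(list(r)) for r in zip(*cols)]

def compress(img, thresh):
    """Threshold the Haar coefficients, reconstruct, and report MSE/PSNR
    plus the fraction of coefficients zeroed (a crude proxy for CR)."""
    coeffs = [[0.0 if abs(v) < thresh else v for v in row]
              for row in haar2d(img)]
    rec = ihaar2d(coeffs)
    n = sum(len(r) for r in img)
    mse = sum((a - b) ** 2
              for ra, rb in zip(img, rec)
              for a, b in zip(ra, rb)) / n
    psnr = float('inf') if mse == 0 else 10 * math.log10(255.0 ** 2 / mse)
    zeroed = sum(v == 0.0 for row in coeffs for v in row) / n
    return rec, mse, psnr, zeroed

img = [[10.0, 10.0, 20.0, 20.0],
       [10.0, 10.0, 20.0, 20.0],
       [30.0, 30.0, 40.0, 40.0],
       [30.0, 30.0, 40.0, 40.0]]
rec, mse, psnr, zeroed = compress(img, 0.5)
```

In a real deployment the thresholded coefficients would be entropy-encoded before transmission, and a multi-level 'coif3' decomposition would replace the single Haar level shown here.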
An Investigation into the Crystallization Tendency/Kinetics of Amorphous Active Pharmaceutical Ingredients: A Case Study with Dipyridamole and Cinnarizine
Amorphous drug formulations have great potential to enhance the solubility, and thus the bioavailability, of BCS class II drugs. However, the higher free energy and molecular mobility of the amorphous form lower the activation energy barrier for crystallization and thermodynamically drive the system towards the crystalline state, which makes such formulations unstable. Accurate determination of the crystallization tendency/kinetics is the key to the successful design and development of such systems. In this study, dipyridamole (DPM) and cinnarizine (CNZ) have been selected as model compounds. Thermodynamic fragility (m_T) is measured from the heat capacity change at the glass transition temperature (Tg), whereas dynamic fragility (m_D) is evaluated using methods based on the extrapolation of configurational entropy to zero (m_D,CE) and on the heating-rate dependence of Tg (m_D,Tg). The mean relaxation time of the amorphous drugs was calculated from the Vogel-Tammann-Fulcher (VTF) equation. Furthermore, the correlation between fragility and glass forming ability (GFA) of the model drugs has been established, and the relevance of these parameters to the crystallization of amorphous drugs is also assessed. Moreover, the crystallization kinetics of the model drugs under isothermal conditions has been studied using the Johnson-Mehl-Avrami (JMA) approach to determine the Avrami constant 'n', which provides insight into the mechanism of crystallization. To probe further into the crystallization mechanism, the non-isothermal crystallization kinetics of the model systems was also analysed by statistically fitting the crystallization data to 15 different kinetic models, and the relevance of a model-free kinetic approach has been established. In addition, the crystallization mechanism for DPM and CNZ at each extent of transformation has been predicted. The calculated fragility, glass forming ability (GFA), and crystallization kinetics are found to be in good correlation with the stability predictions for amorphous solid dispersions. Thus, this research work takes a multidisciplinary approach to establishing fragility, GFA, and crystallization kinetics as stability predictors for amorphous drug formulations.
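The isothermal JMA analysis linearizes the transformed fraction x(t) = 1 - exp(-k t^n) as ln(-ln(1 - x)) = ln k + n ln t, so the Avrami exponent n falls out of a straight-line fit. A sketch on synthetic data (the times, the rate constant, and this single-rate-constant form of the JMA equation are illustrative assumptions, not values from the study):

```python
import math

def avrami_fit(times, fractions):
    """Least-squares fit of ln(-ln(1-x)) = ln k + n ln t,
    returning the Avrami exponent n and rate constant k."""
    xs = [math.log(t) for t in times]
    ys = [math.log(-math.log(1.0 - x)) for x in fractions]
    m = len(xs)
    xbar = sum(xs) / m
    ybar = sum(ys) / m
    n = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) /
         sum((x - xbar) ** 2 for x in xs))
    k = math.exp(ybar - n * xbar)
    return n, k

# Synthetic transformation data generated with n = 3 (e.g., 3-D growth)
n_true, k_true = 3.0, 1e-3
times = [2.0, 4.0, 6.0, 8.0, 10.0]
fracs = [1 - math.exp(-k_true * t ** n_true) for t in times]
n_est, k_est = avrami_fit(times, fracs)
```

On real calorimetric data the fitted n is then read against the usual JMA interpretation (nucleation mode and growth dimensionality), which is the insight into the crystallization mechanism the abstract refers to.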
Polymer Mediated Interaction between Grafted Nanosheets
Polymer-particle interactions can be effectively utilized to produce composites whose physicochemical properties are superior to those of the neat polymer. The incorporation of fillers with dimensions comparable to the polymer chain size produces composites with extraordinary properties owing to a very high surface-to-volume ratio. Dispersion of the nanoparticles is achieved by inducing steric repulsion, realized by grafting the particles with polymeric chains. A comprehensive understanding of the interparticle interaction between these functionalized nanoparticles plays an important role in the synthesis of a stable polymer nanocomposite. With a focus on the incorporation of clay sheets in a polymer matrix, we theoretically construct the polymer-mediated interparticle potential for two nanosheets grafted with polymeric chains. Self-consistent field theory (SCFT) is employed to obtain the inhomogeneous composition field at equilibrium. Unlike continuum models, SCFT is built from a microscopic description, taking into account the molecular interactions contributed by both intra- and inter-chain potentials. We present the results of SCFT calculations of the interaction potential curve for two grafted nanosheets immersed in a matrix of polymeric chains of chemistry dissimilar to that of the grafted chains. The interaction potential is repulsive at short separations and shows a depletion attraction at moderate separations induced by high grafting density. It is found that the depth of the attractive well can be tuned by altering the compatibility between the grafted and the mobile chains. Further, we construct the interaction potential between two nanosheets grafted with diblock copolymers in which one of the blocks is chemically identical to the free polymeric chains. The interplay between the enthalpic interaction between the dissimilar species and the entropy of the free chains gives rise to rich behavior in the interaction potential curves obtained for two separate cases: free chains chemically similar to either the grafted block or the free block of the grafted diblock chains.