Open Science Research Excellence

Open Science Index

Commenced in January 2007 · Frequency: Monthly · Edition: International · Abstract Count: 66741

Closed-Form Sharma-Mittal Entropy Rate for Gaussian Processes
The entropy rate of a stochastic process is a fundamental concept in information theory. It provides a limit to the amount of information that can be transmitted reliably over a communication channel, as stated by Shannon's coding theorems. Recently, researchers have focused on developing new measures of information that generalize Shannon's classical theory, with the aim of designing more efficient information encoding and transmission schemes. This paper continues the study of generalized entropy rates by deriving a closed-form solution to the Sharma-Mittal entropy rate for Gaussian processes. Using the squeeze theorem, we solve the limit in the definition of the entropy rate for different values of alpha and beta, the parameters of the Sharma-Mittal entropy. Finally, we compare it with the Shannon and Rényi entropy rates for Gaussian processes.
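The limiting relationships among these entropies can be illustrated for the simplest case of a single univariate Gaussian (not the full process rate derived in the paper). The sketch below uses the standard Sharma-Mittal definition together with the closed-form Gaussian integral of p^alpha; it is an illustrative assumption, not the paper's derivation:

```python
import math

def gaussian_sharma_mittal(sigma, alpha, beta):
    # Sharma-Mittal entropy of N(0, sigma^2) for alpha != 1, beta != 1, using
    # the closed form: integral of p^alpha = (2*pi*sigma^2)^((1-alpha)/2) / sqrt(alpha)
    M = (2 * math.pi * sigma**2) ** ((1 - alpha) / 2) / math.sqrt(alpha)
    return (M ** ((1 - beta) / (1 - alpha)) - 1) / (1 - beta)

def gaussian_renyi(sigma, alpha):
    # Rényi entropy: the limit of the Sharma-Mittal entropy as beta -> 1
    return 0.5 * math.log(2 * math.pi * sigma**2) - math.log(alpha) / (2 * (1 - alpha))

def gaussian_shannon(sigma):
    # Shannon entropy: the limit as both alpha -> 1 and beta -> 1
    return 0.5 * math.log(2 * math.pi * math.e * sigma**2)
```

Numerically, `gaussian_sharma_mittal(1.0, 0.9, beta)` approaches `gaussian_renyi(1.0, 0.9)` as beta tends to 1, and approaches `gaussian_shannon(1.0)` as both parameters tend to 1, mirroring the limit structure the abstract describes.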
Chinese Remainder Theorem and Decidability
The Chinese remainder theorem deals with systems of modular equations and has many applications. The classical theorem requires the moduli to be pairwise coprime. In this paper, we discuss the general Chinese remainder theorem, which does not impose this restriction on the moduli. We also show interesting applications of the general Chinese remainder theorem in proving decidability.
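The distinction between the classical and general theorems can be sketched in code. The merge step below is the standard construction (not taken from the paper): a pair x ≡ r1 (mod m1), x ≡ r2 (mod m2) with non-coprime moduli is solvable exactly when gcd(m1, m2) divides r2 − r1.

```python
from math import gcd

def crt_pair(r1, m1, r2, m2):
    # Merge x ≡ r1 (mod m1) and x ≡ r2 (mod m2); moduli need not be coprime.
    g = gcd(m1, m2)
    if (r2 - r1) % g != 0:
        return None  # no solution exists
    lcm = m1 // g * m2
    diff = (r2 - r1) // g
    # m1//g and m2//g are coprime by construction, so the inverse exists
    inv = pow(m1 // g, -1, m2 // g) if m2 // g > 1 else 0
    x = (r1 + m1 * (diff * inv % (m2 // g))) % lcm
    return x, lcm

def general_crt(residues, moduli):
    # Fold the pairwise merge over the whole system.
    r, m = residues[0], moduli[0]
    for r2, m2 in zip(residues[1:], moduli[1:]):
        merged = crt_pair(r, m, r2, m2)
        if merged is None:
            return None
        r, m = merged
    return r, m
```

For example, the non-coprime pair x ≡ 2 (mod 4), x ≡ 4 (mod 6) merges to x ≡ 10 (mod 12), while an inconsistent system such as x ≡ 1 (mod 2), x ≡ 0 (mod 4) is reported as unsolvable.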
An Experimental Analysis of Squeeze Casting Parameters for 2017A Wrought Al Alloy
A Taguchi design investigation has been made into the relationship between ductility and process variables in a squeeze cast 2017A wrought aluminium alloy. The process parameters considered were squeeze pressure, melt temperature, and die preheating temperature. An orthogonal array (OA), main effects analysis, the signal-to-noise (S/N) ratio, and the analysis of variance (ANOVA) were employed to analyze the effect of the casting parameters. The results show that the selected parameters significantly affect the ductility of 2017A wrought Al alloy castings. Optimal squeeze casting process parameters were provided to illustrate the proposed approach, and the results were proven to be trustworthy through practical experiments.
A Study on the Wedderburn–Artin Theorem for Rings
The study shows that the Wedderburn–Artin theorem characterizes a semisimple ring R as isomorphic to a product of finitely many n_i-by-n_i matrix rings over division rings D_i, for some integers n_i, both of which are uniquely determined up to permutation of the index i. It is concluded that when R is simple, the Wedderburn–Artin theorem is known as Wedderburn's theorem.
Evaluation of Digital Assessment of Anal Sphincter Muscle Strength
Examination of the voluntary contraction strength of the external anal sphincter muscle is essential in the initial assessment of patients with faecal incontinence (FI) and obstructed defecation (OD), and in evaluating the efficacy of their rehabilitation. The present study was conducted to evaluate the digital assessment of external anal sphincter muscle strength during voluntary contraction using the Modified Oxford Scale (MOS) in comparison to anal manometry squeeze pressure. The present cross-sectional study included 65 patients: 40 patients (61.5%) with FI and 25 patients (38.5%) with OD. All patients were subjected to history taking and clinical examination, including assessment of external anal sphincter muscle strength during voluntary contraction using the MOS and anal manometry (mean squeeze pressure and maximal squeeze pressure). There was a statistically significant positive correlation between the MOS and anal manometry squeeze pressures, including mean squeeze pressure and maximal squeeze pressure, in both the FI group and the OD group. In conclusion, assessment of external anal sphincter muscle strength during voluntary contraction using the MOS is a valid method and can substitute for anal manometry assessment.
Comparing Two Non-Contact Squeeze Film Levitation Designs
Transportation and handling of delicate and lightweight objects is a significant issue in some industries. Two levitation prototype designs, a horn transducer design and a surface-mounted piezoelectric actuator vibrating plate design, are compared. Both designs are based on the method of squeeze-film levitation (SFL), and the aim of this study is to evaluate the characteristics and performance of each. To this end, physical experiments are conducted, and it is demonstrated that the horn-type transducer prototype design produces better levitation performance, but its design complexity and operating characteristics make it less suitable than the vibrating plate design for practical applications.
Application of the Chinese Remainder Theorem to Find the Messages Sent in a Broadcast
Improper application of the RSA algorithm scheme can make it vulnerable to attack. The attack exploits the relationship between the broadcast messages sent to the users and the fixed polynomial functions belonging to each user. The attack is carried out by applying the Chinese Remainder Theorem to obtain a general polynomial equation with a single common modulus. Forming this general polynomial is the first step in recovering the original message. These equations can then be solved using Coppersmith's theorem.
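The simplest instance of such an attack (Håstad's broadcast attack with a common exponent e = 3 and no padding, a standard textbook construction rather than the paper's exact scheme) can be sketched as follows; the toy moduli and message are illustrative and far too small for real RSA:

```python
from math import prod

def crt(residues, moduli):
    # Pairwise-coprime Chinese Remainder Theorem combination.
    N = prod(moduli)
    x = 0
    for r, n in zip(residues, moduli):
        Ni = N // n
        x += r * Ni * pow(Ni, -1, n)
    return x % N

def icbrt(n):
    # Integer cube root by binary search (exact for perfect cubes).
    lo, hi = 0, 1 << ((n.bit_length() + 2) // 3 + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid**3 <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

# Hypothetical toy parameters: three users share e = 3 with distinct coprime moduli.
moduli = [87, 115, 187]
m = 42                                    # the broadcast message
cts = [pow(m, 3, n) for n in moduli]      # the three intercepted ciphertexts
# Since m^3 < 87 * 115 * 187, CRT recovers m^3 exactly over the integers.
recovered = icbrt(crt(cts, moduli))
```

The message is recovered without factoring any modulus, which is precisely why reusing a small public exponent across a broadcast without randomized padding is insecure.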
Experimental Squeeze Flow of Bitumen: Rheological Properties
Squeeze flow tests have been studied by many authors as a means of measuring the rheological properties of fluids. The present work investigates an experimental constant-area squeeze flow test of bitumen between two parallel disks. The effects of temperature, the sample preparation process, and the gap between the disks are discussed. The obtained results were compared with theoretical models. The behavior of bitumen depends on the viscosity and the yield stress. Thus, the bitumen was modeled as a power-law fluid for a small power-law exponent and as a biviscous fluid when the viscosity ratio was smaller than one. The influence of ambient temperature on the compression test is also examined: at high temperature, the yield stress decreases.
Effects of Applied Pressure and Heat Treatment on the Microstructure of Squeeze Cast Al-Si Alloy
The present contribution consists of a purely experimental investigation into the effect of squeeze casting on the microstructural and mechanical properties of Al-Si alloys destined for the automotive industry. Accordingly, we carried out all the heat treatments ourselves, consisting of solution treatment at 540°C for 8 h and aging at 160°C for 4 h. The various heat treatments were performed in order to monitor the formation and dissolution processes accompanying the solid-state phase transformations, as well as the resulting changes in the mechanical properties. Examination of the micrographs of the aluminum alloys reveals the dominant presence of dendrites. Concerning the mechanical characteristics, the Vickers microhardness curve shows an increase as a function of pressure, and heat treatment likewise improves mechanical properties such as microhardness. The curves are explained in terms of structural hardening resulting from the formation of various compounds.
Bifurcations of a System of Rotor-Ball Bearings with Waviness and Squeeze Film Dampers
Squeeze film damper (SFD) systems are often used in machines with high rotational speed to reduce non-periodic behavior by providing external damping. Systems of this type are frequently used in aircraft gas turbine engines. Some structural parameters are of great importance in designing such systems, such as the oil film thickness, C, and the outer race mass, mo. Moreover, there is a crucial parameter associated with the manufacturing process, known as waviness. Geometric imperfections are called waviness if their wavelength is much longer than the Hertzian contact width; waviness is a considerable source of vibration in ball bearings. In this paper, a system of a flexible rotor and two ball bearings with floating ring squeeze film dampers, with waviness taken into account, is modeled and solved by a numerical integration method, namely the Runge-Kutta method, to investigate the dynamic response of the system. The results show that increasing the number of wave lobes, which is due to inappropriate manufacturing, increases the non-periodic and chaotic behavior. This result reveals the importance of manufacturing accuracy. Moreover, as long as C < 1.5×10⁻⁴ m, increasing the oil film thickness reduces the unwanted vibrations and non-periodic behavior of the system. On the other hand, when C > 1.5×10⁻⁴ m, increasing the outer oil film thickness increases the chaotic and non-periodic responses. This result shows that although the presence of the oil film reduces non-periodic and chaotic behavior, the oil film has an optimal thickness. In addition, with increasing mo, the disc displacement amplitude increases. This result reveals the importance of using light materials in manufacturing squeeze film dampers.
Development of Al-5%Cu/Si₃N₄, B₄C or BN Composites for Piston Applications
The purpose of this research is to provide a competitive alternative to the aluminum-silicon alloys used in automotive applications. This alternative was created by developing three types of Al-5%Cu composites reinforced with B₄C, BN, or Si₃N₄ particulates, with a low coefficient of thermal expansion. Stir casting was used to synthesize composites containing 2, 5, and 7 wt.% of B₄C or Si₃N₄, and 2 and 5 wt.% of BN, followed by squeeze casting. The squeeze casting process decreased the porosity of the final composites. The composites exhibited a fairly uniform particle distribution throughout the matrix alloy. The microstructure and XRD results of the composites suggested that a significant reaction occurred at the interface between the particles and the alloy. Increasing the aging temperature from 200 to 250°C decreased the hardness values of the matrix and the composites and decreased the time required to reach the peak. The Turner model was used to calculate the expected values of the coefficient of thermal expansion (CTE) of the matrix and its composites. Deviations between calculated and experimental values of the CTE did not exceed 10%. The Al-5%Cu-B₄C composites experimentally showed the lowest CTE values, (17–19)·10⁻⁶ °C⁻¹ and (19–20)·10⁻⁶ °C⁻¹, in the temperature ranges 20–100°C and 20–200°C, respectively.
Weyl Type Theorem and the Fuglede Property
Given a Hilbert space H and B(H), the algebra of bounded linear operators on H, let δ_AB denote the generalized derivation defined by A and B. The main objective of this article is to study Weyl type theorems for the generalized derivation δ_AB when the pair (A, B) satisfies the Fuglede property.
Metallography of Remelted A356 Aluminium following Squeeze Casting
The demand for lightweight parts with high mechanical strength and integrity, in sectors such as aerospace and automotive, is ever increasing, motivated by the need for weight reduction in order to increase fuel efficiency; such components are usually manufactured using a high grade primary metal or alloy. For components manufactured using the squeeze casting process, this alloy is usually A356 aluminium (Al), one of the most versatile Al alloys, used extensively in castings for demanding environments. A356 castings provide a good strength-to-weight ratio, making the alloy an attractive option for components where strength has to be maintained, with the added advantage of weight reduction. In addition, its versatility in castability, weldability, and corrosion resistance allows the A356 cast alloy to be used in a large array of industrial applications. Conversely, it is rare to use remelted Al in these cases, due to the nature of the applications of components in demanding environments, where material properties must be defined to meet certain specifications, for example a known strength or ductility. However, the use of remelted Al, especially a primary grade such as A356, would offer significant cost and energy savings for manufacturers using primary alloys, provided that remelted aluminium can offer similar benefits in terms of material microstructure and mechanical properties. This study presents the results on the material microstructure and properties of castings of 100% primary A356 Al and 100% remelted A356 Al, manufactured via the direct squeeze casting method. The microstructures of the castings made from remelted A356 Al were compared with those of primary A356 Al. The effect of using remelted Al on the microstructure was examined via different analytical techniques: optical microscopy of polished and etched surfaces, and scanning electron microscopy.
Microstructural analysis of the 100% remelted Al, when compared with primary Al, shows a similar α-Al phase, primary Al dendrites, particles, and eutectic constituents. Mechanical testing of cast samples will elucidate further information as to the suitability of utilising 100% remelted Al for casting.
Research on Reservoir Lithology Prediction Based on Residual Neural Network and Squeeze-and-Excitation Neural Network
Conventional reservoir prediction methods are not sufficient to explore the implicit relations among seismic attributes, and thus data utilization is low. In order to improve the predictive classification accuracy of reservoir lithology, this paper proposes a deep learning lithology prediction method based on ResNet (Residual Neural Network) and SENet (Squeeze-and-Excitation Neural Network). The neural network model is built and trained using seismic attribute data and lithology data from the Shengli oilfield, and the nonlinear mapping relationship between seismic attributes and lithology markers is established. The experimental results show that this method can significantly improve the classification of reservoir lithology, with a classification accuracy close to 70%. This study can effectively predict the lithology of undrilled areas and provide support for exploration and development.
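The squeeze-and-excitation mechanism at the heart of SENet can be sketched in a few lines of NumPy; the channel count, reduction ratio, and random weights below are illustrative assumptions, not the network trained in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def se_block(x, w1, w2):
    # x: (C, H, W) feature map. Squeeze -> excite -> rescale channels.
    z = x.mean(axis=(1, 2))                # squeeze: global average pool, shape (C,)
    s = np.maximum(w1 @ z, 0.0)            # excitation: reduce + ReLU, shape (C//r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))    # excitation: expand + sigmoid gate, shape (C,)
    return x * s[:, None, None]            # channel-wise rescaling of the feature map

C, r = 8, 4                                # illustrative channel count and reduction ratio
x = rng.standard_normal((C, 5, 5))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
y = se_block(x, w1, w2)
```

Because the sigmoid gate lies in (0, 1), the block can only attenuate channels, learning to emphasize informative attributes and suppress weak ones — the "implicit relation between seismic attributes" the abstract refers to.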
Cryptographic Attack on Lucas Based Cryptosystems Using Chinese Remainder Theorem
Lenstra's attack uses the Chinese remainder theorem as a tool and requires a faulty signature to be successful. This paper reports on the security responses of the fourth- and sixth-order Lucas based (LUC4,6) cryptosystem under Lenstra's attack, as compared to two other Lucas based cryptosystems, the LUC and LUC3 cryptosystems. All the Lucas based cryptosystems were exposed mathematically to Lenstra's attack using the Chinese remainder theorem and Dickson polynomials. The results show that the possibility of a successful Lenstra's attack is lower against the LUC4,6 cryptosystem than against the LUC3 and LUC cryptosystems. The current study concludes that the LUC4,6 cryptosystem is more secure than the LUC and LUC3 cryptosystems in withstanding Lenstra's attack.
Existence of Solutions for a Three-Point Boundary Value Problem for Differential Equations
In this paper, under weak assumptions, we study the existence and uniqueness of solutions for a nonlinear fractional boundary value problem. New existence and uniqueness results are established using the Banach contraction principle. Other existence results are obtained using Schaefer's and Krasnoselskii's fixed point theorems. At the end, some illustrative examples are presented.
Existence of Positive Solutions for Second-Order Difference Equation with Discrete Boundary Value Problem
We study the existence of positive solutions to a three-point difference summation boundary value problem. We show the existence of at least one positive solution when f is either superlinear or sublinear by applying the fixed point theorem due to Krasnoselskii in cones.
A Comparative Study of Microstructure, Thermal and Mechanical Properties of A359 Composites Reinforced with SiC, Si3N4 and AlN Particles
A comparative study of the thermal and mechanical behavior of squeeze cast A359 composites containing 5, 10, and 15 wt.% SiC, (SiC+Si3N4), and AlN particulates was carried out. Stir casting followed by squeeze casting was used to produce the A359 composites. It was noticed that the A359/AlN composites have high thermal conductivity compared to the A359 alloy and even to the A359/SiC or A359/(SiC+Si3N4) composites. Microstructures of the composites show a homogeneous and even distribution of reinforcements within the matrix. Interfacial reactions between the particles and the matrix were investigated using X-ray diffraction and energy dispersive X-ray analysis. The presence of particles led not only to an increase in the peak hardness of the composites but also to an acceleration of the aging kinetics. Compared with the A359 matrix alloy, compression tests of the composites exhibited a significant increase in the yield and ultimate compressive strengths, with a relative reduction in the failure strain. These lightweight composites have high potential for automotive and aerospace applications.
From the Recursive Definition of Refutability to the Invalidity of Gödel’s 1931 Incompleteness
According to Gödel's first incompleteness argument, it is possible to construct a formally undecidable proposition in Principia Mathematica, a statement that, although true, turns out to be neither provable nor refutable for the system, thereby making any formal system suitable for the arithmetic of integers incomplete. Its features and limiting effects are today widespread basics throughout the whole of scientific thought. This article brings Gödel's achievement into question through the definition of the refutability predicate as a number-theoretical statement. We develop a proof of the invalidity of Theorem VI in Gödel's 1931 paper, the so-called Gödel's first incompleteness theorem, in two steps: defining refutability within the same recursive status as provability, and showing that, as a consequence, propositions (15) and (16), derived from definition 8.1 in Gödel's 1931 paper, are false and unacceptable for the system. Establishing their falsity blocks the derivation of Theorem VI, which turns out to be invalid, together with all the theorems depending on it. This article thus opens up new perspectives for mathematical research and for overall scientific reasoning.
Improving Detection of Illegitimate Scores and Assessment in Most Advantageous Tenders
The Most Advantageous Tender (MAT) has been criticized for its susceptibility to dictatorial situations and for its handling of same-score, same-rank issues. This study applies the four criteria from Arrow's Impossibility Theorem to construct a mechanism for revealing illegitimate scores in scoring methods. While commonly used to mitigate problems resulting from extreme scores, ranking methods hide significant defects that adversely affect selection fairness. To address these shortcomings, this study relies mainly on the overall evaluated score method, using standardized scores plus a normal cumulative distribution function conversion to calculate the evaluation of vendor preference. This allows for free score evaluations, which reduces the influence of dictatorial behavior and avoids same-score, same-rank issues. Large-scale simulations confirm that this method outperforms currently used methods with respect to the Impossibility Theorem criteria.
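The standardized-score-plus-normal-CDF conversion described above can be sketched as follows (a minimal reading of the method; the exact formula used in the study may differ):

```python
import math

def normalize_scores(scores):
    # Standardize each score, then map it through the normal CDF into (0, 1).
    n = len(scores)
    mean = sum(scores) / n
    sd = (sum((s - mean) ** 2 for s in scores) / n) ** 0.5
    if sd == 0:
        return [0.5] * n  # all vendors scored identically
    # Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
    return [0.5 * (1 + math.erf((s - mean) / (sd * math.sqrt(2)))) for s in scores]

# Illustrative vendor scores from one evaluator.
out = normalize_scores([60, 70, 80, 90, 100])
```

Because the CDF maps every standardized score into the open interval (0, 1), a single extreme score cannot dominate the aggregate, while the ordering of vendors is preserved — which is how the conversion limits dictatorial influence.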
Comparative Investigation of Two Non-Contact Prototype Designs Based on a Squeeze-Film Levitation Approach
Transportation and handling of delicate and lightweight objects is currently a significant issue in some industries. Two common contactless movement prototype designs, an ultrasonic transducer design and a vibrating plate design, are compared. Both designs are based on the method of squeeze-film levitation, and this study aims to identify the limitations and challenges of each. The designs are evaluated in terms of levitation capabilities and characteristics. To this end, theoretical and experimental explorations are made. It is demonstrated that the ultrasonic transducer prototype design is better suited in terms of levitation capabilities, but it presents some operating and mechanical design difficulties. For making accurate industrial products in micro-fabrication and nanotechnology contexts, such as semiconductor silicon wafers, micro-components, and integrated circuits, non-contact, oil-free, ultra-precision, and low-wear transport along the production line is crucial. One of the designs (design A) is called the ultrasonic chuck, the main part of which is an ultrasonic transducer (Langevin, FBI 28452 HS). The other (design B) is a vibrating plate design, which consists of a plain rectangular aluminium plate firmly fastened at both ends. The size of the rectangular plate is 200x100x2 mm, and four round piezoelectric actuators of 28 mm diameter and 0.5 mm thickness are glued to the underside of the plate. The vibrating plate is clamped at both ends in the horizontal plane by a steel supporting structure. The dynamics of levitation using designs A and B have been investigated based on squeeze film levitation (SFL). The input apparatus used with the designs consists of a sine wave signal generator connected to an amplifier of type ENP-1-1U (Echo Electronics), which magnifies the sine wave voltage produced by the signal generator.
The measured maximum levitation heights for three semiconductor wafers of weights 52, 70, and 88 g for design A are 240, 205, and 187 µm, respectively, whereas the physical results show that the average separation distance for a disk of 5 g weight for design B reaches 70 µm. Using the squeeze film levitation methodology, it is possible to hold an object in a non-contact manner. The analysis of the investigation outcomes signifies that design A provides better non-contact levitation than design B; however, design A is more complicated than design B in terms of its manufacturing. In order to identify an adequate non-contact SFL design, a comparison between two common such designs has been adopted for the current investigation. Specifically, the study involves making comparisons in terms of the following issues: floating component geometries and material type constraints; final created pressure distributions; dangerous interactions with the surrounding space; working environment constraints; and the complexity and compactness of the mechanical design. Considering all these matters is essential for proficiently distinguishing the better SFL design.
Analysis of Senior Secondary II Students' Performance/Approaches Exhibited in Solving Circle Geometry
The paper will examine the approaches and solutions offered by Senior Secondary School II students (Demonstration Secondary School, Azare, Bauchi State, northern Nigeria, a predominantly Hausa/Fulani area) toward solving exercises related to the circle theorem: the angle that an arc of a circle subtends at the center is twice that which it subtends at any point on the remaining part of the circumference. The students will be divided into two groups by giving them the numbers 1, 2; 1, 2; 1, 2; all 1s will form Group I and all 2s will form Group II. Group I will be the control group, in which the traditional method will be applied during instruction: the researcher will revise the concept of the circle, state the theorem, prove the theorem, and then solve examples. Group II will be the experimental group, in which the concept of the circle will be revised with the students, who will then be asked to draw different circles, mark arcs, draw the angle at the center and the angle at the circumference, and measure the angles constructed. The students will be asked to explain what they can infer or deduce from the angles measured, and lastly, examples will be solved. During the next contact day, both groups will be given classroom exercises related to the theorem. The solutions to the exercises will be marked, and the scores compared and analysed using relevant statistical tools. It is expected that Group II will perform better, because the method followed during instruction is more learner-centered. Exploiting the talents of individual learners by listening to their views and asking them how they arrived at a solution will really improve learning and understanding.
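The circle theorem the exercises are built on — the angle at the center is twice the angle at the circumference on the remaining arc — can be checked numerically; the points chosen below are arbitrary illustrations on the unit circle:

```python
import math

def angle(vertex, p, q):
    # Angle at `vertex` subtended by the segment from p to q.
    a1 = math.atan2(p[1] - vertex[1], p[0] - vertex[0])
    a2 = math.atan2(q[1] - vertex[1], q[0] - vertex[0])
    d = abs(a1 - a2) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def on_circle(t):
    # Point at parameter t on the unit circle centered at the origin.
    return (math.cos(t), math.sin(t))

# Arc endpoints A and B, and a point P on the remaining (major) arc.
A, B = on_circle(0.3), on_circle(1.9)
P = on_circle(4.0)
center_angle = angle((0, 0), A, B)   # angle subtended at the center
inscribed = angle(P, A, B)           # angle subtended at the circumference
```

For any choice of P on the major arc, `center_angle` works out to exactly twice `inscribed` (up to floating-point error), which is what the experimental group is meant to discover by measurement.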
Nadler's Fixed Point Theorem on Partial Metric Spaces and its Application to a Homotopy Result
In 1994, Matthews (S.G. Matthews, Partial metric topology, in: Proc. 8th Summer Conference on General Topology and Applications, in: Ann. New York Acad. Sci., vol. 728, 1994, pp. 183-197) introduced the concept of a partial metric as a part of the study of denotational semantics of data flow networks. He gave a modified version of the Banach contraction principle, more suitable in this context. In fact, (complete) partial metric spaces constitute a suitable framework to model several distinguished examples of the theory of computation and also to model metric spaces via domain theory. In this paper, we introduce the concept of the almost partial Hausdorff metric. We prove a fixed point theorem for multi-valued mappings on a partial metric space using this concept, obtaining an analogue of the well-known Nadler's fixed point theorem. In the sequel, we derive a homotopy result as an application of our main result.
A Study of Evolutional Control Systems
Controllability is one of the fundamental issues in control systems. In this paper, we study the controllability of second order evolutional control systems in Hilbert spaces with memory and boundary controls, which model dynamic behaviors of some viscoelastic materials. Transferring the control problem into a moment problem and showing the Riesz property of a family of functions related to Cauchy problems for some integrodifferential equations, we obtain a general boundary controllability theorem for these second order evolutional control systems. This controllability theorem is applicable to various concrete 1D viscoelastic systems and recovers some previous related results. It is worth noting that Riesz sequences can be used for numerical computations of the control functions and the identification of new Riesz sequence is of independent interest for the basis-function theory. Moreover, using the Riesz sequences, we obtain the existence and uniqueness of (weak) solutions to these second order evolutional control systems in Hilbert spaces. Finally, we derive the exact boundary controllability of a viscoelastic beam equation, as an application of our abstract theorem.
[Keynote Talk]: Existence of Random Fixed Point Theorem for Contractive Mappings
Random fixed point theory has received much attention in recent years, and it is needed for the study of various classes of random equations. The study of random fixed point theorems was initiated by the Prague school of probabilists in the 1950s. The existence and uniqueness of fixed points for self-maps of a metric space, obtained by altering the distances between points with the use of a control function, is an interesting aspect of classical fixed point theory. This leads to a new category of fixed point problems for a single self-map with the help of a control function that alters the distance between two points in a metric space, called an altering distance function. In this paper, we prove the existence and uniqueness of a random common fixed point for a pair of random mappings under a weakly contractive condition for a generalized altering distance function in Polish spaces, using a random common fixed point theorem for generalized weak contractions.
Energy Conservation and H-Theorem for the Enskog-Vlasov Equation
The Enskog-Vlasov (EV) equation is a widely used semi-phenomenological model of gas/liquid phase transitions. We show that it does not generally conserve energy, although there exists a restriction on its coefficients for which it does. Furthermore, if an energy-preserving version of the EV equation satisfies an H-theorem as well, it can be used to rigorously derive the so-called Maxwell construction which determines the parameters of liquid-vapor equilibria. Finally, we show that the EV model provides an accurate description of the thermodynamics of noble fluids, and there exists a version simple enough for use in applications.
A New Fixed Point Theorem for Almost θ-Contraction
In this work, we introduce a new type of contractive map and establish a new fixed point theorem for the class of almost θ-contractions (more general than the class of almost contractions) in a complete generalized metric space. The major novelty of our work is to prove a new fixed point result by weakening some of the hypotheses imposed on the function θ, which completely changes the classical technique used in the literature to prove fixed point theorems for almost θ-contractions in a complete generalized metric space.
Existence of Positive Solutions to a Dirichlet Second Order Boundary Value Problem
In this paper, we investigate the existence of positive solutions for a Dirichlet second order boundary value problem by applying the Krasnosel'skii fixed point theorem on compression and expansion of cones.
Simplified Equations for Rigidity and Lateral Deflection for Reinforced Concrete Cantilever Shear Walls
Reinforced concrete shear walls are the most frequently used form of lateral-resisting structural element. These walls may take many forms depending on their functions and locations in the building. In Palestine, the most common lateral-force-resisting construction form is the cantilever shear wall system. It is thus of prime importance to study the rigidity of these walls. The virtual work theorem is used to derive the total lateral deflection of cantilever shear walls due to flexural and shear deformation. The case of neglecting the shear deformation in the walls is also studied, and it is found that the wall height-to-length aspect ratio (H/B) plays a major role in calculating the lateral deflection and the rigidity of such walls. When H/B is greater than or equal to 3.7, the shear deformation may be neglected in the calculation of the lateral deflection. Moreover, walls with the same material properties, the same lateral load value, and the same aspect ratio have the same lateral deflection and the same rigidity. Finally, equations for the total rigidity and total deflection of such walls are derived using the virtual work theorem for a cantilever beam.
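The flexural and shear terms of the virtual-work deflection can be sketched for a rectangular wall under a tip load. The 1.2 shear shape factor, the Poisson ratio ν = 0.25, and the numerical values below are standard illustrative assumptions, not the paper's specific derivation:

```python
def cantilever_wall_deflection(P, H, B, t, E, nu=0.25):
    # Virtual-work deflection of a cantilever shear wall under a tip load P:
    # flexural term P*H^3/(3*E*I) plus shear term 1.2*P*H/(G*A), where 1.2 is
    # the shear shape factor for a rectangular section.
    G = E / (2 * (1 + nu))          # shear modulus from E and nu
    I = t * B**3 / 12               # second moment of area of the wall section
    A = t * B                       # cross-sectional area
    d_flex = P * H**3 / (3 * E * I)
    d_shear = 1.2 * P * H / (G * A)
    return d_flex, d_shear

# Illustrative wall at the H/B = 3.7 threshold: P = 100 kN, H = 3.7 m,
# B = 1.0 m, t = 0.2 m, E = 25 GPa.
d_flex, d_shear = cantilever_wall_deflection(100e3, 3.7, 1.0, 0.2, 25e9)
```

With these assumptions the ratio of the shear to the flexural term reduces to 0.75·(B/H)², so at H/B = 3.7 the shear term is only about 5% of the flexural one — consistent with the threshold above which shear deformation may be neglected.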
Mathematical and Numerical Analysis of a Reaction Diffusion System of Lambda-Omega Type
In this study, we consider a nonlinear-in-time finite element approximation of a reaction-diffusion system of lambda-omega type. We use a fixed point theorem to prove the existence of the approximations. Then, we derive some essential stability estimates and discuss the uniqueness of the approximations. Also, we prove an optimal error bound in time for d = 1, 2, and 3 space dimensions. Finally, we present some numerical experiments to verify the theoretical results.
Existence of Positive Periodic Solutions for Certain Delay Differential Equations
In this article, we study the existence of positive periodic solutions of certain delay differential equations. In the process we convert the differential equation into an equivalent integral equation after which appropriate mappings are constructed. We then employ Krasnoselskii's fixed point theorem to obtain sufficient conditions for the existence of a positive periodic solution of the differential equation. The obtained results improve and extend the results in the literature. Finally, an example is given to illustrate our results.
Using Indigenous Games to Demystify Probability Theorem in Ghanaian Classrooms: Mathematical Analysis of Ampe
Similar to many colonized nations in the world, one indelible mark left by colonial masters after Ghana’s independence in 1957 has been the fact that many contexts used to teach statistics and probability concepts are often alien and do not resonate with the social domain of the indigenous Ghanaian child. This has seriously limited the understanding, discovery, and application of mathematics for national development. With the recent curriculum demands of making the Ghanaian child mathematically literate, this qualitative study involved video recordings and mathematical analysis of play sessions of an indigenous girls’ game called Ampe, with the aim of demystifying the concepts of probability, which are applied in mathematics-related fields of study. The mathematical analysis shows that the game of Ampe, which is widely played by school girls in Ghana, is suitable for learning concepts of probability. It was also revealed that, as a girls’ game, the use of Ampe provides good lessons to educators, textbook writers, and teachers to rethink the selection of mathematics tasks and learning contexts that are sensitive to gender. As we undertake to transform teacher education and student learning, the use of indigenous games should be critically revisited.
Validation of the Formal Model of Web Services Applications for Digital Reference Service of Library Information System
The web services application for digital reference service (WSDRS) of an LIS model is an informal model that claims to reduce the problems of digital reference services in libraries. It uses web services technology to provide an efficient way of satisfying users’ needs in the reference section of libraries. The formal WSDRS model consists of the Z specifications of all the informal specifications of the model. This paper discusses the formal validation of the Z specifications of the WSDRS model. The authors formally verify and thus validate the properties of the model using the Z/EVES theorem prover.
A Proof of the N. Davydov Theorem for Douglis Algebra Valued Functions
The classical Beltrami system of elliptic equations generalizes the Cauchy-Riemann equation in the complex plane and offers the possibility of considering homogeneous systems with no zero-order terms. The theory of Douglis-algebra-valued functions, called hyperanalytic functions, is a special case of this situation. In this note, we prove an analogue of the N. Davydov theorem in the framework of the theory of hyperanalytic functions. The methodology draws on characteristic methods of hypercomplex analysis as well as the theories of singular integral operators and elliptic systems of partial differential equations.
Total Controllability of the Second Order Nonlinear Differential Equation with Delay and Non-Instantaneous Impulses
A stronger concept of exact controllability which is called Total Controllability is introduced in this manuscript. Sufficient conditions have been established for the total controllability of a control problem, governed by second order nonlinear differential equation with delay and non-instantaneous impulses in a Banach space X. The results are obtained using the strongly continuous cosine family and Banach fixed point theorem. Also, the total controllability of an integrodifferential problem is investigated. At the end, some numerical examples are provided to illustrate the analytical findings.
Numerical Method for Heat Transfer Problem in a Block Having an Interface
A finite volume method for quadrilateral unstructured meshes is developed to predict two-dimensional steady-state solutions of the conduction equation. In this scheme, based on integration around the polygonal control volume, the derivatives in the conduction equation are converted into closed line integrals using a formulation analogous to Stokes' theorem. To validate the accuracy of the method, two numerical experiments are used: conduction in a regular block (with a known analytical solution) and conduction in a rotated block (a case with curved boundaries). The numerical results show good agreement with the analytical results. To demonstrate the accuracy of the method, the absolute and root-mean-square errors versus the grid size are examined quantitatively.
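The control-volume flux balance described above can be illustrated on the simplest possible mesh. The sketch below solves steady conduction on a structured Cartesian grid, where integrating div(grad T) = 0 over each control volume reduces to the familiar five-point balance of face fluxes; this is an illustrative special case only, since the paper treats general quadrilateral unstructured meshes, and the boundary data are an assumption chosen so an analytical solution exists.

```python
import numpy as np

# Steady conduction (Laplace equation) on the unit square: T = sin(pi*x) on
# the top edge, T = 0 on the other three edges. The control-volume flux
# balance on a uniform Cartesian grid is the five-point stencil below.
n = 41
T = np.zeros((n, n))
x = np.linspace(0, 1, n)
T[-1, :] = np.sin(np.pi * x)             # top boundary; others held at zero
for _ in range(10000):                   # Jacobi iteration of the flux balance
    T[1:-1, 1:-1] = 0.25 * (T[2:, 1:-1] + T[:-2, 1:-1]
                            + T[1:-1, 2:] + T[1:-1, :-2])

# Analytical solution for this boundary data, for the error check:
X, Y = np.meshgrid(x, x)                 # Y is the row (vertical) coordinate
exact = np.sin(np.pi * X) * np.sinh(np.pi * Y) / np.sinh(np.pi)
print("max abs error:", np.abs(T - exact).max())
```

On curved or rotated boundaries this Cartesian shortcut no longer applies, which is exactly where the paper's line-integral formulation over polygonal control volumes earns its keep.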
The Construction of the Semigroup Which Is Chernoff Equivalent to Statistical Mixture of Quantizations for the Case of the Harmonic Oscillator
We obtain explicit formulas for finitely multiple approximations of the equilibrium density matrix for the case of the harmonic oscillator, using Chernoff's theorem and the notion of a semigroup which is Chernoff equivalent to the averaged semigroup. We also find explicit formulas for the corresponding approximate Wigner functions and average values of the observable. We consider a superposition of τ-quantizations representing a wide class of linear quantizations. We show that the convergence of the approximations of the average values of the observable is not uniform with respect to the Gibbs parameter. This does not allow us to represent the approximate expression as the sum of the exact limit and small deviations uniformly throughout the temperature range with a given order of approximation.
Sliding Mode Position Control for Permanent Magnet Synchronous Motors Based on Passivity Approach
In this paper, a sliding mode control method based on the passivity approach is proposed to control the position of surface-mounted permanent magnet synchronous motors (PMSMs). Firstly, the dynamics of a PMSM were proved to be strictly passive. A position controller with an adaptive law was used to estimate the load torque, in order to eliminate the chattering effects associated with the conventional sliding mode controller. The stability analysis of the overall position control system was carried out by adopting the passivity theorem instead of Lyapunov-type arguments. Finally, experimental results were provided to show that good position tracking can be obtained and that the controller is robust to variations in the motor parameters and to load torque disturbances.
A Proof for Goldbach's Conjecture
In 1937, the Russian mathematician Vinogradov proved that every sufficiently large odd number can be written as the sum of three primes. In 1973, Chen Jingrun proved that every sufficiently large even number can be written as a prime plus a number that has at most two prime factors. In this article, we state a proof of Goldbach’s conjecture. Introduction: Bertrand’s postulate states that for every positive integer n, there is always at least one prime p such that n < p < 2n. This was first proved by Chebyshev in 1850, which is why the postulate is also called the Bertrand-Chebyshev theorem. Legendre’s conjecture states that there is a prime between n^2 and (n+1)^2 for every positive integer n; it is one of Landau’s four problems. The rest of these four basic problems are: (i) Twin prime conjecture: there are infinitely many primes p such that p+2 is prime. (ii) Goldbach’s conjecture: every even integer n > 2 can be written as the sum of two primes. (iii) Are there infinitely many primes p such that p−1 is a perfect square? Problems (i), (ii), and (iii) remain open to date.
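Independently of any proof attempt, Goldbach's statement is easy to check computationally for small cases. The sketch below verifies the conjecture for all even numbers up to a bound; it verifies instances only and of course proves nothing in general.

```python
# Verify Goldbach's conjecture for all even n up to N by sieving primes
# and searching for a prime pair p + (n - p) = n.
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return sieve

N = 10_000
is_prime = primes_up_to(N)
for n in range(4, N + 1, 2):
    assert any(is_prime[p] and is_prime[n - p]
               for p in range(2, n // 2 + 1)), n
print("Goldbach holds for all even n <=", N)
```

The inner search short-circuits at the first prime pair, which for almost all n occurs at a very small p, so the check is fast despite the nested loop.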
Positive Bias and Length Bias in Deep Neural Networks for Premises Selection
Premises selection, the task of selecting a set of axioms for proving a given conjecture, is a major bottleneck in automated theorem proving. An array of deep-learning-based methods has been established for premises selection, but perfect performance remains challenging. Our study examines the inaccuracy of deep neural networks in premises selection. By training network models on encoded conjecture-axiom pairs from the Mizar Mathematical Library, two potential biases are found: the network models classify more premises as necessary than unnecessary, referred to as ‘positive bias’, and the network models perform better in proving conjectures that are paired with more axioms, referred to as ‘length bias’. The ‘positive bias’ and ‘length bias’ discovered point to limitations of existing deep neural networks.
A Folk Theorem with Public Randomization Device in Repeated Prisoner’s Dilemma under Costly Observation
An infinitely repeated prisoner’s dilemma is a typical model of a teamwork situation. If both players choose costly actions and contribute to the team, then both players are better off. However, each player has an incentive to choose a selfish action. We analyze the game under costly observation: each player can observe the action of the opponent only when he pays an observation cost in that period. In reality, observation in teamwork situations is often costly. Members of some teams work in distinct rooms, areas, or countries; in those cases, they have to spend time and money to see other team members if they want to observe them. The costly observation assumption makes cooperation substantially more difficult, because the equilibrium must satisfy the incentive constraints not only on actions but also on observational decisions. Cooperation is most difficult when the stage game is the prisoner’s dilemma, because players have to communicate through only two actions. We examine whether or not players can cooperate with each other in the prisoner’s dilemma under costly observation. Specifically, we check whether symmetric Pareto-efficient payoff vectors in the repeated prisoner’s dilemma can be approximated by sequential equilibria (an efficiency result). We show the efficiency result without any randomization device under certain circumstances. This means that players can cooperate with each other without any randomization device even if observation is costly. Next, we assume that a public randomization device is available, and then show that any feasible and individually rational payoffs in the prisoner’s dilemma can be approximated by sequential equilibria in a specific situation (a folk theorem). This implies that players can achieve asymmetric teamwork, as in a leadership situation, when a public randomization device is available.
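As a benchmark without observation costs (the paper's point is precisely that costly observation makes this harder), the standard grim-trigger incentive check in the repeated prisoner's dilemma can be written in a few lines; the payoff values below are a conventional example, not the paper's.

```python
# Grim trigger sustains mutual cooperation in the infinitely repeated
# prisoner's dilemma iff delta >= (T - R) / (T - P), where T > R > P > S
# are the temptation, reward, punishment, and sucker payoffs.
T_, R, P, S = 5, 3, 1, 0            # a standard prisoner's dilemma

def cooperation_sustainable(delta):
    coop = R / (1 - delta)                    # discounted value of cooperating forever
    deviate = T_ + delta * P / (1 - delta)    # defect once, then punished forever
    return coop >= deviate

threshold = (T_ - R) / (T_ - P)
print(threshold, cooperation_sustainable(0.6), cooperation_sustainable(0.4))
# -> 0.5 True False
```

Under costly observation this one-shot-deviation check is no longer sufficient, since a player may also deviate by skipping the paid observation, which is the extra incentive constraint the paper's equilibria must satisfy.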
Existence Theory for First Order Functional Random Differential Equations
In this paper, the existence of a solution of nonlinear functional random differential equations (F.R.D.E.s) of the first order is proved under a Carathéodory condition. The study of functional random differential equations is important in the random analysis of dynamical systems of universal phenomena. Objectives: Nonlinear functional random differential equations are useful to scientists, engineers, and mathematicians who analyze universal random phenomena governed by nonlinear random initial value problems; applications include the theory of diffusion and heat conduction. Methodology: Using concepts from probability theory and functional analysis, existence theorems for the nonlinear F.R.D.E. are proved using tools such as fixed point theorems. Significance of the study: Our contribution is the generalization of some well-known results in the theory of nonlinear F.R.D.E.s. Further, our study should be useful to scientists, engineers, economists, and mathematicians in their endeavors to analyze the nonlinear random problems of the universe in a better way.
Design and Implementation of Neural Network Based Controller for Self-Driven Vehicle
This paper devises an autonomous self-driven vehicle that is capable of taking a disabled person to his/her desired location using three different power sources (gasoline, solar, electric) without any control from the user, while avoiding obstacles along the way. The GPS coordinates of the desired location are sent to the main processing board via a GSM module. After the GPS coordinates are received, the path to be followed by the vehicle is devised using the Pythagorean theorem. The distance and angle between the present location and the desired location are calculated, and the vehicle then starts moving in the desired direction. Meanwhile, real-time data from ultrasonic sensors are fed to the board for the obstacle avoidance mechanism. Ultrasonic sensors are used to quantify the distance of the vehicle from an object. The distance and position of the object are then used to make decisions regarding the direction of the vehicle in order to avoid obstacles, using an artificial neural network implemented on an ATmega1280. The vehicle also provides location feedback to a remote location.
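The distance-and-angle step described above can be sketched with a planar approximation: treat a small region as flat, convert latitude/longitude differences to metres, then apply the Pythagorean theorem for distance and atan2 for heading. The function and constant names below are illustrative assumptions, not taken from the paper's firmware.

```python
import math

# Planar sketch of distance and heading between two GPS fixes.
EARTH_M_PER_DEG = 111_320.0   # approximate metres per degree of latitude

def distance_and_heading(lat1, lon1, lat2, lon2):
    dy = (lat2 - lat1) * EARTH_M_PER_DEG                       # north-south metres
    dx = (lon2 - lon1) * EARTH_M_PER_DEG * math.cos(math.radians(lat1))
    dist = math.hypot(dx, dy)                                  # Pythagorean distance
    heading = math.degrees(math.atan2(dx, dy)) % 360           # 0 deg = due north
    return dist, heading

# A point about 0.01 degrees of latitude due north of the start:
d, h = distance_and_heading(33.6844, 73.0479, 33.6944, 73.0479)
print(round(d), round(h))   # -> 1113 0
```

The flat-earth approximation is adequate over the short distances a personal vehicle covers; over longer ranges a great-circle formula would be needed instead.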
Numerical Method for Fin Profile Optimization
In the present work, a numerical method is proposed to optimize the thermal performance of finned surfaces. The two-dimensional temperature distribution on the longitudinal section of the fin is calculated by resorting to the finite volume method. The heat flux dissipated by a fin of generic profile is compared with the heat flux removed by a rectangular-profile fin of the same length and volume. A finite volume method for quadrilateral unstructured meshes is developed to predict the two-dimensional steady-state solutions of the conduction equation, in order to determine the sinusoidal parameter values which optimize the fin effectiveness. In this scheme, based on integration around the polygonal control volume, the derivatives in the conduction equation are converted into closed line integrals using a formulation analogous to Stokes' theorem. The numerical results show good agreement with analytical results. To demonstrate the accuracy of the method, the absolute and root-mean-square errors versus the grid size are examined quantitatively.
Numerical Method of Heat Transfer in Fin Profiles
In this work, a numerical method is proposed to solve heat transfer problems for fin surfaces. The two-dimensional temperature distribution on the longitudinal section of the fin is calculated by resorting to the finite volume method. The heat flux dissipated by a fin of generic profile is compared with the heat flux removed by a rectangular-profile fin of the same length and volume. A finite volume method for quadrilateral unstructured meshes is developed to predict the two-dimensional steady-state solutions of the conduction equation, in order to determine the sinusoidal parameter values which optimize the fin effectiveness. In this scheme, based on integration around the polygonal control volume, the derivatives in the conduction equation are converted into closed line integrals using a formulation analogous to Stokes' theorem. The numerical results show good agreement with analytical results. To demonstrate the accuracy of the method, the absolute and root-mean-square errors versus the grid size are examined quantitatively.
Theorem on Inconsistency of The Classical Logic
This abstract concerns an extremely fundamental issue, namely the fundamental problem of science: the issue of consistency. We present a theorem stating that the classical calculus of quantifiers is inconsistent in the traditional sense. We first introduce notation, and then recall the definition of consistency in the traditional sense. S1 is the set of all well-formed formulas in the calculus of quantifiers. RS1 denotes the set of all rules over the set S1. Cn(R, X) is the set of all formulas standardly provable from X by the rules R, where R is a subset of RS1 and X is a subset of S1. The couple <R, X> is called a system whenever R is a subset of RS1 and X is a subset of S1. Definition: the system <R, X> is consistent in the traditional sense if there does not exist any formula in S1 such that both the formula and its negation are provable from X by using rules from R. Finally, <R0+, L2> denotes the classical calculus of quantifiers, where R0+ consists of Modus Ponens and the generalization rule, and L2 is the set of all formulas valid in the classical calculus of quantifiers. The main result: the system <R0+, L2> is inconsistent in the traditional sense.
Digital Image Steganography with Multilayer Security
In this paper, a new method is developed for hiding an image in a digital image with multilayer security. In the proposed method, the secret image is first encrypted using a flexible matrix-based symmetric key to add the first layer of security. Another layer of security is then added by encrypting the ciphered data using a Pythagorean-theorem-based method. The ciphered data bits (4 bits) produced after double encryption are then embedded within the digital image in the spatial domain using Least Significant Bit (LSB) substitution. To improve the image quality of the stego-image, an improved form of the pixel adjustment process is proposed. To evaluate the effectiveness of the proposed method, image quality metrics including Peak Signal-to-Noise Ratio (PSNR), Mean Square Error (MSE), entropy, correlation, mean value, and the Universal Image Quality Index (UIQI) are measured. It has been found experimentally that the proposed method provides higher security as well as robustness. In fact, the results of this study are quite promising.
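The final embedding stage can be sketched in a few lines. The snippet below shows only plain LSB substitution and extraction, leaving the paper's two encryption layers and pixel adjustment process out of scope; the function names are illustrative.

```python
import numpy as np

# Embed secret bits into the least significant bit of each cover pixel,
# then read them back. Each pixel changes by at most 1 grey level.
def embed_lsb(cover, bits):
    stego = cover.copy()
    flat = stego.ravel()                         # view into the copy
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return stego

def extract_lsb(stego, n_bits):
    return stego.ravel()[:n_bits] & 1

cover = np.random.default_rng(0).integers(0, 256, (8, 8), dtype=np.uint8)
secret = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
stego = embed_lsb(cover, secret)
assert (extract_lsb(stego, 8) == secret).all()
assert np.abs(stego.astype(int) - cover.astype(int)).max() <= 1  # imperceptible
```

Because the per-pixel change is bounded by one grey level, the PSNR of the stego-image stays high, which is what the quality metrics listed above quantify.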
Classifying Time Independent Plane Symmetric Spacetime through Noether's Approach
The universe is expanding at an accelerated rate, and symmetries are useful in understanding the universe's behavior. Emmy Noether established the relation between symmetries and conservation laws; these symmetries are known as Noether symmetries, and each corresponds to a conserved quantity. Conservation laws play an important role in differential equations, and Noether symmetries are helpful in modified theories of gravity. Time-independent plane symmetric spacetime was classified by Noether's theorem. By using Noether's theorem, a set of linear partial differential equations was obtained with A(r), B(r), and F(r) as unknown radial functions. The Lagrangian corresponding to the considered spacetime was used in the Noether equation to obtain the Noether operators. Different possibilities for the radial functions were considered. Firstly, all three functions were taken to be the same and were considered, in turn, as a non-zero constant, linear, reciprocal, and exponential function. Secondly, two functions were taken proportional to each other, keeping the third function different; this case has four subcases, in which four different relationships between A(r), B(r), and F(r) were discussed. In all cases, we obtained nontrivial Noether operators, including the gauge term. Conserved quantities for the Noether operators were also presented.
[Keynote Speaker]: Some Similarity Considerations for Design of Experiments for Hybrid Buoyant Aerial Vehicle
The buoyancy force applied on deformable symmetric bodies can be estimated using Archimedes' principle. Bodies such as ellipsoids have a high volume-to-surface ratio and are isometrically scaled for mass, length, area, and volume, following the square-cube law. For scaling up such bodies, it is worthwhile to find the scaling relationships between the other physical quantities that represent the thermodynamic, structural, and inertial response. Dimensionless similarities yielding an allometric scale can thus be developed by using the Buckingham π theorem, which utilizes the physical dimensions of the important parameters. Based on this, the physical dependencies of the buoyancy system are reviewed to find the set of physical variables for deformable bodies of revolution filled with an expandable gas such as helium. Due to changes in atmospheric conditions, this gas changes its volume, and this change can affect the stability of elongated bodies on the ground as well as in the air. Special emphasis is given to the existing similarity parameters which can be used in the design of experiments on such bodies, whose shape is affected by external forces such as drag, surface tension, and kinetic loads acting on the surface. All these similarity criteria are based on non-dimensionalization, which also needs to be considered for scaling up such bodies.
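The Buckingham π procedure can be mechanized: the dimensionless groups are a basis of the null space of the dimension matrix. The sketch below uses a textbook variable set for a body moving through fluid (drag D, density ρ, velocity V, length L, viscosity μ); this is an illustrative set, not the paper's final selection of buoyancy variables.

```python
from sympy import Matrix

# Dimension matrix: columns are the variables D, rho, V, L, mu;
# rows are the exponents of mass, length, and time in each variable's units.
#              D  rho   V   L   mu
dim = Matrix([[ 1,  1,  0,  0,  1],    # mass
              [ 1, -3,  1,  1, -1],    # length
              [-2,  0, -1,  0, -1]])   # time

# Each null-space vector is the exponent tuple of one dimensionless pi group
# (here two groups arise, e.g. a drag coefficient and a Reynolds number).
for vec in dim.nullspace():
    print(vec.T)
```

Five variables minus a rank-3 dimension matrix gives 5 - 3 = 2 independent π groups, exactly as the Buckingham π theorem predicts.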
Closed Form Solution for 4-D Potential Integrals for Arbitrary Coplanar Polygonal Surfaces
A closed-form solution for 4-D double surface integrals arising in the boundary integral equations of potential theory is obtained for arbitrary coplanar polygonal surfaces. The solution method is based on the construction of exact differential forms followed by the application of Stokes' theorem to each surface integral. As a result, the 4-D double surface integral is reduced to a 2-D double line integral. By an appropriate change of variables, the integrand is transformed into a separable function of the integration variables. The closed-form solutions to the corresponding 1-D integrals are readily available in integration tables. Previously, closed-form solutions were known only for the cases of coincident triangular surfaces and coplanar rectangles; solutions for these cases were obtained by surface-specific ad-hoc methods, while the present method is general. The method also works for non-polygonal surfaces. As an example, we compute in closed form the 4-D integral for the case of coincident surfaces in the shape of a circular disk. For an arbitrarily shaped surface, the proposed method provides an efficient quadrature rule. Extensions of the method to non-coplanar surfaces and to integral kernels other than 1/R are also discussed.
Dynamical Relation of Poisson Spike Trains in Hodgkin-Huxley Neural Ion Current Model and Formation of Non-Canonical Bases, Islands, and Analog Bases in DNA, mRNA, and RNA at or near the Transcription
A groundbreaking application of biomathematical and biochemical research on neural network processes to the formation of non-canonical bases, islands, and analog bases in DNA and mRNA at or near transcription, which contradicts the long-anticipated statistical assumptions for the distribution of bases and analog-base compounds, is implemented through the apparatus of statistical and stochastic methods, with the addition of quantum principles, where the usual transience of the Poisson spike train becomes a very instrumental tool for finding almost-periodic solutions to the Fokker-Planck stochastic differential equation. The present article develops new multidimensional methods of finding solutions to stochastic differential equations, based on a more rigorous approach to the mathematical apparatus through the Kolmogorov-Chentsov continuity theorem, which allows stochastic processes with jumps, under certain conditions, to have a γ-Hölder continuous modification; this is used as a basis for finding analogous parallels in the dynamics of neural networks and the formation of analog bases and transcription in DNA.
Effect of Fast and Slow Tempo Music on Muscle Endurance Time
Introduction: According to the WHO Global Health Observatory, at least 2.8 million people die each year because of obesity and overweight. This is mainly because of the adverse metabolic effects of obesity and overweight on blood pressure, lipid profile (especially cholesterol), and insulin resistance. To achieve optimum health, the WHO has set the BMI range at 18.5 to 24.9 kg/m2. Due to the modernization of lifestyle, physical exercise in the form of work is no longer a possibility, and hence an effective way to burn calories to achieve the optimum BMI is the need of the hour. Studies have shown that exercising for more than 60 minutes/day helps to maintain weight, and that to reduce weight, exercise should be done for 90 minutes a day. Moderate exercise of about 30 minutes is essential for burning up calories. People with low endurance fail to perform even low-intensity exercise for a minimal time. Hence, it is necessary to find an effective method to increase endurance time. Methodology: This study was approved by the Institutional Ethical Committee of our college. After obtaining written informed consent, 25 apparently healthy males in the age group 18-20 years were selected. Subjects with muscular disorders, hypertension, or diabetes, smokers, alcoholics, and those taking drugs affecting muscle strength were excluded. To determine the endurance time, maximum voluntary contraction (MVC) was measured by asking the participants to squeeze a hand grip dynamometer as hard as possible and hold it for 3 seconds. This procedure was repeated thrice, and the average of the three readings was taken as the maximum voluntary contraction.
The participant was then asked to squeeze the dynamometer and hold it at 70% of the maximum voluntary contraction while hearing fast tempo music, which was played for about ten minutes; the participant was then asked to relax for ten minutes and was made to hold the hand grip dynamometer at 70% of the maximum voluntary contraction while hearing slow tempo music. To avoid the bias of becoming habituated to the procedure, the order of hearing the fast and slow tempo music was changed. The time for which the participants could hold at 70% of MVC was determined using a stopwatch and was taken as the endurance time. Results: The mean value of the endurance time during fast and slow tempo music was compared for all the subjects. The mean MVC was 34.92 N. The mean endurance time was 21.8 (16.3) seconds with slow tempo music, which was longer than with fast tempo music, for which the mean endurance time was 20.6 (11.7) seconds. The preference was greater for slow tempo music than for fast tempo music. Conclusion: Music played during exercise, by some unknown mechanism, helps to increase the endurance time by alleviating the symptoms of lactic acid accumulation.
A Method for Solving Legendre's Conjecture
Legendre’s conjecture states that there is a prime number between n^2 and (n+1)^2 for every positive integer n. In this paper, we prove that every composite number between n^2 and (n+1)^2 can be written as u^2 - v^2 or u^2 - v^2 + u - v, where u > 0 and v >= 0. Using these results, as well as induction and residues (mod q), we prove Legendre’s conjecture.
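The representation claim above is easy to check by brute force for small n. The sketch below verifies, for each composite strictly between n^2 and (n+1)^2, that a pair (u, v) with the stated form exists; it checks instances of the lemma only, not the conjecture itself.

```python
from math import isqrt

# Search for u > 0, v >= 0 with c = u^2 - v^2 or c = u^2 - v^2 + u - v.
def representable(c):
    for u in range(1, c + 1):
        for v in range(0, u + 1):
            if u*u - v*v == c or u*u - v*v + u - v == c:
                return True
    return False

def is_composite(c):
    return c > 3 and any(c % d == 0 for d in range(2, isqrt(c) + 1))

for n in range(2, 15):
    for c in range(n*n + 1, (n + 1)**2):
        if is_composite(c):
            assert representable(c), c
print("every composite strictly between n^2 and (n+1)^2 is representable, n < 15")
```

Note that u^2 - v^2 + u - v factors as (u - v)(u + v + 1), whose factors have opposite parity; this is what lets the second form cover the even composites (such as 6) that are not differences of two squares.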
Some Inequalities Related with Starlike Log-Harmonic Mappings
Let H(D) be the linear space of all analytic functions defined on the open unit disc. A log-harmonic mapping is a solution of a nonlinear elliptic partial differential equation in which w(z) ∈ H(D) is the second dilatation, such that |w(z)| < 1 for all z ∈ D. The aim of this paper is to establish some inequalities for starlike log-harmonic functions of order α (0 ≤ α ≤ 1).
Approximation by Generalized Lupaş-Durrmeyer Operators with Two Parameter α and β
This paper deals with the Stancu-type generalization of Lupaş-Durrmeyer operators. We establish some direct results in the polynomial weighted space of continuous functions defined on the interval [0, 1]. A Voronovskaja-type theorem is also studied.
On Paranorm Zweier I-Convergent Sequence Spaces
In this article, we introduce paranorm Zweier I-convergent sequence spaces for a sequence of positive real numbers. We study some topological properties, prove a decomposition theorem, and study some inclusion relations on these spaces.
Performance Optimization on Waiting Time Using Queuing Theory in an Advanced Manufacturing Environment: Robotics to Enhance Productivity
Performance optimization plays a key role in controlling waiting time during manufacturing in an advanced manufacturing environment, in order to improve productivity. Queueing theory was used to examine the performance of a multi-stage production line. Robotics, as a disruptive technology, was introduced into a virtual manufacturing scenario during the packaging process to study the effect of waiting time on productivity. The queueing model was used to determine the optimum service rate required by the robots during the packaging stage of manufacturing to yield an optimum production cost. Different rates of production were assumed in the virtual manufacturing environment, and the cost of packaging was estimated together with the optimum production cost. An equation was generated using queueing theory, and the method adopted for the analysis of the scenario is the Newton-Raphson method. The queueing analysis presented here determines the number of robots required to regulate waiting time in order to increase output. The arrival rate of the product was high, which shows that the queueing model was effective in minimizing service cost and waiting time during manufacturing. At a reduced waiting time, there was an improvement in the number of products obtained per hour. Overall productivity was improved based on the assumptions used in the queueing model implemented in the virtual manufacturing scenario.
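The kind of computation described, solving for a service rate with Newton-Raphson, can be sketched on the simplest queue. The snippet below uses a single M/M/1 stage with mean queue waiting time Wq = λ/(μ(μ − λ)); the paper's multi-stage model and cost terms are richer than this single-stage sketch, and the numeric values are assumptions.

```python
# Newton-Raphson for the service rate mu that achieves a target mean
# waiting time in an M/M/1 queue: f(mu) = lam/(mu*(mu - lam)) - Wq_target.
def solve_service_rate(lam, wq_target, tol=1e-10):
    mu = 2 * lam                       # start safely inside the stable region mu > lam
    for _ in range(100):
        f = lam / (mu * (mu - lam)) - wq_target
        df = -lam * (2 * mu - lam) / (mu * (mu - lam))**2   # f'(mu)
        step = f / df
        mu -= step
        if abs(step) < tol:
            break
    return mu

mu = solve_service_rate(lam=10.0, wq_target=0.1)
print(mu)   # analytic root of mu^2 - 10*mu - 100 = 0, about 16.18
```

The analytic check is direct: λ/(μ(μ − λ)) = 0.1 with λ = 10 gives μ² − 10μ − 100 = 0, whose positive root is 5 + 5√5 ≈ 16.18, matching the Newton-Raphson iterate.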
Multiple Positive Solutions for Boundary Value Problem of Nonlinear Fractional Differential Equation
In this paper, we study a boundary value problem of nonlinear fractional differential equation. Existence and positivity results of solutions are obtained.
Existence of Minimal and Maximal Mild Solutions for Non-Local in Time Subdiffusion Equations of Neutral Type
In this work, the existence of at least one minimal and one maximal mild solution to the Cauchy problem is proved for a fractional evolution equation of neutral type involving a general kernel. An operator A generating a resolvent family and an integral resolvent family on a Banach space X appears in the equation, together with a kernel belonging to a large class that covers many relevant cases from physics applications, in particular the important case of time-fractional evolution equations of neutral type. The main tools used in this work are the Kuratowski measure of noncompactness, fixed point theorems of Darbo type, and an iterative method of lower and upper solutions based on an order in X induced by a normal cone P. Initially, the equation is a Cauchy problem involving a fractional derivative in the Caputo sense. The equivalent integral version is then formulated and, by defining a convenient functional, using the theory of resolvent families, and verifying the hypotheses of the Darbo-type fixed point theorem, the existence of a mild solution for the initial problem is obtained. Furthermore, the existence of minimal and maximal mild solutions is proved through an iterative method of lower and upper solutions, using the Arzelà-Ascoli theorem and Gronwall's inequality. Finally, the case of the derivative in the Caputo sense is recovered.
On Chvátal’s Conjecture for the Hamiltonicity of 1-Tough Graphs and Their Complements
In this paper, we show that the conjecture of Chvátal, which states that any 1-tough graph is either a Hamiltonian graph or its complement contains a specific graph denoted by F, does not hold in general. More precisely, it is true only for graphs with six or seven vertices, and is false for graphs with eight or more vertices. A theorem is derived as a correction for the conjecture.
Analysis of User Data Usage Trends on Cellular and Wi-Fi Networks
The availability of Wi-Fi on mobile devices has demonstrated that the total data demand from users is far higher than previously articulated by measurements based solely on a cellular-centric view of smartphone usage. The ratio of Wi-Fi to cellular traffic varies significantly between countries. This paper presents a comparison between cellular data usage and Wi-Fi data usage by the user. This insight helps operators to understand the growing importance and application of yield management strategies designed to squeeze maximum returns from their investments in the networks and devices that enable the mobile data ecosystem. The transition from unlimited data plans towards tiered pricing and, in the future, towards more value-centric pricing offers significant revenue upside potential for mobile operators; but, without complete insight into all aspects of smartphone customer behavior, operators will be unlikely to capture the maximum return from this billion-dollar market opportunity.
Reinforced Concrete Slab under Static and Dynamic Loading
In this study, the static and dynamic responses of a typical reinforced concrete flat slab, designed to British Standard (BS 8110, 1997) and under self and live loadings for dance halls, are reported. Linear perturbation analysis using the finite element method was employed for modal, impulse loading and frequency response analyses of the slab under the aforementioned loading condition. Results from the static and dynamic analyses, comprising the slab fundamental frequencies and mode shapes, dynamic amplification factor, maximum deflection, and stress distributions among other valuable outcomes, are presented and discussed. These were gauged against the limiting provisions in the design code with a view to optimising the structure and ensuring both adequate strength and an economical section for large clear span slabs. This is necessary owing to the continued increase in the cost of erecting building structures and the squeeze on public finance globally.
Analysis of Two-Echelon Supply Chain with Perishable Items under Stochastic Demand
Perishability and developing an intelligent control policy for perishable items are the major concerns of marketing managers in a supply chain. In this study, we address a two-echelon supply chain problem for perishable items with a single vendor and a single buyer. The buyer adopts an age-based continuous review policy which works by taking both the stock level and the aging process of items into account. The vendor works under the warehouse framework, where its lot size is determined with respect to the batch size of the buyer. The model holds for a positive and fixed lead time for the buyer, and zero lead time for the vendor. The demand follows a Poisson process and any unmet demand is lost. We provide exact analytic expressions for the operational characteristics of the system by using the renewal reward theorem. Items have a fixed lifetime after which they become unusable and are disposed of from the buyer's system. The age of items starts when they are unpacked and ready for consumption at the buyer's site. While items are held by the vendor, there is no aging process, which results in no perishing at the vendor's site. The model is developed under the centralized framework, which takes the expected profit of both vendor and buyer into consideration. The goal is to determine the optimal policy parameters under the service level constraint at the retailer's site. A sensitivity analysis is performed to investigate the effect of the key input parameters on the expected profit and order quantity in the supply chain. The efficiency of the proposed age-based policy is also evaluated through a numerical study. Our results show that when the unit perishing cost is negligible, a significant cost saving is achieved.
Extension of Positive Linear Operator
This research considers the extension of the special functions called positive linear operators. A bounded linear operator defined from a normed space into a Banach space is extended to the closure of its domain, and a linear functional identified on a vector subspace is extended by the Hahn-Banach theorem, which can be generalized to positive linear operators.
The Mechanism of Calcium Carbonate Scale Deposition Affected by Carboxymethyl Chitosan
Due to the extensive use of water injection for oil displacement and pressure maintenance in oil fields, many reservoirs experience the problem of scale deposition when injection water starts to break through. In most cases the scaled-up wells are caused by the formation of sulfate and carbonate scales of calcium and strontium. Due to their relative hardness and low solubility, there are limited processes available for their removal, and preventive measures such as the “squeeze” inhibitor treatment have to be taken. It is, therefore, important to gain a proper understanding of the kinetics of scale formation and its detrimental effects on formation damage under both inhibited and uninhibited conditions. Recently, the production of chitosan was started in our country, and carboxymethyl chitosan (CMQ) was synthesized in the PDVSA-Intevep laboratories and its properties evaluated as a chelating agent for Ca2+ ions in injection water. In this regard, the characterization of the biopolymer by 13C-NMR, FTIR, TGA, and the TM0374-2007 standard laboratory test has demonstrated its ability to remove up to 70% of calcium ions in solution, showing a behavior that approaches that of commercial products.
Degree of Approximation by the (T.E^1) Means of Conjugate Fourier Series in the Hölder Metric
We compute the degree of approximation of functions f̃ ∈ H_w, a new Banach space, using (T.E^1) summability means of conjugate Fourier series. In this paper, we extend the results of Singh and Mahajan, which in turn generalize the result of Lal and Yadav. Some corollaries have also been deduced from our main theorem and particular cases.
An Overview on Aluminum Matrix Composites: Liquid State Processing
Modern composite materials are increasingly being chosen to replace heavier metallic material systems in many engineering fields, including the aerospace and automotive industries. The increasing push towards satisfying environmental targets is fuelling new material technologies and manufacturing processes. This paper introduces materials and manufacturing processes using metal matrix composites, along with manufacturing processes optimized at Alvant Ltd., based in Basingstoke in the UK, which offers modern, cost-effective, selectively reinforced composites for light-weighting applications within engineering. An overview of and introduction to modern optimized manufacturing methods capable of producing viable replacements for heavier metallic and lower-temperature-capable polymer composites are offered. A review of the capabilities and future applications of this viable material is presented, highlighting the potential of further optimizing established manufacturing techniques to fully realize lightweight materials using cost-effective methods.
[Keynote Talk]: Discovering Liouville-Type Problems for p-Energy Minimizing Maps in Closed Half-Ellipsoids by Calculus Variation Method
The goal of this project is to investigate constant properties (called the Liouville-type problem) for a p-stable map as a local or global minimum of a p-energy functional, where the domain is a Euclidean space and the target space is a closed half-ellipsoid. The First and Second Variation Formulas for a p-energy functional have been applied in the Calculus Variation Method as computation techniques. Stokes' Theorem, the Cauchy-Schwarz Inequality, Hardy-Sobolev type Inequalities, and the Bochner Formula have been used as estimation techniques to estimate the lower bound and the upper bound of the derived p-Harmonic Stability Inequality. One challenging point in this project is to construct a family of variation maps such that the images of the variation maps are guaranteed to stay in a closed half-ellipsoid. The other challenging point is to find a contradiction between the lower bound and the upper bound in an analysis of the p-Harmonic Stability Inequality when a p-energy minimizing map is not constant. Therefore, the possibility of a non-constant p-energy minimizing map has been ruled out and the constant property for a p-energy minimizing map has been obtained. Our research finding establishes the constant property for a p-stable map from a Euclidean space into a closed half-ellipsoid in a certain range of p. This range of p is determined by the dimension values of the Euclidean space (the domain) and the ellipsoid (the target space), and is also bounded by the curvature values on the ellipsoid (that is, the ratio of the longest axis to the shortest axis). Regarding Liouville-type results for a p-stable map, our research finding on an ellipsoid is a generalization of mathematicians' results on a sphere. Our result is also an extension of mathematicians' Liouville-type results from a special ellipsoid with only one parameter to any ellipsoid with (n+1) parameters in the general setting.
Existence Result of Third Order Functional Random Integro-Differential Inclusion
The FRIGDI (functional random integrodifferential inclusion) appears to be new and includes, as special cases, several known random differential inclusions already studied in the literature for various aspects of their solutions. In this paper, we prove an existence result for the FRIGDI in the non-convex case of the multi-valued function involved in it, using a random fixed point theorem of B. C. Dhage and the Carathéodory condition. This result is new to the theory of differential inclusions.
Stability of Property (gm) under Perturbation and Spectral Properties Type Weyl Theorems
A Banach space operator T obeys property (gm) if the isolated points of the spectrum σ(T) of T which are eigenvalues are exactly those points λ of the spectrum for which T − λI is left Drazin invertible. In this article, we study the stability of property (gm), for a bounded operator acting on a Banach space, under perturbation by finite rank operators, by nilpotent operators, by quasi-nilpotent operators, or more generally by algebraic operators commuting with T.
Reduced General Dispersion Model in Cylindrical Coordinates and Isotope Transient Kinetic Analysis in Laminar Flow
This abstract discusses a method that reduces the general dispersion model in cylindrical coordinates to a second order linear ordinary differential equation with constant coefficients so that it can be utilized to conduct kinetic studies in packed bed tubular catalytic reactors at a broad range of Reynolds numbers. The model was tested by 13CO isotope transient tracing of the CO adsorption of Boudouard reaction in a differential reactor at an average Reynolds number of 0.2 over Pd-Al2O3 catalyst. Detailed experimental results have provided evidence for the validity of the theoretical framing of the model and the estimated parameters are consistent with the literature. The solution of the general dispersion model requires the knowledge of the radial distribution of axial velocity. This is not always known. Hence, up until now, the implementation of the dispersion model has been largely restricted to the plug-flow regime. But, ideal plug-flow is impossible to achieve and flow regimes approximating plug-flow leave much room for debate as to the validity of the results. The reduction of the general dispersion model transpires as a result of the application of a factorization theorem. Factorization theorem is derived from the observation that a cross section of a catalytic bed consists of a solid phase across which the reaction takes place and a void or porous phase across which no significant measure of reaction occurs. The disparity in flow and the heterogeneity of the catalytic bed cause the concentration of reacting compounds to fluctuate radially. These variabilities signify the existence of radial positions at which the radial gradient of concentration is zero. Succinctly, factorization theorem states that a concentration function of axial and radial coordinates in a catalytic bed is factorable as the product of the mean radial cup-mixing function and a contingent dimensionless function. 
The concentrations of adsorbed compounds are also factorable, since they are piecewise continuous functions and suffer the same variability but in the reverse order of the concentration of mobile phase compounds. Factorability is a property of packed beds which transforms the general dispersion model into an equation in terms of the measurable mean radial cup-mixing concentration of the mobile phase compounds and the mean cross-sectional concentration of adsorbed species. The reduced model does not require the knowledge of the radial distribution of the axial velocity. Instead, it is characterized by new transport parameters, denoted by Ωc, Ωa, and Ωr, which are respectively denominated the convection coefficient cofactor, axial dispersion coefficient cofactor, and radial dispersion coefficient cofactor. These cofactors adjust the dispersion equation as compensation for the unavailability of the radial distribution of the axial velocity. Together with the rest of the kinetic parameters, they can be determined from experimental data via an optimization procedure. Our data showed that the estimated parameters Ωc, Ωa, and Ωr are monotonically correlated with the Reynolds number. This is expected to be the case based on the theoretical construct of the model. Computer-generated simulations of the methanation reaction on nickel provide additional support for the utility of the newly conceptualized dispersion model.
Synchronization of Chaotic T-System via Optimal Control as an Adaptive Controller
In this paper, we study the optimal synchronization of the chaotic T-system with completely uncertain parameters. Optimal control laws and parameter estimation rules are obtained by using the Hamilton-Jacobi-Bellman (HJB) technique and the Lyapunov stability theorem. The derived control laws are optimal adaptive controls and make the states of the drive and response systems asymptotically synchronized. Numerical simulation shows the effectiveness and feasibility of the proposed method.
Non-Differentiable Mond-Weir Type Symmetric Duality under Generalized Invexity
In the present paper, a pair of Mond-Weir type non-differentiable multiobjective second-order programming problems, involving two kernel functions, where each of the objective functions contains a support function, is formulated. We prove weak, strong and converse duality theorems for the second-order symmetric dual programs under η-pseudoinvexity conditions.
The Faithful Extension of Constant Height and Constant Width Between Finite Posets
The problem of faithful extension with the condition of keeping constant height h and constant width w, i.e. hw-inextensibility, seems more interesting than the brute extension of a finite poset (partially ordered set). We shall investigate some theorems on hw-inextensive and hw-extensive posets that can be used to formulate the faithful extension problem. A theorem in its general form for hw-inextensive posets is given to implement the presented theorems.
A Secure Digital Signature Scheme with Fault Tolerance Based on the Improved RSA System
Fault tolerance and data security are two important issues in modern communication systems. In this paper, we propose a secure and efficient digital signature scheme with fault tolerance based on the improved RSA system. The proposed scheme for the RSA cryptosystem uses three prime numbers and overcomes several attacks possible on RSA. By using the Chinese Remainder Theorem (CRT), the proposed scheme achieves a speed improvement on the RSA decryption side while also providing high security.
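As context for the CRT speed-up mentioned above, here is a minimal sketch of CRT-accelerated decryption for the standard two-prime RSA, using toy textbook parameters (the paper's three-prime variant and its fault-tolerance mechanism are not reproduced here):

```python
# CRT-accelerated RSA decryption sketch (toy parameters, not the paper's
# three-prime scheme). Decrypting modulo p and q separately and recombining
# via CRT replaces one big modular exponentiation with two smaller ones.
def crt_decrypt(c, d, p, q):
    dp, dq = d % (p - 1), d % (q - 1)   # reduced private exponents
    mp = pow(c, dp, p)                  # message mod p
    mq = pow(c, dq, q)                  # message mod q
    q_inv = pow(q, -1, p)               # modular inverse of q mod p (Py 3.8+)
    h = (q_inv * (mp - mq)) % p
    return mq + h * q                   # Garner's recombination

# Classic textbook example: p=61, q=53, n=3233, e=17, d=2753.
p, q, e, d = 61, 53, 17, 2753
n = p * q
m = 65
c = pow(m, e, n)
assert crt_decrypt(c, d, p, q) == m           # round-trips correctly
assert crt_decrypt(c, d, p, q) == pow(c, d, n)  # agrees with plain RSA
```

The same recombination idea extends to three primes, which is where the scheme described in the abstract gains additional speed on the decryption side.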
Generating Links That Are Both Quasi-Alternating and Almost Alternating
We construct an infinite family of links which are both almost alternating and quasi-alternating from either a given almost alternating diagram representing a quasi-alternating link, or a given connected and reduced alternating tangle diagram. To do that, we use what we call a dealternator extension, which consists in replacing the dealternator by a rational tangle extending it. We note that all non-alternating and quasi-alternating Montesinos links can be obtained in that way. We check that all the obtained quasi-alternating links satisfy Conjecture 3.1 of Qazaqzeh et al. (JKTR 22 (6), 2013), that is, the crossing number of a quasi-alternating link is less than or equal to its determinant. We also prove that the converse of Theorem 3.3 of Qazaqzeh et al. (JKTR 24 (1), 2015) is false.
Subclass of Close-To-Convex Harmonic Mappings
In this article, we study a class of sense-preserving harmonic mappings in the unit disk D. Let B⁰H(α, β) denote the class of sense-preserving harmonic mappings f = h + ḡ in the open unit disk D satisfying the condition |z h″(z) + α(h′(z) − 1)| ≤ β − |z g″(z) + α g′(z)| (α > −1, β > 0). We prove that B⁰H(α, β) is close-to-convex in D. We also prove that the functions in B⁰H(α, β) are stable harmonic univalent, stable harmonic starlike and stable harmonic convex in D for different values of its parameters. Further, the coefficient estimates, growth results, area theorem, boundary behavior, convolution and convex combination properties of the class B⁰H(α, β) of harmonic mappings are obtained.
Ant Colony Optimization Control for Multilevel STATCOM
Flexible AC Transmission Systems (FACTS) are potentially becoming more flexible and more economical local controllers in the power system; and because of the high MVA ratings, it would be expensive to provide independent, equal, regulated DC voltage sources to power the multilevel converters which are presently proposed for STATCOMs. DC voltage sources can be derived from the DC link capacitances, which are charged by the rectified AC power. In this paper, a new, stronger control, combining nonlinear control based on Lyapunov's theorem with the Ant Colony Algorithm (ACA), is proposed to maintain the stability of the multilevel STATCOM and the utility.
On the Inequality between Queue Length and Virtual Waiting Time in Open Queueing Networks under Conditions of Heavy Traffic
The paper is devoted to the analysis of queueing systems in the context of the network and communications theory. We investigate the inequality in an open queueing network and its applications to the theorems in heavy traffic conditions (fluid approximation, functional limit theorem, and law of the iterated logarithm) for a queue of customers in an open queueing network.
Numerical Analysis of a Reaction Diffusion System of Lambda-Omega Type
In this study, we consider a nonlinear-in-time finite element approximation of a reaction diffusion system of lambda-omega type. We use a fixed-point theorem to prove the existence of the approximations at each time level. Then, we derive some essential stability estimates and discuss the uniqueness of the approximations. In addition, we employ the Nochetto mathematical framework to prove an optimal error bound in time for d = 1, 2 and 3 space dimensions. Finally, we present some numerical experiments to verify the obtained theoretical results.
Non-Linear Dynamic Analysis of Cantilever Beam with Breathing Crack Using XFEM
In this paper, a breathing crack is considered for the non-linear dynamic analysis. The stiffness of the cracked beam is found using influence coefficients, which are calculated using Castigliano's theorem and the strain energy release rate (SERR). The equation of motion of the beam is derived using Hamilton's principle. The stiffness and natural frequencies of the cracked beam have been calculated using XFEM and the eigenvalue approach. It is seen that, due to the presence of cracks, the stiffness and natural frequency change. The mode shapes and the FRFs for the uncracked and breathing-cracked cantilever beam are also obtained and compared.
A Numerical Method for Diffusion and Cahn-Hilliard Equations on Evolving Spherical Surfaces
In this paper, we present a simple effective numerical geometric method to estimate the divergence of a vector field over a curved surface. The conservation law is an important principle in physics and mathematics. However, many well-known numerical methods for solving diffusion equations do not obey conservation laws. Our presented method in this paper combines the divergence theorem with a generalized finite difference method and obeys the conservation law on discrete closed surfaces. We use the similar method to solve the Cahn-Hilliard equations on evolving spherical surfaces and observe stability results in our numerical simulations.
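As a minimal illustration of the conservation principle the method is built on, the following sketch verifies the divergence theorem numerically on the flat unit square for an assumed test field F(x, y) = (x², y²); this is not the paper's discrete-closed-surface scheme, just the planar identity it is designed to preserve:

```python
# Numerical check of the divergence theorem on [0,1]^2 for the assumed
# field F(x, y) = (x^2, y^2), whose divergence is div F = 2x + 2y.
N = 200
h = 1.0 / N

# Volume integral of div F via the midpoint rule over an N x N grid.
vol = sum(
    (2 * ((i + 0.5) * h) + 2 * ((j + 0.5) * h)) * h * h
    for i in range(N) for j in range(N)
)

# Boundary flux: for this F, only the x=1 and y=1 edges contribute
# (F·n vanishes on the x=0 and y=0 edges).
flux = sum(1.0 * h for _ in range(N))   # right edge, F·n = x^2 = 1
flux += sum(1.0 * h for _ in range(N))  # top edge,   F·n = y^2 = 1

# Both sides equal 2 for this field, confirming conservation.
assert abs(vol - flux) < 1e-6
```

A discrete scheme that enforces this identity exactly on closed surfaces, as the abstract describes, guarantees that the total quantity is conserved by the diffusion step.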
A Quadratic Approach for Generating Pythagorean Triples
The article explores one of the important relations between numbers: the Pythagorean triples (triplets), which find application in distance measurement and in the construction of roads, towers, and buildings, and wherever the Pythagorean theorem applies. Pythagorean triples are sets of three natural numbers in which the sum of the squares of two of the numbers equals the square of the third. There are numerous methods and equations to obtain the triples, each with its own merits and demerits. Here, the quadratic approach for generating triples uses the hypotenuse-leg difference method. The advantage is that the variables are few, and finally only three independent variables are present.
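One plausible reading of the hypotenuse-leg difference idea can be sketched directly: fixing the difference d = c − b, the Pythagorean relation gives a² = c² − b² = d(2b + d), so b = (a² − d²)/(2d) whenever that quotient is a positive integer. This is an illustrative reconstruction under stated assumptions; the paper's exact quadratic formulation may differ.

```python
# Generate Pythagorean triples (a, b, c) with a fixed hypotenuse-leg
# difference d = c - b, by solving a^2 = d*(2b + d) for b.
def triples_with_difference(d, a_max):
    out = []
    for a in range(1, a_max + 1):
        num = a * a - d * d
        # b must be a positive integer: num > 0 and divisible by 2d.
        if num > 0 and num % (2 * d) == 0:
            b = num // (2 * d)
            out.append((a, b, b + d))
    return out

# d = 1 yields the classic family (3,4,5), (5,12,13), (7,24,25), ...
for a, b, c in triples_with_difference(1, 9):
    assert a * a + b * b == c * c
```

For d = 2 the same loop produces (4, 3, 5), (6, 8, 10), (8, 15, 17), and so on, showing how a single difference parameter sweeps out whole families of triples.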
A Study on Approximate Controllability of Impulsive Integrodifferential Systems with Non Local Conditions
In order to describe various real-world problems in the physical and engineering sciences subject to abrupt changes at certain instants during the evolution process, impulsive differential equations have been used to describe the system model. In this article, the problem of approximate controllability for nonlinear impulsive integrodifferential equations with state-dependent delay is investigated. We study the approximate controllability of the nonlinear impulsive integrodifferential system under the assumption that the corresponding linear control system is approximately controllable. Using methods of functional analysis and semigroup theory, sufficient conditions are formulated and proved. Finally, an example is provided to illustrate the proposed theory.
Polar Bergman Polynomials on Domain with Corners
In this paper, we present a new class, named polar, of monic orthogonal polynomials with respect to the area measure supported on G, where G is a bounded simply-connected domain in the complex plane ℂ. We analyze some open questions and discuss some ideas and properties related to the asymptotic behavior of polar Bergman polynomials over domains with corners, the asymptotic behavior of modified Bergman polynomials by affine transforms in the variable, and polar modified Bergman polynomials by affine transforms in the variable. We show that uniform asymptotics of Bergman polynomials over domains with corners, together with Pritsker's theorem, imply uniform asymptotics for all their derivatives.
One Period Loops of Memristive Circuits with Mixed-Mode Oscillations
Interesting properties of various one-period loops of singularly perturbed memristive circuits with mixed-mode oscillations (MMOs) are analyzed in this paper. The analysis is mixed, both analytical and numerical, and is focused on the properties of the pinched hysteresis of the memristive element and other one-period loops formed by pairs of time-series solutions for various circuit variables. The memristive element is the only nonlinear element in the two circuits. A theorem on the periods of mixed-mode oscillations of the circuits is formulated and proved. Replacements of memristors by parallel G-C or series R-L circuits for an MMO response with equivalent RMS values are also discussed.
The Non-Uniqueness of Partial Differential Equations Options Price Valuation Formula for Heston Stochastic Volatility Model
An option is defined as a financial contract that provides the holder the right but not the obligation to buy or sell a specified quantity of an underlying asset in the future at a fixed price (called the strike price) on or before the expiration date of the option. This paper examines two approaches for the derivation of the Partial Differential Equation (PDE) option price valuation formula for the Heston stochastic volatility model. We obtain the PDE option price valuation formulas using the riskless portfolio method and the application of the Feynman-Kac theorem, respectively. From the results obtained, we see that the two derived PDEs for the Heston model are distinct and non-unique. This establishes the fact of incompleteness in the model for option price valuation.
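For context, the riskless-portfolio argument typically leads to the Heston pricing PDE in its common textbook form, stated here for reference (symbols assumed: r the risk-free rate, κ, θ, σ the parameters of the variance process v, ρ the correlation, and λv the market price of volatility risk term; the paper's exact formulas may differ):

```latex
\frac{\partial V}{\partial t}
+ \tfrac{1}{2} v S^2 \frac{\partial^2 V}{\partial S^2}
+ \rho \sigma v S \frac{\partial^2 V}{\partial S \, \partial v}
+ \tfrac{1}{2} \sigma^2 v \frac{\partial^2 V}{\partial v^2}
+ r S \frac{\partial V}{\partial S}
+ \left[ \kappa(\theta - v) - \lambda v \right] \frac{\partial V}{\partial v}
- r V = 0
```

Whether the volatility-risk term λv appears, and in what form, is precisely where derivations can diverge, which is consistent with the non-uniqueness the abstract reports.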
Formal Verification for Ethereum Smart Contract Using Coq
The smart contract in Ethereum is a unique program deployed on the Ethereum Virtual Machine (EVM) to help manage cryptocurrency. The security of these smart contracts is critical to Ethereum's operation and highly sensitive. In this paper, we present a formal model for smart contracts, using the separated term-obligation (STO) strategy to formalize and verify them. We use the IBM smart sponsor contract (SSC) as an example to elaborate the details of the formalizing process. We also propose a formal smart sponsor contract model (FSSCM) and verify the SSC's security properties with the interactive theorem prover Coq. Using our formal model and verification method, we found the 'Unchecked-Send' vulnerability in the SSC. Finally, we demonstrate how other smart contracts can be formalized and verified with this approach; our work indicates that formal verification can effectively verify the correctness and security of smart contracts.
[Keynote Talk]: Analysis of One Dimensional Advection Diffusion Model Using Finite Difference Method
In this paper, a one-dimensional advection-diffusion model is analyzed using the finite difference method based on the Crank-Nicolson scheme. A practical filter-cake washing problem from chemical engineering is analyzed. The model is converted into dimensionless form. For the grid Ω × ω = [0, 1] × [0, T], the Crank-Nicolson spatial derivative scheme is used in the space domain and the forward difference scheme is used in the time domain. The scheme is found to be unconditionally convergent, stable, first-order accurate in time and second-order accurate in the space domain. For a test problem, numerical results are compared with the analytical ones for different values of the parameter.
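For reference, one time step of a Crank-Nicolson scheme for the dimensionless equation u_t + a u_x = D u_xx with Dirichlet ends can be sketched as below; the parameters and test case are illustrative, not the paper's filter-cake washing data, and the tridiagonal system is solved with the Thomas algorithm:

```python
# One Crank-Nicolson step for u_t + a u_x = D u_xx on [0, 1] with fixed
# Dirichlet boundary values u[0], u[-1]. Central differences in space,
# trapezoidal averaging in time.
def crank_nicolson_step(u, a, D, dx, dt):
    n = len(u)
    r = D * dt / (2 * dx * dx)   # half the diffusion number
    s = a * dt / (4 * dx)        # quarter of the Courant number
    # Tridiagonal coefficients for the interior nodes (implicit side).
    lo = [-(r + s)] * (n - 2)
    di = [1 + 2 * r] * (n - 2)
    up = [-(r - s)] * (n - 2)
    # Right-hand side from the explicit side of the averaging.
    rhs = [
        (r + s) * u[i - 1] + (1 - 2 * r) * u[i] + (r - s) * u[i + 1]
        for i in range(1, n - 1)
    ]
    rhs[0] += (r + s) * u[0]     # Dirichlet boundary contributions
    rhs[-1] += (r - s) * u[-1]
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n - 2):
        w = lo[i] / di[i - 1]
        di[i] -= w * up[i - 1]
        rhs[i] -= w * rhs[i - 1]
    x = [0.0] * (n - 2)
    x[-1] = rhs[-1] / di[-1]
    for i in range(n - 4, -1, -1):
        x[i] = (rhs[i] - up[i] * x[i + 1]) / di[i]
    return [u[0]] + x + [u[-1]]

# Sanity check: with u(0)=1, u(1)=0 and pure diffusion (a = 0), the
# solution relaxes to the linear steady profile u(x) = 1 - x.
n, dx, dt = 21, 0.05, 0.01
u = [1.0] + [0.0] * (n - 1)
for _ in range(2000):
    u = crank_nicolson_step(u, 0.0, 1.0, dx, dt)
assert abs(u[10] - 0.5) < 1e-3
```

The unconditional stability mentioned in the abstract is visible here: the diffusion number D·dt/dx² is 4, well above the explicit-scheme limit of 0.5, yet the iteration converges cleanly.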
On Tarski’s Type Theorems for L-Fuzzy Isotone and L-Fuzzy Relatively Isotone Maps on L-Complete Propelattices
Recently, a new type of very general relational structures, the so-called (L-)complete propelattices, was introduced. These significantly generalize complete lattices and completely lattice L-ordered sets, because they do not assume the technically very strong property of transitivity. For these structures, the main part of the original Tarski's fixed point theorem also holds for (L-fuzzy) isotone maps, i.e., the part which concerns the existence of fixed points and the structure of their set. In this paper, fundamental properties of (L-)complete propelattices are recalled and the so-called L-fuzzy relatively isotone maps are introduced. For these maps, it is proved that they also have fixed points in L-complete propelattices, even though their set need not have the expected analogous structure of a complete propelattice.
The Perspective of Waria Transgenders in Singaraja on Their Reproduction Health
Aim: Waria transgenders are a phenomenon whose existence is undeniable. The sexual behaviours of waria transgenders belong to the groups at high risk of STD infections, especially HIV/AIDS. The present study was aimed at finding out the general idea of the existence of waria transgenders in Singaraja, their sexual transactions, their sexual behaviours, and at exploring the factors affecting their sexual behaviours along with their participation in regular reproduction health control. Methods: The subjects of the present research were male-to-female transgenders living in the town of Singaraja. The research applied a qualitative approach. Data collection in this research was conducted through in-depth interviews and observation. Results: The results of the study exposed 1) the existence of the waria transgender community in Singaraja, observed from their active participation in social events such as taking the roles of counsellors in the campaign of prevention and control of HIV/AIDS with the Local Commission of AIDS Control and other foundations; 2) the sexual services provided by waria transgenders, which were performed in the squeeze method, oral and anal sex, which could be categorized as HIV/AIDS high-risk sexual behaviours, while the consistency in safe sex among the transgenders in Singaraja showed that most of the waria transgenders (80%) were aware of the urgency of using condoms during sexual intercourse; and 3) the low participation of the waria transgenders in Singaraja in regular reproduction health check-ups at the local Centre of Public Health Service, which was caused by their negative perception about being examined by female doctors. Conclusions: Waria transgenders in Singaraja are categorized as engaging in HIV/AIDS high-risk sexual behaviours, but they are consistent in practising safe sex by using condoms. They have a negative psychological perception about being examined by female doctors.
Further Analysis of Global Robust Stability of Neural Networks with Multiple Time Delays
In this paper, we study the global asymptotic robust stability of delayed neural networks with norm-bounded uncertainties. By employing the Lyapunov stability theory and the Homeomorphic mapping theorem, we derive some new types of sufficient conditions ensuring the existence, uniqueness and global asymptotic stability of the equilibrium point for the class of neural networks with discrete time delays under parameter uncertainties and with respect to continuous and slope-bounded activation functions. An important aspect of our results is their low computational complexity, as the reported results can be verified by checking some properties of symmetric matrices associated with the uncertainty sets of network parameters. The obtained results are shown to be generalizations of some of the previously published corresponding results. Some comparative numerical examples are also constructed to compare our results with some closely related existing literature results.
A Combinatorial Representation for the Invariant Measure of Diffusion Processes on Metric Graphs
We study a generalization to a continuous setting of the classical Markov chain tree theorem. In particular, we consider an irreducible diffusion process on a metric graph. The unique invariant measure has an atomic component on the vertices and an absolutely continuous part on the edges. We show that the corresponding density at x can be represented by a normalized superposition of the weights associated to metric arborescences oriented toward the point x. A metric arborescence is a metric tree oriented towards its root. The weight of each oriented metric arborescence is obtained as the product of the exponentials of integrals of the form ∫ b/σ², where b is the drift and σ² is the diffusion coefficient, along the oriented edges, times a weight for each node determined by the local orientation of the arborescence around the node, times the inverse of the diffusion coefficient at x. The metric arborescences are obtained by cutting the original metric graph along some edges.
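The classical discrete statement being generalized can be checked directly on a toy chain: for an irreducible continuous-time Markov chain, the invariant measure at state i is proportional to the total weight of spanning arborescences oriented toward i. The three-state rates below are assumed purely for illustration.

```python
from itertools import product

# Toy check of the (discrete) Markov chain tree theorem.
# Q[a][b] is the jump rate from state a to state b (assumed values).
Q = {0: {1: 2.0, 2: 1.0}, 1: {0: 1.0, 2: 3.0}, 2: {0: 4.0, 1: 1.0}}
states = list(Q)

def is_arborescence(succ, root):
    # Every non-root state must reach the root by following its arrow.
    for s in succ:
        seen, cur = set(), s
        while cur != root:
            if cur in seen:
                return False       # cycle: not a tree toward the root
            seen.add(cur)
            cur = succ[cur]
    return True

def tree_weight(root):
    # Sum, over arborescences rooted at `root`, of the product of rates.
    others = [s for s in states if s != root]
    total = 0.0
    for targets in product(*(list(Q[s]) for s in others)):
        succ = dict(zip(others, targets))
        if is_arborescence(succ, root):
            w = 1.0
            for s, t in succ.items():
                w *= Q[s][t]
            total += w
    return total

weights = [tree_weight(i) for i in states]
pi = [w / sum(weights) for w in weights]   # candidate invariant measure

# Verify global balance: inflow equals outflow at every state.
for i in states:
    inflow = sum(pi[j] * Q[j].get(i, 0.0) for j in states if j != i)
    outflow = pi[i] * sum(Q[i].values())
    assert abs(inflow - outflow) < 1e-12
```

The abstract's result replaces these finitely many arborescences with metric arborescences and the rate products with exponentials of drift-over-diffusion integrals along the edges.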
Stereotypical Perception as an Influential Factor in the Judicial Decision Making Process for Shoplifting Cases Presided over in the UK
Stereotypes are not generally considered to be an acceptable influence upon any decision making process, particularly those involving judicial decision making outcomes. Yet, we are confronted with an uncomfortable truth that stereotypes may be operating to influence judicial outcomes. Variances in sentencing outcomes are not easily explained away by criminological, psychological, or sociological theory, but may be answered via qualitative research produced within the field of phenomenology. This paper will examine the current literature pertaining to the effect of stereotypes on the criminal justice system within the UK, and will also discuss the implications of stereotypical influences upon decision making in the criminal justice system. This paper will give particular focus to shoplifting offences dealt with in UK criminal courts, but this research has long-reaching implications for the criminal process more generally.
Integrated Nested Laplace Approximations for Quantile Regression
The asymmetric Laplace distribution (ALD) is commonly used as the likelihood function in Bayesian quantile regression, and it offers a family of likelihoods for quantile regression. Notwithstanding its popularity and practicality, the ALD is not smooth, which makes it difficult to maximize its likelihood. Furthermore, Bayesian inference is time-consuming, and the choice of likelihood may mislead the inference, since Bayes' theorem does not automatically validate the posterior inference. The ALD also fails to account for greater skewness and kurtosis. This paper develops a new quantile regression approach for count data based on the inverse cumulative distribution function of the Poisson, binomial, and Delaporte distributions, using integrated nested Laplace approximations (INLA). Our results validate the benefit of using INLA and support the approach for count data.
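As a minimal numerical sketch of the inverse-CDF building block described above (not the authors' INLA machinery), the quantile of a Poisson count variable can be obtained by inverting its cumulative distribution function; SciPy's ppf is exactly this inverse, and the mean of 4 used below is an arbitrary illustrative choice.

```python
from scipy.stats import poisson

def poisson_quantile(tau, mu):
    """tau-quantile of a Poisson(mu) count variable via the inverse CDF."""
    return int(poisson.ppf(tau, mu))

print(poisson_quantile(0.5, 4.0))   # median of Poisson(4) -> 4
print(poisson_quantile(0.9, 4.0))   # 90th percentile -> 7
```

The same ppf call applied to binomial or other count distributions yields the corresponding quantile curves used in count-data quantile regression.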
Jordan Curves in the Digital Plane with Respect to the Connectednesses Given by Certain Adjacency Graphs
Digital images are approximations of real ones and, therefore, to study them, we need the digital plane Z² to be equipped with a convenient structure that behaves analogously to the Euclidean topology on the real plane. In particular, such a structure is required to allow for a digital analogue of the Jordan curve theorem. We introduce certain adjacency graphs on the digital plane and prove a digital Jordan curve theorem for them, thus showing that these graphs provide convenient structures on Z² for the study and processing of digital images. Further convenient structures, including the well-known Khalimsky and Marcus-Wyse adjacency graphs, may be obtained as quotients of the graphs introduced. Since digital Jordan curves represent borders of objects in digital images, the adjacency graphs discussed may be used as background structures on the digital plane for solving problems of digital image processing that are closely related to borders, such as border detection, contour filling, pattern recognition, and thinning.
Optimizing Human Diet Problem Using Linear Programming Approach: A Case Study
Health is a common theme in most cultures; in fact, all communities have their own concepts of health as part of their culture. Even so, health continues to be a neglected entity. Planning of the human diet should be done carefully, selecting the food items or groups of food items as well as their composition. Low price and good taste of foods are regarded as the two major factors for optimal human nutrition. Linear programming techniques have been used extensively for human diet formulation for quite a number of years. In this process, we mainly apply the simplex method, a mathematical tool based on the theory of elementary row operations from linear algebra, together with the other rules of the simplex method, to solve the problem. Our study is an attempt to develop a programming model for optimal diet planning and the best use of nutrient ingredients.
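A minimal sketch of the diet problem as a linear program, with two hypothetical foods and two nutrient requirements (all prices and nutrient contents below are illustrative, not the study's data); SciPy's linprog plays the role of the simplex-style solver the abstract invokes.

```python
from scipy.optimize import linprog

cost = [0.5, 0.8]                  # price per unit of food 1 and food 2
# Nutrient content per unit: rows are nutrients, columns are foods.
# linprog expects A_ub @ x <= b_ub, so minimum requirements are negated.
A_ub = [[-2.0, -1.0],              # protein: 2*x1 + 1*x2 >= 8
        [-1.0, -3.0]]              # fibre:   1*x1 + 3*x2 >= 9
b_ub = [-8.0, -9.0]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)              # optimal quantities (3, 2) at minimal cost 3.1
```

Extending the model to a real diet only means adding one row per nutrient requirement and one column per candidate food.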
Importance of Mathematical Modeling in Teaching Mathematics
Today, in engineering departments, mathematics courses such as calculus, linear algebra, and differential equations are generally taught by mathematicians. Consequently, during classroom teaching there are few or no applications of the concepts to real-world problems. Most of the time, students do not know whether the concepts or rules taught in these courses will be used extensively in their majors. This situation holds true for all engineering and science disciplines, and the general attitude toward these mathematics courses is poor. Students do not like abstract mathematics; rather, they prefer solid applications of the concepts to daily-life problems, and they come to appreciate mathematics only when the mathematical modeling of real-world problems is tackled. The author strongly recommends that mathematical modeling be taught starting in high school all over the world. In this paper, mathematical concepts such as the limit, derivative, integral, Taylor series, differential equations, and the mean value theorem are chosen, and their applications to real problems are emphasized with graphical representations.
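As one small classroom illustration of the kind advocated above, a truncated Taylor (Maclaurin) series of exp(x) can be compared with the exact value; the ten-term cutoff below is an arbitrary choice.

```python
import math

def taylor_exp(x, n_terms):
    """Partial sum of the Maclaurin series of exp(x): sum x^k / k!."""
    return sum(x**k / math.factorial(k) for k in range(n_terms))

approx = taylor_exp(1.0, 10)
print(approx, math.exp(1.0))   # the two values agree to about six decimals
```

Showing students how quickly the partial sums converge makes the abstract series definition concrete.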
Mathematical Modeling and Analysis of Forced Vibrations in Micro-Scale Microstretch Thermoelastic Simply Supported Beam
The present paper deals with the flexural vibrations of homogeneous, isotropic, generalized micropolar microstretch thermoelastic thin Euler-Bernoulli beam resonators due to an exponentially time-varying load. Both axial ends of the beam are assumed to be simply supported. The governing equations have been solved analytically by applying the Laplace transform twice, with respect to the time and space variables respectively. The inversion of the Laplace transform in the time domain has been performed using the calculus of residues to obtain the deflection. The analytical results have been evaluated numerically with the help of MATLAB for a magnesium-like material. Graphical representations and interpretations are discussed for the deflection of the beam under the simply supported boundary condition and for distinct values of time and space. The obtained results are easy to implement in the engineering analysis and design of resonators (sensors), modulators, and actuators.
Adaptation Mechanism and Planning Response to Resiliency Shrinking of Small Towns Based on Complex Adaptive System by Taking Wuhan as an Example
The rapid urbanization process, with big cities as its main body, leads to an unequal configuration of urban and rural areas in terms of land supply, industrial division of labor, service supply, and space allocation, and induces the shrinking of service capacity, industrial systems, and population vitality in small towns. As important spatial units in the spectrum of urbanization that serve, connect, and couple urban and rural areas, the shrinking of small towns has an important influence on the healthy development of urbanization. Based on a census of small towns in the Wuhan metropolitan area, we found that the shrinking of small towns is a passive contraction of elastic tension under the squeeze of cities: once affected by external forces such as policy regulation, planning guidance, and population return, small towns can achieve expansion and growth. Based on the theory of complex adaptive systems, this paper constructs a comprehensive development-index evaluation system for small towns covering five aspects (population, economy, space, society, and ecology), measures the shrinking level of small towns, analyzes their shrinking characteristics, and identifies whether the shrinking is elastic. It then measures a resilience ability index of small-town contraction from the same five aspects. Finally, this paper proposes an adaptive mechanism of urban-rural interaction evolution under a fine division of labor to respond to the passive shrinking of small towns in Wuhan, and puts forward planning response measures for small towns in terms of spatial layout, functional orientation, and service support, which can provide a reference for other regions.
An Equivalence between a Harmonic Form and a Closed Co-Closed Differential Form in L^q and Non-L^q Spaces
An equivalence between a harmonic form and a closed co-closed form is established on a complete non-compact manifold. This equivalence is generalized for a differential k-form ω from Lq spaces to non-Lq spaces when q=2, in the context of p-balanced growth with p=2. In particular, for a simple differential k-form on a complete non-compact manifold, the equivalence has been verified with the scope of q extended from finite q-energy in Lq spaces to infinite q-energy in non-Lq spaces under 2-balanced growth. The generalized Hadamard theorem, the Cauchy-Schwarz inequality, and calculus techniques including integration by parts and convergent series are applied as estimation tools to evaluate growth rates for a differential form. In particular, energy growth rates, as indicated by an appropriate power range in a selected test function, lead to a balance between a harmonic differential form and a closed co-closed differential form. The research ideas and computational methods in this paper could provide an innovative way to broaden Lq spaces to non-Lq spaces with a wide variety of infinite energy growth for a differential form.
Modified Bat Algorithm for Economic Load Dispatch Problem
According to the no free lunch theorem, no single search technique can perform best in all conditions. Metaheuristic optimization methods can be an attractive choice for solving optimization problems, as they offer advantages such as robust and reliable performance, global search capability, little information requirement, ease of implementation, parallelism, and no requirement of a differentiable and continuous objective function. In order to balance exploration and exploitation and to further enhance the performance of the bat algorithm, this paper proposes a modified bat algorithm that adds an additional search procedure based on the bats' previous experience. The proposed algorithm is used for solving the economic load dispatch (ELD) problem. Practical constraints such as valve-point loading, along with the power balance constraint and generator limits, are taken into account. To handle the power demand constraint, a variable elimination method is exploited. The proposed algorithm is tested on various ELD problems. The results show that the proposed algorithm performs better on the majority of the ELD problems considered and is at par with existing algorithms on some of them.
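A hedged, minimal sketch of the standard bat algorithm (not the authors' modified variant) on a toy three-generator dispatch problem; the third unit's output is eliminated through the power-balance constraint, as the abstract describes, and all cost coefficients, limits, and algorithm settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
DEMAND = 300.0
A = np.array([0.008, 0.010, 0.012])   # quadratic cost coefficients (illustrative)
B = np.array([7.0, 6.5, 7.5])         # linear cost coefficients (illustrative)
LO, HI = 50.0, 200.0                  # common generator limits

def cost(x):
    """Dispatch cost for x = (P1, P2); P3 is eliminated via power balance."""
    p3 = DEMAND - x.sum()
    penalty = 1e6 * max(0.0, LO - p3) + 1e6 * max(0.0, p3 - HI)
    p = np.append(x, p3)
    return float(np.sum(A * p**2 + B * p) + penalty)

n, iters = 20, 200
pos = rng.uniform(LO, HI, (n, 2))     # bat positions
vel = np.zeros((n, 2))                # bat velocities
fit = np.array([cost(x) for x in pos])
best = pos[fit.argmin()].copy()

for _ in range(iters):
    for i in range(n):
        f = rng.uniform(0.0, 2.0)                 # pulse frequency
        vel[i] += (pos[i] - best) * f             # canonical velocity update
        cand = np.clip(pos[i] + vel[i], LO, HI)
        if rng.random() > 0.5:                    # simplified local random walk
            cand = np.clip(best + 0.01 * rng.normal(size=2) * (HI - LO), LO, HI)
        c = cost(cand)
        if c < fit[i]:                            # greedy acceptance (loudness omitted)
            pos[i], fit[i] = cand, c
            if c < cost(best):
                best = cand.copy()

print(best, cost(best))
```

For this toy quadratic problem the analytical minimum cost is about 2372, and the sketch should land close to it; the authors' modification adds a further experience-based search step on top of this scheme.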
Modeling, Analysis and Control of a Smart Composite Structure
In modern engineering, weight optimization is a priority in the design of structures. However, optimizing the weight can result in lower stiffness and less internal damping, causing the structure to become excessively prone to vibration. To overcome this problem, active or smart materials are implemented. The coupled electromechanical properties of smart materials, used here in the form of piezoelectric ceramics, make them well suited for implementation as distributed sensors and actuators to control the structural response. The smart structure proposed in this paper is composed of a cantilevered steel beam, an adhesive or bonding layer, and a piezoelectric actuator. The static deflection of the structure is derived as a function of the piezoelectric voltage, and the outcome is compared to theoretical and experimental results from the literature. The relation between the voltage and the piezoelectric moment at both ends of the actuator is also investigated, and a reduced finite element model of the smart structure is created and verified. Finally, a linear controller is implemented, and its ability to attenuate the vibration due to the first natural frequency is demonstrated.
Open Forging of Cylindrical Blanks Subjected to Lateral Instability
The successful and efficient execution of a forging process depends upon the correct analysis of loading and metal flow in the blanks. This paper investigates the upper bound technique (UBT) and its application to the analysis of the open forging process when blank bulging is possible. The UBT is one of the energy-rate minimization methods for the solution of metal forming processes, based on the upper bound theorem; a kinematically admissible velocity field is obtained by minimizing the total forging energy rate. A computer program was developed in this research to implement the UBT. The significant advantages of this method are its speed of execution, while maintaining a fairly high degree of accuracy, and its wide prediction capability. The information from this analysis is useful for the design of forging processes and dies. Results for the prediction of forging loads and stresses, metal flow, and surface profiles, with the attendant benefits in terms of press selection and blank preform design, are outlined in some detail. The obtained predictions are ready for comparison with both laboratory and industrial results.
Exceptional Cost and Time Optimization with Successful Leak Repair and Restoration of Oil Production: West Kuwait Case Study
Well intervention with production logging tools (PLT) was carried out to detect the sources of water and to check well integrity in two West Kuwait oil wells that had started to produce 100% water. For the first well, PLT was run across the perforations to detect the source of water: no production was observed from the bottom two perforation intervals, while an intake of water was observed at the topmost perforation. A decision was then taken to extend the PLT survey from the tag depth to the Y-tool. For the second well, the aim was to detect the source of water and establish whether there was a leak in the 7'' liner across the upper zones; data could not be recorded in flowing conditions due to casing deformation at almost 8300 ft. For the first well, interpretation of the PLT and well integrity data showed a hole in the 9 5/8'' casing from 8468 ft to 8494 ft producing the majority of the water, 2478 bbl/d, while the upper perforation from 10812 ft to 10854 ft was taking 534 stb/d. For the second well, there was a hole in the 7'' liner from 8303 ft MD to 8324 ft MD producing 8334.0 stb/d of water, with an intake zone from 10322.9 to 10380.8 ft MD taking the whole fluid. To restore oil production, a workover (W/O) rig was mobilized to prevent dump flooding, and during the workover the leaking interval was confirmed for both wells. The leakage was cement squeezed and tested at 900 psi positive pressure and 500 psi drawdown pressure; the cement squeeze job was successful. After the workover, the wells were kept producing for cleaning, and eventually the water cut (WC) dropped to 0%. Regular PLT and well integrity logs are required to study well performance and well integrity issues; proper cement behind casing is essential to well longevity and integrity; and the presence of the Y-tool is essential for monitoring well parameters and the ESP to facilitate well intervention tasks. Cost and time optimization in oil and gas operations, especially during rig operations, is crucial.
The quality of the PLT data and the accuracy of the interpretations contributed greatly to identifying the leakage intervals accurately and, in turn, saved considerable time and reduced the repair cost by almost 35 to 45%. The added value here relates to the cost reduction and to effective, timely decision making based on the economic environment.
Theory and Practice of Wavelets in Signal Processing
The methods of the Fourier, Laplace, and wavelet transforms provide transfer functions and relationships between the input and output signals of linear time-invariant systems. This paper shows the equivalence among these three methods, in each case presenting an application of the appropriate transform to the convolution theorem, and shows that the same holds for a direct integration method. The biorthogonal wavelets Bior3.5 and Bior3.9 are examined, and the zero distributions of the polynomials of their associated filters are located. The paper also presents the significance of wavelets as effective tools for processing speech signals in common multimedia applications in general, and for recognition and compression in particular; theoretically and practically, wavelets have proved to be effective and competitive. The practical use of the continuous wavelet transform (CWT) in the processing and analysis of speech is then presented, along with an explanation of how the human ear can be thought of as a natural wavelet transformer of speech. This view generates a variety of approaches for applying the CWT to many paradigms for analysing speech, sound, and music. For perception, the flexibility of this transform allows the construction of numerous scales, and we include two of them. Results for speech recognition and speech compression are then included.
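The convolution theorem mentioned above can be checked numerically in a few lines; the sketch below uses NumPy's FFT on a random length-64 signal pair (an illustration, not the paper's speech data): the DFT of a circular convolution equals the pointwise product of the DFTs.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=64)
h = rng.normal(size=64)

# Circular convolution computed directly in the time domain:
# (x * h)[k] = sum_n x[n] h[(k - n) mod N].
conv = np.array([np.sum(x * np.roll(h[::-1], k + 1)) for k in range(64)])

# The same result via the frequency domain (convolution theorem).
conv_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

print(np.max(np.abs(conv - conv_fft)))   # agreement to machine precision
```

The identical identity underlies the fast filtering used in wavelet-based speech processing.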
From Data Processing to Experimental Design and Back Again: A Parameter Identification Problem Based on FRAP Images
FRAP (Fluorescence Recovery After Photobleaching) is a widely used measurement technique for determining the mobility of fluorescent molecules within living cells. While the experimental setup and protocol for FRAP experiments are usually fixed, the data processing part is still under development. In this paper, we formulate and solve the problem of data selection, which enhances the processing of FRAP images. We introduce the concept of an irrelevant data set, i.e., data which contribute almost nothing to reducing the confidence intervals of the estimated parameters and thus can be neglected. Based on sensitivity analysis, we both solve the problem of optimal data space selection and find specific conditions for optimizing an important experimental design factor, e.g., the radius of the bleach spot. Finally, a theorem stating that the integrated data approach is less precise than the full data case is proven; i.e., we show that the data set represented by the FRAP recovery curve leads to larger confidence intervals than the spatio-temporal (full) data.
A Theorem Related to Sample Moments and Two Types of Moment-Based Density Estimates
Numerous statistical inference and modeling methodologies are based on sample moments rather than the actual observations. A result justifying the validity of this approach is introduced. More specifically, it is established that, given the first n moments of a sample of size n, one can recover the original n sample points. This implies that a sample of size n and its first n associated moments contain precisely the same amount of information. In practice, however, it is efficient to make use of a limited number of initial moments, as most of the relevant distributional information is included in them. Two types of density estimation techniques that rely on such moments are discussed. The first expresses a density estimate as the product of a suitable base density and a polynomial adjustment whose coefficients are determined by equating the moments of the density estimate to the sample moments. The second assumes that the derivative of the logarithm of a density function can be represented as a rational function; this gives rise to a system of linear equations involving the sample moments, and the density estimate is then obtained by solving a differential equation. Unlike kernel density estimation, these methodologies are ideally suited to modeling 'big data', as they only require a limited number of moments, irrespective of the sample size. What is more, they produce simple closed-form expressions that are amenable to algebraic manipulation. They also turn out to be more accurate, as is shown in several illustrative examples.
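The theorem's constructive content can be sketched numerically: Newton's identities convert the n power sums (n times the raw moments) into elementary symmetric functions, whose monic polynomial has exactly the original sample as its roots. The three-point sample below is an arbitrary illustration, not taken from the paper.

```python
import numpy as np

def sample_from_moments(moments):
    """Recover a sample of size n from its first n raw moments."""
    n = len(moments)
    p = [n * m for m in moments]          # power sums p_k = n * m_k
    e = [1.0]                             # e_0 = 1
    for k in range(1, n + 1):             # Newton's identities:
        # k * e_k = sum_{i=1..k} (-1)^(i-1) e_{k-i} p_i
        s = sum((-1) ** (i - 1) * e[k - i] * p[i - 1] for i in range(1, k + 1))
        e.append(s / k)
    coeffs = [(-1) ** k * e[k] for k in range(n + 1)]
    return np.sort(np.roots(coeffs).real)

sample = np.array([1.0, 2.0, 4.0])
moments = [np.mean(sample ** k) for k in (1, 2, 3)]
print(sample_from_moments(moments))       # recovers [1, 2, 4] up to round-off
```

For this sample the intermediate polynomial is z³ - 7z² + 14z - 8 = (z-1)(z-2)(z-4), making the one-to-one correspondence between moments and sample points explicit.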
The Martingale Options Price Valuation for European Puts Using Stochastic Differential Equation Models
In modern financial mathematics, valuing derivatives such as options is often a tedious task, simply because their fair and correct future prices are probabilistic. This paper examines three different stochastic differential equation (SDE) models in finance: the constant elasticity of variance (CEV) model, the Black-Karasinski model, and the Heston model. The martingale option price valuation formulas for these three models are obtained using the replicating portfolio method. The numerical solution of the derived martingale option price valuation equations is carried out using the Monte Carlo method, implemented in MATLAB. Furthermore, numerical examples using published Nigerian Stock Exchange (NSE) All-Share Index data show the effect of an increase in the underlying asset value (stock price) on the value of the European put option for these models: an increase in the stock price yields a decrease in the value of the European put option. This guides the option holder in making a sound decision not to exercise the right on the option.
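A hedged Monte Carlo sketch of European put valuation in the spirit described; for brevity it uses plain geometric Brownian motion rather than the paper's CEV, Black-Karasinski, or Heston dynamics, and all parameters are illustrative rather than NSE data.

```python
import numpy as np

def mc_put(s0, strike=100.0, r=0.05, sigma=0.2, t=1.0, n_paths=200_000, seed=2):
    """Monte Carlo price of a European put under geometric Brownian motion."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # Terminal stock price under the risk-neutral measure.
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    # Discounted expected payoff max(K - S_T, 0).
    return float(np.exp(-r * t) * np.mean(np.maximum(strike - st, 0.0)))

# Put value falls as the underlying price rises, as reported in the abstract.
for s0 in (80.0, 100.0, 120.0):
    print(s0, mc_put(s0))
```

Against the Black-Scholes closed form, these parameters give put values of roughly 17.0, 5.6, and 1.3 for the three spot prices, which the simulation reproduces to Monte Carlo accuracy.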
Role of Additional Food Resources in an Ecosystem with Two Discrete Delays
This study proposes a three-dimensional prey-predator model with additional food provided to predator individuals, including a gestation delay in predators and a delay in supplying the additional food to predators. It is assumed that the interaction between prey and predator follows a Holling type-II functional response. We discuss the steady states and their local and global asymptotic behavior for the non-delayed system. The Hopf-bifurcation phenomenon with respect to different parameters is also studied. We obtain a range of the predator's tendency factor toward the provided additional food in which periodic solutions occur in the system, and we show that the oscillations can be eliminated by increasing the tendency factor. Moreover, the existence of periodic solutions via Hopf bifurcation is shown with respect to both delays. Our analysis shows that both delays play an important role in governing the dynamics of the system, changing stable behavior into unstable behavior. The direction and stability of the Hopf bifurcation are also investigated through normal form theory and the center manifold theorem. Lastly, numerical simulations and graphical illustrations are carried out to validate our analytical findings.
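A hedged sketch of the delay-free core of such a model: a prey-predator system with a Holling type-II functional response, integrated with SciPy. All coefficients are illustrative, and the additional-food and delay terms are omitted, so this mirrors only the non-delayed analysis mentioned above.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, r=1.0, K=2.0, a=1.0, h=0.5, e=0.5, d=0.2):
    """Prey x with logistic growth, predator p with Holling type-II uptake."""
    x, p = y
    holling = a * x / (1.0 + a * h * x)        # type-II functional response
    return [r * x * (1.0 - x / K) - holling * p,
            (e * holling - d) * p]

sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 1.0], rtol=1e-8, atol=1e-10)
x_end, p_end = sol.y[:, -1]
print(x_end, p_end)   # settles near the coexistence equilibrium (0.5, 0.9375)
```

With these illustrative rates the interior equilibrium is a stable spiral; pushing the carrying capacity K higher moves the system through a Hopf bifurcation into the oscillatory regime the paper analyzes.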
Investigating Safe Operation Condition for Iterative Learning Control under Load Disturbances Effect in Singular Values
An iterative learning control framework designed with a state feedback structure lacks an investigation of load disturbance considerations. The present work revisits the previously designed controller, highlights the disturbance problem, and derives new conditions, using singular value principles, that assure safe operating conditions with error convergence and reference tracking under the influence of load disturbances. It is known that periodic disturbances can be represented by a delay model in a positive feedback loop acting on the system input. This model can be handled by isolating the delay model and finding a controller for the overall system around it that remedies the periodic disturbances using the small gain theorem. The overall system is the basis for the control design and the load disturbance investigation. The major finding of this work is the derived load disturbance condition, which clearly sets a safe operating condition under the influence of load disturbances such that the error tends to nearly zero as the system keeps operating, trial after trial.
Reconfigurable Consensus Achievement of Multi Agent Systems Subject to Actuator Faults in a Leaderless Architecture
In this paper, reconfigurable consensus achievement is considered for a team of agents with marginally stable linear dynamics and a single input channel. The control algorithm is based on a first-order linear protocol. After the occurrence of a loss-of-effectiveness (LOE) fault in one of the actuators, the control gain is redesigned, using imperfect information on actuator effectiveness from a fault detection and identification module, so that consensus is still reached. The idea is based on modeling the change in effectiveness as a change of the Laplacian matrix. As special cases of this class of systems, teams of single integrators as well as double integrators are then considered, and their behavior subject to an LOE fault is analyzed. The well-known relative-measurements consensus protocol is applied to a leaderless team of single-integrator as well as double-integrator systems, and the Geršgorin disk theorem is employed to determine whether fault occurrence affects system stability and team consensus achievement. The analyses show that a loss-of-effectiveness fault in the actuator(s) of integrator systems affects neither system stability nor consensus achievement.
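The Geršgorin-disk argument can be sketched numerically: every eigenvalue of a graph Laplacian lies in a disk centred at a diagonal entry with radius equal to the off-diagonal row sum, which places the spectrum in the closed right half-plane. The four-agent path graph below is an illustrative choice, not the paper's team.

```python
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)     # adjacency of a 4-agent path graph
L = np.diag(A.sum(axis=1)) - A                # graph Laplacian

eigs = np.linalg.eigvals(L).real
centres = np.diag(L)
radii = np.sum(np.abs(L), axis=1) - np.abs(centres)

# Each eigenvalue lies in at least one Gersgorin disk |z - L_ii| <= R_i.
for lam in eigs:
    assert any(abs(lam - c) <= r + 1e-9 for c, r in zip(centres, radii))

print(sorted(eigs))   # nonnegative spectrum; the single zero reflects connectivity
```

Scaling a row of L (a loss-of-effectiveness on that agent's channel) only shrinks the corresponding disk toward the origin along the nonnegative axis, which is the intuition behind the stability claim above.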
Implicit Transaction Costs and the Fundamental Theorems of Asset Pricing
This paper studies arbitrage pricing theory in financial markets with transaction costs. We extend the existing theory to include the more realistic possibility that the price at which investors trade depends on the traded volume; investors always buy at the ask and sell at the bid price. Transaction costs are composed of two terms: one captures the implicit transaction costs and the other the price impact. Moreover, a new definition of a self-financing portfolio is obtained. The self-financing condition suggests that continuous trading is possible, but restricted to predictable trading strategies that have left and right limits and finite quadratic variation; that is, predictable trading strategies of infinite variation but finite quadratic variation are allowed in our setting. Within this framework, the existence of an equivalent probability measure is equivalent to the absence of arbitrage opportunities, so that the first fundamental theorem of asset pricing (FFTAP) holds. It is also proved that, when this probability measure is unique, any contingent claim in the market is hedgeable in an L2-sense, and the price of any contingent claim equals its risk-neutral price. To better illustrate how to apply the proposed theory, we provide an example with linear transaction costs.
Globally Attractive Mild Solutions for Non-Local in Time Subdiffusion Equations of Neutral Type
In this work, the existence of at least one globally attractive mild solution is proved for the Cauchy problem of a fractional evolution equation of neutral type involving the fractional derivative in the Caputo sense. The equation features an almost sectorial operator on a Banach space X and a kernel belonging to a large class, which covers many relevant cases from physics applications, in particular the important case of time-fractional evolution equations of neutral type. The main tools used in this work are the Hausdorff measure of noncompactness and fixed point theorems, specifically of Darbo type. Starting from the Cauchy problem, the equivalent integral version is formulated; defining a convenient functional using the analytic integral resolvent operator and verifying the hypotheses of the Darbo-type fixed point theorem then yields the existence of a mild solution for the initial problem. Furthermore, each mild solution is globally attractive, a property that is desirable in the asymptotic behavior of such solutions.
Formal Specification of Web Services Applications for Digital Reference Services of Library Information System
This paper discusses the formal specification of web services applications for digital reference services (WSDRS). A digital reference service involves a user requesting help from a reference librarian, and the reference librarian responding to the user's request, all by electronic means. In most cases, users are left unsatisfied with digital reference services due to delays in the librarians' responses, due to no response at all, or due to the librarian giving an irrelevant solution to the problem submitted by the user. WSDRS is an informal model that aims to reduce these problems of digital reference services in libraries; it uses web services technology to provide an efficient way of satisfying users' needs in the reference section of libraries. However, an informal model is expressed in natural language, which can be inconsistent and ambiguous and may cause difficulties for the developers of the system. In order to solve this problem, we convert the informal specification into a formal one, which is expected to reduce the overall development time and cost. A formal specification provides an unambiguous and precise supplement to natural language descriptions, and it can be rigorously validated and verified, leading to the early detection of specification errors. We use the Z language to develop the formal model and verify it with the Z/EVES theorem prover.
Strict Stability of Fuzzy Differential Equations by Lyapunov Functions
In this study, we investigate the strict stability of fuzzy differential systems and compare the classical notion of strict stability for ordinary differential equations with the notion of strict stability for fuzzy differential systems. In addition, we present definitions of stability and strict stability of fuzzy differential equations, together with several theorems and comparison results. Strict stability is a distinct stability notion that gives information about the rate of decay of the solutions. Lyapunov's second method is a standard technique in the study of the qualitative behavior of fuzzy differential systems, used along with a comparison result that allows the behavior of a fuzzy differential system to be predicted when the behavior of the null solution of a fuzzy comparison system is known; this method is useful for investigating the strict stability of fuzzy systems. First, we present definitions and the necessary background material. Second, we discuss and compare the differences between the classical notion of stability and the more recent notion of strict stability. We then give a comparison result in which the stability properties of the null solution of the comparison system imply the corresponding stability properties of the fuzzy differential system. Finally, we give the strict stability results and a comparison theorem, proved using Lyapunov's second method and a comparison result with scalar differential equations.
Nonlinear Evolution on Graphs
We are concerned with abstract fully nonlinear differential equations of the form y'(t)=Ay(t)+f(t,y(t)), where A is an m-dissipative operator (possibly multivalued) defined on a subset D(A) of a Banach space X with values in X, and f is a given function defined on I×X with values in X. We consider a graph K in I×X. Recall that K is said to be viable with respect to the above abstract differential equation if for each initial datum in K there exists at least one trajectory starting from that datum and remaining in K at least for a short time. The viability problem has been studied by many authors using various techniques and frameworks. If K is closed, it is known that a tangency condition, mainly linked to the dynamics, is crucial for viability; when X is infinite dimensional, compactness and convexity assumptions are also needed. In this paper, we are concerned with the notion of near viability of a given graph K with respect to y'(t)=Ay(t)+f(t,y(t)). Roughly speaking, the graph K is said to be near viable if for each initial datum in K there exists at least one trajectory remaining arbitrarily close to K at least for a short time. It is interesting to note that near viability is equivalent to an appropriate tangency condition under mild assumptions on the dynamics; adding natural convexity and compactness assumptions, we may recover exact viability. Here we investigate near viability for a graph K in I×X with respect to y'(t)=Ay(t)+f(t,y(t)), where A and f are as above. We emphasize that the t-dependence of the perturbation f leads us to introduce a new tangency concept. On the basis of tangency conditions expressed in terms of this concept, we formulate criteria for K to be near viable with respect to y'(t)=Ay(t)+f(t,y(t)). As an application, an abstract null-controllability theorem is given.
Time Delayed Susceptible-Vaccinated-Infected-Recovered-Susceptible Epidemic Model along with Nonlinear Incidence and Nonlinear Treatment
Infectious diseases are a leading cause of death worldwide and hence a great challenge for every nation; it is therefore essential to prevent and reduce their spread among humans. Mathematical models help to better understand the transmission dynamics and spread of infections. For this purpose, in the present article we propose a nonlinear time-delayed SVIRS (Susceptible-Vaccinated-Infected-Recovered-Susceptible) mathematical model with a nonlinear incidence rate and a nonlinear treatment rate. Analytical study shows that the model exhibits two types of equilibrium points, namely a disease-free equilibrium and an endemic equilibrium. Further, for the long-term behavior of the model, stability is discussed with the help of the basic reproduction number R₀: we show that the disease-free equilibrium is locally asymptotically stable if R₀ is less than one and unstable if R₀ is greater than one, for any time lag τ≥0. Furthermore, when R₀ equals one, using center manifold theory and the Castillo-Chavez and Song theorem, we show that the model undergoes a transcritical bifurcation. Moreover, numerical simulations are carried out using MATLAB 2012b to illustrate the theoretical results.
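The compartmental structure described above can be sketched numerically. The following is a minimal illustrative sketch, not the paper's actual model: it drops the time delay, and the saturated incidence βSI/(1+αI), saturated treatment aI/(1+bI), and all parameter values are assumptions chosen only to show the SVIRS flow between compartments.

```python
import numpy as np
from scipy.integrate import odeint

def svirs(y, t, beta, alpha, phi, theta, gamma, delta, a, b):
    # Simplified SVIRS flow (no delay). Saturated incidence and saturated
    # treatment are illustrative choices of "nonlinear incidence/treatment".
    S, V, I, R = y
    incidence = beta * S * I / (1 + alpha * I)   # new infections
    treatment = a * I / (1 + b * I)              # treated infected
    dS = -incidence - phi * S + theta * V + delta * R   # susceptible
    dV = phi * S - theta * V                            # vaccinated
    dI = incidence - gamma * I - treatment              # infected
    dR = gamma * I + treatment - delta * R              # recovered
    return [dS, dV, dI, dR]

t = np.linspace(0, 200, 2001)
y0 = [990.0, 0.0, 10.0, 0.0]          # hypothetical initial population
sol = odeint(svirs, y0, t,
             args=(0.0005, 0.01, 0.05, 0.02, 0.1, 0.01, 0.2, 0.05))
```

Since the sketch has no births or deaths, the four compartments should sum to the constant total population along the whole trajectory, which is a quick sanity check on the flow terms.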
Theoretical Study of Structural, Magnetic, and Magneto-Optical Properties of Ultrathin Films of Fe/Cu (001)
By means of first-principles calculations, we investigate the structural, magnetic, and magneto-optical properties of ultra-thin films of Fen/Cu(001) with n = 1, 2, 3. We adopt a relativistic approach based on density functional theory (DFT) with the local spin density approximation (LSDA). The electronic structure is obtained within the framework of the Spin-Polarized Relativistic (SPR) Linear Muffin-Tin Orbitals (LMTO) method with the Atomic Sphere Approximation (ASA). In the variational principle, the crystal wave function is expressed as a linear combination of Bloch sums of the so-called relativistic muffin-tin orbitals centered on the atomic sites. The crystalline structure is calculated after an atomic relaxation process by optimizing the total energy with respect to the atomic interplane distance. A body-centered tetragonal (BCT) pseudomorphic crystalline structure with a tetragonality ratio c/a larger than unity is found. The magnetic behaviour is characterized by an enhanced magnetic moment and ferromagnetic interplane coupling. The polar magneto-optical Kerr effect spectra are given over a photon energy range extending to 15 eV, and the microscopic origins of the most interesting features are interpreted in terms of interband transitions. Unlike thin films, the ultra-thin films show an anisotropy characterized by a magnetization perpendicular to the film plane.
Quantifying Fatigue during Periods of Intensified Competition in Professional Ice Hockey Players: Magnitude of Fatigue in Selected Markers
The professional ice hockey season consists of approximately 60 regular season games, with periods of fixture congestion occurring several times in an average season. These periods of congestion provide limited time for recovery, exposing athletes to the risk of competing while not fully recovered. Although a body of research is growing with respect to monitoring fatigue, particularly during periods of congested fixtures in team sports such as rugby and soccer, the topic has received little to no attention thus far in ice hockey. Consequently, there is limited knowledge of monitoring tools that might effectively detect a fatigue response, or of the magnitude of fatigue that can accumulate when recovery is limited by competitive fixtures. The benefit of quantifying and establishing fatigue status is the ability to optimise training and provide pertinent information on player health, injury risk, availability, and readiness. Commonly used methods to assess the fatigue and recovery status of athletes include perceived fatigue and wellbeing questionnaires, tests of muscular force, and ratings of perceived exertion (RPE). These measures are widely used in popular team sports such as soccer and rugby and show promise as assessments of fatigue and recovery status for ice hockey athletes. As part of a larger study, this study explored the magnitude of changes in adductor muscle strength after game play and throughout a period of fixture congestion, and examined the relationship of internal game load and perceived wellbeing with adductor muscle strength. Methods: Eight professional ice hockey players from a British Elite League club volunteered to participate (age = 29.3 ± 2.49 years, height = 186.15 ± 6.75 cm, body mass = 90.85 ± 8.64 kg). Before and after competitive games, each player performed trials of the adductor squeeze test at 0˚ hip flexion, administered by the lead investigator using hand-held dynamometry.
Rating of perceived exertion was recorded for each game, and individual session RPE was calculated from total ice time data. After each game, players completed a 5-point questionnaire to assess perceived wellbeing. Data were collected from six competitive games, one practice, and 36 hours after the final game, over a 10-day period. Results: Pending final data collection in February. Conclusions: Pending final data collection in February.
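The session-RPE method referred to above is conventionally the post-session RPE multiplied by the session duration; here the duration would be the player's total ice time. A minimal sketch, with entirely hypothetical RPE and ice-time values:

```python
def session_rpe(rpe, ice_time_min):
    """Session RPE (internal load, arbitrary units): post-game RPE on the
    CR-10 scale multiplied by the player's total ice time in minutes."""
    return rpe * ice_time_min

# Hypothetical data for one player across a congested fixture period:
# (RPE, ice time in minutes) per game.
games = [(7, 18.5), (8, 21.0), (6, 15.2)]
loads = [session_rpe(r, t) for r, t in games]
weekly_load = sum(loads)   # accumulated internal load over the period
```

Summing the per-game loads gives the accumulated internal load that would then be related to adductor strength and wellbeing scores.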
Anthropomorphism in the Primate Mind-Reading Debate: A Critique of Sober's Justification Argument
This study discusses whether the anthropomorphism that some scientists use in cross-species comparison can be justified epistemologically, especially in the primate mind-reading debate. Concretely, it critically analyzes Elliott Sober's argument about the mind-reading hypothesis (MRH), an anthropomorphic hypothesis stating that nonhuman primates (e.g., chimpanzees) are mind-readers like humans. Although many scientists consider anthropomorphism an error, and regard choosing an anthropomorphic hypothesis like MRH without definite evidence as invalid, Sober argues that anthropomorphism is supported by cladistic parsimony, which suggests choosing the simplest hypothesis postulating the minimum number of evolutionary changes, and that it can therefore be justified epistemologically in the mind-reading debate. However, his argument has several problems. First, Reichenbach's theorem, which Sober uses in the process of showing that MRH has a higher likelihood than its competing hypothesis, the behavior-reading hypothesis (BRH), does not fit the context of inferring evolutionary relationships. Second, the phylogenetic tree Sober supports is only one of the possible scenarios of MRH, and even setting this problem aside, it is difficult to prove that the possibility that nonhuman primate species and humans share mind-reading ability is higher than the possibility of the alternative, considering how evolution occurs. Consequently, it seems hard to justify the anthropomorphism of MRH under Sober's argument. Some scientists and philosophers say that anthropomorphism sometimes helps in observing interesting phenomena or forming hypotheses in comparative biology. Nonetheless, in its current state it does not tell us why and how the interesting phenomena appear, or which of the hypotheses is better, at least in the mind-reading debate.
Setting Uncertainty Conditions Using Singular Values for Repetitive Control in State Feedback
A repetitive controller designed to accommodate periodic disturbances via state feedback is discussed. Periodic disturbances can be represented by a time-delay model in a positive feedback loop acting on the system output. A direct use of the small gain theorem solves the periodic disturbance problem by 1) isolating the delay model, 2) finding the overall system representation around the delay model, and 3) designing a feedback controller that assures overall system stability and tracking error convergence. This paper addresses uncertainty conditions for the repetitive controller designed in state feedback, in either past error feedforward or current error feedback, using singular values. The uncertainty investigation is based on the overall system found and the stability condition associated with it, depending on the scheme used, to set an upper/lower limit on the weighting parameter. This creates a region that should not be exceeded in selecting the weighting parameter, which in turn assures performance improvement against system uncertainty. The repetitive control problem can be described in lifted form, which allows singular values to be used in setting the range for the weighting parameter selection. Simulation results show tracking error convergence under dynamic system perturbation when the weighting parameter is chosen within the range obtained, and also show the advantage of using the weighting parameter compared with omitting it.
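The small-gain test underlying the stability condition can be sketched numerically: for the (lifted) system seen by the isolated delay block, the loop is stable if the largest singular value of its gain matrix is strictly below one. The matrices below are hypothetical placeholders, not the paper's system.

```python
import numpy as np

def small_gain_ok(M):
    """Small-gain check: the positive-feedback loop around the delay model
    is stable if the largest singular value of the matrix M representing
    the (lifted) system seen by the delay block is strictly less than 1."""
    sigma_max = np.linalg.svd(M, compute_uv=False)[0]
    return sigma_max < 1.0, sigma_max

# Hypothetical lifted-system gain matrices: one inside, one outside the bound.
ok, s_ok = small_gain_ok(np.array([[0.4, 0.1], [0.0, 0.3]]))
bad, s_bad = small_gain_ok(np.array([[1.2, 0.0], [0.0, 0.5]]))
```

In the paper's setting, the admissible range for the weighting parameter would be exactly the set of values for which this kind of check passes.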
A Stochastic Diffusion Process Based on the Two-Parameters Weibull Density Function
Stochastic modeling concerns the use of probability to model real-world situations in which uncertainty is present. The purpose of stochastic modeling is therefore to estimate the probability of outcomes within a forecast, i.e., to predict what might happen under different conditions or decisions. In the present study, we present a model of a stochastic diffusion process based on the two-parameter (bi-)Weibull distribution function (its trend is proportional to the bi-Weibull probability density function). In general, the Weibull distribution can assume the characteristics of many different types of distributions. This has made it very popular among engineers and quality practitioners, who consider it the distribution most commonly used for problems such as modeling reliability data, accelerated life testing, and maintainability modeling and analysis. In this work, we first obtain the probabilistic characteristics of this model: the explicit expression of the process, its trends, and its distribution, by transforming the diffusion process into a Wiener process as shown in Ricciardi's theorem. Then, we develop the statistical inference of this model using maximum likelihood methodology. Finally, we analyse, with simulated data, the computational problems associated with the parameters, an issue of great importance in applications to real data, with the use of convergence analysis methods. Overall, the use of a stochastic model reflects a pragmatic decision on the part of the modeler: given the available data and the universe of models known to the modeler, this model represents the best currently available description of the phenomenon under consideration.
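The maximum-likelihood step for the two-parameter Weibull can be sketched with SciPy. This is not the paper's inference for the diffusion process itself; it is only a minimal illustration of fitting the shape and scale of a two-parameter Weibull (location fixed at zero) to simulated data, with assumed parameter values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
shape_true, scale_true = 2.0, 3.0   # illustrative "true" parameters

# Simulated observations from a two-parameter Weibull distribution.
data = scale_true * rng.weibull(shape_true, size=5000)

# Maximum-likelihood fit; floc=0 pins the location parameter at zero,
# leaving shape and scale free, i.e. the two-parameter Weibull case.
shape_hat, loc_hat, scale_hat = stats.weibull_min.fit(data, floc=0)
```

With a few thousand simulated observations, the estimates land close to the generating parameters, which is the kind of convergence behaviour the paper's simulation study would examine.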
Application of Rapidly Exploring Random Tree Star-Smart and G2 Quintic Pythagorean Hodograph Curves to the UAV Path Planning Problem
This work approaches the automatic planning of paths for Unmanned Aerial Vehicles (UAVs) through the application of the Rapidly Exploring Random Tree Star-Smart (RRT*-Smart) algorithm. RRT*-Smart samples positions of a navigation environment through a tree-type graph. The algorithm consists of randomly expanding a tree from an initial position (root node) until one of its branches reaches the final position of the path to be planned. The algorithm ensures the planning of the shortest path as the number of iterations tends to infinity. When a new node is inserted into the tree, each neighbor of the new node is connected to it if and only if the path between the root node and that neighbor, through this new connection, is shorter than the current path between those two nodes. RRT*-Smart uses an intelligent sampling strategy to plan shorter routes in fewer iterations. This strategy is based on the creation of samples/nodes near the convex vertices of the obstacles in the navigation environment. The planned paths are smoothed through the application of G2 quintic Pythagorean hodograph curves. The smoothing process converts a route into a dynamically viable one based on the kinematic constraints of the vehicle. This smoothing method models the hodograph components of a curve with polynomials that obey the Pythagorean theorem. Its advantage is that the resulting structure allows the curve length to be computed exactly, without the need for quadrature techniques to resolve integrals.
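The rewiring rule described above (reconnect a neighbor through the new node only if that shortens its path from the root) can be sketched in a few lines. This is a simplified, obstacle-free illustration with toy coordinates, not the full RRT*-Smart implementation.

```python
import math

def rewire(nodes, parent, cost, new, neighbors):
    """RRT* rewiring step (simplified, obstacle-free): reconnect each
    neighbor through the new node iff that shortens its root-to-node path."""
    for nb in neighbors:
        via_new = cost[new] + math.dist(nodes[new], nodes[nb])
        if via_new < cost[nb]:
            parent[nb] = new       # reroute the neighbor through the new node
            cost[nb] = via_new     # and record its improved path cost

# Toy tree: root 0 at the origin; node 1 was reached via an inflated path;
# new node 2 lies between them.
nodes = {0: (0.0, 0.0), 1: (2.0, 0.0), 2: (1.0, 0.0)}
parent = {0: None, 1: 0, 2: 0}
cost = {0: 0.0, 1: 5.0, 2: 1.0}
rewire(nodes, parent, cost, new=2, neighbors=[1])
```

After the call, node 1 is reparented to node 2 with cost 1.0 + 1.0 = 2.0 instead of its former 5.0, which is exactly the shortening mechanism that makes the tree asymptotically optimal.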
Modeling of Drug Distribution in the Human Vitreous
The injection of a drug into the vitreous body for the treatment of retinal diseases such as wet age-related macular degeneration (AMD) is the most common medical intervention worldwide. We develop mathematical models for drug transport in the vitreous body of a human eye to analyse the impact of different rheological models of the vitreous on drug distribution. In addition to the convection-diffusion equation characterizing the drug spreading, we use porous media modeling for the healthy vitreous with its dense collagen network, and include the steady permeating flow of the aqueous humor described by Darcy's law, driven by a pressure drop. The vitreous body in a healthy human eye behaves like a viscoelastic gel, through the collagen fibers suspended in the network of hyaluronic acid, and acts as a drug depot for the treatment of retinal diseases. For a completely liquefied vitreous, we couple the drug diffusion with the classical Navier-Stokes flow equations. We prove the global existence and uniqueness of the weak solution of the developed initial-boundary value problem describing the drug distribution in the healthy vitreous, taking the permeating aqueous humor flow into account in a realistic three-dimensional setting. In particular, for the drug diffusion equation, results from the literature are extended from homogeneous Dirichlet boundary conditions to the mixed boundary conditions describing the eye, using Galerkin's method, the Cauchy-Schwarz inequality, and the trace theorem. Because the effective drug concentration range is small and higher concentrations may be toxic, the ability to model the drug transport could improve therapy by taking individual patient differences into account, and give a better understanding of the physiological and pathological processes in the vitreous.
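The diffusion part of such a transport model can be sketched with a simple explicit finite-difference scheme. The sketch below is deliberately reduced: one spatial dimension, diffusion only (no aqueous humor convection), no-flux walls, and dimensionless illustrative parameters, so it stands in for the PDE component rather than the paper's 3D model.

```python
import numpy as np

# 1D explicit finite-difference sketch of drug diffusion in the vitreous
# (diffusion term only; convection by aqueous humor flow is omitted).
D = 1e-2                       # diffusion coefficient, illustrative units
nx, dx, dt = 101, 0.1, 0.1
assert D * dt / dx**2 <= 0.5   # explicit-scheme (CFL-type) stability bound

c = np.zeros(nx)
c[nx // 2] = 1.0 / dx          # unit drug bolus at the injection site

for _ in range(500):
    lap = np.zeros_like(c)
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    lap[0] = (c[1] - c[0]) / dx**2     # no-flux (Neumann) boundary, left
    lap[-1] = (c[-2] - c[-1]) / dx**2  # no-flux boundary, right
    c = c + dt * D * lap
```

With no-flux walls the total drug mass is conserved while the peak concentration decays, the qualitative behaviour one expects of a depot spreading through the gel.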
Derivation of a Risk-Based Level of Service Index for Surface Street Network Using Reliability Analysis
The current Level of Service (LOS) index adopted in the Highway Capacity Manual (HCM) for signalized intersections on surface streets is based on the intersection average delay. The delay thresholds for defining LOS grades are subjective and unrelated to critical traffic conditions. For example, an intersection delay of 80 s per vehicle for the failing LOS grade F does not necessarily correspond to the intersection capacity. Also, a specific value of average delay may result from delay minimization, delay equalization, or other meaningful optimization criteria. To that end, a reliability version of the intersection critical degree of saturation (v/c) is introduced as the LOS index. Traditionally, the degree of saturation at a signalized intersection is defined as the ratio of the critical volume sum (per lane) to the average saturation flow (per lane) during all available effective green time within a cycle. The critical sum is the sum of the maximal conflicting movement-pair volumes in the northbound-southbound and eastbound-westbound rights of way. In this study, both movement volume and saturation flow are assumed to follow log-normal distributions. Because, when the conditions of the central limit theorem hold, the product of independent positive random variables tends to a log-normal distribution in the limit, the critical degree of saturation is expected to be log-normally distributed as well. Derivation of the risk index predictive limits is complex due to the maximum and absolute value operators, as well as the ratio of random variables. A fairly accurate functional form for the predictive limit at a user-specified significance level is obtained. The predictive limit is then compared with the designated LOS thresholds for the intersection critical degree of saturation (denoted as X
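The log-normal closure used above has a convenient consequence: if volume v and saturation flow s are independent log-normals, their ratio X = v/s is again log-normal with log-mean μ_v − μ_s and log-variance σ_v² + σ_s², so an upper predictive limit at significance level α is exp(μ + z₁₋α·σ). A sketch with entirely hypothetical parameters, cross-checked against Monte Carlo:

```python
import numpy as np
from scipy import stats

# Hypothetical log-scale parameters for critical volume v and saturation
# flow s (both assumed independent log-normal, as in the study).
mu_v, sig_v = np.log(1500.0), 0.15
mu_s, sig_s = np.log(1800.0), 0.10

# Ratio X = v/s is log-normal: mu_x = mu_v - mu_s, var_x = var_v + var_s.
mu_x = mu_v - mu_s
sig_x = np.hypot(sig_v, sig_s)

def upper_predictive_limit(alpha=0.05):
    """Closed-form upper predictive limit of X at significance level alpha."""
    return np.exp(mu_x + stats.norm.ppf(1 - alpha) * sig_x)

# Monte Carlo check of the closed form against the empirical 95th percentile.
rng = np.random.default_rng(1)
x = rng.lognormal(mu_v, sig_v, 200_000) / rng.lognormal(mu_s, sig_s, 200_000)
mc_q95 = np.quantile(x, 0.95)
```

The closed-form limit and the simulated quantile agree closely, which is the basic sanity check behind using the log-normal functional form for the risk index. (The actual study must also handle the maximum and absolute-value operators, which this ratio-only sketch omits.)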
AI Peer Review Challenge: Standard Model of Physics vs 4D GEM EOS
The natural evolution of automated theorem proving (ATP) cognitive systems is to meet AI peer-review standards. The ATP process of axiom selection from Mizar to prove a conjecture would be further refined, as in all human and machine learning, by solving the real-world problem of the proposed AI peer-review challenge: determine which conjecture forms the higher-confidence-level constructive proof between the Standard Model of Physics SU(n) lattice gauge group operation and the present non-standard 4D GEM EOS SU(n) lattice gauge group spatially extended operation, in which the photon and electron are the first two trace angular momentum invariants of a gravitoelectromagnetic (GEM) energy momentum density tensor wavetrain integration spin-stress pressure-volume equation of state (EOS), initiated via 32 lines of Mathematica code. The resulting gravitoelectromagnetic spectrum ranges from compressive through rarefactive of the central cosmological constant vacuum energy density in units of pascals. Said self-adjoint group operation operates exclusively on the stress energy momentum tensor of the Einstein field equations, introducing quantization directly at the 4D spacetime level, essentially reformulating the Yang-Mills virtual superpositioned particle compounded lattice gauge group quantization of the vacuum into a single hyper-complex multi-valued GEM U(1) × SU(1,3) lattice gauge group Planck spacetime mesh quantization of the vacuum. Thus the Mizar corpus already contains all of the axioms required for relevant DeepMath premise selection and unambiguous formal natural language parsing in context deep learning.
A Location-Based Search Approach According to Users’ Application Scenario
The Global Positioning System (GPS) has become increasingly precise in recent years, and location-based services (LBS) have developed rapidly. Consider the example of finding a parking lot with a parking app: a location-based service can offer immediate information about a nearby parking lot, including the number of remaining parking spaces, but it cannot tailor the search results to the requirement situations of users. For that reason, this paper develops a "Location-Based Search Approach According to Users' Application Scenario", combining location-based search and demand determination to help users obtain information consistent with their requirements. The proposed approach consists of one mechanism and three kernel modules. First, in the Information Pre-processing Mechanism (IPM), the cosine theorem is used to categorize the locations of users. Then, in the Information Category Evaluation Module (ICEM), kNN (k-Nearest Neighbors) is employed to classify the browsing records of users. After that, in the Information Volume Level Determination Module (IVLDM), the number of users clicking the information at different locations is compared with the average number of users clicking the information at a specific location, so as to evaluate the urgency of demand; a two-dimensional space is then used to estimate the application scenarios of users. Finally, in the Location-Based Search Module (LBSM), all search results are compared with the average number of characters of the search results, the results are categorized with the Manhattan distance, and those matching the user's application scenario are selected. Additionally, a Web-based system implementing this methodology is developed to demonstrate its practical application.
The application scenario-based estimation and the location-based search are used to evaluate the type and abundance of the information expected by the public at a specific location, so that information demanders can obtain information consistent with their application scenarios at that location.
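The kNN classification with a Manhattan (L1) metric used in the modules above can be sketched as follows. The feature vectors, labels, and the choice of features (clicks, dwell time, result length) are hypothetical illustration values, not the paper's actual data.

```python
def manhattan(a, b):
    """Manhattan (L1) distance between two feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def knn_classify(query, labeled, k=3):
    """kNN classification in the spirit of the ICEM: label a browsing
    record by majority vote among its k nearest labeled records (L1 metric)."""
    nearest = sorted(labeled, key=lambda item: manhattan(query, item[0]))[:k]
    votes = {}
    for _, label in nearest:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Hypothetical browsing-record features: (clicks, dwell time, result length).
records = [((5, 30, 120), "parking"), ((6, 28, 110), "parking"),
           ((1, 300, 900), "reading"), ((2, 280, 850), "reading")]
label = knn_classify((5, 25, 100), records, k=3)
```

A query record with few clicks and a short dwell time lands nearest the "parking"-style records, so the majority vote among the three closest neighbors assigns it that category.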
The Dressing Field Method of Gauge Symmetries Reduction: Presentation and Examples
Gauge theories are the natural background for a geometric description of fundamental interactions, using principal and associated fiber bundles as dynamical entities. The central notion of these theories is their local gauge symmetry, implemented by the local action of a Lie group H. There exist several methods used to reduce the symmetry of a gauge theory, such as gauge fixing, the bundle reduction theorem, or the spontaneous symmetry breaking mechanism (SSBM). This paper presents another method of gauge symmetry reduction, distinct from those three. Given a symmetry group H acting on a fiber bundle and its naturally associated fields (Ehresmann or Cartan connection, curvature, matter fields, etc.), there sometimes exists a way to erase (in whole or in part) the H-action by reconfiguring these fields, i.e., by making a mere change of field variables in order to obtain new ('composite') fields on which H (in whole or in part) no longer acts. Two examples are discussed: the re-interpretation of the BEHGHK (Higgs) mechanism on the one hand, and the top-down construction of Tractor and Penrose's Twistor spaces and connections in the framework of conformal Cartan geometry on the other. They have, of course, nothing to do with each other, but the dressing field method can be applied to both to gain new insight. In the first example, it turns out that the generation of masses in the Standard Model can be separated from the symmetry breaking, the latter being a mere change of field variables, i.e., a dressing. This offers an interpretation in opposition to the one usually found in textbooks. In the second case, the dressing field method applied to conformal Cartan geometry offers a way of understanding the deep geometric nature of the so-called Tractors and Twistors.
The dressing field method, distinct from a gauge transformation (even if it can apparently take the same form), is a systematic way of finding and erasing artificial symmetries of a theory by a mere change of field variables that redistributes the degrees of freedom of the theory.
Dual Duality for Unifying Spacetime and Internal Symmetry
Current efforts toward a Grand Unification Theory (GUT) can be classified into General Relativity, Quantum Mechanics, String Theory, and related formalisms. In the geometric approaches extending General Relativity, the efforts establish global and local invariance embedded in metric formalisms, whereby additional dimensions are constructed to unify canonical formulations, such as the Hamiltonian and Lagrangian formulations. The approaches extending Quantum Mechanics adopt the symmetry principle to formulate algebraic group theories, which evolved from the Maxwell formulation to the Yang-Mills non-abelian gauge formulation and thereafter manifested in the Standard Model. This thread of effort has been constructing supersymmetry for mapping fermion and boson, as well as gluon and graviton. The efforts of String Theory have evolved to the so-called gauge/gravity correspondence, particularly the equivalence between type IIB string theory compactified on AdS5 × S5 and N = 4 supersymmetric Yang-Mills theory. Other efforts adopt cross-breeding approaches combining the above three formalisms, as well as competing formalisms; nevertheless, the related symmetries, dualities, and correspondences are outlined as principles and techniques, even though these terminologies are defined diversely and often generally coined as duality. In this paper, we first classify these dualities from the perspective of physics. Then we examine the hierarchical structure of the classes from a mathematical perspective, referring to the Coleman-Mandula theorem, Hidden Local Symmetry, groupoid categorization, and others.
Based on the fundamental theorems of algebra, we argue that, rather than imposing effective constraints on different algebras and their extensions, which are mainly constructed by self-breeding or self-mapping methodologies for sustaining invariance, we propose a new addition: a momentum-angular momentum duality at the level of electromagnetic duality, for rationalizing the duality algebras. We then characterize this duality numerically, with an attempt to address some unsolved problems in physics and astrophysics.
Seismic Active Earth Pressure on Retaining Walls with Reinforced Backfill
The increase in active earth pressure during an earthquake results in sliding, overturning, and tilting of earth-retaining structures. In order to improve the stability of such structures, the soil mass is often reinforced with various types of reinforcements such as metal strips, geotextiles, and geogrids. The stresses generated in the soil mass are transferred to the reinforcements through the interface friction between the earth and the reinforcement, which in turn reduces the lateral earth pressure on the retaining walls. Hence, the evaluation of earth pressure in the presence of seismic forces, with the inclusion of reinforcements, is important for the design of retaining walls in seismically active zones. In the present analysis, the effect of reinforcing horizontal layers of sheet reinforcements (geotextiles and geogrids) in a sand backfill on reducing the active earth pressure due to earthquake body forces has been studied. For carrying out the analysis, a pseudo-static approach has been adopted, employing the upper bound theorem of limit analysis in combination with finite elements and linear optimization. The computations have been performed with and without reinforcements for internal friction angles of sand varying from 30° to 45°. The effectiveness of the reinforcement in reducing the active earth pressure on the retaining walls is examined in terms of the active earth pressure coefficient, so as to present the solutions in a non-dimensional form. The active earth pressure coefficient is expressed as a function of the internal friction angle of the sand, the interface friction angle between sand and reinforcement, the soil-wall interface roughness conditions, and the coefficient of horizontal seismic acceleration.
It has been found that (i) there always exists a certain optimum depth of the reinforcement layers at which the active earth pressure coefficient attains its minimum, and (ii) the active earth pressure coefficient decreases significantly with an increase in the length of the reinforcements only up to a certain length, beyond which a further increase in length hardly causes any reduction in the active earth pressure. The optimum depth of the reinforcement layers and the required length of reinforcements corresponding to that optimum depth have been established. The numerical results developed in this analysis are expected to be useful for the design of retaining walls.
Expert Supporting System for Diagnosing Lymphoid Neoplasms Using Probabilistic Decision Tree Algorithm and Immunohistochemistry Profile Database
For the past decades, immunohistochemistry (IHC) has played an important role in the diagnosis of human neoplasms, helping pathologists reach clearer decisions on differential diagnosis, subtyping, personalized treatment planning, and, finally, prognosis prediction. However, the IHC performed on various tumors in daily practice often yields conflicting results that are very challenging to interpret. Even a comprehensive diagnosis synthesizing clinical, histologic, and immunohistochemical findings can be of little help in some twisted cases. Another important issue is that IHC data are increasing exponentially, and more and more information has to be taken into account. For this reason, we set out to develop an expert supporting system to help pathologists make better decisions when diagnosing human neoplasms with IHC results. We devised a probabilistic decision tree algorithm and tested it with real case data of lymphoid neoplasms, in which the IHC profile is more important for a proper diagnosis than in other human neoplasms. We designed the probabilistic decision tree based on Bayes' theorem, programmed the computational process using MATLAB (The MathWorks, Inc., USA), and prepared an IHC profile database (about 104 disease categories and 88 IHC antibodies) based on the WHO classification by reviewing the literature. The initial probability of each neoplasm was set using epidemiologic data on lymphoid neoplasms in Korea. With the IHC results of 131 sequentially selected patients, the top three presumptive diagnoses for each case were made and compared with the original diagnoses. After review of the data, 124 of the 131 cases were used for the final analysis. As a result, the presumptive diagnoses were concordant with the original diagnoses in 118 cases (93.7%). The major reason for the discordant cases was the similarity of the IHC profiles between two or three different neoplasms.
The expert supporting system algorithm presented in this study is at an elementary stage and needs further optimization using more advanced technology, such as deep learning with real case data, especially for differentiating T-cell lymphomas. Although it needs more refinement, it may be used to aid pathological decision making in the future. A further application, determining which IHC antibodies to order for a given subset of differential diagnoses, may also be possible in the near future.
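The Bayes'-theorem core of such a system can be sketched as a naive posterior update: the posterior of each diagnosis is proportional to its prior times the product of the marker likelihoods given that diagnosis, and the top-ranked diagnoses form the presumptive list. All diagnosis names, priors, and marker likelihoods below are hypothetical illustration values, not the study's 104-category / 88-antibody database.

```python
# Hypothetical priors (stand-ins for epidemiologic initial probabilities).
priors = {"DLBCL": 0.40, "Follicular lymphoma": 0.25,
          "Mantle cell lymphoma": 0.10}

# Hypothetical P(marker positive | diagnosis) for a few IHC markers.
likelihood = {
    "DLBCL":                {"CD20": 0.95, "CD5": 0.10, "CyclinD1": 0.02},
    "Follicular lymphoma":  {"CD20": 0.95, "CD5": 0.05, "CyclinD1": 0.02},
    "Mantle cell lymphoma": {"CD20": 0.95, "CD5": 0.90, "CyclinD1": 0.95},
}

def presumptive_diagnoses(ihc_results, top=3):
    """ihc_results: dict marker -> True/False. Returns up to `top`
    diagnoses ranked by normalized posterior probability."""
    scores = {}
    for dx, p in priors.items():
        for marker, positive in ihc_results.items():
            p_pos = likelihood[dx][marker]
            p *= p_pos if positive else (1 - p_pos)   # Bayes numerator
        scores[dx] = p
    total = sum(scores.values())
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [(dx, s / total) for dx, s in ranked][:top]

top3 = presumptive_diagnoses({"CD20": True, "CD5": True, "CyclinD1": True})
```

With this toy table, a CD20+/CD5+/CyclinD1+ profile ranks mantle cell lymphoma first despite its small prior, illustrating how marker likelihoods can dominate the epidemiologic starting probabilities.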
Periodicity of Solutions to Impulsive Equations
It is known that there exist many physical phenomena where abrupt or impulsive changes occur either in the system dynamics (for example, in ad-hoc networks) or in the input forces containing impacts (for example, the bombardment of a space antenna by micrometeorites). There are many other examples, such as ultra-high-speed optical signals over communication networks, the collision of particles, inventory control, government decisions, interest changes, and changes in stock prices. These are impulsive phenomena. Hence, as a combination of traditional initial value problems and short-term perturbations whose duration is negligible in comparison with the duration of the process, systems with impulsive conditions (i.e., impulsive systems) are more realistic models for describing impulsive phenomena. The same holds for delay systems, which include some of the past states of the system. So far, there have been many research results in the study of impulsive systems with delay, both in finite and infinite dimensional spaces. In this paper, we investigate the periodicity of solutions to nonautonomous impulsive evolution equations with infinite delay in Banach spaces, where the (possibly unbounded) coefficient operators in the linear part depend on time; these are impulsive systems in infinite dimensional spaces and come from optimal control theory. The study of periodic solutions for these impulsive evolution equations with infinite delay is challenging because fixed point theorems requiring compactness conditions are not applicable to them, due to the impulsive condition and the infinite delay. We are happy to report that after detailed analysis, we are able to combine the techniques developed in our previous papers with some new ideas in this paper to attack these impulsive evolution equations and derive periodic solutions.
More specifically, by virtue of the related transition operator family (evolution family), we present a Poincaré operator given by the nonautonomous impulsive evolution system with infinite delay, and then show that this operator is a condensing operator with respect to Kuratowski's measure of non-compactness in a phase space, by using a lemma of Amann. Finally, we derive periodic solutions from bounded solutions via the Sadovskii fixed point theorem. We also present a relationship between the boundedness and the periodicity of the solutions of the nonautonomous impulsive evolution system. The new results obtained here extend earlier results in this area for evolution equations without impulsive conditions or without infinite delay.
Clustering Ethno-Informatics of Village Naming on Java Island Using Data Mining
Ethnoscience views culture from a scientific perspective and can help us understand how people develop various forms of knowledge and belief, initially focusing on ecology, history, and existing contributions. One of the areas studied in ethnoscience is ethno-informatics, the application of informatics to culture. In this study, the branch of informatics used is data mining, the process of automatically extracting knowledge from large databases in order to discover interesting patterns and thereby obtain knowledge. The cultural object studied is a database of village names on the island of Java, obtained from the Geographic Indonesia Information Agency (BIG) in 2014. The purpose of this study is threefold: first, to classify the village names on the island of Java by the structure of the name, including its prefix, the syllables it contains, and the complete word; second, to classify the meanings of the village names into specific categories, together with their role in the behavioral characteristics of the community; third, to visualize the village names on a map in order to see the similarity of village naming in each province. In this research we developed two theorems: an area theorem, derived from the collected intersections of village names in each province on the island of Java, and a theorem on the composition of intersections of the sets of provinces in Java, used to identify the peculiarities of a study location. The methodology of this study is based on the Knowledge Discovery in Databases (KDD) process of data mining, which comprises preprocessing, data mining, and post-processing. The results show that Javanese communities prioritize virtue in conducting their lives, always work hard to achieve a more prosperous life, and care for water and environmental sustainability.
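As an illustration of the word-structure classification, the following minimal Python sketch groups village names by a shared prefix. The sample names and the prefix length are hypothetical stand-ins for the BIG database and the study's actual criteria:

```python
from collections import defaultdict

# Hypothetical sample of Javanese village names;
# the real study used the full BIG 2014 database.
villages = ["Sukamaju", "Sukasari", "Sukajadi", "Karanganyar",
            "Karangrejo", "Banyumas", "Banyuwangi", "Sidomulyo", "Sidoarjo"]

def cluster_by_prefix(names, k=4):
    """Group names by their first k characters (the word-prefix criterion)."""
    clusters = defaultdict(list)
    for name in names:
        clusters[name[:k].lower()].append(name)
    return dict(clusters)

clusters = cluster_by_prefix(villages)
# e.g. clusters["suka"] -> ["Sukamaju", "Sukasari", "Sukajadi"]
```

The same grouping can be repeated per province to build the name sets whose intersections the study's theorems operate on.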
Village names in adjacent provinces show a high degree of similarity and influence one another. The cultures of Central Java, East Java, and West Java-Banten show high mutual similarity, whereas Jakarta and Yogyakarta show low similarity. This research yields the cultural character of communities as expressed in the meanings of village names on the island of Java; this character is expected to serve as a guide to the everyday behavior of the people of Java.
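The degree of similarity between the village-name sets of two provinces can be quantified, for example, with a Jaccard index over their name (or prefix) sets; the sets below are hypothetical toy data, not the study's actual measurements:

```python
def jaccard(a, b):
    """Jaccard similarity |A & B| / |A | B| between two sets of names."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

central_java = {"Sidomulyo", "Sukorejo", "Karangrejo", "Banyumas"}
east_java    = {"Sidomulyo", "Sukorejo", "Karangrejo", "Sidoarjo"}
jakarta      = {"Menteng", "Kemayoran", "Sukorejo"}

sim_adjacent = jaccard(central_java, east_java)  # adjacent provinces: high overlap
sim_distant  = jaccard(central_java, jakarta)    # low overlap
```

With such a score, "high similarity between adjacent provinces" becomes a checkable numerical statement rather than a qualitative one.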
Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics
Within the past decade, using Convolutional Neural Networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in removing the communication barrier faced by deaf people. Conventional research on this subject has concentrated on training a network to recognize the fingerspelling gestures of a given language and produce the corresponding alphanumerics. One problem with the technology currently under development is that training images are scarce, with little variation in the gestures presented to the recognition program, and are often skewed towards particular skin tones and hand sizes, which makes the fingerspelling of a percentage of the population harder to detect. In addition, current gesture detection programs are trained on only one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this limits the traditional exploitation of current technologies such as CNNs, owing to the large number of parameters they require. This work presents an approach that resolves this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method used to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language, and Chinese Sign Language (Pinyin). Applying recent results on measure concentration, namely the stochastic separation theorem, the AI system is modeled as an operator mapping an input in a set of images, u ∈ U, to an output in a set of predicted class labels, q ∈ Q, indicating the alphanumeric that q represents and the language it comes from.
These inputs and outputs, along with the internal variables z ∈ Z representing the system's current state, imply a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xᵢ are i.i.d. vectors drawn from a product measure, over a period of time the AI generates a large set of measurements xᵢ, called S, which are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can be applied by centering S and Y, subtracting their means. The data are then regularized by applying the Kaiser rule to the resulting eigenmatrix, whitened, and split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. The result is a self-correcting recognition process that can identify fingerspelling from a variety of sign languages and successfully determine both the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
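The corrector pipeline described above (centering, Kaiser-rule truncation, whitening, and a separating hyperplane) can be sketched in a few lines of NumPy. Everything here is a stand-in: the data are synthetic, and a single mean-difference discriminant replaces the per-cluster hyperplanes of the actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the network's internal measurements:
# S = all measurement vectors x_i, Y = the subset behind wrong predictions.
S = rng.normal(size=(500, 20))
Y = rng.normal(loc=0.8, size=(40, 20))   # errors shifted away from the bulk

# 1. Centre by subtracting the mean of S.
mu = S.mean(axis=0)
Sc, Yc = S - mu, Y - mu

# 2. Kaiser-style rule: keep principal components whose eigenvalue
#    exceeds the mean eigenvalue of the covariance of S.
eigvals, eigvecs = np.linalg.eigh(np.cov(Sc, rowvar=False))
keep = eigvals > eigvals.mean()

# 3. Whiten the retained components.
W = eigvecs[:, keep] / np.sqrt(eigvals[keep])
Sw, Yw = Sc @ W, Yc @ W

# 4. One separating hyperplane: a mean-difference discriminant standing in
#    for the method's one-hyperplane-per-error-cluster construction.
w = Yw.mean(axis=0) - Sw.mean(axis=0)
w /= np.linalg.norm(w)
thr = 0.5 * ((Sw @ w).mean() + (Yw @ w).mean())

def flags_error(x):
    """Report x as a likely error if it lies on the error side of the hyperplane."""
    return float(((x - mu) @ W) @ w) > thr

caught = sum(flags_error(y) for y in Y)
```

Because the separating functional is linear, applying the corrector at inference time costs only one projection per sample, which is what makes uptraining the legacy network cheap.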
Case-Based Reasoning for Modelling Random Variables in the Reliability Assessment of Existing Structures
The reliability assessment of existing structures with probabilistic methods is becoming an increasingly important and frequent engineering task. However, probabilistic reliability methods rest on an exhaustive knowledge of the stochastic modeling of the variables involved in the assessment; at the moment, standards for the modeling of these variables are absent, which is an obstacle to the dissemination of probabilistic methods. The framework in which probability distribution functions (PDFs) are established is Bayesian statistics, which uses Bayes' theorem: a prior PDF for the parameter of interest is established from information derived from the design stage and from qualitative judgments based on the engineer's past experience; the prior model is then updated with the results of investigations carried out on the structure, such as material testing and the determination of actions and structural properties. The application of Bayesian statistics raises two kinds of problems: 1. the results of the updating depend on the engineer's previous experience; 2. the prior PDF can be updated only if the structure has been tested and quantitative data amenable to statistical manipulation have been collected; performing tests is always an expensive and time-consuming operation, and if the structure in question is an ancient building, destructive tests could compromise its cultural value and should therefore be avoided. To address these problems, an interesting research path is to investigate Artificial Intelligence (AI) techniques that can automate the modeling of variables and update material parameters without performing destructive tests. Among these, the technique most relevant to the object of this study is Case-Based Reasoning (CBR).
In this application, cases are existing buildings on which material tests have already been carried out and for which updated PDFs of the material mechanical parameters have been computed through a Bayesian analysis. Each case is thus composed of a qualitative description of the material under assessment and the posterior PDFs that describe its material properties. The problem to be solved is the definition of PDFs for the material parameters involved in the reliability assessment of the structure under consideration. A CBR system is a good candidate for automating the modeling of variables because: 1. engineers already estimate material properties from experience collected during the assessment of similar structures, or from similar cases collected in the literature or in databases; 2. material tests carried out on structures can easily be collected from laboratory databases or from the literature; 3. the system provides the user with a reliable probabilistic description of the variables involved in the assessment, which also serves as a tool in support of the engineer's qualitative judgments. Automated modeling of variables can help to spread the probabilistic reliability assessment of existing buildings in common engineering practice and to target the best interventions and further tests on the structure; CBR is a technique that may help to achieve this.
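The Bayesian updating step that produces the posterior PDFs stored in each case can be sketched with a standard conjugate normal-normal model for, say, a mean material strength; all numbers below are hypothetical:

```python
import numpy as np

# Conjugate normal-normal update for a mean material strength (MPa).
# Prior from design documents and expert judgment (hypothetical values):
mu0, sigma0 = 30.0, 5.0        # prior mean and prior std of the mean strength
sigma = 4.0                    # assumed known scatter of individual tests

tests = np.array([27.5, 29.0, 31.2, 28.4])   # hypothetical core test results
n = len(tests)

# Posterior for the mean strength (standard conjugate formulas):
prec_post = 1 / sigma0**2 + n / sigma**2     # posterior precision
mu_post = (mu0 / sigma0**2 + tests.sum() / sigma**2) / prec_post
sigma_post = prec_post ** -0.5
```

The posterior standard deviation is necessarily smaller than the prior one, which is precisely the reduction in uncertainty that a retrieved CBR case makes available without retesting the structure at hand.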