Performing Diagnosis in Building with Partially Valid Heterogeneous Tests
Building systems are highly vulnerable to many kinds of faults and to human misbehavior. Abnormalities in building operation directly degrade energy efficiency and user comfort. Available fault diagnosis tools and methodologies rely mainly on rules or on pure model-based approaches, and they assume that a model- or rule-based test can be applied in any situation, without taking the actual testing context into account. Contextual tests with a validity domain can greatly simplify the design of detection tests. The main objective of this paper is to take test validity into account when validating the test model, considering non-modeled events such as occupancy, weather conditions, and door and window openings, and integrating expert knowledge about the state of the system. The concept of heterogeneous tests is combined with test validity to generate fault diagnoses. A combination of rule-based, range-based and model-based tests, known as heterogeneous tests, is proposed to reduce modeling complexity. The computation of logical diagnoses, a technique from artificial intelligence, provides a global explanation consistent with the test results. An application example, an office setting at the Grenoble Institute of Technology, shows the efficiency of the proposed technique.
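To make the idea of a partially valid test concrete, here is a minimal Python sketch of a contextual test that is evaluated only when the current context lies inside its validity domain; the class layout, context attributes and thresholds are illustrative assumptions rather than the paper's implementation.

```python
from dataclasses import dataclass
from typing import Callable, Dict

Context = Dict[str, float]  # e.g. occupancy, window state, room temperature

@dataclass
class HeterogeneousTest:
    name: str
    is_valid: Callable[[Context], bool]   # validity domain of the test
    passes: Callable[[Context], bool]     # rule-, range- or model-based check

# Hypothetical range-based test: only valid when the room is unoccupied and
# the windows are closed, so the underlying thermal assumptions hold.
temp_test = HeterogeneousTest(
    name="room_temperature_range",
    is_valid=lambda c: c["occupancy"] == 0 and c["window_open"] == 0,
    passes=lambda c: 15.0 <= c["room_temp"] <= 30.0,
)

def run_tests(tests, context):
    """Evaluate only the tests whose validity domain covers the context."""
    results = {}
    for t in tests:
        if t.is_valid(context):
            results[t.name] = t.passes(context)
    return results  # failed tests feed the logical diagnosis computation

print(run_tests([temp_test], {"occupancy": 0, "window_open": 0, "room_temp": 35.0}))
```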
Adversarial Disentanglement Using Latent Classifier for Pose-Independent Representation
Large pose discrepancy is one of the critical challenges in face recognition for video surveillance. Because pose attributes are entangled with identity information, conventional approaches to pose-independent representation fail to deliver quality results when recognizing faces with large pose variation. In this paper, we propose a practical approach that disentangles the pose attribute from the identity information and then synthesizes a face using a classifier network in latent space. The proposed approach employs a modified generative adversarial network framework consisting of an encoder-decoder structure embedded with a classifier in manifold space that factorizes the latent encoding. It can be further generalized to other face and non-face attributes for real-life video frames containing faces with significant attribute variations. Experimental results and comparison with the state of the art show that the learned representation of the proposed approach synthesizes more compelling perceptual images through a combination of adversarial and classification losses.
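As a rough illustration of factorizing a latent code with a classifier, the following PyTorch sketch splits the encoding into identity and pose parts and penalizes the identity part for carrying pose information; the layer sizes, the latent split and the loss weight are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    def __init__(self, img_dim=1024, id_dim=96, pose_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(img_dim, 256), nn.ReLU(),
                                 nn.Linear(256, id_dim + pose_dim))
        self.dec = nn.Sequential(nn.Linear(id_dim + pose_dim, 256), nn.ReLU(),
                                 nn.Linear(256, img_dim))
        self.id_dim = id_dim

    def forward(self, x):
        z = self.enc(x)
        z_id, z_pose = z[:, :self.id_dim], z[:, self.id_dim:]
        return self.dec(z), z_id, z_pose

# Classifier in latent space: predicts pose from the *identity* part.
# Training the encoder to fool it pushes pose information out of z_id.
pose_clf = nn.Linear(96, 9)          # 9 discrete pose bins (an assumption)
model = EncoderDecoder()
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()

x = torch.randn(8, 1024)             # stand-in for face image features
pose_labels = torch.randint(0, 9, (8,))

recon, z_id, z_pose = model(x)
adv_loss = -ce(pose_clf(z_id), pose_labels)   # encoder maximizes classifier error
loss = mse(recon, x) + 0.1 * adv_loss         # 0.1 is an illustrative weight
loss.backward()                               # in practice, classifier and
                                              # encoder are updated alternately
```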
Machine Learning for Aiding Meningitis Diagnosis in Pediatric Patients
This paper presents a Machine Learning (ML) approach to support meningitis diagnosis in patients at a children's hospital in Sao Paulo, Brazil. The aim is to use ML techniques to reduce, as much as possible, the use of invasive procedures such as cerebrospinal fluid (CSF) collection. In this study, we focus on predicting the probability of meningitis given the results of blood and urine laboratory tests, together with the analysis of pain and other complaints reported by the patient. We tested a number of ML algorithms, including Adaptive Boosting (AdaBoost), Decision Tree, Gradient Boosting, K-Nearest Neighbors (KNN), Logistic Regression, Random Forest and Support Vector Machines (SVM). The Decision Tree algorithm performed best, with 94.56% and 96.18% accuracy on training and testing data, respectively. These results can significantly help doctors diagnose meningitis as early as possible and prevent expensive and painful procedures for some children.
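The model comparison described above can be sketched in a few lines of scikit-learn; the features and labels below are synthetic placeholders, not the hospital's data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in: rows = patients, columns = blood/urine test values
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))
y = (X[:, 0] + X[:, 3] > 0.5).astype(int)   # placeholder "meningitis" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for clf in (DecisionTreeClassifier(max_depth=5),
            AdaBoostClassifier(),
            RandomForestClassifier()):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__,
          f"train={accuracy_score(y_tr, clf.predict(X_tr)):.3f}",
          f"test={accuracy_score(y_te, clf.predict(X_te)):.3f}")
```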
From Electroencephalogram to Epileptic Seizures Detection by Using Artificial Neural Networks
Seizures are the main factor affecting the quality of life of epileptic patients. The diagnosis of epilepsy, and hence the identification of the epileptogenic zone, is commonly made using continuous Electroencephalogram (EEG) monitoring. Seizure identification on EEG signals is performed manually by epileptologists, a process that is usually very long and error prone. The aim of this paper is to describe an automated method able to detect seizures in EEG signals, using the knowledge discovery in databases process and data mining methods and algorithms, which can support physicians during seizure detection. Our detection method is based on an Artificial Neural Network classifier, trained with the multilayer perceptron algorithm, and on a software application, called Training Builder, developed for the massive extraction of features from EEG signals. This tool covers all the data preparation steps, ranging from signal processing to data analysis techniques, including the sliding window paradigm, dimensionality reduction algorithms, information theory, and feature selection measures. The final model shows excellent performance, reaching an accuracy of over 99% in tests on data of a single patient retrieved from a publicly available EEG dataset.
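A compressed sketch of the sliding-window-plus-MLP pipeline is shown below; the window length, the per-window statistics and the synthetic signal are illustrative assumptions, not Training Builder's feature set.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def windows(signal, width=256, step=128):
    """Slide a fixed-width window over a 1-D EEG channel."""
    return np.stack([signal[i:i + width]
                     for i in range(0, len(signal) - width, step)])

def features(w):
    # Simple per-window statistics as stand-in features
    return np.column_stack([w.mean(1), w.std(1),
                            np.abs(np.diff(w, axis=1)).mean(1)])

rng = np.random.default_rng(1)
eeg = rng.normal(size=100_000)             # placeholder EEG channel
X = features(windows(eeg))
y = rng.integers(0, 2, size=len(X))        # placeholder seizure/non-seizure labels

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300).fit(X, y)
print("training accuracy:", clf.score(X, y))
```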
The Design of Multiple Detection Parallel Combined Spread Spectrum Communication System
Much essential work takes place underground, such as mining, tunnel construction and subway operation. Once accidents occur in these places, the interruption of traditional wired communication hinders rescue work. In order to provide positioning, early warning and command functions for underground personnel and improve rescue efficiency, it is necessary to develop and design an emergency communication system for underground environments. Conventional underground communication is easily affected by narrowband interference, and spread spectrum communication can address this problem. However, general spread spectrum methods such as direct-sequence spread spectrum are spectrally inefficient, so parallel combined spread spectrum (PCSS) communication is proposed to improve efficiency. PCSS communication not only offers the anti-interference ability and good concealment of traditional spread spectrum systems, but also provides relatively high frequency-band utilization and strong information transmission capability, so the technology has been widely used in practice. This paper presents a PCSS communication model: the multiple detection parallel combined spread spectrum (MDPCSS) communication system. The principle of the MDPCSS communication system is described: the sequence at the transmitting end is processed in blocks and cyclically shifted to facilitate multiple detection at the receiving end. Block diagrams of the transmitter and receiver of the MDPCSS system are introduced, together with the calculation formula for the system bit error rate (BER), and simulation and analysis of the system BER are carried out. Comparison with conventional parallel PCSS communication shows that the proposed system indeed reduces the BER and improves system performance. Furthermore, the influence of different pseudo-code lengths on the system BER is simulated and analyzed, and the conclusion is that the longer the pseudo-code, the lower the system error rate.
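A toy numpy sketch of the cyclic-shift idea behind MDPCSS follows: a block of data bits selects a cyclic shift of a PN sequence, and the receiver recovers the shift by correlating against all candidates. The sequence length, block size and random PN code are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
pn = rng.choice([-1, 1], size=64)           # stand-in PN sequence (±1 chips)

def transmit(block_bits):
    """Encode a block of bits as a cyclic shift of the PN sequence."""
    shift = int("".join(map(str, block_bits)), 2)   # block value -> shift index
    return np.roll(pn, shift)

def detect(rx):
    """Correlate against every cyclic shift and pick the strongest."""
    corr = [np.dot(rx, np.roll(pn, s)) for s in range(len(pn))]
    return int(np.argmax(corr))

bits = [0, 1, 1, 0, 1, 0]                   # 6 bits -> shift in [0, 63]
tx = transmit(bits)
noisy = tx + rng.normal(scale=0.8, size=tx.size)
print("sent shift:", int("".join(map(str, bits)), 2), "detected:", detect(noisy))
```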
Pre-Operative Tool for Facial-Post-Surgical Estimation and Detection
Goal: The purpose of this project was to predict the outcome of plastic surgery from patients' pre-operative images and to display this prediction on a screen, allowing comparison between the current appearance and the expected post-surgical appearance. Methods: To this aim, we implemented software that uses data collected from the internet covering facial skin diseases, skin burns, and pre- and post-operative images of plastic surgeries; the post-surgical prediction is then made using the K-nearest neighbors (KNN) algorithm. We also designed and fabricated a smart mirror divided into two parts, a screen and a reflective mirror, so that the patient's pre- and post-operative appearance can be shown at the same time. Results: We worked on skin conditions such as vitiligo, skin burns and wrinkles. We classified the three degrees of burns using the KNN classifier with 60% accuracy, and we succeeded in segmenting the vitiligo-affected area. Our future work will include covering more skin diseases, classifying them and predicting the post-surgical look, as well as going deeper into facial deformities and plastic surgeries such as nose reshaping and face slimming. Conclusion: Our project gives a prediction that relates strongly to the real post-surgical look and reduces diagnostic disagreement among doctors. Significance: The mirror may have broad societal appeal, as it narrows the gap between patient satisfaction and medical standards.
Deep Learning Based Fall Detection Using Simplified Human Posture
Falls are one of the major causes of injury and death among elderly people aged 65 and above. A support system that identifies such abnormal activities has become extremely important as the population ages. Pose estimation is a challenging task, and it is even more challenging for the atypical poses that may occur during a fall. The location of the body provides a clue to where the person is at the time of the fall. This paper presents a vision-based tracking strategy in which the available joints are grouped into three feature points according to the body section in which they are located. The three feature points, derived from different joint combinations, represent the upper (head) region, the mid (torso) region, and the lower (leg) region. Tracking is always challenging when motion is involved; hence, the idea is to locate these body regions in every frame and use them as the tracking strategy. Grouping the joints in this way yields a stable region for tracking, and the location of the body parts provides crucial information for distinguishing normal activities from falls.
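A minimal numpy sketch of grouping joints into three region feature points might look as follows; the 17-keypoint layout and the index groups are illustrative assumptions, since the paper does not fix a particular pose model here.

```python
import numpy as np

# Hypothetical joint indices for a 17-keypoint skeleton (a COCO-style
# ordering is assumed; the actual pose model may differ).
HEAD, TORSO, LEGS = [0, 1, 2, 3, 4], [5, 6, 11, 12], [13, 14, 15, 16]

def region_points(joints):
    """Collapse per-joint (x, y) estimates into three region feature points."""
    return {name: joints[idx].mean(axis=0)
            for name, idx in (("head", HEAD), ("torso", TORSO), ("legs", LEGS))}

frame_joints = np.random.rand(17, 2)      # stand-in pose estimate for one frame
print(region_points(frame_joints))        # three stable points to track per frame
```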
Context Aware Anomaly Behavior Analysis for Smart Home Systems
The Internet of Things (IoT) will lead to the development of advanced Smart Home services that are pervasive, cost-effective, and accessible by home occupants from anywhere and at any time. However, advanced smart home applications will introduce major security challenges due to the increased attack surface. Current approaches do not handle cybersecurity from a holistic point of view; hence, a systematic cybersecurity mechanism needs to be adopted when designing smart home applications. In this paper, we present a generic intrusion detection methodology to detect and mitigate anomalous behaviors occurring in Smart Home Systems (SHS). Using our Smart Home Context Data Structure, the heterogeneous information and services acquired from the SHS are mapped into context attributes that describe the context of smart home operation precisely and accurately. Runtime models for describing the usage patterns of home assets are developed based on characterization functions. Finally, a threat-aware action management methodology is proposed to efficiently mitigate anomalous behaviors. Our preliminary experimental results show that the methodology can be used to detect and mitigate known and unknown threats, as well as to protect SHS premises and services.
Labview-Based System for Fiber Links Events Detection
With the rapid development of modern communication, real-time diagnosis of fiber-optic quality and faults has attracted wide attention. In this paper, a Labview-based system is proposed for fiber-optic fault detection. The wavelet threshold denoising method combined with Empirical Mode Decomposition (EMD) is applied to denoise the optical time domain reflectometer (OTDR) signal, and a method based on the Gabor representation is then used to detect events. Experimental measurements show that the signal-to-noise ratio (SNR) of the OTDR signal is improved by 1.34 dB on average compared with the wavelet threshold denoising method alone. The proposed system scores highly in event detection capability and accuracy, and its maximum detectable fiber length is 65 km.
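The wavelet threshold denoising stage, without the EMD combination, can be sketched with PyWavelets as follows; the wavelet choice, decomposition level and synthetic trace are assumptions.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold the detail coefficients (universal threshold)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # noise level estimate
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(signal)]

rng = np.random.default_rng(3)
trace = np.exp(-np.linspace(0, 5, 4096)) + rng.normal(scale=0.05, size=4096)
clean = wavelet_denoise(trace)   # stand-in for one noisy OTDR trace
```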
Use of Hierarchical Temporal Memory Algorithm in Heart Attack Detection
In order to reduce the number of deaths due to heart problems, we propose the use of the Hierarchical Temporal Memory (HTM) algorithm, a real-time anomaly detection algorithm. HTM is a cortical learning algorithm modeled on the neocortex; in other words, it is based on a conceptual theory of how the human brain works. It is powerful for predicting unusual patterns, anomaly detection and classification. In this paper, HTM has been implemented and tested on ECG datasets in order to detect cardiac anomalies. Experiments showed good performance in terms of specificity, sensitivity and execution time.
Multiphase Flow Regime Detection Algorithm for Gas-Liquid Interface Using Ultrasonic Pulse-Echo Technique
The efficiency of the cooling process for cryogenic propellant boiling in engine cooling channels for space applications is strongly affected by the phase change that occurs during boiling. The effectiveness of the cooling process depends strongly on the type of boiling regime, such as nucleate or film boiling. Geometric constraints, such as a non-transparent cooling channel, rule out visualization methods. The ultrasonic (US) technique, a non-destructive testing (NDT) method, has therefore been applied in almost every engineering field for different purposes. Discontinuities emerge between media, such as the boundaries between different phases. The sound wave emitted by the US transducer is both transmitted and reflected at a gas-liquid interface, which makes it possible to detect different phases. Due to thermal and structural concerns, it is impractical to sustain direct contact between the US transducer and the working fluid. Hence, the transducer must be located outside the cooling channel, which introduces additional interfaces and creates ambiguity about the applicability of the method. In this work, exploratory research is conducted to determine the detection ability and applicability of the US technique for the cryogenic boiling process in a cooling cycle, with the US transducer placed outside the channel. Cryogenic boiling is a complex phenomenon whose thermal properties pose several hindrances for the experimental protocol, so substitute materials were purposefully selected on the basis of these parameters to simplify the experiments. In addition, instead of conducting a real-time boiling process, the nucleate and film boiling regimes emerging during boiling are simply simulated using non-deformable stainless steel balls, air-bubble injection apparatus and air clearances. A versatile detection algorithm was then developed on the basis of these exploratory studies. With the developed algorithm, the phases can be distinguished with 99% accuracy as no-phase, air-bubble, and air-film presence. The results demonstrate the detection ability and applicability of the US technique for this exploratory purpose.
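As a hedged illustration of detecting interfaces in a pulse-echo trace, the following scipy sketch extracts the echo envelope and its arrival times from a synthetic signal; the sampling rate, pulse shape and thresholds are assumptions, not the experimental configuration.

```python
import numpy as np
from scipy.signal import hilbert, find_peaks

fs = 10e6                                     # assumed 10 MHz sampling rate
t = np.arange(0, 2e-4, 1 / fs)
rng = np.random.default_rng(4)

# Synthetic pulse-echo trace: two reflections (channel wall, gas-liquid interface)
def echo(t0):
    return np.exp(-((t - t0) * 2e5) ** 2) * np.sin(2 * np.pi * 1e6 * t)

trace = echo(3e-5) + 0.5 * echo(1.2e-4) + rng.normal(scale=0.02, size=t.size)

envelope = np.abs(hilbert(trace))             # demodulate to an echo envelope
peaks, _ = find_peaks(envelope, height=0.2, distance=int(2e-5 * fs))
print("echo arrival times (s):", t[peaks])    # extra echoes hint at a new phase
```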
Improved Feature Extraction Technique for Handling Occlusion in Automatic Facial Expression Recognition
The field of automatic facial expression analysis has been an active research area in the last two decades. Its vast applicability in various domains has drawn much attention to developing techniques and datasets that mirror real-life scenarios. Many techniques, such as Local Binary Patterns and their variants (CLBP, LBP-TOP) and, lately, deep learning, have been used for facial expression recognition. However, the problem of occlusion has not been sufficiently handled, making the results inapplicable in real-life situations. This paper develops a simple yet highly efficient method, tagged Local Binary Pattern-Histogram of Gradients (LBP-HOG), with occlusion detection in face images, using a multi-class SVM for Action Unit and, in turn, expression recognition. Our method was evaluated on three publicly available datasets: JAFFE, CK and SFEW. Experimental results showed that our approach performs considerably well compared with state-of-the-art algorithms and gives insight into occlusion detection as a key step toward handling expressions in the wild.
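The LBP-HOG feature combination can be sketched with scikit-image as follows; the LBP parameters, HOG cell sizes and random face image are illustrative assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern, hog

def lbp_hog_features(gray):
    """Concatenate an LBP histogram with HOG descriptors for one face image."""
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    hog_vec = hog(gray, orientations=8, pixels_per_cell=(16, 16),
                  cells_per_block=(1, 1))
    return np.concatenate([lbp_hist, hog_vec])

face = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in face crop
print(lbp_hog_features(face).shape)   # this vector feeds a multi-class SVM
```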
Emotions in Health Tweets: Analysis of American Government Official Accounts
Government Departments of Health have the task of informing and educating citizens about public health issues. For this, they use channels like Twitter, a key channel in the search for health information and the propagation of content. Tweets, important to the virality of content, may contain emotions that influence the contagion and exchange of knowledge. The goal of this study is to analyze the emotional projection of health information shared on Twitter by official American accounts: the disease control account CDCgov, the National Institutes of Health account NIH, the government agency account HHSGov, and the professional organization account PublicHealth. For this, we used Tone Analyzer, an International Business Machines Corporation (IBM) tool specialized in emotion detection in text, corresponding to the categorical model of emotion representation. For 15 days, all tweets from these accounts were analyzed with this text emotion analysis tool. The results showed that the tweets carry a substantial emotional load, a determining factor in the success of their communications, which demonstrates that official accounts also use subjective language and convey emotions. The predominance of joy over sadness and the strong presence of emotions in their tweets stimulate the virality of content, key to the informational mission of government health departments.
Evaluation of Four Different DNA Targets in Polymerase Chain Reaction for Detection and Genotyping of Helicobacter pylori
Polymerase chain reaction (PCR) assays targeting genomic DNA segments have been established for the detection of Helicobacter pylori in clinical specimens. However, data from comparative evaluations of various targets for the detection of H. pylori are limited. Furthermore, the frequencies of the vacA (s1 and s2) and cagA genotypes, which are suggested to be involved in the pathogenesis of H. pylori in other parts of the world, are not well studied in Kuwait. The aim of this study was to evaluate PCR assays for the detection and genotyping of H. pylori by targeting the amplification of DNA targets from four genomic segments. Genomic DNA was isolated from 72 clinical isolates of H. pylori and tested by PCR with four pairs of oligonucleotide primers, i.e., ECH-U/ECH-L, ET-5U/ET-5L, CagAF/CagAR and Vac1F/Vac1XR, which were expected to amplify targets of various sizes (471 bp, 230 bp, 183 bp and 176/203 bp, respectively) from the genomic DNA of H. pylori. The PCR-amplified DNA was analyzed by agarose gel electrophoresis. PCR products of the expected sizes were obtained with all primer pairs using genomic DNA isolated from H. pylori. DNA dilution experiments showed that the most sensitive PCR target was the 471 bp DNA amplified by the primers ECH-U/ECH-L, followed by the targets of Vac1F/Vac1XR (176/203 bp DNA), CagAF/CagAR (183 bp DNA) and ET-5U/ET-5L (230 bp DNA). However, when tested with undiluted genomic DNA isolated from single colonies of all isolates, the Vac1F/Vac1XR target provided the maximum positive results (71/72, 99% positive), followed by ECH-U/ECH-L (69/72, 93% positive), ET-5U/ET-5L (51/72, 71% positive) and CagAF/CagAR (26/72, 46% positive). The genotyping experiments showed that the vacA s1 (46% positive) and vacA s2 (54% positive) genotypes were almost equally associated with vacA+/cagA- isolates (P > 0.05), but among vacA+/cagA+ isolates, the s1 genotype (92% positive) was detected significantly more frequently than the s2 genotype (8% positive) (P < 0.0001). In conclusion, among the primer pairs tested, Vac1F/Vac1XR provided the best results for the detection of H. pylori. The genotyping experiments showed that the vacA s1 and vacA s2 genotypes were almost equally associated with vacA+/cagA- isolates, but the vacA s1 genotype had a significantly stronger association with vacA+/cagA+ isolates.
Fault-Tolerant Control Study and Classification: Case Study of a Hydraulic-Press Model Simulated in Real-Time
Society demands more reliable manufacturing processes capable of producing high-quality products in shorter production cycles. New control algorithms have been studied to satisfy this paradigm, and Fault-Tolerant Control (FTC) plays a significant role among them: it is suited to detecting and isolating faults and adapting a system when a harmful or faulty situation appears. In this paper, a general overview of FTC characteristics is presented, highlighting the properties a system must ensure to be considered faultless. In addition, the main FTC techniques are identified and classified by their characteristics into two main groups: Active Fault-Tolerant Controllers (AFTCs) and Passive Fault-Tolerant Controllers (PFTCs). AFTC encompasses the techniques capable of re-configuring the process control algorithm after the fault has been detected, while PFTC comprises algorithms robust enough to bypass the fault without further modification. The re-configuration requires two stages: one focused on the detection, isolation and identification of the fault source, and the other in charge of re-designing the control algorithm through two approaches, fault accommodation and control re-design. From the algorithms studied, one has been selected and applied to a case study based on an industrial hydraulic press. The developed model has been embedded in a real-time validation platform, which allows the FTC algorithms to be tested and the system response to a fault to be analyzed under conditions similar to those a machine experiences on the factory floor. One AFTC approach has been selected as the methodology the system follows in the fault recovery process. First, the fault is detected, isolated and identified by means of a neural network; then, the control algorithm is re-configured to overcome the fault and continue working without human interaction.
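The two-stage recovery process can be caricatured in a few lines of Python; the fault classes, residual features and controller gains below are invented placeholders, not the case-study values.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

FAULTS = ["none", "leak", "sensor_bias"]          # assumed fault classes

# Stage 1: a neural network maps residual features to a fault class.
rng = np.random.default_rng(5)
residuals = rng.normal(size=(300, 4))             # placeholder residual signals
labels = rng.integers(0, 3, size=300)
fdi = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(residuals, labels)

# Stage 2: reconfigure the controller according to the identified fault.
CONTROLLER_GAINS = {"none": 1.0, "leak": 1.4, "sensor_bias": 0.8}  # assumed

def reconfigure(residual_features):
    fault = FAULTS[int(fdi.predict([residual_features])[0])]
    return fault, CONTROLLER_GAINS[fault]

print(reconfigure(rng.normal(size=4)))
```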
Early Recognition and Grading of Cataract Using a Combined Log Gabor/Discrete Wavelet Transform with ANN and SVM
Eyes are considered the most sensitive and important organ of the human body, so any eye disorder affects the patient in all aspects of life. Cataract is one of the eye disorders that lead to blindness if not treated correctly and quickly. This paper presents a model for the automatic detection, classification, and grading of cataracts based on image processing techniques and artificial intelligence. The proposed system is developed to ease the cataract diagnosis process for both ophthalmologists and patients. The wavelet transform combined with the 2D Log Gabor wavelet transform was used for feature extraction on a dataset of 120 eye images, followed by a classification process that assigned the images to three classes: normal, early stage, and advanced stage. A comparison between the two classifiers, the support vector machine (SVM) and the artificial neural network (ANN), was carried out on the same dataset of 120 eye images. SVM gave better results than ANN: the SVM achieved 96.8% accuracy, whereas the ANN achieved 92.3%.
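The DWT feature extraction followed by SVM classification can be sketched as follows; the Log Gabor stage is omitted, and the sub-band statistics, wavelet and placeholder images are assumptions.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def dwt_features(gray):
    """Summary statistics of 2-D DWT sub-bands as a simple feature vector."""
    cA, (cH, cV, cD) = pywt.dwt2(gray, "db2")
    return np.array([c.mean() for c in (cA, cH, cV, cD)] +
                    [c.std() for c in (cA, cH, cV, cD)])

rng = np.random.default_rng(6)
images = rng.random((30, 64, 64))               # placeholder eye images
X = np.stack([dwt_features(im) for im in images])
y = rng.integers(0, 3, size=30)                 # normal / early / advanced
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```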
Detection of Keypoint in Press-Fit Curve Based on Convolutional Neural Network
The quality of press-fit assembly is closely related to the reliability and safety of the product. This paper proposes a keypoint detection method based on a convolutional neural network (CNN) to improve the accuracy of keypoint detection in press-fit curves, providing an auxiliary basis for judging press-fit assembly quality. The press-fit curve relates press-fit force to displacement, and both force and displacement are time-series data; therefore, a one-dimensional CNN is used to process the curve. After the acquired press-fit data are filtered, a multi-layer one-dimensional CNN automatically learns press-fit curve features, which are then fed to a multi-layer perceptron that outputs the keypoint of the curve. We trained the CNN model with data from press-fit assembly equipment in the actual production process and evaluated detection performance on different data from the same equipment. Compared with existing research results, detection performance was significantly improved. This method can provide a reliable basis for judging press-fit quality.
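A minimal PyTorch sketch of a 1-D CNN that regresses a keypoint position from a force-displacement curve is given below; the layer sizes, curve length and synthetic data are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(2, 16, kernel_size=7, padding=3), nn.ReLU(),   # 2 channels: force, displacement
    nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(16),
    nn.Flatten(),
    nn.Linear(32 * 16, 64), nn.ReLU(),                       # MLP head
    nn.Linear(64, 1),                                        # keypoint position
)

curves = torch.randn(4, 2, 512)          # batch of placeholder curves (512 samples)
targets = torch.rand(4, 1) * 512         # placeholder keypoint indices
loss = nn.MSELoss()(model(curves), targets)
loss.backward()
```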
Implementation of Edge Detection for Autofluorescence Endoscopic Images Based on Field Programmable Gate Array
Autofluorescence Imaging (AFI) is a technology developed in recent years for detecting early carcinogenesis of the gastrointestinal tract. Compared with traditional white light endoscopy (WLE), this technology greatly improves the detection accuracy of early carcinogenesis, because the colors of normal tissues differ from those of cancerous tissues; thus, edge detection can distinguish them in grayscale images. In this paper, the traditional Sobel edge detection method has been optimized for the gastrointestinal environment, including adaptive thresholding and morphological processing. All of the processing is implemented on our self-designed system based on the OV6930 image sensor and a Field Programmable Gate Array (FPGA). The system can capture the gastrointestinal image taken by the lens in real time and detect edges. Final experiments verified the feasibility of our system and the effectiveness and accuracy of the edge detection algorithm.
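A software prototype of the pipeline (Sobel gradient, adaptive threshold, morphological cleanup) can be sketched with OpenCV; kernel sizes and the Otsu threshold choice are assumptions, and the FPGA implementation naturally differs.

```python
import cv2
import numpy as np

gray = np.random.randint(0, 256, (288, 400), dtype=np.uint8)  # stand-in frame

# Sobel gradients in x and y, combined into an approximate magnitude
gx = cv2.Sobel(gray, cv2.CV_16S, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_16S, 0, 1, ksize=3)
magnitude = cv2.addWeighted(cv2.convertScaleAbs(gx), 0.5,
                            cv2.convertScaleAbs(gy), 0.5, 0)

# Adaptive (Otsu) threshold instead of a fixed one
_, edges = cv2.threshold(magnitude, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Morphological opening removes isolated noise pixels
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
edges = cv2.morphologyEx(edges, cv2.MORPH_OPEN, kernel)
```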
A Procedure for Post-Earthquake Damage Estimation Based on Detection of High-Frequency Transients
In the current research, structural health monitoring is considered for addressing the critical issue of post-earthquake damage detection. A non-standard approach to damage detection via acoustic emission is presented: acoustic emissions are monitored in the low frequency range (up to 120 Hz). Such emissions are termed high-frequency transients. Further, a damage indicator, defined as the Time-Ratio Damage Indicator, is introduced. The indicator relies on time-instance measurements of damage initiation and deformation peaks. Based on these time-instance measurements, a procedure for estimating the maximum drift ratio is proposed. Monitoring data from a shaking-table test of a full-scale reinforced concrete bridge pier are used. Damage to the experimental column is successfully detected, and the proposed damage indicator is calculated.
An Elaborate Survey on Node Replication Attack in Static Wireless Sensor Networks
Recent technological innovations have led to the use of wireless sensor networks in various applications; such networks consist of a number of tiny, low-cost, non-tamper-proof, resource-constrained sensor nodes. These nodes are often distributed and deployed in unattended environments, where they collaborate with each other to share data or information. Among the various applications, wireless sensor networks play a major role in battlefield monitoring for the military. Because these non-tamper-proof nodes are deployed in unattended locations, they are vulnerable to many security attacks, among which the node replication attack is one of the most threatening to network users. A node replication attack is mounted by an attacker who captures one genuine node, duplicates its original credentials and cryptographic material, makes one or more copies of the captured node, and places them at key positions in the network to monitor or disrupt its operations. Preventing such node replication attacks in a network is a challenging task. In this survey article, we provide a classification of detection schemes and explore the various schemes proposed in each category. We also compare the detection schemes against certain evaluation parameters and discuss their limitations. Finally, we provide some suggestions for future research against such attacks.
Automatic Extraction of Arbitrarily Shaped Buildings from VHR Satellite Imagery
Satellite imagery is one of the emerging technologies extensively utilized in various applications, such as the detection/extraction of man-made structures, the monitoring of sensitive areas, and the creation of graphic maps. The approach taken here is the automated detection of buildings from very high resolution (VHR) optical satellite images. Initially, shadow, building and non-building regions (roads, vegetation, etc.) are investigated, with the focus on building extraction. Once all regions are collected, a trimming process eliminates regions arising from non-building objects. Finally, the labeling method is used to extract the building regions; the method may be adapted for efficient building extraction. The images used for the analysis come from sensors with a resolution of less than 1 meter (VHR). This method provides an efficient way to produce good results: the overhead of intermediate processing is eliminated without compromising output quality, easing the required processing steps and reducing the time consumed.
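The labeling step can be illustrated with a connected-component sketch in scipy; the binary mask and the minimum-area trimming rule are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(7)
mask = rng.random((128, 128)) > 0.7          # stand-in building/non-building mask

labels, n = ndimage.label(mask)              # connected-component labeling
sizes = ndimage.sum(mask, labels, index=range(1, n + 1))

keep = {i + 1 for i, s in enumerate(sizes) if s >= 50}   # trim small regions
buildings = np.where(np.isin(labels, list(keep)), labels, 0)
print(f"{n} regions found, {len(keep)} kept as candidate buildings")
```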
Semi-Supervised Outlier Detection Using a Generative and Adversary Framework
In many outlier detection tasks, only training data
belonging to one class, i.e., the positive class, is available. The
task is then to predict a new data point as belonging either to
the positive class or to the negative class, in which case the
data point is considered an outlier. For this task, we propose a
novel corrupted Generative Adversarial Network (CorGAN). In the
adversarial process of training CorGAN, the Generator generates
outlier samples for the negative class, and the Discriminator is trained
to distinguish the positive training data from the generated negative
data. The proposed framework is evaluated using an image dataset
and a real-world network intrusion dataset. Our outlier-detection
method achieves state-of-the-art performance on both tasks.
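A toy PyTorch sketch of the one-class adversarial idea follows: the Generator proposes negatives, the Discriminator learns to separate the positive training data from them, and the Discriminator's score then flags outliers. Dimensions, training length and the 0.5 cutoff are illustrative assumptions, not CorGAN's actual architecture.

```python
import torch
import torch.nn as nn

d, z_dim = 16, 8
G = nn.Sequential(nn.Linear(z_dim, 32), nn.ReLU(), nn.Linear(32, d))
D = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
bce = nn.BCELoss()

positives = torch.randn(256, d) + 2.0        # stand-in one-class training data
for _ in range(200):
    fake = G(torch.randn(64, z_dim))
    # Discriminator: positives -> 1, generated negatives -> 0
    loss_d = bce(D(positives), torch.ones(256, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: tries to make its negatives look positive
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

x_new = torch.randn(1, d)                    # far from the positive cluster
print("outlier" if D(x_new).item() < 0.5 else "inlier")
```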
Digital Image Forensics: Discovering the History of Digital Images
Digital multimedia content such as images, video, and audio can be tampered with easily due to the availability of powerful editing software. Multimedia forensics is devoted to analyzing such content with various digital forensic techniques in order to validate its authenticity. Digital image forensics, in particular, investigates the reliability of digital images by analyzing the integrity of the data and by reconstructing the historical information of an image related to its acquisition phase. In this paper, a survey of forgery detection is carried out, considering the most recent and promising digital image forensic techniques.
MITOS-RCNN: Mitotic Figure Detection in Breast Cancer Histopathology Images Using Region Based Convolutional Neural Networks
Studies estimate that there will be 266,120 new cases of invasive breast cancer and 40,920 breast cancer induced deaths in 2018 alone. Despite the pervasiveness of this affliction, the current process for obtaining an accurate breast cancer prognosis is tedious and time consuming: it usually requires a trained pathologist to manually examine histopathological images and identify the features that characterize the various cancer severity levels. We propose MITOS-RCNN, a region based convolutional neural network (RCNN) geared toward small object detection, to accurately grade one of the three factors that characterize tumor aggressiveness as described by the Nottingham Grading System: the mitotic count. Other computational approaches to mitotic figure counting and detection do not demonstrate sufficient recall or precision to be clinically viable. Our model outperformed all previous participants in the ICPR 2012 challenge, the AMIDA 2013 challenge and the MITOS-ATYPIA-14 challenge, along with recently published works, achieving an F-measure of 0.955, a 6.11% improvement over the most accurate of the previously proposed models.
Normalizing Scientometric Indicators of Individual Publications Using Local Cluster Detection Methods on Citation Networks
One of the major shortcomings of widely used scientometric indicators is that different disciplines cannot be compared with each other. The issue of cross-disciplinary normalization has long been discussed, but even the classification of publications into scientific domains poses problems. Structural properties of citation networks offer new possibilities; however, the large size and constant growth of these networks call for caution. Here we present a new tool that relies on the structural properties of citation networks to perform cross-field normalization of the scientometric indicators of individual publications. Due to the large size of the networks, a systematic procedure for identifying scientific domains based on a local community detection algorithm is proposed. The algorithm is tested on different benchmark and real-world networks. Then, using this algorithm, the mechanism of the indicator normalization process is demonstrated for a few indicators, such as the citation number, the P-index and a local version of the PageRank indicator. The fat-tailed distribution of the article indicators enables us to perform the normalization successfully.
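A minimal sketch of local community detection followed by indicator normalization, using networkx, is shown below; the greedy growth rule, fixed community size and degree-as-citation stand-in are simplifying assumptions rather than the paper's algorithm.

```python
import networkx as nx

def local_community(G, seed, size=50):
    """Greedily grow a community around `seed`, always adding the frontier
    node with the largest fraction of links into the current community.
    (A real implementation would use a local-modularity stopping rule
    instead of a fixed size.)"""
    comm = {seed}
    while len(comm) < size:
        frontier = {n for c in comm for n in G.neighbors(c)} - comm
        if not frontier:
            break
        comm.add(max(frontier,
                     key=lambda n: sum(m in comm for m in G.neighbors(n))
                     / G.degree(n)))
    return comm

G = nx.barabasi_albert_graph(500, 3, seed=0)   # stand-in citation network
field = local_community(G, seed=0)             # the publication's "own field"
citations = dict(G.degree())                   # degree as a citation stand-in
mean_field = sum(citations[n] for n in field) / len(field)
print("normalized citations of node 0:", citations[0] / mean_field)
```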
Evolving Digital Circuits for Early Stage Breast Cancer Detection Using Cartesian Genetic Programming
Cartesian Genetic Programming (CGP) is explored to design an optimal circuit capable of early stage breast cancer detection. CGP is used to evolve simple multiplexer circuits for the detection of malignancy in Fine Needle Aspiration (FNA) samples of the breast. The data set used is extracted from the Wisconsin Breast Cancer Database (WBCD). A range of experiments was performed, each with a different set of network parameters. The best evolved network detected malignancy with an accuracy of 99.14%, higher than that of most contemporary non-linear techniques, which are more computationally expensive than the proposed system. The evolved network comprises simple multiplexers and, being a digital circuit, can be implemented easily in hardware without further complication or inaccuracy.
Cost Effective Real-Time Image Processing Based Optical Mark Reader
In this modern era of automation, most academic and competitive exams use Multiple Choice Questions (MCQs), with responses recorded on Optical Mark Reader (OMR) sheets. Evaluation of OMR sheets requires separate, specialized machines for scanning and marking, and the sheets these machines use are special and cost more than normal sheets. The available process is uneconomical and depends on paper thickness, scanning quality, paper orientation, special hardware and customized software. This study tackles the problem of evaluating OMR sheets without any special hardware, making the whole process economical. We propose an image processing based algorithm that can read and evaluate scanned OMR sheets with no special hardware required. It eliminates the need for special OMR sheets: responses recorded on a normal sheet are sufficient for evaluation. The proposed system handles color, brightness, rotation and small imperfections in the OMR sheet images.
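Reading one row of answer bubbles can be sketched with OpenCV as follows; the grid geometry, cell size and fill threshold are illustrative assumptions.

```python
import cv2
import numpy as np

scan = np.full((40, 160), 255, np.uint8)       # stand-in scanned answer row
cv2.circle(scan, (100, 20), 12, 0, -1)         # student marked option C

# Binarize so that dark marks become foreground pixels
_, binary = cv2.threshold(scan, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Count dark pixels inside each 40x40 cell of the 4-option row
fills = [int(np.count_nonzero(binary[:, i * 40:(i + 1) * 40]))
         for i in range(4)]
marked = int(np.argmax(fills))
if fills[marked] > 0.2 * 40 * 40:              # enough of the cell is filled
    print("answer:", "ABCD"[marked])
```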
Lexical Based Method for Opinion Detection on Tripadvisor Collection
The massive development of online social networks allows users to post and share their opinions on various topics. Given this huge volume of opinions, it is interesting to extract and interpret this information for different domains, e.g., product and service benchmarking, politics, and recommender systems. This is why opinion detection is one of the most important research tasks. It consists of differentiating between opinion data and factual data, and its difficulty lies in determining an approach that returns opinionated documents. Generally, two approaches are used for opinion detection: lexicon-based approaches and machine learning based approaches. In lexicon-based approaches, a dictionary of sentiment words is used, with each word associated with a weight; the opinion score of a document is derived from the occurrences of words from this dictionary. In machine learning approaches, a classifier is usually trained on a set of sentiment-annotated documents, using features such as word n-grams, part-of-speech tags, and logical forms. Most of these works determine the opinion score from the document text alone but do not take into account whether these texts are really reliable. It is therefore interesting to exploit other information to improve opinion detection. In our work, we develop a new way of computing the opinion score by introducing the notion of a trust score: we determine not only which documents are opinionated but also whether these opinions are really trustworthy information in relation to their topics. For that, we use the SentiWordNet lexicon to calculate opinion scores, and we compute user features (number of comments, number of useful comments, average usefulness of reviews) to derive trust scores. We then combine the opinion score and the trust score to obtain a final score. We applied our method to detect trusted opinions in the TRIPADVISOR collection. Our experimental results show that combining the opinion score with the trust score improves opinion detection.
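A toy sketch of combining a lexicon-based opinion score with a user trust score follows; the mini-lexicon, user features and equal weighting are illustrative assumptions (the paper uses SentiWordNet and TripAdvisor data).

```python
# Stand-in sentiment lexicon: word -> polarity weight
LEXICON = {"great": 0.8, "awful": -0.7, "nice": 0.5, "terrible": -0.8}

def opinion_score(text):
    """Fraction of words that carry sentiment, weighted by lexicon strength."""
    words = text.lower().split()
    hits = [abs(LEXICON[w]) for w in words if w in LEXICON]
    return sum(hits) / len(words) if words else 0.0

def trust_score(n_comments, n_useful):
    """Simple user-level trust: share of the user's comments marked useful."""
    return n_useful / n_comments if n_comments else 0.0

review = "great location but terrible service"
final = 0.5 * opinion_score(review) + 0.5 * trust_score(n_comments=40, n_useful=28)
print(f"final score: {final:.2f}")
```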
A Speeded up Robust Scale-Invariant Feature Transform Currency Recognition Algorithm
Currencies around the world look very different from each other; for instance, the size, color, and pattern of the paper differ. With the development of modern banking services, automatic methods for paper currency recognition have become important in many applications, such as vending machines. One phase of the currency recognition architecture is feature detection and description. Many algorithms are used for this phase, but they still have some disadvantages. This paper proposes a feature detection algorithm that merges the advantages of the current SIFT and SURF algorithms, which we call the Speeded up Robust Scale-Invariant Feature Transform (SR-SIFT) algorithm. The proposed SR-SIFT algorithm overcomes the problems of both SIFT and SURF, aiming to speed up SIFT feature detection while keeping it robust. Simulation results demonstrate that the proposed SR-SIFT algorithm decreases the average response time, especially for small and minimal numbers of best keypoints, and improves the distribution of the best keypoints over the surface of the currency. Furthermore, the proposed algorithm increases the accuracy of the true best-point distribution inside the currency edges compared with the other two algorithms.
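For orientation, here is a baseline OpenCV sketch that times standard SIFT keypoint detection on a stand-in banknote image; SR-SIFT itself is the paper's contribution and is not available in OpenCV.

```python
import time
import cv2
import numpy as np

note = np.random.randint(0, 256, (300, 650), dtype=np.uint8)  # stand-in banknote

sift = cv2.SIFT_create(nfeatures=100)          # keep the 100 "best" keypoints
t0 = time.perf_counter()
keypoints, descriptors = sift.detectAndCompute(note, None)
elapsed = time.perf_counter() - t0

print(f"{len(keypoints)} keypoints in {elapsed * 1000:.1f} ms")
xs = [kp.pt[0] for kp in keypoints]            # keypoint spread across the note
print(f"x-coordinate spread: {min(xs):.0f}..{max(xs):.0f}" if xs else "none")
```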
An Earth Mover’s Distance Algorithm Based DDoS Detection Mechanism in SDN
Software-defined networking (SDN) provides a scalable network framework with decoupled control and data planes. However, this architecture also invites a particular kind of distributed denial-of-service (DDoS) attack that can affect or even overwhelm the SDN network. The DDoS attack detection problem has to date mostly been studied as an entropy comparison problem; however, this formulation underuses the capabilities of SDN, and the results are not accurate. In this paper, we propose a DDoS attack detection method that interprets DDoS detection as a signature matching problem, formulated as an Earth Mover's Distance (EMD) model. Considering feasibility and accuracy, we further propose to define the cost function of the EMD as a generalized Kullback-Leibler divergence. Simulation results show that our proposed method can detect DDoS attacks by comparing EMD values with those computed in the attack-free case. Moreover, our method can significantly increase the true positive rate of detection.
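The detection idea can be sketched by comparing the EMD between current traffic features and an attack-free baseline; the per-source packet counts, the 1-D EMD with the default ground distance (instead of the paper's generalized KL-divergence cost) and the threshold are illustrative assumptions.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(8)
baseline = rng.poisson(20, size=1000)          # attack-free packet counts
normal = rng.poisson(20, size=1000)
ddos = rng.poisson(20, size=1000) + (rng.random(1000) < 0.2) * 300  # flood flows

THRESHOLD = 5.0                                # assumed detection cutoff
for name, sample in (("normal", normal), ("ddos", ddos)):
    emd = wasserstein_distance(baseline, sample)
    print(f"{name}: EMD={emd:.1f} ->", "attack" if emd > THRESHOLD else "ok")
```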