21
10009227
Optical Flow Based System for Cross Traffic Alert
Abstract: This document describes an advanced system and methodology for Cross Traffic Alert (CTA), able to detect vehicles that move into the vehicle's driving path from the left or right side. The camera may be mounted not only on a stationary vehicle, e.g. at a traffic light or at an intersection, but also on a slowly moving one, e.g. in a car park. In all of the aforementioned conditions, a driver's short loss of concentration or distraction can easily lead to a serious accident. The proposed system represents a valid support for avoiding these kinds of car crashes. It is an extension of our previous work, related to a clustering system that only works with fixed cameras. Only a vanishing-point calculation and a simple optical flow filtering step, which eliminates motion vectors due to the car's own movement, are performed, allowing the system to achieve high performance across different scenarios, cameras and resolutions. The proposed system uses only the optical flow as input, which is implemented in hardware on the proposed platform; since the whole processing chain is fast and power efficient, it is embedded directly in the camera framework, allowing all the processing to be executed in real time.
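The ego-motion filtering step can be pictured with a short software sketch. This is an illustration only: the paper computes optical flow in hardware and estimates the vanishing point itself, whereas here the vanishing point is assumed to be given, Farneback flow stands in for the hardware flow, and the function name and thresholds are assumptions.

```python
import cv2
import numpy as np

def crossing_candidates(prev_gray, curr_gray, vanishing_point,
                        min_mag=1.0, max_radial_deg=30.0):
    # Dense flow between consecutive frames (stand-in for the hardware flow).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Expected ego-motion direction: rays radiating from the vanishing point.
    radial = np.dstack((xs - vanishing_point[0], ys - vanishing_point[1])).astype(np.float32)
    rad_norm = np.linalg.norm(radial, axis=2) + 1e-6
    flow_norm = np.linalg.norm(flow, axis=2)
    cosang = (radial[..., 0] * flow[..., 0] + radial[..., 1] * flow[..., 1]) / (rad_norm * flow_norm + 1e-6)
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    # Keep vectors that are long enough and not aligned with the radial
    # (ego-motion) direction: these are candidates for crossing traffic.
    return (flow_norm > min_mag) & (angle > max_radial_deg) & (angle < 180.0 - max_radial_deg)
```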
20
10008013
An Efficient Fundamental Matrix Estimation for Moving Object Detection
Abstract: In this paper, an improved method for estimating the fundamental matrix is proposed and applied effectively to monocular-camera-based moving object detection. The method consists of corner point detection, motion estimation for moving objects, and fundamental matrix calculation. The corner points are obtained using the Harris corner detector, and the motions of moving objects are computed with the pyramidal Lucas-Kanade optical flow algorithm. The fundamental matrix is then calculated through epipolar geometry analysis using RANSAC. In this method, we improve the performance of moving object detection by using two threshold values that determine whether a point is an inlier or an outlier. Through simulations, we compare the performance obtained while varying the two threshold values.
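A minimal OpenCV sketch of this pipeline, assuming Harris-style corners, pyramidal Lucas-Kanade tracking and a RANSAC fundamental matrix; the two threshold names (ransac_thresh, epipolar_thresh) are illustrative stand-ins for the paper's inlier/outlier thresholds, not the authors' values.

```python
import cv2
import numpy as np

def moving_point_mask(prev_gray, curr_gray, ransac_thresh=1.0, epipolar_thresh=2.0):
    # Harris-style corners in the first frame.
    pts1 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500, qualityLevel=0.01,
                                   minDistance=7, useHarrisDetector=True)
    # Pyramidal Lucas-Kanade tracking into the second frame.
    pts2, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts1, None)
    good1 = pts1[status.ravel() == 1].reshape(-1, 2)
    good2 = pts2[status.ravel() == 1].reshape(-1, 2)
    # First threshold: RANSAC reprojection threshold used while estimating F.
    F, _ = cv2.findFundamentalMat(good1, good2, cv2.FM_RANSAC, ransac_thresh, 0.99)
    # Second threshold: points far from their epipolar lines are flagged as moving.
    ones = np.ones((len(good1), 1))
    x1 = np.hstack([good1, ones])
    x2 = np.hstack([good2, ones])
    lines = (F @ x1.T).T                                   # epipolar lines in image 2
    d = np.abs(np.sum(lines * x2, axis=1)) / np.linalg.norm(lines[:, :2], axis=1)
    return good2, d > epipolar_thresh
```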
19
10005541
Real Time Video Based Smoke Detection Using Double Optical Flow Estimation
Abstract: In this paper, we present a video-based smoke detection algorithm based on TVL1 optical flow estimation. The main part of the algorithm is an accumulating system for the motion angles and the upward motion speed of the flow field. We optimized the usage of TVL1 flow estimation for the detection of smoke with very low smoke density; to this end, we use adapted flow parameters and estimate the flow field on difference images. We show in theory and in evaluation that this improves the performance of smoke detection significantly. We evaluate the smoke algorithm using videos with different smoke densities and different backgrounds and show that smoke detection is very reliable in varying scenarios. Furthermore, we verify that our algorithm is very robust against disturbances from crowded scenes.
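The accumulation idea can be sketched roughly as follows, under assumptions: TVL1 flow is available in opencv-contrib via cv2.optflow.createOptFlow_DualTVL1 in many builds, but Farneback flow is used here as a readily available stand-in, and the accumulation rule is a simple running average rather than the paper's exact scheme.

```python
import cv2
import numpy as np

def upward_motion_score(frames, history=25):
    """frames: list of grayscale frames; returns a per-pixel upward-motion accumulator."""
    acc = None
    prev_diff = None
    for i in range(1, len(frames)):
        diff = cv2.absdiff(frames[i], frames[i - 1])       # flow is estimated on difference images
        if prev_diff is not None:
            flow = cv2.calcOpticalFlowFarneback(prev_diff, diff, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            up = np.maximum(-flow[..., 1], 0.0)            # image y grows downward, so -vy is upward speed
            acc = up if acc is None else acc * (1.0 - 1.0 / history) + up / history
        prev_diff = diff
    return acc  # high values mark regions with persistent upward motion (smoke candidates)
```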
18
10003353
Optical Flow Technique for Supersonic Jet Measurements
Abstract: This paper outlines the development of an experimental technique for quantifying supersonic jet flows, in an attempt to avoid the seeding particle problems frequently associated with particle-image velocimetry (PIV) techniques at high Mach numbers. Based on optical flow algorithms, the idea behind the technique involves using high-speed cameras to capture Schlieren images of the supersonic jet shear layers, before they are subjected to an adapted optical flow algorithm based on the Horn-Schunck method to determine the associated flow fields. The proposed method is capable of offering full-field unsteady flow information with potentially higher accuracy and resolution than existing point-measurement or PIV techniques. A preliminary study via numerical simulations of a circular de Laval jet nozzle successfully reveals flow and shock structures typically associated with supersonic jet flows, which serve as useful data for subsequent validation of the optical flow based experimental results. For the experimental technique, a Z-type Schlieren setup is proposed, with the supersonic jet operated in cold mode at a stagnation pressure of 4 bar and an exit Mach number of 1.5. High-speed single-frame or double-frame cameras are used to capture successive Schlieren images. As the application of optical flow techniques to supersonic flows remains rare, the current focus revolves around methodology validation through synthetic images. The results of the validation tests offer valuable insight into how the optical flow algorithm can be further refined to improve robustness and accuracy. Despite these challenges, this supersonic flow measurement technique may potentially offer a simpler way to identify and quantify the fine spatial structures within the shock shear layer.
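For reference, a bare-bones Horn-Schunck iteration, the classical scheme the adapted algorithm builds on, can be written as below; the derivative kernels, smoothness weight and iteration count are illustrative choices, not the authors' settings.

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Classical Horn-Schunck flow between two grayscale images (sketch)."""
    im1 = im1.astype(np.float32)
    im2 = im2.astype(np.float32)
    # Spatial and temporal derivatives via small averaging/difference kernels.
    kx = np.array([[-1, 1], [-1, 1]], np.float32) * 0.25
    ky = np.array([[-1, -1], [1, 1]], np.float32) * 0.25
    kt = np.ones((2, 2), np.float32) * 0.25
    Ix = convolve(im1, kx) + convolve(im2, kx)
    Iy = convolve(im1, ky) + convolve(im2, ky)
    It = convolve(im2, kt) - convolve(im1, kt)
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    avg_kernel = np.array([[1/12, 1/6, 1/12],
                           [1/6,  0,   1/6],
                           [1/12, 1/6, 1/12]], np.float32)
    for _ in range(n_iter):
        u_avg = convolve(u, avg_kernel)
        v_avg = convolve(v, avg_kernel)
        # Horn-Schunck update: pull the local average toward the data constraint.
        num = Ix * u_avg + Iy * v_avg + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_avg - Ix * num / den
        v = v_avg - Iy * num / den
    return u, v
```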
17
10004913
Human Action Recognition System Based on Silhouette
Abstract: Human actions are recognized directly from video sequences. The objective of this work is to recognize various human actions such as running, jumping and walking. Human action recognition requires some prior knowledge about the actions, namely motion estimation and foreground/background estimation. A region of interest (ROI) is extracted to identify the human in the frame. Then, an optical flow technique is used to extract the motion vectors. Using the extracted features, similarity-measure-based classification is performed to recognize the action. Experiments on the Weizmann database show that the proposed method achieves high accuracy.
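A compact sketch of such a pipeline, illustrative only: the descriptor here is a flow orientation histogram inside the ROI and the classifier is a nearest-neighbour match, which are assumptions rather than the paper's exact features.

```python
import cv2
import numpy as np

def flow_histogram(prev_gray, curr_gray, roi_mask, bins=8):
    # Dense flow, then an orientation histogram weighted by magnitude inside the ROI.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    mask = roi_mask > 0
    hist, _ = np.histogram(ang[mask], bins=bins, range=(0, 2 * np.pi), weights=mag[mask])
    return hist / (hist.sum() + 1e-6)

def classify(descriptor, templates):
    # templates: dict mapping action name -> reference histogram (nearest-neighbour match).
    return min(templates, key=lambda name: np.linalg.norm(templates[name] - descriptor))
```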
16
10002835
A Low-Cost Vision-Based Unmanned Aerial System for Extremely Low-Light GPS-Denied Navigation and Thermal Imaging
Abstract: This paper presents the design and implementation details of a complete unmanned aerial system (UAS) based on commercial-off-the-shelf (COTS) components, focusing on safety, security, and search and rescue scenarios in GPS-denied environments. In particular, the aerial platform is capable of semi-autonomously navigating through extremely low-light, GPS-denied indoor environments based on onboard sensors only, including a downward-facing optical flow camera. In addition, a low-cost payload camera system is developed to stream both infra-red video and visible-light video to a ground station in real time, for the purpose of detecting signs of life and hidden humans. The total cost of the complete system is estimated to be $1150, and the effectiveness of the system has been tested and validated in practical scenarios.
15
17157
Optical Flow Based Moving Object Detection and Tracking for Traffic Surveillance
Abstract: Automated motion detection and tracking is a challenging task in traffic surveillance. In this paper, a system is developed to gather useful information from stationary cameras for detecting moving objects in digital videos. The motion detection and tracking system is based on optical flow estimation, combined with various relevant computer vision and image processing techniques to enhance the process. A median filter is used to remove noise, and unwanted objects are removed by applying thresholding algorithms together with morphological operations. Object-type restrictions are also imposed using blob analysis. The results show that the proposed system successfully detects and tracks moving objects in urban videos.
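A rough OpenCV sketch of this chain (flow magnitude, median filtering, thresholding, morphology and blob analysis); the magnitude threshold and minimum blob area are illustrative parameters, not values from the paper.

```python
import cv2
import numpy as np

def detect_moving_blobs(prev_gray, curr_gray, mag_thresh=2.0, min_area=200):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = cv2.magnitude(flow[..., 0], flow[..., 1])
    mag = cv2.medianBlur(mag.astype(np.float32), 5)               # median filter removes flow noise
    _, motion = cv2.threshold(mag, mag_thresh, 255, cv2.THRESH_BINARY)
    motion = motion.astype(np.uint8)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    motion = cv2.morphologyEx(motion, cv2.MORPH_OPEN, kernel)     # drop small speckles
    motion = cv2.morphologyEx(motion, cv2.MORPH_CLOSE, kernel)    # fill holes inside objects
    n, labels, stats, _ = cv2.connectedComponentsWithStats(motion)
    # Blob analysis: keep components whose area passes the object-type restriction.
    return [tuple(stats[i, :4]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]            # (x, y, w, h) per moving object
```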
14
151
A Study on Algorithm Fusion for Recognition and Tracking of Moving Robot
Abstract: This paper presents an algorithm for the recognition and tracking of moving objects; a 1/10-scale model car is used to verify its performance. The presented algorithm merges the SURF algorithm with the Lucas-Kanade algorithm. SURF performs strongly under contrast, size and rotation changes and can recognize objects, but it is slow due to its computational complexity. The Lucas-Kanade algorithm is fast but cannot recognize objects on its own; its optical flow compares the previous and current frames so that the movement of a pixel can be tracked. To address the problems that arise when fusing the two algorithms, a Kalman filter is used to estimate the next position and an accumulated-error compensation algorithm is implemented. The resolution of the camera (vision sensor) is fixed at 640x480. To verify the performance of the fusion algorithm, it is compared with the SURF algorithm alone in three situations: driving straight, driving along a curve, and recognizing cars behind obstacles. The model vehicle makes it possible to reproduce situations similar to real driving. The proposed fusion algorithm showed better performance and accuracy than existing object recognition and tracking algorithms. We will further improve the algorithm so that it can be tested on images of real road environments.
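The fusion can be sketched at a high level as follows, under assumptions: ORB is used here as a freely available stand-in for SURF (which sits in opencv-contrib), and the Kalman filter uses a simple constant-velocity model for the object centre rather than the paper's exact error-compensation scheme.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=500)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def recognize(template_gray, frame_gray):
    # Slow but robust stage: feature matching locates the object in the frame.
    kp1, des1 = orb.detectAndCompute(template_gray, None)
    kp2, des2 = orb.detectAndCompute(frame_gray, None)
    matches = bf.match(des1, des2)
    return np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

def make_kalman():
    # Constant-velocity model for the object centre: state = (x, y, vx, vy).
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                    [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    return kf

def track(prev_gray, curr_gray, pts, kf):
    # Fast stage: Lucas-Kanade flow moves the matched points frame to frame.
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = new_pts[status.ravel() == 1]
    centre = good.reshape(-1, 2).mean(axis=0).astype(np.float32)
    kf.predict()
    est = kf.correct(centre.reshape(2, 1))     # smooths drift accumulated by the tracker
    return good.reshape(-1, 1, 2), est[:2].ravel()
```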
13
261
Performance Improvement of Moving Object Recognition and Tracking Algorithm using Parallel Processing of SURF and Optical Flow
Abstract: This paper proposes a way of processing SURF and optical flow in parallel for moving object recognition and tracking. Object recognition and tracking is one of the most important tasks in computer vision; however, its many operations slow down processing so that real-time recognition and tracking become impossible. The proposed method combines a typical feature extraction method, SURF, with optical flow tracking of the moving object to reduce this disadvantage and achieve real-time recognition and tracking, and uses parallel processing techniques to improve speed. First, an image from the database and an image acquired through the camera are analysed with SURF to recognize the same object, and a region of interest (ROI) is set for tracking the movement of the feature points using optical flow. Second, multi-threading is used to improve processing speed and recognition through parallel processing. Finally, the performance and efficiency of the algorithm are evaluated and verified through experiments.
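The parallel arrangement alone can be pictured with a small threading sketch; it is illustrative only, recognize() and track() are hypothetical stand-ins for the SURF matching and optical flow stages, and the hand-off uses a single-slot queue.

```python
import threading
import queue

refreshed_pts = queue.Queue(maxsize=1)   # latest recognition result, overwritten if stale

def recognition_worker(get_latest_frame, template, recognize):
    # Runs in its own thread so heavy feature matching never blocks tracking.
    while True:
        frame = get_latest_frame()
        pts = recognize(template, frame)
        if pts is not None:
            if refreshed_pts.full():
                refreshed_pts.get_nowait()
            refreshed_pts.put(pts)

def tracking_loop(frames, track):
    # Main thread: cheap per-frame optical flow update of the tracked points.
    prev, pts = None, None
    for frame in frames:
        if not refreshed_pts.empty():
            pts = refreshed_pts.get()            # adopt the latest recognition result
        if prev is not None and pts is not None:
            pts = track(prev, frame, pts)
        prev = frame
```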
12
6842
Combining Skin Color and Optical Flow for Computer Vision Systems
Abstract: Skin color is an important visual cue for computer vision systems involving human users. In this paper, we combine skin color and optical flow for the detection and tracking of skin regions. We apply these techniques to gesture recognition with encouraging results. We propose a novel skin similarity measure and, for grouping detected skin regions, a novel skin region grouping mechanism. The proposed techniques work with any number of skin regions, making them suitable for multiuser scenarios.
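A minimal sketch of combining the two cues, with a generic YCrCb skin-colour range and Lucas-Kanade tracking; the paper's own similarity measure and region grouping mechanism are not reproduced, and the colour bounds are assumptions to be tuned per setup.

```python
import cv2
import numpy as np

def skin_mask(bgr):
    # Commonly used Cr/Cb bounds for skin; an assumption, not the paper's measure.
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    return cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

def track_skin_points(prev_bgr, curr_bgr, max_pts=200):
    mask = skin_mask(prev_bgr)
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    # Track only points that lie inside the skin-coloured regions.
    pts = cv2.goodFeaturesToTrack(prev_gray, max_pts, 0.01, 5, mask=mask)
    if pts is None:
        return None, None
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = status.ravel() == 1
    return pts[ok], new_pts[ok]
```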
11
11710
A Reliable FPGA-based Real-time Optical-flow Estimation
Abstract: Optical flow has been a research topic of interest for many years. Until recently it was largely inapplicable to real-time applications due to its computationally expensive nature. This paper presents a new, reliable flow technique which is combined with a motion detection algorithm operating on stationary-camera image streams, to allow flow-based analyses of moving entities, such as rigidity, in real time. Combining the optical flow analysis with the motion detection technique greatly reduces the expensive computation of flow vectors compared with standard approaches, rendering the method applicable in real-time implementations. The paper also describes the hardware implementation of a proposed pipelined system to estimate the flow vectors from image sequences in real time. The design can process 768 x 576 images at a frame rate that reaches 156 fps on a single low-cost FPGA chip, which is adequate for most real-time vision applications.
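The cost-saving idea, restricting flow computation to regions flagged by motion detection, can be sketched in software, although the paper's actual design is a pipelined FPGA implementation; the thresholds below are illustrative.

```python
import cv2
import numpy as np

def masked_flow_points(prev_gray, curr_gray, diff_thresh=15, max_pts=300):
    # Cheap motion detection: frame differencing and thresholding.
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, motion = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    motion = cv2.dilate(motion, None, iterations=2)        # grow the moving regions slightly
    # Expensive flow is computed only at points inside the motion mask.
    pts = cv2.goodFeaturesToTrack(prev_gray, max_pts, 0.01, 5, mask=motion)
    if pts is None:
        return None
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = status.ravel() == 1
    return np.hstack([pts[ok].reshape(-1, 2), new_pts[ok].reshape(-1, 2)])  # (x0, y0, x1, y1) rows
```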
10
6720
Motion Analysis for Duplicate Frame Removal in Wireless Capsule Endoscope Video
Abstract: Wireless capsule endoscopy (WCE) has rapidly found wide application in the medical domain over the last ten years, thanks to its noninvasiveness for patients and its support for thorough inspection of a patient's entire digestive system, including the small intestine. However, one of the main barriers to an efficient clinical inspection procedure is the large amount of effort required from clinicians to inspect the huge amount of data collected during an examination, i.e., over 55,000 frames of video. In this paper, we propose a method to compute meaningful motion changes of the WCE by analyzing the obtained video frames based on regional optical flow estimations. The computed motion vectors are used to remove duplicate video frames caused by the imaging nature of the WCE, such as repetitive forward-backward motions from peristaltic movements. The motion vectors are derived by calculating directional component vectors in four local regions. Our experiments are performed on the small intestine area, which is of main interest to clinical experts when using WCE, and our experimental results show significant frame reductions compared with a simple frame-to-frame similarity-based image reduction method.
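A rough sketch of the regional motion vectors and a backtracking test, under assumptions: the four local regions are taken as image quadrants, Farneback flow serves as a generic dense estimator, and the reversal test is a cosine comparison with an illustrative threshold.

```python
import cv2
import numpy as np

def regional_vectors(prev_gray, curr_gray):
    # Dense flow averaged over four image quadrants: one directional vector per region.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    quads = [flow[:h // 2, :w // 2], flow[:h // 2, w // 2:],
             flow[h // 2:, :w // 2], flow[h // 2:, w // 2:]]
    return np.array([q.reshape(-1, 2).mean(axis=0) for q in quads])   # 4 x (dx, dy)

def is_backtrack(prev_vecs, curr_vecs, cos_thresh=-0.7):
    # Opposite-direction motion in most regions suggests a forward-backward repeat,
    # i.e., a duplicate-frame candidate.
    cos = [np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-6)
           for a, b in zip(prev_vecs, curr_vecs)]
    return np.mean(cos) < cos_thresh
```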
9
1574
Object Tracking using MACH filter and Optical Flow in Cluttered Scenes and Variable Lighting Conditions
Abstract: The vision-based tracking problem is solved through a combination of optical flow, a MACH filter and log r-θ mapping. Optical flow is used for detecting regions of movement in video frames acquired under variable lighting conditions. The region of movement is segmented and then searched for the target. A template is used for target recognition on the segmented regions to detect the region of interest. The template is trained offline on a sequence of target images that are created using the MACH filter and log r-θ mapping. The template is applied to areas of movement in successive frames, and strong correlation is seen for in-class targets. Correlation peaks above a certain threshold indicate the presence of the target, and the target is tracked over successive frames.
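A simplified sketch of the detection loop; illustrative only, cv2.matchTemplate with normalized correlation stands in for the offline-trained MACH filter and log r-θ mapping, and the thresholds are assumptions.

```python
import cv2
import numpy as np

def find_target(prev_gray, curr_gray, template, mag_thresh=1.5, corr_thresh=0.7):
    # Detect the region of movement from dense flow magnitude.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = cv2.magnitude(flow[..., 0], flow[..., 1])
    ys, xs = np.nonzero(mag > mag_thresh)
    if len(xs) == 0:
        return None
    x0, y0, x1, y1 = xs.min(), ys.min(), xs.max() + 1, ys.max() + 1
    roi = curr_gray[y0:y1, x0:x1]
    if roi.shape[0] < template.shape[0] or roi.shape[1] < template.shape[1]:
        return None
    # Correlate the trained template over the moving region only.
    res = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(res)
    if max_val < corr_thresh:              # correlation peak below threshold: no target
        return None
    return (x0 + max_loc[0], y0 + max_loc[1], template.shape[1], template.shape[0])
```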
8
1581
Efficient Block Matching Algorithm for Motion Estimation
Abstract: Motion estimation is a key problem in video processing and computer vision. Optical flow motion estimation can achieve high accuracy when the motion vector is small. The three-step search algorithm can handle large motion vectors but is not very accurate. A joint algorithm is proposed in this paper to achieve high estimation accuracy regardless of whether the motion vector is small or large, while keeping the computation cost much lower than full search.
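For reference, the large-motion half of such a joint scheme, a basic three-step search with a SAD cost, can be sketched as follows; the block size, step size and cost function are illustrative choices.

```python
import numpy as np

def three_step_search(ref, cur, bx, by, block=16, step=4):
    """Displacement of the block at (bx, by) in `cur` relative to `ref` (sketch)."""
    h, w = ref.shape
    best_dx, best_dy = 0, 0
    cur_block = cur[by:by + block, bx:bx + block].astype(np.int32)

    def sad(dx, dy):
        # Sum of absolute differences at the candidate offset, inf if out of bounds.
        x, y = bx + best_dx + dx, by + best_dy + dy
        if x < 0 or y < 0 or x + block > w or y + block > h:
            return np.inf
        return np.abs(ref[y:y + block, x:x + block].astype(np.int32) - cur_block).sum()

    while step >= 1:
        # Examine the 3x3 grid of offsets around the current best position.
        candidates = [(dx, dy) for dx in (-step, 0, step) for dy in (-step, 0, step)]
        dx, dy = min(candidates, key=lambda c: sad(*c))
        best_dx += dx
        best_dy += dy
        step //= 2                      # step halves each stage: 4, 2, 1 (three steps)
    return best_dx, best_dy
```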
7
8769
Development System for Emotion Detection Based on Brain Signals and Facial Images
Abstract: Detection of human emotions has many potential applications. One application is to quantify audience attentiveness in order to evaluate the acoustic quality of a concert hall, using the subjective audio preference reported by the audience. To obtain a fair evaluation of acoustic quality, this research proposes a system for multimodal emotion detection: one modality is based on brain signals measured using an electroencephalogram (EEG), and the second modality is sequences of facial images. In the experiment, a customized audio signal consisting of normal and disordered sounds was played in order to stimulate positive/negative emotional feedback from the volunteers. EEG signals from the temporal lobes, i.e. T3 and T4, were used to measure the brain response, and the sequences of facial images were used to monitor facial expression while the volunteers listened to the audio signal. From the EEG signal, features were extracted from changes in the brain waves, particularly the alpha and beta waves. Facial expression features were extracted from motion analysis of the images: an advanced optical flow method was implemented to detect the most active facial muscles as the expression changes from neutral to another emotion, represented as vector flow maps. To reduce the complexity of detecting the emotional state, the vector flow maps are transformed into a compass mapping that represents the major directions and velocities of the facial movements. The results showed that the power of the beta wave increases when the disordered sound stimulation is given, although each volunteer gave different emotional feedback. Based on the features derived from the facial images, the optical flow compass mapping is promising as additional information for making decisions about emotional feedback.
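The compass-mapping step can be illustrated with a small sketch, assuming a generic dense flow estimator and eight equal direction bins, which need not match the authors' exact mapping.

```python
import cv2
import numpy as np

def compass_map(prev_gray, curr_gray, n_dirs=8):
    # Dense facial flow quantised into compass directions with the mean speed per direction.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    bins = (ang / (2 * np.pi / n_dirs)).astype(int) % n_dirs
    # One mean-velocity entry per compass direction (the bin layout is a convention choice).
    return np.array([mag[bins == k].mean() if np.any(bins == k) else 0.0
                     for k in range(n_dirs)])
```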
6
6512
An Efficient Algorithm for Motion Detection Based Facial Expression Recognition using Optical Flow
Abstract: One of the popular methods for recognizing facial expressions such as happiness, sadness and surprise is based on the deformation of facial features. Motion vectors which show these deformations can be obtained from the optical flow. In this approach, for detecting emotions, the resulting set of motion vectors is compared with a standard deformation template caused by facial expressions. In this paper, a new method is introduced to compute a measure of likeness in order to make a decision based on the importance of the vectors obtained from an optical flow approach. For finding the vectors, the efficient optical flow method developed by Gautama and VanHulle [17] is used. The suggested method has been evaluated on the Cohn-Kanade AU-Coded Facial Expression Database, one of the most comprehensive collections of test images available. The experimental results show that our method correctly recognizes the facial expressions in 94% of the case studies. The results also show that only a small number of image frames (three frames) is sufficient to detect facial expressions with a success rate of about 83.3%. This is a significant improvement over the available methods.
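The template-comparison idea can be sketched as a weighted similarity between the observed flow field and per-expression deformation templates; this is illustrative only, the weighting map and the cosine-style measure are assumptions rather than the paper's exact likeness quantity.

```python
import numpy as np

def expression_score(flow, template, weights):
    # flow, template: H x W x 2 motion fields; weights: H x W importance map.
    num = np.sum(weights[..., None] * flow * template)
    den = (np.sqrt(np.sum(weights[..., None] * flow ** 2)) *
           np.sqrt(np.sum(weights[..., None] * template ** 2)) + 1e-6)
    return num / den                       # weighted cosine similarity in [-1, 1]

def classify_expression(flow, templates, weights):
    # templates: dict mapping expression name -> standard deformation field.
    return max(templates, key=lambda name: expression_score(flow, templates[name], weights))
```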
5
5078
Region-Based Segmentation of Generic Video Scenes Indexing
Abstract: In this work we develop an object extraction method and propose efficient algorithms for object motion characterization. The set of proposed tools serves as a basis for the development of object-based functionalities for the manipulation of video content. The estimators based on different algorithms are compared in terms of quality and performance and tested on real video sequences. The proposed method will be useful for the latest standards for encoding and description of multimedia content, MPEG-4 and MPEG-7.
4
11692
An Optical Flow Based Segmentation Method for Objects Extraction
Abstract: This paper describes a segmentation algorithm based on the cooperation of an optical flow estimation method with edge detection and region growing procedures. The proposed method has been developed as a pre-processing stage to be used in methodologies and tools for video/image indexing and retrieval by content. The addressed problem consists of extracting whole objects from the background in order to produce images of single complete objects from videos or photos. The extracted images are used for calculating the object visual features necessary for both the indexing and retrieval processes. The first task of the algorithm exploits the cues from motion analysis for moving area detection. Objects and background are then refined using edge detection and region growing procedures, respectively. These tasks are performed iteratively until objects and background are completely resolved. The developed method has been applied to a variety of indoor and outdoor scenes where objects of different types and shapes are represented on variously textured backgrounds.
3
14909
Facial Expressions Animation and Lip Tracking Using Facial Characteristic Points and Deformable Model
Abstract: The face and facial expressions play essential roles in interpersonal communication. Most current work on facial expression recognition attempts to recognize a small set of prototypic expressions such as happiness, surprise, anger, sadness, disgust and fear. However, most human emotions are communicated by changes in one or two discrete features. In this paper, we develop a facial expression synthesis system based on tracking facial characteristic points (FCPs) in frontal image sequences. Selected FCPs are automatically tracked using a cross-correlation based optical flow. The proposed synthesis system uses a simple deformable facial feature model with a small set of control points that can be tracked in the original facial image sequences.
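Cross-correlation based tracking of individual FCPs can be sketched as patch matching within a local search window; this is an illustration, and the patch and window sizes as well as the use of cv2.matchTemplate are assumptions rather than the paper's implementation.

```python
import cv2
import numpy as np

def track_fcps(prev_gray, curr_gray, points, patch=11, search=31):
    half_p, half_s = patch // 2, search // 2
    tracked = []
    for (x, y) in points:
        x, y = int(x), int(y)
        tmpl = prev_gray[y - half_p:y + half_p + 1, x - half_p:x + half_p + 1]
        win = curr_gray[y - half_s:y + half_s + 1, x - half_s:x + half_s + 1]
        if tmpl.shape != (patch, patch) or win.shape != (search, search):
            tracked.append((x, y))             # near the image border: keep the old position
            continue
        # Normalized cross-correlation of the patch over the search window.
        res = cv2.matchTemplate(win, tmpl, cv2.TM_CCORR_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(res)
        tracked.append((x - half_s + max_loc[0] + half_p,
                        y - half_s + max_loc[1] + half_p))
    return tracked
```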
2
15382
Objects Extraction by Cooperating Optical Flow, Edge Detection and Region Growing Procedures
Abstract: The image segmentation method described in this paper has been developed as a pre-processing stage to be used in methodologies and tools for video/image indexing and retrieval by content. The method solves the problem of extracting whole objects from the background, producing images of single complete objects from videos or photos. The extracted images are used for calculating the object visual features necessary for both the indexing and retrieval processes. The segmentation algorithm is based on the cooperation among an optical flow estimation method, edge detection and region growing procedures. The optical flow estimator belongs to the class of differential methods. It can detect motions ranging from a fraction of a pixel to a few pixels per frame, achieves good results in the presence of noise without the need for a filtering pre-processing stage, and includes a specialised model for moving object detection. The first task of the presented method exploits the cues from motion analysis for moving area detection. Objects and background are then refined using edge detection and seeded region growing procedures, respectively. All the tasks are performed iteratively until objects and background are completely resolved. The method has been applied to a variety of indoor and outdoor scenes where objects of different types and shapes are represented on variously textured backgrounds.
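The cooperation between motion cues and seeded region growing can be pictured with a short sketch; it is illustrative only: Farneback flow stands in for the paper's differential estimator, OpenCV's flood fill plays the role of seeded region growing, the edge-based refinement step is omitted, and all thresholds are assumptions.

```python
import cv2
import numpy as np

def grow_moving_objects(prev_gray, curr_gray, mag_thresh=1.0, tol=8):
    # Motion cues: pixels with significant flow magnitude become region seeds.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = cv2.magnitude(flow[..., 0], flow[..., 1])
    ys, xs = np.nonzero(mag > mag_thresh)
    h, w = curr_gray.shape
    fill_mask = np.zeros((h + 2, w + 2), np.uint8)    # flood fill requires a 2-pixel border
    for y, x in zip(ys[::100], xs[::100]):            # subsample the seeds
        # Seeded region growing: grow over pixels whose intensity stays within
        # `tol` of their neighbours, writing the grown region into fill_mask only.
        cv2.floodFill(curr_gray, fill_mask, (int(x), int(y)), 0,
                      loDiff=tol, upDiff=tol,
                      flags=8 | cv2.FLOODFILL_MASK_ONLY | (255 << 8))
    return fill_mask[1:-1, 1:-1]                      # grown object mask
```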
1
13201
Accurate Optical Flow Based on Spatiotemporal Gradient Constancy Assumption
Abstract: Variational methods for optical flow estimation are known for their excellent performance. The method proposed by Brox et al. [5] exemplifies the strength of that framework: it combines several concepts into a single energy functional that is then minimized according to a clear numerical procedure. In this paper we propose a modification of that algorithm, starting from the spatiotemporal gradient constancy assumption. The numerical scheme allows us to establish the connection between our model and the CLG(H) method introduced in [18]. Experimental evaluation carried out on synthetic sequences shows the significant superiority of the spatial variant of the proposed method. A comparison between the methods on a real-world sequence is also included.
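For orientation, a plausible form of such an energy, written in the style of the Brox et al. functional but with the constancy terms taken over the spatiotemporal gradient, is shown below. This is an assumption about the paper's exact formulation, with the robust penalty $\Psi(s^2)=\sqrt{s^2+\varepsilon^2}$, displacement $\mathbf{w}=(u,v,1)^\top$ and $\nabla_3=(\partial_x,\partial_y,\partial_t)^\top$.

```latex
E(u,v) = \int_{\Omega} \Psi\!\left( \left| I(\mathbf{x}+\mathbf{w}) - I(\mathbf{x}) \right|^2
       + \gamma \left| \nabla_3 I(\mathbf{x}+\mathbf{w}) - \nabla_3 I(\mathbf{x}) \right|^2 \right) d\mathbf{x}
       + \alpha \int_{\Omega} \Psi\!\left( \left| \nabla_3 u \right|^2 + \left| \nabla_3 v \right|^2 \right) d\mathbf{x}
```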