# Alumni

### 2018

• J. Rajagopal, “Accurate trajectory tracking on the KUKA youBot manipulator: Computed-torque control and Friction compensation,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2018.
[BibTeX] [Abstract]

Accurate trajectory tracking control is a desirable quality for robotic manipulators. Since manipulators are highly non-linear due to the presence of structured (i.e. model errors) and unstructured uncertainties (i.e. friction), it becomes very difficult to achieve high-precision tracking in the manipulator joints. The most common solution to this problem is the computed-torque control scheme, where the inclusion of the dynamic model linearizes the non-linear system. A disadvantage of this method is that the controller's performance in high-speed operations relies heavily on the accuracy of the dynamic model parameters. Model parameters provided by the manufacturers are generally not accurate, and thus the identification of link parameters becomes necessary. Additionally, the inclusion of friction compensation terms in the dynamic model improves the controller's performance and helps us in achieving better dynamics control of the manipulator. This work considers the implementation of both the identification of the dynamic model parameters and a computed-torque controller. The identification procedure is the continuation of previous research in which the geometric relation semantics and the dynamics of the system needed correction. A real concern in any robotics-based application is the safety of the hardware used (i.e. motors, sensors), because it is generally quite expensive. This work therefore gives the highest priority to the safety of the real system, and the safety control layer monitors the state of the joints with the help of encoder data such as joint positions, velocities and torques. The model-based controller uses the dynamic model of the youBot manipulator, and feed-forward torques are computed by using an inverse dynamics solver based on a library. Two control schemes are considered in this work: the basic and the alternate control. The alternate control method is chosen over the basic method because the basic approach suffers in predicting the model torques due to the presence of inaccurate dynamic model parameters. Both schemes implement the same cascade PI controller, a combination of position and velocity controllers, for the purpose of better disturbance rejection. Controller gains are tuned empirically to the optimum for the most control-wise challenging joints at the end of the kinematic chain. This work reports an analysis of the semantics used by the existing rigid-body algorithms, and these findings are useful in constructing the geometric relation semantics between rigid bodies without introducing logical errors. Then, a safety control check is performed on the real system that handles breaches of the manipulator joints' safety limits effectively, with understandable latency due to the use of a non-real-time operating system. The correctness of the dynamic model is tested with a gravity compensation task, and the pure controller without the dynamic model is validated with analytical trajectories. The friction modelling and compensation experiments are conducted in the basic control scheme implemented in this work. The computed-torque control scheme is evaluated on the joints farthest from the base with the help of analytically formulated trajectories. In spite of using inaccurate model parameters in the dynamic model, the controller tracks the trajectory accurately, with an acceptable tracking error on the manipulator joints.

@MastersThesis{ 2018rajagopal,
abstract  = {Accurate trajectory tracking control is a desirable
quality for the robotic manipulators. Since the
manipulators are highly non-linear due to the presence of
structured (i.e. model errors), and unstructured
uncertainties (i.e. friction), it becomes very difficult to
achieve high-precision tracking in the manipulator joints.
Most common solution to this problem is the computed-torque
control scheme where the inclusion of the dynamic model
linearizes the non-linear system. Disadvantage of this
method is that the controller’s performance in high-speed
operations relies heavily on the accuracy of the dynamic
model parameters. Model parameters provided by the
manufacturers are generally not accurate, and thus the
identification of link parameters becomes necessary.
Additionally, inclusion of friction compensation terms in
the dynamic model improves the controller’s performance,
and helps us in achieving better dynamics control of the
manipulator. This work considers the implementation of both
identification of the dynamic model parameters and
computed-torque controller. The identification procedure
is the continuation of the previous research where the
geometric relation semantics and the dynamics of the system
needed correction. Real concern in any of the robotics
based applications is safety of the hardware used (i.e.
motors, sensors) because it is generally quite
expensive. So, this work gives highest priority regarding
safety of the real system and the safety control layer
monitors the state of the joints with the help of the
encoder data such as joint positions, velocities and
torques. The model-based controller uses the dynamic model
of the youBot manipulator and feed-forward torques are
computed by using the inverse dynamics solver based on a
library. There are two kinds of control schemes considered
in this work such as the basic and the alternate control.
The alternate control method is chosen over the basic
method, because the basic approach suffers in predicting
the model torques due to the presence of inaccurate dynamic
model parameters. Both of the schemes implement the same
cascade PI controller which is the combination of both the
position and velocity controllers for the purpose of better
disturbance rejection. Controller gains are tuned
empirically to the optimum for the most control-wise
challenging joints at the end of the kinematic chain. This
work reports
analysis on the semantics used by the existing rigid-body
algorithms and these findings are useful in constructing
the geometrical relation semantics between rigid bodies
without introducing the logical errors. Then, a safety
control check is performed in the real system that handles
the breaches in the safety limit of the manipulator joints
effectively with understandable latency issues due to
the use of non real-time operating system. The correctness
of the dynamic model is tested with the gravity
compensation task, and then the pure controller without the
dynamic model is validated with the analytical
trajectories. The friction modelling and compensation
experiments are conducted in the basic control scheme
implemented in this work. The computed-torque control
scheme is evaluated on the farthest joints of the base with
the help of the analytically formulated trajectories. In
spite of using the inaccurate model parameters in the
dynamic model, the controller tracks the trajectory
accurately with an acceptable tracking error on the
manipulator joints.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {SS12 FH-BRS - Pl{\"o}ger, Chakirov, Schneider supervising},
author  = {Jeyaprakash Rajagopal},
month = {August},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Accurate trajectory tracking on the KUKA youBot
manipulator: Computed-torque control and Friction
compensation},
year = {2018}
}
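For context, the computed-torque scheme the thesis implements is conventionally written as the control law below; this is the standard textbook form, not necessarily the exact gain structure (cascade PI) used in the thesis.

```latex
\tau \;=\; M(q)\,\bigl(\ddot{q}_d + K_v\,\dot{e} + K_p\,e\bigr) \;+\; C(q,\dot{q})\,\dot{q} \;+\; g(q) \;+\; \hat{\tau}_f(\dot{q}),
\qquad e = q_d - q,
```

Here $M$ is the joint-space inertia matrix, $C(q,\dot{q})\dot{q}$ collects Coriolis and centrifugal torques, $g$ is gravity, and $\hat{\tau}_f$ is an optional friction compensation term. With an exact model, the closed-loop error obeys $\ddot{e} + K_v\,\dot{e} + K_p\,e = 0$, which makes explicit why inaccurate model parameters degrade tracking, as the abstract reports.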

• A. Vinokurov, “Towards improvements on RoboCup @home robots architecture, capabilities and development process,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2018.
[BibTeX] [Abstract]

Domestic robotics is a vast field where a lot of knowledge and team effort is required to build quality software for daily use and for RoboCup competitions. The robot this research is carried out on is the service robot Care-O-bot 3, developed by Fraunhofer IPA and used for research at Hochschule Bonn-Rhein-Sieg. It is a big machine containing 4 different computers and many peripheral devices, including laser scanners, an audio system and a manipulator (which has a dedicated computer). It is fairly complicated to come up with a suitable architecture for such a robot so that it is most efficient in @home scenarios. Partially this is due to the overall complexity of the robot, partially due to deficiencies in the current development process, for example the lack of testing procedures. The goal of this project is to analyse the current robot architecture and modify it so that it is more reliable and easier to maintain by the team. Another goal is to separate ROS framework dependencies from the algorithmic part of the software developed, as well as to provide a template for the package structure that facilitates writing software in such a way that frameworks are easily interchangeable. Good documentation is vital for new members of the team who are not familiar with the development process and tooling, so to ease the on-boarding process everything will be well documented. After the start of this work a new Toyota HSR robot was acquired, featuring its own architecture and a Python API abstraction from the ROS framework. This robot will be looked at in detail from the software point of view to study the way of integrating both robots in one common software space.

@MastersThesis{ 2018vinokurov,
abstract  = {Domestic robotics is a vast field where a lot of knowledge
and team effort is required to build quality software for
daily use and for RoboCup competitions. Current robot that
the research is carried on is the service robot Care-O-bot
3, developed by Fraunhofer IPA and that is used for
research at Hochschule Bonn-Rhein-Sieg. It is a big machine
containing 4 different computers and many peripheral
devices including laser scanners, audio system, a
manipulator (that has a dedicated computer) and other
peripheral devices. It is fairly complicated to come up
with a suitable architecture for such a robot, so it
would be most efficient in @home scenarios. Partially this
is due to the overall complexity of the robot, partially
due to the deficiencies in current development process. For
example the lack of testing procedures. The goal of this
project is to analyse current robot architecture and modify
it in a way so it would be more reliable and easy to
maintain by the team. Another goal is to separate ROS
framework dependencies and the algorithmic part of the
software developed as well as to provide a template for the
package structure that would facilitate writing software in
such a way, so frameworks will be easily interchangeable.
Good documentation is vital for the new members of the
team, who are not familiar with the development process and
tooling, so to ease the on-boarding process everything will
be well documented. After the start of this work a new
Toyota HSR robot was acquired, featuring its own
architecture and Python API abstraction from ROS framework.
This robot will be looked at in detail from the software
point of view to study the way of integrating both robots
in one common software space. },
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {WS14 Pl{\"o}ger, Kraetzschmar, Mitrevski supervising},
author  = {Artem Vinokurov},
month = {July},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Towards improvements on RoboCup @home robots architecture,
capabilities and development process},
year = {2018}
}

• P. S. Vokuda, “Interactive Object Detection,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2018.
[BibTeX] [Abstract]

The success of state-of-the-art object detection methods depends heavily on the availability of a large amount of annotated image data. The raw image data available from various sources are abundant but not annotated. Annotating image data is often costly, time-consuming or needs expert help. In this work, a new paradigm of learning called Active Learning is explored, which uses user interaction to obtain annotations for a subset of the dataset. The goal of active learning is to achieve superior object detection performance with images that are annotated on demand. To realize the active learning method, the trade-off between the effort to annotate unlabelled data (annotation cost) and the performance of the object detection model is minimised. A Random Forest-based method called Hough Forest is chosen as the object detection model, and the annotation cost is calculated as the predicted false positive and false negative rate. The framework is successfully evaluated on two Computer Vision benchmark datasets and two Carl Zeiss custom datasets. Also, an evaluation of RGB, HoG and Deep features for the task is presented. Experimental results show that using Deep features with Hough Forest achieves the maximum performance. By employing Active Learning, it is demonstrated that performance comparable to the fully supervised setting can be achieved by annotating just 2.5% of the dataset. To this end, an annotation tool is developed for user interaction during Active Learning.

@MastersThesis{ 2018vokuda,
abstract  = {The success of state-of-the-art object detection methods
depends heavily on the availability of a large amount of
annotated image data. The raw image data available from
various sources are abundant but non-annotated. Annotating
image data is often costly, time-consuming or needs expert
help. In this work, a new paradigm of learning called
Active Learning is explored which uses user interaction to
obtain annotations for a subset of the dataset. The goal
of active learning is to achieve superior object detection
performance with images that are annotated on demand. To
realize the active learning method, the trade-off between
the effort to annotate unlabelled data (annotation cost)
and the performance of the object detection model is
minimised.
Random Forests based method called Hough Forest is chosen
as the object detection model and the annotation cost is
calculated as the predicted false positive and false
negative rate. The framework is successfully evaluated on
two Computer Vision benchmark and two Carl Zeiss custom
datasets. Also, an evaluation of RGB, HoG and Deep features
for the task is presented.
Experimental results show that using Deep features with
Hough Forest achieves the maximum performance. By employing
Active Learning, it is demonstrated that performance
comparable to the fully supervised setting can be achieved
by annotating just 2.5\% of the dataset. To this end, an
annotation tool is developed for user interaction during
Active Learning.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {WS15 FH-BRS - Pl{\"o}ger, Thiele supervising},
author  = {Priyanka Subramanya Vokuda},
month = {April},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Interactive Object Detection},
year = {2018}
}
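The active-learning loop described above can be illustrated with a small, runnable stand-in. Note the selection criterion here is plain uncertainty sampling with a scikit-learn random forest, an assumption standing in for the thesis' annotation-cost criterion (predicted false positive and false negative rate) and its Hough Forest detector.

```python
# Pool-based active learning: train, pick the most uncertain unlabelled
# samples, "ask the user" for their labels, and retrain.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
labelled = np.arange(50)                 # small initial labelled set
pool = np.arange(50, len(X))             # unlabelled pool

clf = RandomForestClassifier(n_estimators=100, random_state=0)
for round_ in range(5):
    clf.fit(X[labelled], y[labelled])
    proba = clf.predict_proba(X[pool])
    uncertainty = 1.0 - proba.max(axis=1)        # least-confident first
    query = pool[np.argsort(uncertainty)[-25:]]  # samples to annotate
    labelled = np.concatenate([labelled, query])
    pool = np.setdiff1d(pool, query)
    print(f"round {round_}: {len(labelled)} labels, "
          f"pool acc = {clf.score(X[pool], y[pool]):.3f}")
```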

• W. Burgard, Course: Introduction to Mobile Robotics, University of Freiburg, 2018.
[BibTeX]
@Misc{ burgard2018,
author  = {Burgard, W.},
date-modified  = {2018-08-28 10:44:31 +0200},
howpublished  = {http://ais.informatik.uni-freiburg.de/teaching/ss17/robotics/slides/05-prob-intro.pdf},
month = {August},
title = {Course: Introduction to Mobile Robotics, University of
Freiburg},
year = {2018}
}

• S. N. Boris, “An Affordable, Integrated and Digitized System for Sound Insulation Tests in Buildings,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2018.
[BibTeX] [Abstract]

In the field of building acoustics, sound insulation is the capacity of buildings not to let external noises in or sounds from the inside out. This property of buildings is important for the well-being, comfort and privacy of building occupants, and therefore regulations on acceptable sound insulation levels are defined by each country depending on the building type and usage. It is then customary to conduct sound insulation tests on buildings in order to determine their sound insulation capability and verify whether they conform to regulations. In this work, an affordable, integrated, and digitized system for sound insulation tests in buildings is proposed. Affordable, because low-cost alternatives to the conventional measurement equipment and strategies used for sound insulation tests are proposed. Integrated, because all sound insulation test procedures are integrated into a single system that generates test signals, acquires and processes acoustic data, and detects whether or not building acoustics regulations are met. Digitized, because the system proposes to automate the interpretation of sound insulation test results, which is usually done by building acoustics experts, by diagnosing and identifying the potential defects that may cause poor sound insulation and suggesting potential remedies to improve the sound insulation. To achieve an affordable system, this work proposes alternative strategies in order to use a directional loudspeaker as well as a low-precision, non-calibrated microphone as alternatives to the conventional omnidirectional loudspeakers and high-precision, calibrated microphones used in sound insulation tests. Part of the proposed system is implemented, tested and evaluated. The evaluation is done qualitatively by investigating the precision and accuracy of the obtained results in varying environments and conditions, and by analysing how the implemented system compares to other solutions for the calculation of acoustic parameters, namely the free Sound Level Meter mobile phone application for Sound Pressure Level calculation, and Norsonic's proprietary building acoustics software for Sound Pressure Level and reverberation time calculation. Generally, the results of the implemented system show closeness to those of the reference solutions, and the main differences and discrepancies are shown to be caused by the choice of the equipment. To digitize the interpretation of sound insulation test results, a proposition is made to make use of the mass law, which states that the sound reduction of a building element increases by 6 dB as its mass density or the sound frequency is doubled. It is shown that a number of features like the Schroeder frequency, the coincidence frequency, the mass density, and the damping level of the building element under test can be extracted from the mass law and subsequently used to categorize and diagnose which defects are present. Two commonly encountered defects in building acoustics, the weak-wall and the air-leakage defects, are particularly investigated and discussed in the light of experiments, and an agreement with the mass law is observed, hence supporting the proposition that it can be used to digitize the interpretation of results.

@MastersThesis{ 2018ndimubanzi,
abstract  = { In the field of building acoustics, sound insulation is
the capacity of buildings not to let external noises in or
sounds from the inside out. This property of buildings is
important for the well-being, comfort and privacy of
building occupants, and therefore regulations on acceptable
sound insulation levels are defined by each country
depending on the building type and usage. It is then
customary to conduct sound insulation tests on buildings in
order to determine their sound insulation capability and
verify whether they conform to regulations.
In this work, an affordable, integrated, and digitized
system for sound insulation tests in buildings is proposed.
Affordable, because low-cost alternatives to the
conventional measurement equipment and strategies used for
sound insulation tests are proposed. Integrated, because
all sound insulation test procedures are integrated into a
single system that generates test signals, acquires and
processes acoustic data, and detects whether or not
building acoustics regulations are met. Digitized, because
the system proposes to automate the interpretation of sound
insulation test results, which is usually done by building
acoustics experts, by diagnosing and identifying the
potential defects that may cause poor sound insulation and
suggesting potential remedies to improve the sound
insulation.
To achieve an affordable system, this work proposes
alternative strategies in order to use a directional
loudspeaker as well as a low precision and non-calibrated
microphone as alternatives to the conventional
omnidirectional and high precision and calibrated
microphones used in sound insulation tests. Part of the
proposed system is implemented, tested and evaluated. The
evaluation is done qualitatively by investigating the
precision and accuracy of the obtained results in varying
environments and conditions, and by analysing how the
implemented system compares to other solutions for the
calculation of acoustic parameters namely the Sound Level
Meter free mobile phone application for Sound Pressure
Level calculation, and Norsonic's building acoustics
proprietary software for Sound Pressure Level and
reverberation time calculation. Generally the results of
the implemented system show closeness to those of the
reference solutions, and the main differences and
discrepancies are shown to be caused by the choice of the
equipment.
To digitize the interpretation of sound insulation tests
results, a proposition is made to make use of the mass-law
which states that the sound reduction of a building element
increases by $6 dB$ as its mass density or the sound
frequency is doubled. It is shown that a number of features
like the Schroeder frequency, the coincidence frequency,
the mass density, and the damping level of the building
element under test can be extracted from the mass-law and
subsequently used to categorize and diagnose which defects
are present. Two commonly encountered defects in building
acoustics, the weak wall and the air-leakage defects, are
particularly investigated and discussed in the light of
experiments, and an agreement with the mass-law is observed
hence supporting the proposition that it can be used to
digitize the interpretation of results. },
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {SS15 measX GmbH Pl{\"o}ger, Kraetzschmar, Hilsmann
supervising},
author  = {Senga Ndimubanzi Boris},
month = {August},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {An Affordable, Integrated and Digitized System for Sound
Insulation Tests in Buildings},
year = {2018}
}
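The mass law invoked in this abstract is commonly approximated, for the airborne sound reduction index $R$ of a single homogeneous panel, as

```latex
R \;\approx\; 20\,\log_{10}\!\left(m\,f\right) \;-\; C \quad [\mathrm{dB}],
```

where $m$ is the surface mass density in kg/m², $f$ the frequency in Hz, and $C$ a constant that depends on the incidence assumption (roughly 42 dB at normal incidence, often quoted as about 47 dB for field incidence). Doubling either $m$ or $f$ adds $20\log_{10}2 \approx 6\,\mathrm{dB}$, which is exactly the 6 dB rule the thesis builds its diagnosis features on.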

• M. Naazare, “Simultaneous Exploration and Information Delivery Using Multiple Heterogeneous Robots Under Limited Communication,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2018.
[BibTeX] [Abstract]

After a disaster, there is an urgent need to deliver humanitarian aid such as emergency medical treatment and the supply of food and medicines to the affected regions. Since disasters can cause existing maps to change, reaching the affected regions immediately can be extremely challenging. In this work, the problem of exploring an unknown environment using a team of Unmanned Aerial Vehicles (UAVs) and Unmanned Ground Vehicles (UGVs) to discover traversable regions for emergency response vehicles is investigated. The goal of the exploration is to ensure that the explored information reaches the base station despite its strictly limited communication range, which allows no inter-robot communication outside the range. For the given exploration scenario, the key issue addressed is the coordination of multiple robots. The proposed Simultaneous Exploration and Information Delivery (SEAID) planner operates as a centralised structure and assigns target positions to the Multi-Robot System (MRS) to explore, and to report the gathered information periodically. It deploys UAVs to perform Boustrophedon motions while UGVs perform frontier-based exploration. As UAVs return periodically, more information about the environment is revealed and the planner switches from frontier-based planning to grid-based coverage for the UGVs, where it assigns a grid-like pattern of target positions. To ensure coordination, the planner predicts the influence of other robots, due to their presence and their respective goals, on the new information that can be gained by a robot. Additionally, it employs a divide-and-rule strategy and assigns robots their respective regions to carry out their tasks. To evaluate the performance of different coordination strategies and observe the behaviour of the MRS, a Robot Operating System-based benchmark called SEAID-Sim was developed to simulate the given scenario. The proposed SEAID planner was tested against existing techniques for multi-robot exploration and multi-robot coverage. The results demonstrate that the SEAID planner explores the complete environment and discovers all the obstacles and the traversable and non-traversable regions for the emergency response vehicles faster than existing techniques.

@MastersThesis{ 2018naazare,
abstract  = {After a disaster, there is an urgent need to deliver
humanitarian aid such as emergency medical treatment,
supply of food and medicines to the affected regions. Since
disasters can cause existing maps to change, reaching the
affected regions immediately can be extremely challenging.
In this work, the problem of exploring an unknown
environment using a team of Unmanned Aerial Vehicles (UAVs)
and Unmanned Ground Vehicles (UGV) to discover traversable
regions for emergency response vehicles is investigated.
The goal of the exploration is to ensure that the explored
information reaches the base station despite its strict
limited communication range which allows no inter-robot
communication outside the range. For the given exploration
scenario, the key issue addressed is the coordination of
multiple robots.
The proposed Simultaneous Exploration and Information
Delivery (SEAID) planner operates as a centralised
structure and assigns target positions to the Multi-robot
System (MRS) to explore, and report the gathered
information periodically. It deploys UAVs to perform
Boustrophedon motions while UGVs perform frontier-based
exploration. As UAVs return periodically, more information
about the environment is revealed and the planner switches
from frontier-based planning to grid-based coverage for the
UGVs where it assigns grid-like pattern of target
positions. To ensure coordination, the planner predicts the
influence of other robots due to their presence and their
respective goals on the new information that can be gained
for a robot. Additionally, it employs a divide and rule
strategy and assigns robots their respective regions to
carry out their tasks.
To evaluate the performance of different coordination
strategies and observe the behaviour of the MRS, a Robot
Operating System-based benchmark called SEAID-Sim was
developed to simulate the given scenario. The proposed
SEAID planner was tested against existing techniques for
multi-robot exploration and multi-robot coverage. The
results demonstrate that SEAID planner explores the
complete environment, discovers all the obstacles,
traversable and non-traversable regions for the emergency
response vehicles faster than existing techniques.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {[SS 2014] [Becker], [Alda], [Br{\"u}ggemann]},
author  = {Menaka Naazare},
month = {June},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Simultaneous Exploration and Information Delivery Using
Multiple Heterogeneous Robots Under Limited Communication},
year = {2018}
}
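The frontier-based exploration assigned to the UGVs rests on one simple primitive: a frontier is a free cell adjacent to unexplored space. A minimal sketch on an occupancy grid follows; the cell coding (0 = free, 1 = occupied, -1 = unknown) is an illustrative assumption, not the thesis' representation.

```python
import numpy as np

def frontier_cells(grid):
    """Free cells with at least one 4-connected unknown neighbour."""
    rows, cols = grid.shape
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != 0:          # only free cells can be frontiers
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == -1:
                    frontiers.append((r, c))
                    break
    return frontiers

grid = np.full((6, 6), -1)   # everything unknown...
grid[2:4, 1:5] = 0           # ...except an explored free corridor
print(frontier_cells(grid))  # cells bordering the unknown area
```

A planner then sends each robot to the frontier that maximises expected new information, discounted by the predicted influence of the other robots' goals, as the abstract describes.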

• L. O. A. Camargo, “Artificial Transfer Learning in Convolutional Neural Networks,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2018.
[BibTeX] [Abstract]

Recent studies in transfer learning with synthetic data have shown significant results in the context of deep neural networks. However, most of these developments have not been thoroughly studied for the tasks of object detection and image classification. To research the learning transfer ability of artificial neural networks for these specific tasks, we generated over 1M synthetic images. These images were rendered from 35K 3D models extracted from the ShapeNet dataset. We trained several convolutional neural network architectures to perform classification and object detection using our synthetic datasets. We then performed transfer learning by fine-tuning the networks with real images extracted from the COCO and VOC2007/2012 datasets. In order to provide statistically significant results, we trained over 1K CNNs, and we show that networks pre-trained with synthetic data are able to obtain higher accuracies and mAPs than networks trained from scratch. Thus, we show empirical evidence that inexpensive synthetic data can be used to improve general classification and detection tasks in deep learning models. Furthermore, we are able to leverage this result in order to train a real-time CPU object detector under the single-shot multibox detector scheme. Finally, we also provide a self-contained deep learning review that allows us to revisit several hypotheses made in modern CNN architectures.

@MastersThesis{ 2018arriaga,
abstract  = {Recent studies in transfer learning with synthetic data
have shown significant results in the context of deep
neural networks. However, most of these developments have
not been thoroughly studied for the tasks of object
detection and image classification. To research the
learning transfer ability of artificial neural networks for
these specific tasks, we generated over 1M synthetic
images. These images were rendered from 35K 3D models
extracted from the ShapeNet dataset. We trained several
convolutional neural network architectures to perform
classification and object detection using our synthetic
datasets. We then performed transfer learning by
fine-tuning the networks with real images extracted from
the COCO and VOC2007/2012 datasets. In order to provide
statistically significant results, we trained over 1K CNNs
and we show that networks pre-trained with synthetic data
are able to obtain higher accuracies and mAPs than the
networks trained from scratch. Thus, we show empirical
evidence that inexpensive synthetic data can be used to
improve general classification and detection tasks in deep
learning models. Furthermore, we are able to leverage
this result in order to train a real-time CPU object
detector under the single-shot multibox detector scheme.
Finally we also provide a self-contained deep learning
review that allows us to revisit several hypotheses made in
modern CNN architectures.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {WS15/16 Pl{\"o}ger, Asteroth, Valdenegro-Toro supervising},
author  = {Luis Octavio Arriaga Camargo},
month = {February},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Artificial Transfer Learning in Convolutional Neural
Networks},
year = {2018}
}
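The transfer-learning step itself is conceptually simple: load synthetic-pretrained weights, swap the classifier head, and fine-tune on real images. A hedged PyTorch sketch, using a torchvision ResNet and a hypothetical checkpoint path purely for illustration (the thesis' actual architectures and schedules differ):

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18()   # stand-in for a CNN pre-trained on synthetic data
# model.load_state_dict(torch.load("synthetic_pretrained.pth"))  # hypothetical
model.fc = nn.Linear(model.fc.in_features, 20)   # e.g. VOC's 20 classes

# Fine-tune: small learning rate for pre-trained layers, larger for the head.
optimizer = torch.optim.SGD([
    {"params": [p for n, p in model.named_parameters()
                if not n.startswith("fc")], "lr": 1e-4},
    {"params": model.fc.parameters(), "lr": 1e-2},
], momentum=0.9)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)     # dummy batch of "real" images
labels = torch.randint(0, 20, (8,))
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```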

• S. Kannaiah, “Weakly Supervised Object Detection,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2018.
[BibTeX] [Abstract]

Creating object detection datasets for supervised training is very arduous and time-consuming, since they require class-level labels as well as bounding box annotations. Thus, to mitigate this problem, weakly supervised object detection is necessary, where there are only class-level labels and no bounding box annotations. Most of the existing methods for weakly supervised object detection concentrate on just post-processing the output of object proposal algorithms without really understanding the merits or demerits of the object proposal algorithms used. Thus the existing methods fail to detect multiple objects in the scene. Our method takes a holistic approach by first analyzing the factors that improve object localization using CNNs, which is one of the primary contributions of our work. With these findings, a new CNN-based object proposal algorithm is proposed that has the highest recall rates compared to other object proposal algorithms on the Pascal VOC dataset, with 0.8, 0.95 and 0.96 for 100, 1000 and 1805 object proposals respectively at an IoU threshold of 50%. This new CNN-based object proposal algorithm is our second contribution. Finally, a pipeline was built around this object proposal algorithm to do object detection in a weakly supervised manner. Our proposed method excels where other methods perform poorly, especially when there are multiple objects in the scene. Our method for weakly supervised object detection achieves second place among non-end-to-end methods and third amongst all methods doing weakly supervised object detection.

@MastersThesis{ 2018kannaiah,
abstract  = {Creating object detection datasets for supervised training
is very arduous and time consuming since they require class
level labels as well as bounding box annotations. Thus to
mitigate the problem weakly supervised object detection is
necessary where there are only class level labels and no
bounding box annotations. Most of the existing methods for
weakly supervised object detection concentrate on just
post-processing the output of object proposal algorithms
without really understanding the merits or demerits of the
object proposal algorithms used. Thus the existing methods
fail to detect multiple objects in the scene. Our method
takes a holistic approach, by first analyzing the factors
that improve object localization using CNNs, which is one
of primary contributions of our work. With these findings a
new CNN based object proposal algorithm is proposed and has
the highest recall rates compared to other object proposal
algorithms on the Pascal VOC dataset with 0.8, 0.95 and
0.96 for 100, 1000 and 1805 object proposals respectively
at an IoU threshold of 50\%. This new CNN
based object proposal algorithm was our second
contribution. Finally a pipeline was built around this
object proposal algorithm to do object detection in a
weakly supervised manner. Our proposed method excels where
other methods perform poorly, especially when there are
multiple objects in the scene. Our method for weakly
supervised object detection achieves a second place in non
end-to-end methods and third amongst all the methods doing
weakly supervised object detection.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {WS2015, Towards Weakly Supervised Object Detection,
Ploeger, Breuer, Valdenegro supervising},
author  = {Saikiran Kannaiah},
month = {July},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Weakly Supervised Object Detection},
year = {2018}
}
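The recall figures quoted above follow the standard proposal-recall definition: a ground-truth box counts as recalled if at least one proposal overlaps it with IoU at or above the threshold. A small self-contained sketch (boxes as (x1, y1, x2, y2) tuples; not the thesis' code):

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def recall(gt_boxes, proposals, thresh=0.5):
    hits = sum(any(iou(g, p) >= thresh for p in proposals) for g in gt_boxes)
    return hits / float(len(gt_boxes))

gt = [(10, 10, 50, 50), (60, 60, 90, 90)]
props = [(12, 8, 48, 52), (0, 0, 30, 30)]
print(recall(gt, props))   # 0.5: only the first ground-truth box is covered
```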

• S. A. N. Khan, “Path optimization for a VR application using local search,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2018.
[BibTeX] [Abstract]

In this thesis, an algorithm is proposed and implemented to solve Time-Dependent Shortest Path Problems (TDSPP) within an NP-hard Traveling Salesman Problem (TSP). Local search algorithms are useful in solving such problems, and various local search algorithms are tested in this thesis. Particle Swarm Optimization (PSO) is chosen as the most suitable method to solve TSPs and is adapted to solve a TDSPP within a TSP. To do this with the least changes to the original path, various methods are introduced. In the end, it is concluded that in a TDSPP within a TSP of 30 cities, the implemented algorithm finds an optimal solution 99% of the time, with minimal variation from the original path, within a 1200-millisecond processing time limit if a solution exists. A space ride simulation synchronized with a massage program is used as an example scenario.

@MastersThesis{ 2018khan,
abstract  = {In this thesis, an algorithm is proposed and implemented
to solve Time Dependent Shortest Path Problems (TDSPP)
within an NP-hard Traveling Salesman Problem (TSP). Local
search algorithms are useful in solving such problems.
Various local search algorithms are tested in this thesis.
Particle Swarm Optimization (PSO) is chosen as the most
suitable method to solve TSPs. PSO is adapted to solve
TDSPP within a TSP. To do this with the least changes in
the original path, various methods are introduced. In the
end, it is concluded that in a TDSPP within a TSP of 30
cities, the implemented algorithm finds an optimal solution
99% of the time with minimal variation from the original
path within a 1200-millisecond processing time limit if a
solution exists. A space ride simulation synchronized with
a massage program is used as an example scenario.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {SS13 N/A - N/A Heiden, Kraetzschmar, AwaadLeon
supervising},
author  = {Sardar Adnan Nawaz Khan},
month = {July},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Path optimization for a VR application using local
search},
year = {2018}
}
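PSO is originally a continuous-space method; applying it to tour problems requires a discrete analogue of velocity. A common choice, sketched below under the assumption that the thesis used something similar, is the swap-sequence formulation: a velocity is a list of index swaps pulling a tour towards the personal and global bests. The time-dependent costs and path-similarity constraints of the thesis are not reproduced here.

```python
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def swaps_towards(current, target):
    """Swap sequence that transforms `current` into `target`."""
    cur, swaps = list(current), []
    for i in range(len(cur)):
        if cur[i] != target[i]:
            j = cur.index(target[i])
            swaps.append((i, j))
            cur[i], cur[j] = cur[j], cur[i]
    return swaps

def pso_tsp(dist, n_particles=30, iters=200, c1=0.5, c2=0.5):
    n = len(dist)
    particles = [random.sample(range(n), n) for _ in range(n_particles)]
    pbest = list(particles)
    gbest = min(pbest, key=lambda t: tour_length(t, dist))
    for _ in range(iters):
        for k, x in enumerate(particles):
            # Keep each attracting swap with probability c1 / c2.
            vel = [s for s in swaps_towards(x, pbest[k]) if random.random() < c1]
            vel += [s for s in swaps_towards(x, gbest) if random.random() < c2]
            x = list(x)
            for i, j in vel:
                x[i], x[j] = x[j], x[i]
            particles[k] = x
            if tour_length(x, dist) < tour_length(pbest[k], dist):
                pbest[k] = x
        gbest = min(pbest, key=lambda t: tour_length(t, dist))
    return gbest

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(12)]
dist = [[((a[0] - b[0])**2 + (a[1] - b[1])**2) ** 0.5 for b in pts] for a in pts]
print(pso_tsp(dist))
```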

• N. R. Koripalli, “Strategies to pre-train deep models for efficient multi-task fine-tuning,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2018.
[BibTeX] [Abstract]

Deep Convolutional Neural Networks have made incredible leaps in the realm of image classification, but they do not lend themselves to multi-task scenarios. Although they generalize quite well for the dataset that they are trained on, they do not generalize well for other image classification tasks where the dataset is not in the same domain as the one they were trained on. Transfer learning merely shifts the starting weights of the DCNN to a location that is biased towards the information within one dataset. The reason transfer learning works is that most datasets have similar information content to the original dataset that the DCNN was trained on. The problem is not with the DCNNs themselves but with the optimizers, which have fixed hyper-parameters. We apply principles of meta-learning, where the gradients of gradients are learned in the optimization process and sometimes the parameters of the optimizers are learned, giving the DCNN the freedom to learn weights that lend themselves to being adapted to new tasks much more readily. We show that such a process can lead to reduced fine-tuning times for unseen tasks and datasets, and that the model can handle scenarios where the dataset is quite small.

@MastersThesis{ 2018koripalli,
abstract  = {Deep Convolutional Neural Networks have made incredible
leaps in the realm of image classification but they do not
lend themselves to multi-task scenarios. Although they
generalize quite well for the dataset that they are trained
on, they do not generalize well for other image
classification tasks where the dataset is not in the same
domain as the dataset that it has been trained on. Transfer
learning merely shifts the starting weights of the DCNN to
a location that is biased towards the information within
one dataset. The reason transfer learning works is because
most datasets have similar information content as the
original dataset that the DCNN was trained on. The problem
is not with the DCNNs themselves but with the optimizers that
have fixed hyper-parameters. We apply principles of
meta-learning, where the gradients of gradients are learned
in the optimization process and sometimes the parameters of
the optimizers are learned giving the DCNN the freedom to
learn weights that lend themselves to being adapted to new
tasks much more readily. We show that such a process can
lead to reduced fine-tuning times for unseen tasks and
datasets and the model can handle scenarios where the
dataset is quite small.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {SS15 BRSU - Recogizer GmbH Pl{\"o}ger, Hinkenjann, Zimmerling
supervising},
author  = {Nitish Reddy Koripalli},
month = {July},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Strategies to pre-train deep models for efficient
multi-task fine-tuning},
year = {2018}
}
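The "gradients of gradients" idea can be made concrete with a minimal MAML-style loop; the specific algorithm is an assumption here, since the abstract does not name one. The essential point is `create_graph=True`: the inner fine-tuning step stays differentiable, so the outer update optimises the initial weights for how well they adapt.

```python
import torch

w = torch.randn(5, requires_grad=True)       # shared initial weights
opt = torch.optim.SGD([w], lr=0.01)

def task_loss(weights, seed):
    """Toy regression task; each seed stands in for one dataset/task."""
    g = torch.Generator().manual_seed(seed)
    x = torch.randn(16, 5, generator=g)
    y = torch.randn(16, generator=g)
    return ((x @ weights - y) ** 2).mean()

for step in range(100):
    meta_loss = 0.0
    for task in range(4):
        inner = task_loss(w, task)
        # Differentiate through the inner gradient step (second-order term).
        (g_inner,) = torch.autograd.grad(inner, w, create_graph=True)
        w_adapted = w - 0.1 * g_inner        # one inner fine-tuning step
        meta_loss = meta_loss + task_loss(w_adapted, task + 100)
    opt.zero_grad()
    meta_loss.backward()
    opt.step()
```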

• A. Ajmera, “Estimation of Prediction Uncertainty for Semantic Scene Labeling Using Bayesian Approximation,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2018.
[BibTeX] [Abstract]

With the advancement in technology, autonomous and assisted driving are close to becoming reality. A key component of such systems is the understanding of the surrounding environment. This understanding of the environment can be attained by performing semantic labeling of the driving scenes. Over the years, deep learning based models have been developed that outperform classical image processing algorithms for the task of semantic labeling. However, the existing models only produce semantic predictions and do not provide a measure of uncertainty about those predictions. Hence, this work focuses on developing a deep learning based semantic labeling model that can produce semantic predictions and their corresponding uncertainties. Autonomous driving needs a real-time operating model; however, the Full Resolution Residual Network (FRRN) [4] architecture, which was found to be the best-performing architecture during the literature search, is not able to satisfy this condition. Hence, a small network similar to FRRN has been developed and used in this work. Based on the work of [13], the developed network is then extended by adding dropout layers, and the dropouts are used during testing to perform approximate Bayesian inference. The existing works on uncertainties do not have quantitative metrics to evaluate the quality of uncertainties estimated by a model. Hence, the area under the curve (AUC) of the receiver operating characteristic (ROC) curve is proposed and used as an evaluation metric in this work. Further, a comparative analysis of the influence of dropout layer position, drop probability and the number of samples on the quality of uncertainty estimation is performed. Finally, based on the insights gained from the analysis, a model with an optimal configuration of dropout is developed. It is then evaluated on the Cityscapes dataset and shown to outperform the baseline model, with an AUC-ROC of about 90% versus about 80% for the latter.

@MastersThesis{ 2018ajmera,
abstract  = {With the advancement in technology, autonomous and
assisted driving are close to being reality. A key
component of such systems is the understanding of the
surrounding environment. This understanding about the
environment can be attained by performing semantic labeling
of the driving scenes. Existing deep learning based models
have been developed over the years that outperform
classical image processing algorithms for the task of
semantic labeling. However, the existing models only
produce semantic predictions and do not provide a measure
of uncertainty about the predictions. Hence, this work
focuses on developing a deep learning based semantic
labeling model that can produce semantic predictions and
their corresponding uncertainties.
Autonomous driving needs a real-time operating model,
however the Full Resolution Residual Network (FRRN) [4]
architecture, which is found as the best performing
architecture during literature search, is not able to
satisfy this condition. Hence, a small network, similar to
FRRN, has been developed and used in this work. Based on
the work of [13], the developed network is then extended by
adding dropout layers and the dropouts are used during
testing to perform approximate Bayesian inference. The
existing works on uncertainties do not have quantitative
metrics to evaluate the quality of uncertainties estimated
by a model. Hence, the area under curve (AUC) of the
receiver operating characteristic (ROC) curves is proposed
and used as an evaluation metric in this work. Further, a
comparative analysis about the influence of dropout layer
position, drop probability and the number of samples, on
the quality of uncertainty estimation is performed.
Finally, based on the insights gained from the analysis, a
model with optimal configuration of dropout is developed.
It is then evaluated on the Cityscapes dataset and shown to
be outperforming the baseline model with an AUC-ROC of
about 90%, while the latter having AUC-ROC of about 80%.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {WS15/16, Fraunhofer IAIS Ploeger, Herpers, Eickeler
supervising},
author  = {Anand Ajmera},
month = {January},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Estimation of Prediction Uncertainty for Semantic Scene
Labeling Using Bayesian Approximation},
year = {2018}
}
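The Bayesian approximation in question is Monte Carlo dropout: leave dropout active at test time, run several stochastic forward passes, take the mean as the prediction and the spread as the uncertainty. A minimal sketch (a tiny classifier stands in for the FRRN-like segmentation network of the thesis):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(),
                      nn.Dropout(p=0.5), nn.Linear(64, 3))
model.eval()
for m in model.modules():            # re-enable only the dropout layers
    if isinstance(m, nn.Dropout):
        m.train()

x = torch.randn(1, 10)
with torch.no_grad():
    samples = torch.stack([model(x).softmax(dim=-1) for _ in range(30)])

mean = samples.mean(dim=0)               # averaged class probabilities
entropy = -(mean * mean.log()).sum()     # predictive entropy as uncertainty
print(mean, entropy)
```

Thresholding such an uncertainty score against whether the mean prediction is actually correct is what yields the ROC curves, and hence the AUC-ROC metric, used in the evaluation.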

### 2017

• C. Quignon, “Simultaneous Estimation of Rewards and Dynamics in Continuous Environments,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2017.
[BibTeX] [Abstract]

The field of robotics is steadily progressing into more diverse settings and increasingly complex tasks. Often, the exact environment that a robot will be applied in is not known, nor is it possible to accurately specify solutions to the desired tasks. Instead of defining a rule-based solution, Inverse Reinforcement Learning (IRL) provides an approach to learn a task from expert demonstrations by estimating the reward function of a Markov Decision Process. This reward function can be used to infer a policy solving the task even in new environments. The current state of the art in research often assumes the dynamics of the environment to be given. This assumption can hardly be satisfied in real-world problems. This thesis introduces a novel approach to simultaneously estimate the reward function and the dynamics from a limited set of expert demonstrations in continuous state and action spaces. The approach is compared to Relative Entropy IRL (REIRL) with a naive dynamics estimate. The experimental evaluation showed that the simultaneous estimation of the dynamics is more accurate than the naive, fixed estimate. Since both the rewards and the dynamics influence the policy, it is plausible that a more expressive reward function can counteract inaccuracies of the dynamics estimate. To test this, the reward function is modified with additional features. Although both approaches can benefit from additional features, a careful choice of the features and initial values is required. The additional degree of freedom from the added features may result in a worse dynamics estimate, overfitting or an ambiguous reward function. A theoretical explanation of the latter is also provided.

@MastersThesis{ 2017quignon,
abstract  = {The field of robotics is steadily progressing into more
diverse settings and increasingly complex tasks. Often, the
exact environment that a robot will be applied in is not
known nor is it possible to accurately specify solutions to
the desired tasks. Instead of defining a rule-based
solution, Inverse Reinforcement Learning (IRL) provides an
approach to learn a task from expert demonstrations by
estimating the reward function of a Markov Decision
Process. This reward function can be used to infer a policy
solving the task even in new environments. The current
state of the art in research often assumes the dynamics of
the environment to be given. This assumption can hardly be
satisfied in real world problems.
This thesis introduces a novel approach to simultaneously
estimate the reward function and the dynamics from a
limited set of expert demonstrations in continuous state
and action spaces. The approach is compared to Relative
Entropy IRL (REIRL) with a naive dynamics estimate. The
experimental evaluation showed that the simultaneous
estimation of the dynamics is more accurate than the naive,
fixed estimate. Since both the rewards and the dynamics
influence the policy, it is plausible that a more
expressive reward function can counteract inaccuracies of
the dynamics estimate. To test this, the reward function is
modified with additional features. Although both approaches
can benefit from additional features, a careful choice of
the features and initial values is required. The additional
degree of freedom from the added features may result in a
worse dynamics estimate, overfitting or an ambiguous
reward function. A theoretical explanation of the latter is
also provided.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {SS14 Robert Bosch GmbH Herman, Kraetzschmar, M{\"u}ller,
supervising},
author  = {Christophe Quignon},
month = {February},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Simultaneous Estimation of Rewards and Dynamics in
Continuous Environments},
year = {2017}
}
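For orientation, the REIRL baseline mentioned above is commonly stated as follows (one common formulation, not a quote from the thesis): choose a trajectory distribution $P$ close to a baseline distribution $Q$ while matching the expert's empirical feature expectations,

```latex
\min_{P}\; D_{\mathrm{KL}}\!\left(P \,\middle\|\, Q\right)
\quad \text{s.t.} \quad
\left|\, \mathbb{E}_{\tau \sim P}\!\left[\phi_i(\tau)\right] - \hat{\phi}_i^{\,\mathrm{expert}} \,\right| \le \epsilon_i \;\; \forall i,
```

with the reward recovered as a linear combination $R_w(\tau) = w^{\top}\phi(\tau)$ of the trajectory features, the weights $w$ arising as Lagrange multipliers of the matching constraints. Evaluating these expectations requires a model of the dynamics, which is the part the thesis proposes to estimate simultaneously with the reward.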

• Y. Youssef, C. Hebbal, A. Drak, P. G. Plöger, and A. Kuestenmacher, “Model-Based Remote Diagnosis of Motion Faults on an Omnidirectional Robot via Structural Analysis,” in 28th International Workshop on Principles of Diagnosis (DX), Brescia, Italy, 2017.
[BibTeX]
@InProceedings{ youssef2017,
author  = {Youssef, Y. and Hebbal, C. and Drak, A. and Pl{\"o}ger, P.
G. and Kuestenmacher, A.},
booktitle  = {28th International Workshop on Principles of Diagnosis
(DX)},
date-modified  = {2018-01-02 10:15:14 +0000},
title = {Model-Based Remote Diagnosis of Motion Faults on an
Omnidirectional Robot via Structural Analysis},
year = {2017}
}

• M. Schoebel, “Deep Learning for Recognition of Daily-Living Actions,” Master Thesis, Rheinaustraße 32, 53225 Bonn, Germany, 2017.
[BibTeX] [Abstract]

With the world's population growing older and caring facilities having reached their capacities already today, the need for assistive (robotic) systems is especially critical in elderly care. In order to aid, an autonomous assistive system first needs to know in what form and when help is required. One way to achieve this is to recognize the actions currently performed by a human user and offer assistance accordingly. This work studies temporal order verification, a quasi-unsupervised pre-training method for improving the recognition of daily-living actions from video data, using the C3D deep convolutional neural network. In particular, we evaluate temporal order verification as a means to incorporate motion sensitivity in 3D convolutional neural networks, which otherwise treat the temporal evolution of a video and the spatial dimension of video frames equally. Since pre-training a network in an unsupervised way does not require labelled data, this method can be especially beneficial for recognition tasks where large amounts of labelled data are sparse. We focus on the Charades dataset, which features a unique yet small amount of mundane daily-living action videos and therefore enables the design of vision-based systems that can be deployed in real-world applications such as assistive robotics. Our C3D implementation yields an accuracy of 44.57% on the UCF101 standard action recognition benchmark and a mean average precision of 9.01% for recognizing daily-living actions of the Charades dataset when trained from randomly initialized weights. By pre-training the model with temporal order verification, we were not able to improve classification results; in fact, the pre-trained weights turned out to impair the network's performance.

@MastersThesis{ 2017schoebel,
abstract  = { With the world's population growing older and caring
facilities having reached their capacities already today,
the need for assistive (robotic) systems is specifically
critical in elderly care. In order to aid, an autonomous
assistive system first needs to know in what form and when
help is required. One way to achieve this is to recognize
the currently performed actions by a human user and offer
assistance accordingly. This work studies \textit{temporal
order verification}, a quasi unsupervised pre-training
method for improving the recognition of daily living
actions from video data, using the \textit{C3D} deep
convolutional neural network. In particular, we evaluate
\textit{temporal order verification} as a means to
incorporate motion sensitivity in 3D convolutional neural
networks, which otherwise treat the temporal evolution of a
video and the spatial dimension of video frames equally.
Since pre-training a network in an unsupervised way does
not require labelled data, this method can be specifically
beneficial for recognition tasks where large amounts of
labelled data are sparse. We focus on the Charades dataset,
which features a unique yet small amount of mundane
daily-living action videos and therefore enables the design
of vision-based systems, which can be deployed in
real-world applications such as assistive robotics. Our
\textit{C3D} implementation yields an accuracy of $44.57\%$
on the UCF101 standard action recognition benchmark and a
mean average precision of $9.01\%$ for recognizing
daily-living actions of the Charades dataset, when trained
from randomly initialized weights. By pre-training the
model with \textit{temporal order verification}, we were
not able to improve classification results, in fact the
pre-trained weights turned out to impair the network's
performance.},
address  = {Rheinaustraße 32, 53225 Bonn, Germany},
annote  = {WS 2015/16, RoboLand Project, Prassler, Pl{\"o}ger
supervising},
author  = {Maximilian Schoebel},
month = {October},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Deep Learning for Recognition of Daily-Living Actions},
year = {2017}
}
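Temporal order verification turns unlabelled video into a supervised pretext task: sample a short tuple of frames, keep or shuffle its order, and train the network to classify "ordered vs. shuffled". A sketch of the sample construction (tuple length and sampling details follow the literature, not necessarily the exact thesis setup):

```python
import random

def order_verification_sample(video_frames, tuple_len=3):
    """Return (frame_tuple, label): 1 = temporally ordered, 0 = shuffled."""
    start = random.randrange(len(video_frames) - tuple_len + 1)
    frames = video_frames[start:start + tuple_len]
    if random.random() < 0.5:
        return frames, 1
    shuffled = frames[:]
    while shuffled == frames:        # force a genuinely different order
        random.shuffle(shuffled)
    return shuffled, 0

video = [f"frame_{i:03d}" for i in range(100)]  # stand-in for decoded frames
print(order_verification_sample(video))
```

Because solving the pretext task requires noticing how content moves between frames, the pre-trained weights are expected to encode motion sensitivity, which is precisely the property evaluated (with negative outcome) in this thesis.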

• P. Lukin, “Comparison of controller auto-tuning methods for manipulator axis,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2017.
[BibTeX] [Abstract]

Robotic arms have a variety of applications in industry and can be programmed to perform specific tasks. The control of robotic arms is usually reduced to end-effector trajectory following, a task that requires position control of all manipulator joints. The current standard control algorithm is the PID controller and its various modifications. The performance of the controller is governed by a set of parameters that are specified during the tuning process. These parameters can drastically change the behavior of the system and must be chosen correctly in order to meet the demanded controller performance. There are numerous tuning methods designed to achieve different control-loop qualities. Additionally, a control engineer must be involved every time a robot is deployed. Thus, it is important to investigate which methods can be successfully applied to tune robotic arm joints in an automatic manner. The main goal of this thesis is to investigate which controller auto-tuning method is the most efficient for manipulator axis position control. The object of the research is the robotic arm axis, which includes a joint with an attached load and a control unit. The efficiency of the controller is judged by closed-loop control specifications: transient response, robustness and disturbance rejection. The approach is to design a mathematical model of the manipulator axis and obtain model parameters using system identification methods. Next, linearization and model order reduction are applied to the obtained model, which is used for comparative analysis of existing PID-based controllers and tuning methods. The most theoretically promising algorithms are applied for controller design given the control-loop specification. MATLAB Simulink batch experiments with pseudo-random manipulator parameters are performed to evaluate the tuning algorithms. Finally, the most effective auto-tuning method is validated on a BLDC motor, yielding good results and concurrence with theoretical expectations.

@MastersThesis{ 2017lukin,
author  = {Petr Lukin},
title = {Comparison of controller auto-tuning methods for
manipulator axis},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
year = {2017},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
month = {November},
abstract  = {Robotic arms have a variety of applications in industry
and can be programmed to perform specific tasks. Control
process of robotic arms is usually reduced to end-effector
trajectory following. This task requires position control
of all manipulator joints. The current standard control
algorithm is PID controller and various modifications of
it. Performance of the controller is governed by a set of
parameters that are specified during the tuning process.
These parameters can drastically change behavior of the
system and must be chosen correctly in order to meet
demanded controller performance. There are numerous tuning
methods that were designed to achieve different
control-loop qualities. Additionally, a control engineer
must be involved every time a robot is deployed. Thus, it
is important to investigate which methods can be
successfully applied to tune robotic arm joints in an
automatic manner.
The main goal of this thesis is to investigate what
controller auto-tuning method is the most efficient for
manipulator axis position control. The object of the
research is the robotic arm axis that includes a joint with
an attached load and a control unit. Efficiency of the
controller is based on closed-loop control specifications:
transient response, robustness and disturbance rejection.
The approach is to design a mathematical model of the
manipulator axis and obtain model parameters using system
identification methods. Next, linearization and model order
reduction are applied to the obtained model that is used
for comparative analysis of existing PID-based controllers
and tuning methods. The most theoretically promising
algorithms are applied for controller design given
control-loop specification. MATLAB Simulink batch
experiments with pseudo-random manipulator parameters are
performed to evaluate tuning algorithms. Finally, the most
effective auto-tuning method is validated on a BLDC motor,
yielding good results in agreement with theoretical
expectations.},
annote  = {WS15/16 Synapticon GmbH, Pl{\"o}ger, Chakirov supervising}
}
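
For orientation, the sketch below pairs a textbook discrete PID controller with the classic closed-loop Ziegler-Nichols rule, one representative of the many tuning methods such a comparison covers. It is an illustrative Python sketch under textbook assumptions, not code or results from the thesis.

class PID:
    """Textbook discrete PID controller with fixed sample time dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def ziegler_nichols_pid(ku, tu):
    """Classic closed-loop Ziegler-Nichols rule: derive PID gains from
    the ultimate gain ku and the ultimate oscillation period tu."""
    kp = 0.6 * ku
    ti, td = tu / 2.0, tu / 8.0    # integral and derivative time constants
    return kp, kp / ti, kp * td    # (kp, ki, kd)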

• C. Hebbal, “Development of learning lidar sensor model,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2017.
[BibTeX] [Abstract]

In recent times, academia and industry alike are working intensively on autonomous vehicles, with the intention of making the future of transportation safe and efficient. Some more time is needed before we see fully autonomous vehicles on the road, but the journey towards them has already begun with the introduction of ADAS and HAD functions in vehicles that enable them to drive around autonomously in selected scenarios like parking, highway driving etc. The key requirement of these systems is comprehensive and accurate information about the surrounding environment of the vehicle. This information is usually represented by maps. There exist numerous approaches in the literature to represent the environment; among them, the occupancy grid representation is the most popular and widely used. Occupancy grids tessellate the area to be mapped by means of a finite, evenly spaced grid of cells. Each cell in the grid is associated with a binary variable that specifies the occupancy value of the cell. To construct maps, occupancy grids usually make use of inverse sensor models that transform sensor measurements into occupancy values. Most of the existing grid approaches in the literature use simplified inverse sensor models, which results in maps of lower quality. This is because most of these models do not take into account the uncertainty in measurement and the environmental factors influencing the measurement. Thus, in this thesis we set out to develop a better stochastic inverse sensor model using a classical machine learning technique. We developed a stochastic inverse sensor model for the LIDAR sensor using regression trees. The developed sensor model was evaluated against existing models in simulation and was found to generate maps of better quality. The model was also integrated into the EB robinos framework and evaluated against an existing, highly tuned static model on measurement data. The quality of the maps generated by the developed model was found to be better than that of the static model.

@MastersThesis{ 2017hebbal,
abstract  = {In recent times, academia and industry alike are working
in a big way on autonomous vehicles, with the intention of
making the future of transportation safe and efficient.
Some more time is needed before we see fully autonomous
vehicles on the road, but the journey towards them has
already begun with the introduction of ADAS and HAD
functions in vehicles that enable them to drive around
autonomously in selected scenarios like parking, highway
driving etc. The
key requirement of these systems is comprehensive and
accurate information about the surrounding environment of
the vehicle. This information is usually represented by
maps.
There exist numerous approaches to represent the environment
in the literature; among them, the occupancy grid
representation is the most popular and widely used. Occupancy grids
tessellate the area to be mapped by means of finite evenly
spaced grid of cells. Each cell in the grid is associated
with a binary variable that specifies the occupancy value
of the cell. Usually to construct maps, occupancy grids
make use of inverse sensor models to transform sensor
measurements to occupancy values. Most of the existing grid
approaches in literature make use of simplified inverse
sensor models, which results in generating maps of lower
quality. This is due to the fact that most of these models
do not take into account the uncertainty in measurement and
environmental factors influencing measurement.
Thus, in this thesis we set out to develop a better
stochastic inverse sensor model by using classical machine
learning technique. We developed the stochastic inverse
sensor model for the LIDAR sensor by using regression
trees. The developed sensor model was evaluated against
existing models in simulation and was found to generate
better quality maps as compared to existing models. The
model was also integrated into EB robinos framework and was
evaluated against existing highly tuned static model on
measurement data. The quality of map generated by developed
model was found to be better than the static model.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {WS 15/16 Elektrobit Automotive GmbH – EB Robinos
Pl{\"o}ger, Asteroth, Tollk{\"u}hn supervising},
author  = {Chaitanya Hebbal},
month = {December},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Development of learning lidar sensor model},
year = {2017}
}
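
As background to the abstract above, the sketch below shows the standard log-odds occupancy update with a hand-crafted static inverse sensor model of the kind the thesis replaces with a learned regression-tree model. The increments and hit tolerance are assumed values for illustration; the learned model itself is not reproduced here.

import math

L_OCC, L_FREE, HIT_TOL = 0.85, -0.4, 0.1   # assumed log-odds increments, 10 cm tolerance

def update_cell(log_odds, cell_range, measured_range):
    """Update the log-odds of one cell lying along a LIDAR beam."""
    if cell_range < measured_range - HIT_TOL:
        return log_odds + L_FREE    # beam passed through the cell: likely free
    if abs(cell_range - measured_range) <= HIT_TOL:
        return log_odds + L_OCC     # beam endpoint: likely occupied
    return log_odds                 # behind the hit: no information

def occupancy_probability(log_odds):
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))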

• M. S. Abubucker, “Humanoid Whole-Body Motion Planning for Locomotion Synchronized with Manipulation Task,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2017.
[BibTeX] [Abstract]

In this work, we consider the problem of whole-body motion planning for a humanoid that should perform a task by using both a locomotion task and a manipulation task in a synchronized manner. This allows the robot to perform smooth reaching motions. The task assigned to the robot is to reach a goal position with its hand which is quite far away from the robot. In order to reach the goal position, the robot must execute a manipulation task while implicitly performing a locomotion task. The presented whole-body motion planner builds a solution by merging CoM movement primitives in the configuration space. The CoM movement primitives are typical humanoid actions like walking with forward steps, lateral steps and curved steps. To enable smooth reaching motions, three zones are defined: a locomotion zone, a loco-manipulation zone and a manipulation zone. The planner has been tested for the HRP-4 robot in the V-REP simulator. Different types of tasks with different levels of difficulty were performed to evaluate the planner. The simulation results show that the planner was able to find a solution regardless of the obstacles in the scene; the performance of the planner was evaluated using metrics like planning time, tree size and motion duration.

@MastersThesis{ 2017abubucker,
abstract  = { In this work, we consider the problem of whole-body
motion planning for humanoid that should perform a task by
using both the locomotion task and the manipulation task in
a synchronized manner. This would allow us to perform
smooth reaching motions. The task assigned to the robot
would be to reach a goal position with its hand which is
quite far away from the robot. In order to reach the goal
position, the robot must execute a manipulation task while
performing a locomotion task implicitly. The presented
whole-body motion planner builds solution by merging CoM
movement primitives in the configuration space.
The CoM movement primitives are typical humanoid actions
like walking with forward step, lateral steps and curved
steps. To enable smooth reaching motions, three zones are
defined: locomotion zone, loco-manipulation zone and
manipulation zone. The planner has been tested for the
HRP-4 robot in V-REP simulator. Different types of task
were performed with different levels of difficulties to
evaluate the planner. The simulation results show that the
planner was able to find the solution regardless of the
obstacles in the scene and the performance of the planner
was evaluated using performance metrics like time taken to
plan, tree size and motion duration. },
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {SS13 H-BRS , DIAG Sapienza University of Rome - COMANOID
Pl{\"o}ger, Oriolo, Cognetti supervising},
author  = {Mohammed Shameer Abubucker},
month = {December},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Humanoid Whole-Body Motion Planning for Locomotion
Synchronized with Manipulation Task},
year = {2017}
}
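
The three-zone idea from the abstract above can be pictured as a distance-based selector that decides which movement primitives the planner may draw on. The Python sketch below is hypothetical; the thresholds are invented for illustration and are not taken from the thesis.

def zone(distance_to_goal, d_manip=0.7, d_loco_manip=1.5):
    """Select a planning zone from the distance (in meters) between the
    robot and the hand goal; thresholds are illustrative only."""
    if distance_to_goal <= d_manip:
        return "manipulation"        # reach with the arm alone
    if distance_to_goal <= d_loco_manip:
        return "loco-manipulation"   # step while reaching
    return "locomotion"              # walk: forward, lateral or curved steps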

• D. Arya, “Complete Path Coverage while Exploring Unknown Environment for Radiation Mapping using Unmanned Ground Systems,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2017.
[BibTeX] [Abstract]

In the recent past, several events have highlighted the need for unmanned autonomous systems in disaster situations caused by natural hazards or by human-driven actions. The use of autonomous systems for a first analysis of the environment and the provision of an initial report can limit hazards to the first responders. This can increase their efficiency in planning response operations and reduce the reaction time. In case of a nuclear disaster, an autonomous system that could gather radiation measurements of the site and provide a radiation map of the area would be of great help for the response team. Such radiation maps would assist the response team in planning their actions before beginning recovery activities, and thus reduce the risk of putting the life of the recovery team in danger. These radiation maps also allow them to understand the extent of the damage and identify regions where it would be dangerous for the response team to go. In this work, we have developed an online exploration method, Simultaneous Meandering and Mapping (SMAM), for unknown environments, which enables a ground robot to cover every free space in the region and gather radiation information. The robot covers this free space effectively without revisiting a space unless it is necessary or unavoidable. This kind of exploration is called Complete Coverage Path Planning (CCPP). In the proposed algorithm, the robot tries to sweep the region in a zig-zag motion and optimizes its path to have maximal straight trajectories. The algorithm stores the environment information regarding obstacles, free spaces and radiation measurements, which is later used to generate a radiation and environment map. We visualized this radiation information as a heat-map laid over the generated environment map and the physical layout of the area. We tested SMAM against two existing online CCPP exploration techniques. We chose scenarios of varying complexity, from widely open to cluttered environments, and analyzed the performance of the three methods in terms of the total number of revisits, turns and path length. SMAM performed better and more efficiently than the other two in complex environments. SMAM can also generate continuous straight trajectories, preventing the robot from stopping at every position to plan for a new goal, and hence reducing power consumption.

@MastersThesis{ 2017arya,
abstract  = {In the recent past, several events have highlighted the
need for unmanned autonomous systems in disaster situations
caused by natural hazards or due to human-driven actions.
The use of autonomous systems for the first analysis of the
environment and the provision of the initial report can
limit hazards to the first responders. This can increase
their efficiency in planning response operations and reduce
the reaction time. In case of a nuclear disaster, an
autonomous system that could gather radiation measurements
of the site and provide a radiation map of the area, would
be of great help for the response team. Such radiation maps
would assist the response team to plan their actions before
beginning recovery activities, and thus reduce the risk of
putting the life of the recovery team in danger. These
radiation maps will also allow them to understand the
extent of damage and identify regions where it will be
dangerous for the response team to go.
In this work, we have developed an online exploration
method, Simultaneous Meandering and Mapping (SMAM), for
unknown environments which enables a ground robot to cover
every free space in the region and gather radiation
information. The robot covers this free space effectively
without revisiting a space unless it is necessary or
unavoidable. Such kind of exploration is called Complete
Coverage Path Planning (CCPP). In the proposed algorithm,
the robot tries to sweep the region in zig-zag motion and
optimizes its path to have maximal straight trajectories.
The algorithm stores the environment information regarding
obstacles, free spaces and radiation measurement, which is
later used to generate a radiation and environment map. We
visualized this radiation information as a heat-map laid
over the generated environment map and physical layout of
the area.
We tested SMAM against two existing online CCPP exploration
techniques. We chose scenarios with varying complexity from
widely open to cluttered environments and analyzed the
performance of the three methods in terms of total number
of revisits, turns and path length. SMAM has performed
better and more efficiently than the other two for complex
environments. SMAM can also generate continuous straight
trajectories, preventing the robot from stopping at every
position to plan for a new goal, and hence reducing power
consumption.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {WS13/14 Asteroth, M\"uller, Schneider supervising},
author  = {Devvrat Arya},
month = {June},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Complete Path Coverage while Exploring Unknown Environment
for Radiation Mapping using Unmanned Ground Systems},
year = {2017}
}
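
The zig-zag sweep at the core of SMAM is easiest to see on an idealized, obstacle-free grid, as in the Python sketch below. The actual algorithm additionally handles obstacles, revisit decisions and online radiation mapping, none of which are reproduced here.

def zigzag_cells(rows, cols):
    """Visit all cells of an obstacle-free grid in boustrophedon
    (zig-zag) order, maximizing straight segments."""
    for r in range(rows):
        cols_in_row = range(cols) if r % 2 == 0 else reversed(range(cols))
        for c in cols_in_row:
            yield (r, c)

path = list(zigzag_cells(3, 4))
# [(0, 0), (0, 1), (0, 2), (0, 3), (1, 3), (1, 2), (1, 1), (1, 0), (2, 0), ...]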

• A. Drak, “DoveCopter: A Tracking and Following Aerial Platform for Aerodynamics Analysis of a Velomobile,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2017.
[BibTeX] [Abstract]

Efficient control and design of electrically-assisted vehicles is a well-established branch in the field of intelligent transportation which remains under active and constant development. A testbed for such a field is the electrically assisted vehicle “velomobile”. The velomobile has an aerodynamic body fabricated from lightweight carbon fiber, two steered front wheels, and an electric motor to assist the rider. Optimization of the body is actively sought and can be achieved by extracting airflow models from tufts attached to the surface of the vehicle. The aim of this project is to provide high-quality (HQ) video footage of the tufts under realistic outdoor conditions in order to extract airflow models. Conventionally, airflow models are extracted by means of a wind tunnel or computational fluid dynamics simulation. However, the former lacks a realistic test environment and the latter comes at a higher computational cost and reduced accuracy. The proposed method to acquire HQ video footage of tufts attached to the velomobile is by means of a drone. The drone autonomously performs a side-by-side tracking and following of the vehicle while recording HQ video of the tufts. This method has the added benefit of utilizing all the surrounding space to traverse with no limits, thus eliminating the limited recording space restriction caused by side-by-side tracking.

@MastersThesis{ 2017drak,
abstract  = {Efficient control and design of electrically-assisted
vehicles is a well established branch in the field of
intelligent transportation which remains under active and
constant development. A testbed for such a field is the
electrically assisted vehicle "velomobile". The velomobile
has an aerodynamic body fabricated from light weight carbon
fiber, two steered front wheels, and an electric motor to
assist the rider. Optimization of the body is actively
sought and can be achieved through extracting airflow
models from tufts attached to the surface of the vehicle.
The aim of this project is to provide high quality (HQ)
video footage of the tufts under realistic outdoor
conditions in order to extract airflow models.
Conventionally, airflow models are extracted by means of a
wind tunnel or computational fluid dynamics simulation.
However, the former lacks a realistic test
environment and the latter comes at a higher computational
cost and reduced accuracy. The proposed method to acquire
HQ video footage of tufts attached to the velomobile is by
means of a drone. The drone will autonomously perform a
side-by-side tracking and following of the vehicle while
recording HQ video of the tufts. This method has the added
benefit of utilizing all the surrounding space to traverse
with no limits, thus eliminating the limited recording
space restriction caused by side-by-side tracking.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {[WS2014] [HBRS] [Asteroth], [Julier], [Kruijff]
supervising},
author  = {A. Drak},
month = {July},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {DoveCopter: A Tracking and Following Aerial Platform for
Aerodynamics Analysis of a Velomobile},
year = {2017}
}

• S. Biswas, “Deployment of a Correlation Based Algorithm for a Robotic Black Box,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2017.
[BibTeX] [Abstract]

Robotic systems are vulnerable to faults. If faults in robot systems are not handled quickly enough, they might lead to a catastrophe. Hence, real-time fault detection and diagnosis on robots is important. Since robots are desired to work autonomously, there is usually no human available to maintain the robot instantly. This necessitates a fault detection and diagnosis module on the robot. This module itself should not induce faults in the robot system and should be kept as separate from the robot system as possible. Hence the idea of a black box is conceived, which acts as an external fault observer for the robot system. Considering the requirement of real-time operation, computationally light approaches should be implemented. Since correlation measures give a hint about how system components might be interdependent on the same system variables, an approach based on correlation is investigated in this work. In order to detect faults, the observable sensor and actuator values along with a structural model of the system are available to the detection module. This work tries to keep the correlation-based approach as general as possible. Finally, an approach based on correlation change is proposed, which is able to detect more faults. Upon detection of a fault, a simple diagnosis is performed based on knowledge of the structural model of the robot system.

@MastersThesis{ 2017biswas,
abstract  = {Robotic systems are vulnerable to faults. If faults in
robot systems are not handled quickly enough it might lead
to a catastrophe. Hence real-time fault detection and
diagnosis on robots are important. Since robots are desired
to work autonomously there is usually no human to maintain
the robot instantly. This necessitates a fault detection
and diagnosis module on the robot. Again this module
itself should not induce fault in the robot system and kept
separate from the robot system as much as possible. Hence
the idea of a black box is conceived, which would act as an
external fault observer to the robot system. Considering
the requirement of real-time operation, computationally
light approaches should be implemented. Since correlation
measure gives hint about how system components might be
interdependent on the same system variables, an approach
based on correlation is investigated in this work. In order
to detect fault the observable sensors and actuators values
along with structural model of the system are available to
the detection module. This work tries to keep the
correlation based approach as general as possible. Finally
after investigating deeply, an approach based on
correlation change is proposed which is able to detect more
faults. Upon detection of fault a simple diagnosis is done
based on the knowledge of structural model of the robot
system.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {[SS13] [BRSU] - [Deployment of a Correlation Based
Algorithm for a Robotic Black Box] Pl{\"o}ger, Breuer,
Thoduka supervising},
author  = {Saugata Biswas},
month = {September},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Deployment of a Correlation Based Algorithm for a Robotic
Black Box},
year = {2017}
}
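
A minimal sketch of the correlation-change idea, assuming two scalar signals that are correlated during nominal operation (for instance a commanded and a measured joint velocity). The window size, baseline and threshold are invented for illustration and would have to be chosen per system.

import numpy as np

def correlation_change_faults(x, y, window=50, baseline=0.9, threshold=0.3):
    """Return sample indices at which the windowed Pearson correlation
    of x and y drifts more than `threshold` from the nominal `baseline`."""
    faults = []
    for i in range(window, len(x)):
        r = np.corrcoef(x[i - window:i], y[i - window:i])[0, 1]
        if abs(r - baseline) > threshold:
            faults.append(i)
    return faults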

### 2016

• C. K. Tan, “Neural Maps for Robotics – A Biologically-Inspired Approach for Obstacle Avoidance,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2016.
[BibTeX] [Abstract]

A general aim of autonomous robots is to create agents that can operate independently for long periods of time. One key aspect for them will be the control of movement, since a stationary robot would have less utility than a moving one. One way of fulfilling this aim is to build robots such that they mimic the way humans and animals execute movements, since they are extremely versatile and have the ability to learn and generalize their movements to many situations. This would require studies in neuroscience to discover the mechanisms behind such adaptability. We propose the use of Dynamic Neural Fields (DNFs) as a tool to build maps of neural activities, which we term “neural maps” in this thesis, to model what is happening within the brain and nervous system during its operation. For example, they can be used as an internal representation of a robot in the context of localization. Another use of such maps is that they provide a visual representation correlating neural activity and physical activity, aiding understanding of what goes on in the neural networks during execution of a task. In our thesis, we extended a generalized motor program from [Stringer et al. 2003] for the task of obstacle avoidance. This program utilizes memorized motor sequences within a neural network for its movement. By adding a decision component to their model, we show that an agent with such a motor program is able to select memorized trajectories that allow it to avoid obstacles while travelling towards a goal. Simulations show that our model is capable of obstacle avoidance. Our extended model can also be viewed in the context of a middle-level controller that utilizes simple rules to decide on an action to take, without intervention from a high-level planner.

@MastersThesis{ 2016tan,
abstract  = {A general aim of autonomous robots is to create agents
that can operate independently for long periods of time.
One key aspect for them will be the control of movement,
since a stationary robot would have less utility than a
moving one. One way of fulfilling this aim is to build
robots such that they mimic the way humans and animals
execute movements, since they are extremely versatile and
have the ability to learn and generalize their movements to
many situations. This would require studies in neuroscience
to discover mechanisms behind such adaptability. We propose
the use of Dynamic Neural Fields (DNFs) as a tool to build
maps of neural activities, which we term "neural maps" in
this thesis, to model what is happening within the brain
and nervous system during its operation. For example, they
can be used as an internal representation of a robot in the
context of localization. Another use of such maps is that
they are useful in providing a visual representation
correlating neural activity and physical activity, aiding
understanding of what goes on in the neural networks during
execution of a task. In our thesis, we extended a
generalized motor program from [Stringer et al. 2003] for
the task of obstacle avoidance. This program utilizes
memorized motor sequences within a neural network for its
movement. By adding a decision component into their model,
we show that an agent with such a motor program is able to
select memorized trajectories that allow it to avoid
obstacles while travelling towards a goal. Simulations show
that our model is capable of obstacle avoidance. Our
extended model can also be viewed in context of a
middle-level controller that utilizes simple rules to
decide on an action to take, without the intervention from
a high-level planner.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {WS 2013/14, Pl{\"o}ger, Kraetzschmar, Trappenberg},
author  = {Chun Kwang Tan},
month = {January},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Neural Maps for Robotics - A Biologically-Inspired
Approach for Obstacle Avoidance},
year = {2016}
}
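
The generic mechanism behind such neural maps can be sketched as a one-dimensional dynamic neural field with an assumed Mexican-hat interaction kernel, simulated below with Euler steps. All parameters are illustrative values and do not correspond to the extended motor program of the thesis.

import numpy as np

def mexican_hat(x, a_exc=2.0, s_exc=2.0, a_inh=1.0, s_inh=5.0):
    """Local excitation minus broader inhibition."""
    return (a_exc * np.exp(-x**2 / (2 * s_exc**2))
            - a_inh * np.exp(-x**2 / (2 * s_inh**2)))

def dnf_step(u, stimulus, kernel, dt=0.05, tau=1.0, h=-2.0):
    """One Euler step of tau * du/dt = -u + h + stimulus + w * f(u)."""
    f_u = 1.0 / (1.0 + np.exp(-u))                     # sigmoid firing rate
    interaction = np.convolve(f_u, kernel, mode="same")
    return u + dt / tau * (-u + h + stimulus + interaction)

positions = np.linspace(-20, 20, 201)
kernel = mexican_hat(positions)
u = np.full(positions.shape, -2.0)                     # resting level
stimulus = 5.0 * np.exp(-(positions - 3.0)**2 / 2.0)   # localized input
for _ in range(200):
    u = dnf_step(u, stimulus, kernel)                  # a bump of activity forms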

• M. U. Tahir, “Analysis of live 3D mapping approaches with different sensors on various hardware platforms,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2016.
[BibTeX] [Abstract]

SLAM is an active research problem and there are various implementations available under ROS (Robot Operating System) which are real-time, but because of low-range sensors and hardware limitations their data quality is not as good as that of industrial 3D scanners (e.g. the FARO Focus3D scanner), which in turn are not real-time. So, there is a need for a system which is real-time and can also deliver high data quality under the given circumstances. Certain real-time SLAM approaches may deliver higher data quality under certain environments and system specifications. For this, first an overview of the available algorithms and sensors is provided, followed by their evaluation in terms of sensor trajectory estimation (RMSE) and system performance (CPU + memory). The “FARO Laser Tracker Vantage”, which has an accuracy of up to 0.015 mm, has been employed to serve as the ground truth for the experiments. After acquiring and analyzing the results from the experiments, it has been concluded that “rtabmap” paired with an “Asus Xtion Pro Live” on an Intel NUC board is the better and more accurate option. The proposed system is real-time, efficient and low-cost compared to other existing low-cost counterparts.

@MastersThesis{ 2016tahir,
abstract  = {SLAM is an active research problem and there are various
implementations available under ROS (Robot Operating
System) which are real-time but because of low range
sensors and hardware limitations, the data quality is not
as good as in industrial 3D scanners (e.g. FARO Focus3D
scanner) which are not real-time though. So, there is a
need to have a system which is real-time and could also
deliver high data quality under given circumstances.
Certain real-time SLAM approaches may deliver higher data
quality under certain environments and system
specifications. For this, first an overview of the
available algorithms and sensors have been provided
followed by their evaluation in terms of sensor trajectory
estimation (RMSE) and system performance (CPU + memory).
"FARO Laser Tracker Vantage" has been employed to serve as
the ground truth for the carried out experiments which has
an accuracy of up to 0.015mm. After acquiring and analyzing
the results from the experiments it has been concluded that
"rtabmap" paired with "Asus Xtion Pro Live" on an Intel NUC
board is a better option and is more accurate. The proposed
system is real-time, efficient and low cost at the same
time as compared to other existing low-cost
counter-parts.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {WS12/13 FARO Europe - Master Thesis Pl{\"o}ger,
Kraetzschmar, Zweigle},
author  = {M. U. Tahir},
month = {August},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Analysis of live 3D mapping approaches with different
sensors on various hardware platforms},
year = {2016}
}
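
Once the estimated and ground-truth trajectories are time-associated and expressed in a common frame, the reported RMSE metric reduces to a few lines; the sketch below assumes that association and alignment have already been done.

import numpy as np

def trajectory_rmse(estimated, ground_truth):
    """Root-mean-square error between two (N, 3) arrays of positions."""
    diff = np.asarray(estimated) - np.asarray(ground_truth)
    return np.sqrt(np.mean(np.sum(diff**2, axis=1)))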

• S. Thoduka, “Motion Detection for Mobile Robots Using Vision,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2016.
[BibTeX] [Abstract]

Motion detection is an important skill for an autonomous mobile robot operating in dynamic environments. Motion in the environment could, for instance, indicate the presence of an obstacle, a human attracting the attention of the robot, unwanted disturbance of the environment by the robot etc. Vision-based motion detection is particularly challenging when the robot’s camera is in motion. In addition to detecting independently moving objects while in motion, the robot must be able to distinguish between motions in the environment and motions of its own parts, such as its manipulator. In this thesis, we use a Fourier-Mellin transform-based image registration method to compensate for camera motion before applying two different methods of motion detection: temporal differencing and feature tracking. Self-masking using the robot’s state and model is used to ignore motions of visible robot parts. The approach is evaluated on a set of navigation and manipulation sequences recorded on a Care-O-bot 3 and a youBot and was also integrated and tested on a real Care-O-bot 3. The image registration method is able to compensate for most of the camera motion, but is still inaccurate at depth discontinuities and when there is large depth variance in the scene. The temporal difference method performs better than feature tracking, with a more consistent true positive rate and a lower false discovery rate.

@MastersThesis{ 2016thoduka,
abstract  = {Motion detection is an important skill for an autonomous
mobile robot operating in dynamic environments. Motion in
the environment could, for instance, indicate the presence
of an obstacle, a human attracting the attention of the
robot, unwanted disturbance of the environment by the robot
etc. Vision-based motion detection is particularly
challenging when the robot’s camera is in motion. In
addition to detecting independently moving objects while in
motion, the robot must be able to distinguish between
motions in the environment and motions of its own parts,
such as its manipulator. In this thesis, we use a
Fourier-Mellin transform-based image registration method
to compensate for camera motion before applying two
different methods of motion detection: temporal
differencing and feature tracking. Self-masking using the
robot’s state and model is used to ignore motions of
visible robot parts. The approach is evaluated on a set of
navigation and manipulation sequences recorded on a
Care-O-bot 3 and a youBot and was also integrated and
tested on a real Care-O-bot 3. The image registration
method is able to compensate for most of the camera motion,
but is still inaccurate at depth discontinuities and when
there is large depth variance in the scene. The temporal
difference method performs better than feature tracking,
with a more consistent true positive rate and a lower false
discovery rate.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {WS13, Pl\"{o}ger, Kraetzschmar, Hegger supervising},
author  = {Santosh Thoduka},
month = {September},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {{M}otion {D}etection for {M}obile {R}obots {U}sing
{V}ision},
year = {2016}
}
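
The compensate-then-difference pipeline can be roughly illustrated as below, with a pure-translation phase correlation standing in for the Fourier-Mellin registration (which additionally recovers rotation and scale), followed by thresholded temporal differencing. This is a deliberately simplified sketch, not the thesis implementation.

import numpy as np

def phase_correlation_shift(prev, curr):
    """Dominant integer (dy, dx) translation between two gray frames."""
    f0, f1 = np.fft.fft2(prev), np.fft.fft2(curr)
    cross = f0 * np.conj(f1)
    r = np.abs(np.fft.ifft2(cross / (np.abs(cross) + 1e-9)))
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    if dy > prev.shape[0] // 2:     # map wrap-around peaks to signed shifts
        dy -= prev.shape[0]
    if dx > prev.shape[1] // 2:
        dx -= prev.shape[1]
    return dy, dx

def motion_mask(prev, curr, threshold=25.0):
    """Binary mask of pixels that changed between the compensated frames."""
    dy, dx = phase_correlation_shift(prev, curr)
    compensated = np.roll(np.roll(prev, -dy, axis=0), -dx, axis=1)
    return np.abs(curr.astype(float) - compensated.astype(float)) > threshold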

• D. Vázquez, “Video Analysis and Anomaly Detection in Human Gait Patterns from a Fast Moving Camera – Development of an Outdoor Gait Analysis System,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2016.
[BibTeX] [Abstract]

Today, running has become an important activity in the life of many people, as it generally has a good impact on their health. Unfortunately, some individuals do not exercise in an appropriate way, causing several injuries to their bodies. Indoor gait analysis has been used to detect and treat abnormal patterns, but it has been shown that the performance exhibited in indoor laboratories is not the same as the performance exhibited outdoors. If the gait patterns of runners could be analyzed in the environments in which they perform their outdoor running activities, better diagnostics would be obtained. Therefore, this Master thesis proposes a method where a moving robot is used to record the runner’s performance by means of a depth camera. The proposed approach relies on a stereo vision system and color markers. Given a frontal stereo view of a subject, the location of the markers can be determined and used to create a 3D model of the human skeleton. Based on this model, several joint angles and other features are computed. Then, a Support Vector Machine is trained to differentiate between normal and abnormal gait patterns. The proposed method showed a classification rate of 76%, demonstrating that frontal gait analysis is feasible.

@MastersThesis{ 2016vazquezurenad,
abstract  = {Today, running has become an important activity in the
life of many people as it mainly has a good impact on their
health. Unfortunately, some individuals do not exercise in
an appropriate way provoking several injuries to their
bodies. Indoor gait analysis has been used to detect and
treat abnormal patterns but it has been proven that the
performance exhibited in indoor laboratories is not the
same as the performance exhibited outdoors. If the gait
patterns of the runners could be analyzed in environments
in which they perform their outdoor running activities,
better diagnostics would be obtained. Therefore, this
Master thesis proposes a method where a moving robot could
be used to record the runner's performance by means of a
depth camera. The proposed approach relies on a stereo
vision system and color markers. Given a frontal stereo
view of a subject, the location of the markers can be
determined and used to create a 3D model of the human
skeleton. Based on this model, several joint angles, and
other features are created. Then, a Support Vector Machine
is trained to differentiate between normal and abnormal
gait patterns. The proposed method showed a classification
rate of 76\% proving that frontal gait analysis is
feasible.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {WS2014 - HBRS - Prof. Prassler, Prof. Pl\"oger, F\"uller},
author  = {Daniel V\'azquez},
month = {November},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Video Analysis and Anomaly Detection in Human Gait
Patterns from a Fast Moving Camera - Development of an
Outdoor Gait Analysis System},
year = {2016}
}
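
The final classification stage can be sketched with scikit-learn as below, assuming the joint-angle features per stride have already been extracted from the marker-based skeleton. The feature layout, RBF kernel and split are placeholders rather than the configuration used in the thesis.

from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_gait_classifier(X, y):
    """X: (n_strides, n_features) joint-angle features,
    y: labels with 0 = normal gait, 1 = abnormal gait."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X_train, y_train)
    return clf, clf.score(X_test, y_test)   # classifier and test accuracy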

• E. Yildiz, P. G. Ploeger, S. Alda, and K. Pervoelz, “Fault-Tolerant Software Architecture for the Unmanned Surface Vehicle Roboship,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2016.
[BibTeX] [Abstract]

One of the major goals of research in robotics is to develop truly autonomous vehicles that can be utilized to assist humans in dangerous environments, especially where human life is under threat. In this regard, Unmanned Surface Vehicles (USVs) are developed and utilized to cope with military scenarios which put human life at potential risk. One of the most common scenarios is harbor security, where USVs are expected to conduct surveillance and provide feedback to strengthen the harbor’s security. The USV Roboship has been designed and developed for harbor patrolling and coastal surveillance. It goes without saying that in such an application domain dependability plays a major role. USVs must not endanger the mission, must face unexpected situations, and allow no human intervention in case of a problem. At this point, it should be acknowledged that dependability is strongly coupled with fault tolerance when it comes to autonomous robots. However, due to the complicated design schemes of USVs and the unpredictable environmental factors of harbor environments, USVs are prone to fail. As it is almost certain that the USVs will fail, in order not to endanger the mission, USVs should fail in a way that recovery from the failures is possible and mission integrity is protected. This thesis presents a fault-tolerant software architecture for the USV Roboship. We conduct a comprehensive reliability analysis and propose a fault-tolerant scheme wherein the sensitive points of the existing architecture are spotted and fault-tolerance techniques are incorporated. Since the approach is model-free and its decisions are made at a high frequency, the system is able to deal with highly dynamic scenarios. We used the UWSim simulation environment to simulate the scenarios in which the USV was supposed to navigate safely using its sensors. Field tests have proven the performance and reliability of the system to be satisfactory, yielding a 53.57% decrease in faults. We plan to deploy the USV in the Eckernfoerde harbor region of Germany to test the scheme in the future.

@MastersThesis{ yildiz2016,
abstract  = {One of the major goals of research in Robotics is to
develop truly autonomous vehicles that can be utilized to
assist humans in dangerous environments, especially where
human life is under threat. In this regard, Unmanned
Surface Vehicles (USVs) are developed and utilized to cope
with military scenarios which put human life at
potential risk. One of the very common scenarios is harbor
security, where USVs are expected to conduct surveillance
and feedback to strengthen the harbor's security. The (USV)
Roboship has been designed and developed for harbor
patrolling and coastal surveillance. It goes without
saying, in such an application domain dependability plays a
major role. USVs must not endanger the mission, face
unexpected situations and allow no human intervention in
case of a problem. At this point, it should be acknowledged
that dependability is strongly coupled with fault tolerance
when it comes to autonomous robots. However, due to
complicated design schemes of USVs and unpredictable
environmental factors of harbor environments, USVs are
prone to fail. As it is almost certain that the USVs will
fail, in order not to endanger the mission, USVs should
fail in a way that the recovery from the failures is
possible, and mission integrity is protected. This paper
highlights a fault tolerant software architecture for the
USV Roboship. We conduct a comprehensive reliability
analysis and propose a fault tolerant scheme, wherein the
sensitive points of the existing architecture are spotted,
and the fault tolerant techniques are incorporated. Since
the approach is model-free and its decisions are made at a
high frequency, the system is able to deal with highly
dynamic scenarios. We used the UWSim simulation environment
to simulate the scenarios in which the USV was supposed to
navigate safely using its sensors. Field tests have proven
the performance and reliability of the system to be
satisfactory, yielding a 53.57\% decrease in faults. We plan
to deploy the USV in Eckernfoerde harbor region of Germany
to test the scheme in the future.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {SS13, Fraunhofer Institute IAIS - Roboship, Pl{\"o}ger,
Alda, Perv{\"o}lz supervising},
author  = {Yildiz, Erenus and {G. Ploeger}, Paul and Alda, Sascha and
Pervoelz, Kai},
month = {October},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {{Fault-Tolerant Software Architecture for the Unmanned
Surface Vehicle Roboship}},
year = {2016}
}
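
One generic building block of such fault-tolerant schemes is a heartbeat watchdog that notices when a monitored component misses its deadline, so that a fail-safe behavior can be triggered. The sketch below illustrates only this generic idea and is not code from the Roboship architecture.

import time

class Watchdog:
    """Flags a component as faulty if it stops sending heartbeats."""

    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.last_beat = time.monotonic()

    def beat(self):
        """Called by the monitored component on every healthy cycle."""
        self.last_beat = time.monotonic()

    def expired(self):
        """True once the component has missed its deadline."""
        return time.monotonic() - self.last_beat > self.timeout_s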

• P. Vlantis, C. P. Bechlioulis, G. Karras, G. Fourlas, and K. J. Kyriakopoulos, “Fault tolerant control for omni-directional mobile platforms with 4 mecanum wheels,” in 2016 IEEE International Conference on Robotics and Automation (ICRA), 2016, pp. 2395-2400.
[BibTeX]
@InProceedings{ vlantis16,
author  = {Vlantis, P. and Bechlioulis, C. P. and Karras, G. and
Fourlas, G. and Kyriakopoulos, K. J.},
booktitle  = {2016 IEEE International Conference on Robotics and
Automation (ICRA)},
date-modified  = {2018-01-02 09:40:43 +0000},
keywords  = {adaptive control;collision avoidance;fault tolerant
control;friction;mobile robots;robot dynamics;robust
techniques;collision avoidance;constrained workspace;drive
shaft;dynamic model uncertainties;dynamic model
uncertainty;fault tolerant control;fault tolerant control
functions;omnidirectional mobile platforms;operational
workspace;parametric uncertainty;robot dynamics;robust
motion control scheme;robust motion planning;second order
dynamics;static obstacles;workspace
boundaries;Dynamics;Mobile communication;Mobile
robots;Uncertainty;Vehicle dynamics;Vehicles;Wheels},
month = {May},
pages = {2395-2400},
title = {Fault tolerant control for omni-directional mobile
platforms with 4 mecanum wheels},
year = {2016}
}

• I. Siddique, “Comparative Analysis on Sensor Configurations of Autonomous Vehicles for Robust Obstacle Avoidance,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2016.
[BibTeX] [Abstract]

Autonomous vehicles working in close proximity to humans raise safety concerns. Such vehicles have to be equipped with flawless obstacle avoidance techniques. Robust obstacle avoidance can only be achieved if the environment can be perceived flawlessly. Unfortunately, the sensors used to perceive the environment are not flawless. This has forced modern-day autonomous vehicles to use different combinations of sensors to compensate for each other's limitations, which comes with the additional complication of fusing data from different sensors. Besides, in spite of combining different sensors, there might still be situations which hamper the nominal performance of the sensor configuration. This makes the successful and safe deployment of autonomous vehicles in public a very challenging task. Hence, a comprehensive survey on sensor configurations is of immense importance. In this thesis, different sensors and sensor configurations of recent wheeled autonomous vehicles in indoor and outdoor scenarios have been investigated. The investigated sensor configurations were analyzed and compared to find a general trend in these different scenarios. The performance of the individual sensors used in those vehicles was also analyzed and compared.

@MastersThesis{ 2016siddiqueisnain,
abstract  = {Autonomous vehicle working in close proximity to human
raises a safety concern. Such vehicles have to be equipped
with flawless obstacle avoidance techniques. Robust
obstacle avoidance can only be achieved if the environment
could be perceived flawlessly. Unfortunately the sensors to
perceive the environment are not flawless. This forced
modern day autonomous vehicles to use different combination
of sensors to compensate each others limitations. This
comes with additional agitation of fusing different sensors
data. Besides in spite of combining different sensors there
might still be situations which hamper the nominal
performance of the sensor configuration. This makes
successful and safe deployment of autonomous vehicle in
public a very challenging task. Hence a comprehensive
survey on sensor configurations is of immense importance.
Here in this thesis work different sensors and sensor
configurations of recent wheeled autonomous vehicles in
indoor and outdoor scenarios have been investigated. The
investigated sensor configurations were analyzed and
compared to find a general trend in these different
scenarios. Performance of the individual sensors used in
those vehicles were also analyzed and compared. },
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {WS13/14 FH-HBRS - Comparative Analysis on Sensor
Configurations of Autonomous Vehicles for Robust Obstacle
Avoidance, Prassler, Pl{\"o}ger},
author  = {Isnain Siddique},
month = {October},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Comparative Analysis on Sensor Configurations of
Autonomous Vehicles for Robust Obstacle Avoidance},
year = {2016}
}

• S. Sathanam, “Emotion detection from German text by sentiment analysis,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2016.
[BibTeX] [Abstract]

Social assistive robots (SAR) providing emotional care and support are investigated in the Emorobot project to help patients affected by dementia. One of the functions of the SAR is to detect emotions from the patients in order to communicate effectively. Detecting emotions directly via facial expressions or vocal features has certain demerits with elderly people, so the detection of emotions from spoken text is considered in this work. The six basic emotions Happy, Sad, Anger, Fear, Surprise and Disgust are considered in addition to an emotionless (none) condition. Emotion detection, which falls under sentiment analysis, has major issues with the complexity of feature vector representations and the preservation of semantic meaning. The emergence of word embeddings is a recent breakthrough in natural language processing, where huge amounts of data are used to learn semantically preserving word and sentence vectors in an unsupervised manner. The application of word vectors and document vectors to emotion detection is studied in this work. Detecting emotions from text is handled in two phases. The first phase involves the generation of feature vectors from vector models. Vector models require huge amounts of data to generate high-quality feature vectors. A dataset containing nearly 10 million sentences is created by streaming online tweets. Another dataset of around 18 million Wikipedia sentences is also used to compare the nature and quality of the generated vectors. The second phase is estimating emotions from sentences with the use of the feature vectors. A sentence can be represented both by document vectors and by the average of the word vectors in the sentence. The training dataset is generated by automatically labeling the tweets for emotions based on the attached hashtags. Evaluation of the model is performed on the annotated sentences from the Emorobot project as ground truth. Due to the limited data, a test dataset from Twitter is also evaluated. A detailed evaluation of the vector models is provided for the considered use case. In total, two datasets, Twitter and Wiki, are applied to two word vector models, CBOW and SG, and two document vector models, DBOW and DM, to generate feature vectors, which are evaluated against two test datasets, Twitter and the annotated dataset. The results over the 16 cases show that averaged word vectors generally perform better than document vectors. Among the word vector models, the CBOW model outperforms the SG model, but only by a small margin. Document vectors are unpredictable in nature, as the results vary drastically between runs. Even though the vectors generated from the Wikipedia dataset contain better features, as evaluated by analogy test sentences, the vectors generated from the Twitter dataset achieve better results than the feature vectors learnt from the Wikipedia dataset.

@MastersThesis{ 2016sathanam,
abstract  = {Social assistive robots(SAR) providing emotional care and
support is investigated in Emorobot project to help the
patients affected by dementia. One of the functions of the
SAR is to detect emotion from the patients to communicate
effectively. Detecting emotions directly via facial
expressions or vocal features have certain demerits with
the elderly people. Detection of emotions from the spoken
text is considered in this work. The six basic emotions
Happy, Sad, Anger, Fear, Surprise and disgust are
considered in addition to emotionless(none) condition.
Emotion detection which comes under sentiment analysis has
major issues with the complexity of feature vectors
representation and preserving semantic meanings. The
emergence of word embeddings is a recent breakthrough in
natural language processing, where huge amount of data are
used to learn semantically preserved word and sentence
vectors in an unsupervised manner. Application of word
vectors and document vectors in emotion detection is
studied in this work. Detecting emotions from text is
handled in two phases. The first phase involves generation
of feature vectors from vector models. Vector models
require huge amount of dataset to generate high quality
feature vectors. Dataset containing nearly 10 million
sentences are created by streaming online tweets. Another
dataset around 18 million wikipedia sentences are also used
for comparison of the nature and quality of the vectors
generated. Second phase is estimating emotions from the
sentences with the use of feature vectors. A sentence could
be represented both by document vectors as well as average
of word vectors in the sentence. Training dataset is
manually generated by applying automatic labeling on
twitter dataset to label the tweets for emotions based on
the hashtags attached. Evaluation of the model is performed
on the annotated sentences from Emorobot project as ground
truth. Due to lesser data, a test dataset from twitter is
also evaluated. A detailed evaluation of the vector models
is provided for the use case considered. In total, two
datasets Twitter and Wiki are applied on two word vector
models CBOW, SG and two document vector models DBOW and DM
to generate feature vectors which are evaluated against two
test datasets Twitter and annotated dataset. A result of 16
cases shows that generally averaged word vectors perform
better than document vectors. In word vector models, CBOW
model outperforms SG model but by only a small value.
Document vectors are unpredictable in nature as the results
vary drastically for each run. Even though the vectors
generated from wikipedia datasets contain better features
as evaluated by analogy test sentences, the vectors
generated from twitter datasets achieve better results
than the feature vectors learnt from wikipedia datasets.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {WS2013 - Emotion detection from German text by sentiment
analysis, Pl{\"o}ger, Kraetzschmar, F{\"u}ller supervising},
author  = {Sivasurya Sathanam},
month = {April},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Emotion detection from German text by sentiment analysis},
year = {2016}
}
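
The averaged-word-vector sentence representation that the evaluation favored can be sketched with gensim as below. The toy corpus and hyperparameters are placeholders (gensim 4.x naming is assumed, e.g. vector_size), and the actual Twitter and Wikipedia training corpora are not reproduced.

import numpy as np
from gensim.models import Word2Vec

sentences = [["ich", "bin", "glücklich"], ["das", "macht", "mich", "wütend"]]
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=0)  # sg=0: CBOW

def sentence_vector(tokens, model):
    """Mean of the word vectors of all in-vocabulary tokens."""
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.wv.vector_size)

# Such vectors would then feed a standard classifier over the seven
# classes (six basic emotions plus "none").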

• A. M. Sundaram, “Planning Realistic Force Interactions for Bimanual Grasping and Manipulation,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2016.
[BibTeX] [Abstract]

Dexterous robot hands offer a wide variety of grasping and interaction possibilities with objects. In order to select the best grasp, it is critical to have a reliable grasp quality measure. Traditional grasp analysis methods use quality measures that allow a relative comparison of grasps for the same object, without an associated physical meaning for the resulting quality. The focus of this thesis is to establish an improved grasp analysis method that results in a quality measure that can be directly interpreted in the force domain. One of the most commonly used grasp qualities is the largest minimum resisted wrench, which indicates the maximum perturbation wrench that a grasp can resist in any direction. Two efficient ways to calculate this quality are identified: (i) the incremental grasp wrench space algorithm, and (ii) the ray shooting algorithm. However, existing algorithms for such methods make several assumptions to avoid computational complexities in analyzing the 6D wrench space of a grasp. Important properties like hand actuation, realizable contact forces, friction at the contacts, and the geometry of the object to be grasped are either neglected or greatly simplified. In this thesis, these assumptions are improved to bring those algorithms closer to reality. In the case of bimanual grasps, the number of contacts significantly increases, which in turn increases the computational complexity of the process. Suitable algorithms to handle a higher number of contacts are also proposed in this thesis. For grasping an object successfully, considering the hand and the object for analysis are necessary but not sufficient requirements. The capabilities of the robotic arm to which the hand is attached are equally important. Different manipulability measures are considered for the arm, corresponding to single and dual hand grasps, and they are later unified with the physically relevant grasp quality to obtain an overall measure of the goodness of a particular grasp. Based on the updated grasp quality, a complete grasp planning architecture is established. It also includes methods for bimanual grasp synthesis and grasp filtering based on properties like collision with the environment and arm reachability. The thesis includes application examples that illustrate the applicability of the approach. Finally, the developed algorithms can be generalized to a different type of force interaction task, namely a humanoid robot balancing with multiple contacts with the environment. A customized ray shooting algorithm is used to find the stability region of a humanoid legged robot standing on uneven terrain or making multiple contacts with its hands and legs. In contrast to the regular zero-moment point based method, the stability region is found by analyzing the wrench space of the robot, which makes the method independent of the number of contacts or the contact configuration. Different examples show the direct and intuitive interpretation of the results obtained with the proposed method.

@MastersThesis{ 2016meenakshisundaram,
abstract  = {Dexterous robot hands offer a wide variety of grasping and
interaction possibilities with objects. In order to select
the best grasp, it is critical to have a reliable
grasp quality measure. Traditional grasp analysis methods
use quality measures that allow a relative comparison of
grasps for the same object, without an associated physical
meaning for the resulting quality. The focus of this thesis
is to establish an improved grasp analysis method that will
result in a quality measure that can be directly
interpreted in the force domain.
One of the most commonly used grasp qualities is the
largest minimum resisted wrench, which indicates the
maximum perturbation wrench that a grasp can resist in any
direction. Two efficient ways to calculate this quality are
identified: (i) incremental grasp wrench space algorithm,
and (ii) ray shooting algorithm. However, existing
algorithms for such methods make several assumptions to
avoid computational complexities in analyzing the 6D wrench
space of a grasp. Important properties like hand actuation,
realizable contact forces, friction at the contacts, and
geometry of the object to be grasped are either neglected
or greatly simplified. In this thesis, these assumptions
are improved to bring those algorithms closer to reality.
In the case of bimanual grasps, the number of contacts
significantly increases, which in turn increases the
computational complexity of the process. Suitable
algorithms to handle a higher number of contacts are also
proposed in this thesis.
For grasping an object successfully, considering the hand
and the object for analysis are necessary but not
sufficient requirements. The capabilities of the robotic
arm to which the hand is attached are equally important.
Different manipulability measures are considered for the
arm, corresponding to single and dual hand grasps, and they
are later unified with the physically relevant grasp
quality to obtain an overall measure of the goodness of a
particular grasp. Based on the updated grasp quality, a
complete grasp planning architecture is established. It
also includes methods for bimanual grasp synthesis and
grasp filtering based on properties like collision with the
environment and arm reachability. The thesis includes
application examples that illustrate the applicability of
the approach.
Finally, the developed algorithms can be generalized to a
different type of force interaction task, namely a humanoid
robot balancing with multiple contacts with the
environment. A customized ray shooting algorithm is used to
find the stability region of a humanoid legged robot
standing on an uneven terrain or making multiple contacts
with its hands and legs. In contrast to the regular
zero-moment point based method, the stability region is
found by analyzing the wrench space of the robot, which
makes the method independent of the number of contacts or
the contact configuration. Different examples show the
direct and intuitive interpretation of the results obtained
with the proposed method.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {SS13 DLR - Planning Realistic Force Interactions for
Bimanual Grasping and Manipulation Kraetzschmar,
Albu-Schaeffer, Roa, Schneider supervising},
author  = {Ashok Meenakshi Sundaram},
month = {June},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Planning Realistic Force Interactions for Bimanual
Grasping and Manipulation},
year = {2016}
}
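
The baseline largest-minimum-resisted-wrench quality has a compact geometric formulation: it is the radius of the largest origin-centred ball contained in the convex hull of the primitive contact wrenches. The SciPy sketch below computes it under the classical simplifying assumptions that the thesis sets out to relax (no actuation limits, linearized friction, unit-bounded contact forces).

import numpy as np
from scipy.spatial import ConvexHull

def grasp_quality(wrenches):
    """wrenches: (n, 6) array of primitive contact wrenches.
    Returns the minimum distance from the origin to a facet of their
    convex hull, or 0.0 if the origin is outside (no force closure)."""
    hull = ConvexHull(np.asarray(wrenches))
    # Each row of hull.equations is [normal, offset] with
    # normal . x + offset <= 0 for points inside the hull.
    distances = -hull.equations[:, -1]   # origin-to-facet distances
    return max(0.0, distances.min())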

• O. L. Carrion, “Task Planning, Execution and Monitoring for Mobile Manipulators in Industrial Domains,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2016.
[BibTeX] [Abstract]

Decision making is a necessary skill for autonomous mobile manipulators working in industrial environments. Task planning is a well-established field with a large research community that continues to produce new heuristics, tools and algorithms that enable agents to produce plans that achieve their goals. In this work we demonstrate the applicability of satisficing task planning with action costs by integrating a state-of-the-art planner, the Mercury planner, into the KUKA youBot mobile manipulator and using it to solve basic transportation and insertion tasks. In contrast to optimal planners which minimize total action costs in a plan, satisficing planners minimize plan generation time, yielding sub-optimal solutions but in less time compared to optimal planners. The planner uses a delete-list relaxation heuristic to quickly prune the search space and generate satisfactory solutions. The developed planning framework is modular, re-usable and well documented. It brings together two major standards in robotics and task planning: ROS and PDDL, similar to ROSPlan. Unlike ROSPlan, this work is able to plan with cost information. Moreover, it is more modular, enabling the community to use the various components at their own discretion. Finally, while ROSPlan allows only one re-planning strategy, our framework enables users to quickly implement their own strategies. The resulting system demonstrates that the agent’s behavior is optimized, it is able to flexibly handle unexpected situations, and it can robustly handle failures by re-planning when needed. It is also easier to maintain and extend. The work presented here also highlights the benefits of conducting a domain analysis to gain the maximum benefit from the use of a given planner and domain.

@MastersThesis{ 2016limaoscar,
abstract  = {Decision making is a necessary skill for autonomous mobile
manipulators working in industrial environments. Task
planning is a well-established field with a large research
community that continues to produce new heuristics, tools
and algorithms that enable agents to produce plans that
achieve their goals. In this work we demonstrate the
applicability of satisficing task planning with action
costs by integrating a state-of-the-art planner, the
Mercury planner, into the KUKA youBot mobile manipulator
and using it to solve basic transportation and insertion
tasks. In contrast to optimal planners which minimize total
action costs in a plan, satisficing planners minimize plan
generation time, yielding sub-optimal solutions in
considerably less time. The planner uses a
delete-list relaxation heuristic to quickly prune the
search space and generate satisfactory solutions. The
developed planning framework is modular, re-usable and well
documented. It brings together two major standards in
robotics and task planning: ROS and PDDL, similar to
ROSPlan. Unlike ROSPlan, this work is able to plan with
cost information. Moreover, it is more modular, enabling
the community to use the various components at their own
discretion. Finally, while ROSPlan allows only one
re-planning strategy, our framework enables users to
quickly implement their own strategies. The resulting
system demonstrates that the agent's behavior is optimized,
that it is able to flexibly handle unexpected situations,
and that it can robustly handle failures by re-planning when
needed. It
is also easier to maintain and extend. The work presented
here also highlights the benefits of conducting a domain
analysis to gain the maximum benefit from the use of a
given planner and domain.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {SS04/13 FH-BRS - RoboCup@Work Pl{\"o}ger, Kraetzschmar
supervising},
author  = {Oscar Lima Carrion},
month = {April},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Task Planning, Execution and Monitoring for Mobile
Manipulators in Industrial Domains},
year = {2016}
}
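
The execute-monitor-replan behaviour described in this abstract can be sketched as a small control loop: plan, execute while monitoring each action outcome, and re-plan from a freshly observed state on failure. This is a minimal sketch under assumed interfaces; the toy planner, the failure pattern and all function names are illustrative, not the framework's actual ROS/PDDL API.

```python
from typing import Callable, List, Optional

def plan_execute_monitor(plan: Callable[[str], Optional[List[str]]],
                         execute: Callable[[str], bool],
                         observe: Callable[[], str],
                         problem: str, max_replans: int = 3) -> bool:
    """Skeleton of a plan-execute-monitor-replan cycle: on any action
    failure the world state is re-observed, a new problem instance is
    produced, and planning is retried from scratch."""
    for _ in range(max_replans + 1):
        actions = plan(problem)
        if actions is None:
            return False                      # no plan found
        if all(execute(a) for a in actions):  # stops at the first failure
            return True
        problem = observe()                   # refresh the problem instance
    return False

# Toy stand-ins for an external PDDL planner and a flaky executor:
# the third action fails once, triggering exactly one re-plan.
toy_plan = lambda p: ["move base", "pick object", "place object"]
outcomes = iter([True, True, False, True, True, True])
ok = plan_execute_monitor(toy_plan, lambda a: next(outcomes),
                          observe=lambda: "updated-problem.pddl",
                          problem="problem.pddl")
print("task completed:", ok)
```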

• A. Mitrevski, “Improving the Manipulation Skills of Service Robots by Refining Action Representations,” Master Thesis, Grantham-Allee 20, 53757 Sankt Augustin, Germany, 2016.
[BibTeX] [Abstract]

Releasing objects is an error-prone robot manipulation activity often due to insufficient knowledge about the preconditions of releasing actions. As the problem has a semantic nature that stems from the variations in physical behaviour between different object categories, neither symbolic nor geometric models alone are enough for specifying how and where a releasing action should be executed. In principle, those two have to be combined into a more general representation that maximises the execution success and minimises the probability of failures; however, symbolic and geometric action representations are frequently studied in isolation and one of them is taken for granted. This thesis investigates the exact nature of releasing actions, the problem of learning reusable models of such actions, and the manner in which those models can be utilised for execution. Starting with an examination of multiple planning domain learning algorithms and geometric reasoning procedures, we develop a template-based representation of actions that simplifies the acquisition and application of symbolic and geometric models; in particular, we show that template models, which we call $\delta$ and $\delta^\varnothing$ problems, can be learned in special configurations and that pairwise object relations should be an inherent part of the models’ nature. The models are organised in an object-centred memory model, called $\Delta$ memory, in which storage and retrieval depend on semantic object information. With the help of a simulated environment that abstracts away various manipulation activities, we show that properly constructed $\delta$ and $\delta^\varnothing$ problems can be a reliable representation of releasing actions, thereby justifying the use of template-based action representations, but also demonstrate that their utility depends on a number of conventions and can benefit from an additional ontology.

@MastersThesis{ 2016mitrevski,
title = {Improving the Manipulation Skills of Service Robots by
Refining Action Representations},
author  = {Aleksandar Mitrevski},
abstract  = {Releasing objects is an error-prone robot manipulation
activity often due to insufficient knowledge about the
preconditions of releasing actions. As the problem has a
semantic nature that stems from the variations in physical
behaviour between different object categories, neither
symbolic nor geometric models alone are enough for
specifying how and where a releasing action should be
executed. In principle, those two have to be combined into
a more general representation that maximises the execution
success and minimises the probability of failures; however,
symbolic and geometric action representations are
frequently studied in isolation and one of them is taken
for granted.
This thesis investigates the exact nature of releasing
actions, the problem of learning reusable models of such
actions, and the manner in which those models can be
utilised for execution. Starting with an examination of
multiple planning domain learning algorithms and geometric
reasoning procedures, we develop a template-based
representation of actions that simplifies the acquisition
and application of symbolic and geometric models; in
particular, we show that template models, which we call
$\delta$ and $\delta^\varnothing$ problems, can be learned
in special configurations and that pairwise object
relations should be an inherent part of the models' nature.
The models are organised in an object-centred memory model,
called $\Delta$ memory, in which storage and retrieval
depend on semantic object information.
With the help of a simulated environment that abstracts
away various manipulation activities, we show that properly
constructed $\delta$ and $\delta^\varnothing$ problems can
be a reliable representation of releasing actions, thereby
justifying the use of template-based action
representations, but also demonstrate that their utility
depends on a number of conventions and can benefit from an
additional ontology.},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
address  = {Grantham-Allee 20, 53757 Sankt Augustin, Germany},
month = {January},
year = {2016},
annote  = {WS13/14 HBRS - Pl{\"o}ger, Witt, Kuestenmacher
supervising}
}

• D. Nair, “Predicting Object Locations using Spatio-Temporal Information by a Domestic Service Robot: A Bayesian Learning Approach,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2016.
[BibTeX] [Abstract]

One of the ways domestic service robots can better assist humans is by providing personalized, predictive and context-aware services. Robots can observe human activities and work patterns and provide time-based contextual assistance. This thesis aims to enable domestic robots to empirically learn about human behaviour and preferences. In the current literature, user preferences are learned for a generic home environment; in contrast, we learn preferences for a specific home. Robots generate a lot of information using the raw data from their sensors, which is often discarded after use. If this information is recorded, it can be used to generate new knowledge. The thesis proposes models to generate knowledge about user preferences using stored information. The developed approaches in this thesis cover the following two knowledge generation topics: (1) learning user location preferences (2) learning user preferences in object placement. All knowledge generation techniques developed in this thesis are based on Bayesian modelling and have been implemented using probabilistic programming languages. The learned user preferences were used by the robot for predicting: (a) location of non-stationary objects (b) location of users in home and (c) room occupancy. The models were evaluated on three datasets collected over several months containing person and object occurrences in home and office environments. The performance of the models was assessed by measuring the accuracy score of each model. Our model for predicting the location of non-stationary objects (a) was able to predict with 70% accuracy for 26 objects, while the model for user location preference (b) was able to predict with 63% accuracy, and the model for room occupancy (c) could predict for 3 rooms with more than 80% accuracy.

@MastersThesis{ 2016nairdeebul,
abstract  = {One of the ways domestic service robots can better assist
humans is by providing personalized, predictive and
context-aware services. Robots can observe human activities
and work patterns and provide time-based contextual
assistance. This thesis aims to enable domestic robots to
empirically learn about human behaviour and preferences. In
the current literature, user preferences are learned for a
generic home environment, on the contrary we learn
preferences over a specific home. Robots generate a lot of
information using the raw data from their sensors, which is
often discarded after use. If this information is recorded,
it can be used to generate new knowledge. The thesis
proposes models to generate knowledge about user
preferences using stored information. The developed
approaches in this thesis cover the following two knowledge
generation topics: (1) learning user location preferences
(2) learning user preferences in object placement. All
knowledge generation techniques developed in this thesis
are based on Bayesian modelling and have been implemented
using probabilistic programming languages. The learned user
preferences were used by the robot for predicting: (a)
location of non-stationary objects (b) location of users in
home and (c) room occupancy. The models were evaluated on
three datasets collected over several months containing
person and object occurrences in home and office
environments. The performance of the models was assessed by
measuring the accuracy score of each model. Our model for
predicting the location of non-stationary objects (a) was
able to predict with 70% accuracy for 26 objects, while the
model for user location preference (b) was able to predict
with 63% accuracy, and the model for room occupancy (c) could
predict for 3 rooms with more than 80% accuracy.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {SS14 Pl{\"o}ger, Lakemeyer, Niemueller supervising},
author  = {Deebul Nair},
month = {September},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Predicting Object Locations using Spatio-Temporal
Information by a Domestic Service Robot: A Bayesian
Learning Approach},
year = {2016}
}
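
One simple model family consistent with the Bayesian modelling described in this abstract is a Dirichlet-multinomial over candidate locations: counts of past sightings of an object per location, smoothed by a symmetric Dirichlet prior, yield a posterior predictive distribution from which the most probable location is read off. The object, locations and counts below are invented for illustration; the thesis models are richer (spatio-temporal) and written in probabilistic programming languages.

```python
import numpy as np

def location_posterior(counts: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Posterior predictive of a Dirichlet-multinomial model:
    P(location | data) is proportional to counts + alpha."""
    posterior = counts + alpha
    return posterior / posterior.sum()

# Illustrative data: how often a mug was observed at each location.
locations = ["kitchen table", "desk", "sink", "shelf"]
counts = np.array([12.0, 3.0, 7.0, 1.0])

probs = location_posterior(counts)
best = locations[int(np.argmax(probs))]
print(dict(zip(locations, probs.round(3))), "->", best)
```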

• D. E. Ramos Avila, “A Study on Swarm Intelligence: Towards Nature-Inspired Robot Navigation,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2016.
[BibTeX] [Abstract]

Swarm Intelligence (SI) is a problem-solving approach whose inspiration comes from cooperative behaviours and well-defined social structures observed in different animal species. Within those structures, simple interactions among “unsophisticated” individuals produce complex and interesting properties which aid flocks of birds, schools of fish, packs of predators and colonies of insects in solving daily tasks in a very efficient way. Several meta-heuristics such as the particle swarm optimization (PSO) and the artificial bee colony algorithm (ABC) are based on this metaphor and entail important benefits. One of their most remarkable advantages is their ability to find solutions to NP-hard problems in a very short time. In the field of robotics, motion planning is a well-known problem considered to be NP-hard. This work studies SI as a possible alternative to state-of-the-art global path planners, with the main objective of producing robot-traversable paths in a competitive amount of time without further optimization. In particular, the usage of the novel grey wolf optimizer (GWO) is proposed and tested against other SI-based meta-heuristics, and then again against popular sampling-based planners. The experimental evaluation indicates that GWO usually has a higher success rate and faster convergence than other SI-based algorithms like ABC and the firefly algorithm (FFA). Moreover, the results show that even though rapidly-exploring random trees (RRT) are much faster, the proposed strategy produces shorter and smoother paths and even often outperforms probabilistic roadmaps (PRM) in terms of computational time. In spite of the limitations that the greedy nature of the planner still poses when dealing with complex environments, the results highlight a promising line of research that might alleviate current issues in motion planning. The approach is further extended to address multiple objective functions using the multi-objective grey wolf optimizer (MOGWO) based on Pareto dominance, as a typical requirement of robot motion planning is not to produce the shortest possible path, but rather a short-enough, smooth-enough and safe-enough path.

@MastersThesis{ 2016ramosavila,
title = {A Study on Swarm Intelligence: Towards Nature-Inspired
Robot Navigation},
author  = {Ramos Avila, Diego Enrique},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
year = {2016},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
month = {April},
abstract  = {Swarm Intelligence (SI) is a problem-solving approach
whose inspiration comes from cooperative behaviours and
well-defined social structures observed in different animal
species. Within those structures, simple interactions among
“unsophisticated” individuals produce complex and
interesting properties which aid flocks of birds, schools
of fish, packs of predators and colonies of insects in
solving daily tasks in a very efficient way. Several
meta-heuristics such as the particle swarm optimization
(PSO) and the artificial bee colony algorithm (ABC) are
based on this metaphor and entail important benefits. One
of their most remarkable advantages is their ability to
find solutions to NP-hard problems in a very short time. In
the field of robotics, motion planning is a well-known
problem considered to be NP-hard. This work studies SI as a
possible alternative to state-of-the-art global path
planners, with the main objective of producing
robot-traversable paths in a competitive amount of time
without further optimization. In particular, the usage of
the novel grey wolf optimizer (GWO) is proposed and tested
against other SI-based meta-heuristics, and then again
against popular sampling-based planners. The experimental
evaluation indicates that GWO usually has a higher success
rate and faster convergence than other SI-based algorithms
like ABC and the firefly algorithm (FFA). Moreover, the
results show that even though rapidly-exploring random
trees (RRT) are much faster, the proposed strategy produces
shorter and smoother paths and even often outperforms
probabilistic roadmaps (PRM) in terms of computational
time. In spite of the limitations that the greedy nature of
the planner still poses when dealing with complex
environments, the results highlight a promising line of
research that might alleviate current issues in motion
planning. The approach is further extended to address
multiple objective functions using the multi-objective grey
wolf optimizer (MOGWO) based on Pareto dominance, as a
typical requirement of robot motion planning is not to
produce the shortest possible path, but rather a
short-enough, smooth-enough and safe-enough path.},
annote  = {WS13/14 H-BRS - A Study on Swarm Intelligence: Towards
Nature-Inspired Robot Navigation
supervising}
}
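
As a rough sketch of the grey wolf optimizer at the core of the proposed planner, the code below implements the standard GWO update: every candidate solution is pulled towards the three current leaders (alpha, beta, delta) under a linearly decaying exploration coefficient. The sphere objective and all parameter values are illustrative assumptions; in the thesis setting the objective would score candidate paths for length, smoothness and safety instead.

```python
import numpy as np

def gwo_minimize(f, dim, n_wolves=20, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Minimal grey wolf optimizer for a scalar objective f."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(iters):
        fitness = np.apply_along_axis(f, 1, X)
        leaders = X[np.argsort(fitness)[:3]]   # alpha, beta, delta wolves
        a = 2.0 * (1.0 - t / iters)            # decays linearly from 2 to 0
        X_new = np.zeros_like(X)
        for leader in leaders:
            A = a * (2.0 * rng.random(X.shape) - 1.0)
            C = 2.0 * rng.random(X.shape)
            D = np.abs(C * leader - X)
            X_new += leader - A * D            # encircle the leader
        X = np.clip(X_new / 3.0, lo, hi)       # average of the three pulls
    fitness = np.apply_along_axis(f, 1, X)
    return X[np.argmin(fitness)], float(fitness.min())

best, value = gwo_minimize(lambda x: float(np.sum(x ** 2)), dim=5)
print("best:", best.round(3), "objective:", value)
```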

• A. Ortega Sáinz, “Multi-Robot Path Planning in Confined Dynamic Environments,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2016.
[BibTeX] [Abstract]

The automation of transportation tasks in logistics using autonomous robots has been gaining popularity in the last years, due in most part to the advantages they present against other automation technologies or manually operated solutions. The productivity and efficiency of an application employing a Multi-Robot System (MRS) is a direct result of the path planning and coordination strategy used. The Multi-Robot Path Planning (MRPP) problem has been widely researched throughout the years, and given the popularity of the topic and the number of algorithms available, selecting the best approach for a given application is not as straightforward as it may seem. First of all, there is still a gap between the problems solved by the academic community in the state of the art and the capabilities of modern technologies used in real applications. Numerous efforts are being made to close this gap, but given the complexities of evaluating multi-robot systems with real robots, there is still a very long way to go. Furthermore, the lack of a standard methodology to evaluate MRPP problems makes the selection of an approach for a given application difficult, particularly since the reported results are not directly comparable. Using a hospital transportation task as an example use case, this thesis will benchmark decentralized path planning approaches, keeping in mind real-life conditions, oriented towards implementation rather than theory. A comparative analysis will assess the selected approaches with regard to their scalability in the number of robots and their robustness to dynamic environments populated by moving obstacles. An analysis of the evaluation methodology and performance metrics reported in the state of the art will be used as a basis for the proposed set of guidelines and performance metrics to benchmark MRPP approaches hereafter. Finally, the groundwork for a benchmarking framework will be presented.

@MastersThesis{ 2016ortega,
abstract  = {The automation of transportation tasks in logistics using
autonomous robots has been gaining popularity in the last
years, due in most part to the advantages they present
against other automation technologies or manually operated
solutions. The productivity and efficiency of an
application employing a Multi-Robot System (MRS) is a
direct result of the path planning and coordination
strategy used. The Multi-Robot Path Planning (MRPP) problem
has been widely researched throughout the years, and given
the popularity of the topic and the number of algorithms
available, selecting the best approach for a given
application is not as straightforward as it may seem. First
of all, there is still a gap between the problems solved by
the academic community in the state of the art and the
capabilities of modern technologies used in real
applications. Numerous efforts are being made to close this
gap, but given the complexities of evaluating multi-robot
systems with real robots, there is still a very long way to
go. Furthermore, the lack of a standard methodology to
evaluate MRPP problems makes the selection of an approach
for a given application difficult, particularly since the
reported results are not directly comparable. Using a
hospital transportation task as an example use case, this
thesis will benchmark decentralized path planning
approaches, keeping in mind real-life conditions, oriented
towards implementation rather than theory. A comparative
analysis will assess the selected approaches with regard to
their scalability in the number of robots and their
robustness to dynamic environments populated by moving
obstacles. An analysis of the evaluation methodology and
performance metrics reported in the state of the art will
be used as a basis for the proposed set of guidelines and
performance metrics to benchmark MRPP approaches hereafter.
Finally, the groundwork for a benchmarking framework will
be presented.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {WS13/14 HBRS Prassler, Pl{\"o}ger, F{\"u}ller supervising},
author  = {Argentina {Ortega S{\'a}inz}},
keywords  = {multi-robot systems; multi-robot path planning},
month = {July},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Multi-Robot Path Planning in Confined Dynamic
Environments},
year = {2016}
}

• N. Deshpande, “Using Semantic Information for Robot Navigation,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2016.
[BibTeX] [Abstract]

@MastersThesis{ 2016deshpande,
title = {Using Semantic Information for Robot Navigation},
author  = {Niranjan Deshpande},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
year = {2016},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
month = {March},
note = {Advisors: Prof. Dr. Paul G. Pl{\"o}ger, Prof. Dr. Anne
Spalanzani, Mr. Sven Schneider},
abstract  = {Current research in robotics focuses on making autonomous
robots for domestic purposes which have to work along with
humans in real environments. One of the prerequisites for
such robots is the ability to navigate intelligently in
finding a shortest collision free path by avoiding all
obstacles. This approach has worked well in laboratories
and structured environments. However, for robots navigating
in real environments that are filled with humans, often,
finding a shortest collision-free path is not the desired
behaviour. The robot needs to navigate in a more
intelligent manner by adapting its behaviour based on the
context at hand. Moreover, real environments are
unstructured and there are often situations where a robot
needs to reposition objects to navigate and reach its goal.
This work addresses the problem of using semantic
information for robot navigation in such situations.
Towards this goal, different forms of semantic
information useful for navigation have been identified. An
architecture is proposed to represent this information and
use it for different aspects of navigation. The proposed
architecture also uses contextual information about the
environment. The architecture was
developed and implemented using the Robot Operating
System (ROS) framework. Tests have been performed in
simulation using the Pioneer 3DX robot platform.
Preliminary results demonstrate the validity of the proposed
architecture. The robot was able to navigate in a more
desired and intelligent manner by using semantic and
contextual information.}
}

### 2015

• M. Zolghadr, “Semantic Similarity Between Objects in Home Environments,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2015.
[BibTeX] [Abstract]

When a service robot is acting in a home environment, there are situations where some of the objects necessary for completing the task may not be available, because either they are missing or the preconditions for using them are not met. In such situations, humans show incredible flexibility by making substitutions. However, for a robot identifying the similarity between objects and finding an appropriate substitute for an unavailable object is not a straightforward task. Most of the approaches which tried to solve the problem have focused on lifting, where another instance of the same object class is used as a substitute. These approaches apply a limited range of semantics. Most approaches measure similarity between objects using perception. Perception-based approaches tend to be time-consuming. However, the application of semantic-based similarity measures has been successfully and efficiently used in the genetics domain. The approach presented here combines similarity measurement with knowledge base queries in order to find appropriate substitutes. In our ontological approach, similarity measurement is carried out based on a broad range of object semantics, including the functional affordances, object features, part-whole relations, and spatial proximity of the objects in the home environment. In this research, existing tools, previously used in the biological domains, for measuring the similarity of individuals within an ontology are evaluated and an analysis of the most useful measures is conducted. The experiments which were conducted were used to guide the creation of a methodology for the modeling of the objects within the knowledge base. The results show that the Jaccard Similarity Coefficient is particularly well-suited to our goal. The results also highlight the limitations of this approach and suggest that a combination of ontology-based and perception-based approaches may be optimal in order to find a suitable substitute for the unavailable objects.

@MastersThesis{ 2015zolghadr,
abstract  = { When a service robot is acting in a home environment,
there are situations where some of the objects necessary
for completing the task may not be available, because
either they are missing or the preconditions for using them
are not met. In such situations, humans show incredible
flexibility by making substitutions. However, for a robot
identifying the similarity between objects and finding an
appropriate substitute for an unavailable object is not a
straightforward task. Most of the approaches which tried to
solve the problem have focused on lifting, where another
instance of the same object class is used as a substitute.
These approaches apply a limited range of semantics. Most
approaches measure similarity between objects using
perception. Perception-based approaches tend to be
time-consuming. However, the application of semantic-based
similarity measures has been successfully and efficiently
used in the genetics domain. The approach presented here
combines similarity measurement with knowledge base queries
in order to find appropriate substitutes. In our
ontological approach, similarity measurement is carried out
based on a broad range of object semantics, including the
functional affordances, object features, part-whole
relations, and spatial proximity of the objects in the home
environment. In this research, existing tools, previously
used in the biological domains, for measuring the
similarity of individuals within an ontology are evaluated
and an analysis of the most useful measures is conducted.
The experiments which were conducted were used to guide the
creation of a methodology for the modeling of the objects
within the knowledge base. The results show that the
Jaccard Similarity Coefficient is particularly well-suited
to our goal. The results also highlight the limitations of
this approach and suggest that a combination of
ontology-based and perception-based approaches may be
optimal in order to find a suitable substitute for the
unavailable objects. },
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {WS08/09 Kraetzschmar, Pl{\"o}ger, Awaad supervising},
month = {April},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Semantic Similarity Between Objects in Home Environments},
year = {2015}
}
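
For reference, the Jaccard Similarity Coefficient that the evaluation singles out is simple to state and compute: the size of the intersection of two annotation sets divided by the size of their union. The sketch below applies it to hypothetical semantic annotations (affordances, features, locations) of household objects; the annotation sets are invented for illustration and are not drawn from the thesis ontology.

```python
def jaccard_similarity(a: set, b: set) -> float:
    """Jaccard Similarity Coefficient: |A intersect B| / |A union B|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical ontology annotations for a cup and two candidate substitutes.
cup   = {"can-contain-liquid", "graspable", "has-handle", "in-kitchen"}
glass = {"can-contain-liquid", "graspable", "transparent", "in-kitchen"}
bowl  = {"can-contain-liquid", "graspable", "in-kitchen", "wide-opening",
         "heavy"}

candidates = {"glass": glass, "bowl": bowl}
best = max(candidates, key=lambda k: jaccard_similarity(cup, candidates[k]))
print({k: round(jaccard_similarity(cup, v), 2) for k, v in candidates.items()},
      "-> substitute:", best)
```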

• M. Vayugundla, “Experimental Evaluation and Improvement of a Viewframe-Based Navigation Method,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2015.
[BibTeX] [Abstract]

Insects like ants and bees navigate robustly in their environments in spite of their small brains using vision as their primary sensor. Inspired by this, researchers at DLR are working on a range-free navigation system using visual features. This ability is especially useful for autonomous navigation in large environments and also for computationally limited small robots. Each location in the environment is represented as a viewframe. A viewframe is a set of landmark observations where each landmark observation contains a landmark ID, a descriptor and the corresponding angle with respect to the robot’s location. Binary Robust Invariant Scalable Keypoints (BRISK) features extracted from the omnidirectional images were used as landmarks in this work. The environment is represented as a Trail-Map which preserves the relationship between adjacent viewframes and is efficient at both storing and pruning the map size when required. This work experimentally evaluates the current system and improves it. In this work, as an extension to the Trail-Map representation, topological knowledge was extracted with the help of dimensionality reduction techniques and by defining dissimilarity measures between any two viewframes. Using this topological knowledge, a pose graph is developed adding edges between viewframes based on how close they are in addition to the adjacency connections. With the help of this map, shorter paths were identified for homing. The topological mapping pipeline was implemented on the robot and experiments were performed in both indoor and outdoor environments. The performance of different dissimilarity measures and dimensionality reduction techniques in building a topological map of viewframes was evaluated. The experiments showed that using this pose graph representation, the robot could take shorter paths which are a subset of the long exploration paths by using the intersections of the paths.

@MastersThesis{ 2015vayugundla,
abstract  = {Insects like ants and bees navigate robustly in their
environments in spite of their small brains using vision as
their primary sensor. Inspired by this, researchers at DLR
are working on a range-free navigation system using visual
features. This ability is especially useful for autonomous
navigation in large environments and also for
computationally limited small robots.
Each location in the environment is represented as a
viewframe. A viewframe is a set of landmark observations
where each landmark observation contains a landmark ID, a
descriptor and the corresponding angle with respect to the
robot's location. Binary Robust Invariant Scalable
Keypoints (BRISK) features extracted from the
omnidirectional images were used as landmarks in this work.
The environment is represented as a Trail-Map which
preserves the relationship between adjacent viewframes and
is efficient at both storing and pruning the map size when
required. This work experimentally evaluates the current
system and improves it.
In this work, as an extension to the Trail-Map
representation, topological knowledge was extracted with
the help of dimensionality reduction techniques and by
defining dissimilarity measures between any two viewframes.
Using this topological knowledge, a pose graph is developed
adding edges between viewframes based on how close they are
in addition to the adjacency connections. With the help of
this map, shorter paths were identified for homing. The
topological mapping pipeline was implemented on the robot
and experiments were performed in both indoor and outdoor
environments. The performance of different dissimilarity
measures and dimensionality reduction techniques in
building a topological map of viewframes was evaluated.
The experiments showed that using this pose graph
representation, the robot could take shorter paths which
are a subset of the long exploration paths by using the
intersections of the paths.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {SS11 Heiden, Kraetzschmar, Stelzer supervising},
author  = {Mallikarjuna Vayugundla},
month = {December},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Experimental Evaluation and Improvement of a
Viewframe-Based Navigation Method},
year = {2015}
}
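
The shortcut-finding idea admits a compact sketch: chain the viewframes along the traversed trail, add extra edges wherever two viewframes observe sufficiently many common landmarks, and run a shortest-path search for homing. The dissimilarity measure below (fraction of unshared landmark IDs) and its threshold are illustrative assumptions; the thesis evaluates several dissimilarity measures and dimensionality reduction techniques.

```python
import networkx as nx

def dissimilarity(vf_a: set, vf_b: set) -> float:
    """Dissimilarity of two viewframes, here the fraction of landmark
    IDs not shared between them (one of many possible measures)."""
    return 1.0 - len(vf_a & vf_b) / max(len(vf_a | vf_b), 1)

def build_pose_graph(viewframes, threshold=0.6):
    """Chain adjacent viewframes (the trail), then add shortcut edges
    between pairs whose appearance is similar enough."""
    g = nx.Graph()
    g.add_nodes_from(range(len(viewframes)))
    for i in range(len(viewframes) - 1):
        g.add_edge(i, i + 1)                    # adjacency (trail) edge
    for i in range(len(viewframes)):
        for j in range(i + 2, len(viewframes)):
            if dissimilarity(viewframes[i], viewframes[j]) < threshold:
                g.add_edge(i, j)                # shortcut edge
    return g

# Toy trail whose first and last viewframes share landmarks, so homing
# can cut straight across instead of retracing the whole trail.
trail = [{1, 2, 3}, {2, 3, 4}, {7, 8}, {8, 9}, {1, 2, 9}]
print(nx.shortest_path(build_pose_graph(trail), 0, 4))   # -> [0, 4]
```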

• J. Sanchez, “Robust and Safe Manipulation by Sensor Fusion of Robotic Manipulators and End-Effectors,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2015.
[BibTeX] [Abstract]

A continuously increasing demand for staff able to take care of the elderly has generated interest in using robots as caregivers. However, the current state of the art limits the complexity of the tasks a robot can achieve. Another important constraint is the ability of the robot to safely react to unexpected behaviors of a person. Thus, the first step is ensuring the robot can detect these events. In this work, a multi-sensor system based on design patterns is proposed to detect an object (e.g. a person’s arm) slipping from the robot’s grasp. Tactile sensors are used in combination with a force-torque sensor to provide complementary information. Over 500 experiments on a Care-O-bot 3 provided a comparative evaluation of a slip detector implementation. The results show an improved performance when combining both modalities (tactile and force). Furthermore, the proposed implementation proved to be able to operate with different grasp shapes while maintaining a high accuracy. Lastly, the evaluation also exhibited the difficulties encountered in detecting motions of a human arm being grasped by a robot.

@MastersThesis{ 2015sanchez,
abstract  = {A continuously increasing demand for staff able to take
care of the elderly has generated interest in using robots
as caregivers. However, the current state of the art limits
the complexity of the tasks a robot can achieve. Another
important constraint is the ability of the robot to safely
react to unexpected behaviors of a person. Thus, the first
step is ensuring the robot can detect these events. In this
work, a multi-sensor system based on design patterns is
proposed to detect an object (e.g. a person's arm) slipping
from the robot's grasp. Tactile sensors are used in
combination with a force-torque sensor to provide
complementary information. Over 500 experiments on a
Care-O-bot 3 provided a comparative evaluation of a slip
detector implementation. The results show an improved
performance when combining both modalities (tactile and
force). Furthermore, the proposed implementation proved to be
able to operate with different grasp shapes while
maintaining a high accuracy. Lastly, the evaluation also
exhibited the difficulties encountered in detecting motions
of a human arm being grasped by a robot.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {WS12/13 BRSU - RoboCup Pl{\"o}ger, Kraetzschmar,
Schneider, supervising},
author  = {Jose Sanchez},
month = {March},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Robust and Safe Manipulation by Sensor Fusion of Robotic
Manipulators and End-Effectors},
year = {2015}
}

• F. Kilic, “Application and improvement of the TRRL (Transport and Road Research Laboratory) high-speed laser profilometer algorithm with sensor fusion,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2015.
[BibTeX] [Abstract]

@MastersThesis{ 2015kilic,
abstract  = {Maintenance of the highway networks is considered to be
vital in order to ensure transportation safety and quality.
Among the different quality criteria used for a highway,
road roughness stands out as the most important factor
which deteriorates the driving comfort and endangers the
passengers. High-speed profilometers are developed in order
to be able to monitor road roughness on the highways
without affecting the normal traffic flow. These devices
record road elevation profiles while driving at highway
speeds.
The Transport and Road Research Laboratory (TRRL) has
developed a unique high-speed profilometer system. Their
design contains four laser displacement transducers which
are placed along a long trailer to be towed by a vehicle.
As the trailer moves forward, the leading sensor measures
the road elevation and the other sensors provide a
reference for the new measurements.
This thesis focuses on the improvement and the
implementation of TRRL-type high-speed profilometers. In
order to enable faster and safer operations on the
highways, the geometric design of the original system is
changed to make it more compact. Using a more compact
design, however, changes the measurement geometry,
and the increased angles are expected to introduce a new
source of error for the measurements. The thesis
investigates the effects of vehicle pitching on the
measured profiles, and it suggests a method for pitch angle
estimation in order to correct the laser measurements. The
error analysis seen in the original work is extended and
the algorithm is adjusted for the new geometric design. The
factor that affects the measurement accuracy the most is
determined to be the surface texture due to its
randomness. The performance of the originally suggested
method for eliminating the texture-caused errors is
observed to be insufficient. Therefore, the thesis proposes
a new method with a better performance which eliminates the
texture-caused errors by modelling them with quadratic
functions.
The overall performance of the presented profilometer is
evaluated by conducting experiments on a road with a known
true profile. The accuracy and the repeatability of the
observed results indicate that the developed profilometer
can be used for measuring true profiles with some further
improvement.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {WS12 measX GmbH & Co. KG - BAST (High-Speed Profilometer)
Ploeger, Breuer, Hilsmann supervising},
author  = {Furkan Kilic},
month = {January},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Application and improvement of the TRRL (Transport and
Road Research Laboratory) high-speed laser profilometer
algorithm with sensor fusion},
year = {2015}
}
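
One way to realize the proposed quadratic texture correction is sketched below: over each short window of raw laser samples, a quadratic is fitted and kept as the locally smooth profile, so the high-frequency texture deviations are discarded. The window length, the per-window fitting scheme and the synthetic data are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def remove_texture(profile: np.ndarray, window: int = 25) -> np.ndarray:
    """Suppress texture-caused errors by modelling each window of the
    raw profile with a quadratic and keeping only the fitted values."""
    smoothed = np.empty_like(profile)
    x = np.arange(window)
    for start in range(0, len(profile), window):
        seg = profile[start:start + window]
        coeffs = np.polyfit(x[:len(seg)], seg, deg=2)   # quadratic model
        smoothed[start:start + window] = np.polyval(coeffs, x[:len(seg)])
    return smoothed

# Synthetic road: a smooth undulation plus high-frequency texture noise.
x = np.linspace(0.0, 10.0, 500)
road = 0.05 * np.sin(0.8 * x)
raw = road + 0.004 * np.random.default_rng(1).standard_normal(x.size)
print("residual std:", float(np.std(remove_texture(raw) - road)))
```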

• D. Hernandez, “Robust Localization and Path-tracking for a Mobile Outdoor Robot,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2015.
[BibTeX] [Abstract]

The aim of this master’s thesis is the development of a robust localization and path-tracking system for mobile outdoor robots. The system is implemented in the context of a mobile robot as an assistive device. The developed localization system is suitable for robots operating in pedestrian and urban areas, running along walking or bicycle paths. The algorithm is an implementation of a particle filter framework. Data from low-cost GPS, odometry sensors, digital maps and a novel visual road-detection algorithm are fused to estimate the robot’s location. The results show a consistent estimate of the robot’s location with a closed-loop error of about 1 meter. The robustness of the approach is demonstrated by experimental results covering different map configurations, highlighting the weaknesses of low-cost GPS and a good algorithm performance even when some of the data is unavailable.

@MastersThesis{ 2015hernandez,
abstract  = {The aim of this master's thesis is the development of a
robust localization and path-tracking system for mobile
outdoor robots. The system is implemented in the context of
a mobile robot as an assistive device. The developed
localization system is suitable for robots operating in
pedestrian and urban areas, running along
walking or bicycle paths. The algorithm is an
implementation of a particle filter framework. Data from
low-cost GPS, odometry sensors, digital maps and a novel
visual road-detection algorithm are fused to
estimate the robot's location. The results show a consistent
estimate of the robot's location with a closed-loop error
of about 1 meter. The robustness of the approach is
demonstrated by experimental results covering
different map configurations, highlighting the weaknesses of
low-cost GPS and a good algorithm performance even when
some of the data is unavailable.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {WS12/13 Locomotec - RUFUS Prassler, Asteroth, Blumenthal},
author  = {David Hernandez},
month = {June},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Robust Localization and Path-tracking for a Mobile Outdoor
Robot},
year = {2015}
}
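
The localization pipeline described here follows the standard particle filter cycle; the minimal sketch below shows one predict-weight-resample step with odometry as the motion model and an isotropic Gaussian GPS likelihood. The map and road-detection likelihoods used in the thesis would simply multiply into the same weights; all noise parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def particle_filter_step(particles, odom_delta, gps_fix, gps_std=3.0):
    """One predict-update-resample cycle for planar (x, y) localization."""
    # Predict: apply the odometry displacement with diffusion noise.
    particles = particles + odom_delta + rng.normal(0.0, 0.2, particles.shape)
    # Update: weight each particle by the GPS likelihood.
    d2 = np.sum((particles - gps_fix) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / gps_std ** 2)
    w /= w.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

true_pos = np.zeros(2)
particles = rng.normal(true_pos, 5.0, size=(500, 2))
for _ in range(20):
    true_pos = true_pos + np.array([1.0, 0.0])       # robot moves east
    gps = true_pos + rng.normal(0.0, 3.0, 2)         # noisy low-cost GPS fix
    particles = particle_filter_step(particles, np.array([1.0, 0.0]), gps)
print("truth:", true_pos, "estimate:", particles.mean(axis=0).round(2))
```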

### 2014

• M. Valdenegro, “Fast Text Detection for Road Scenes,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2014.
[BibTeX] [Abstract]

Extraction of text information from visual sources is an important component of many modern applications, for example, extracting the text from traffic signs on a road scene in an autonomous vehicle. For natural images or road scenes this is an unsolved problem. In this thesis the use of histogram of stroke widths (HSW) for character and non-character region classification is presented. Stroke widths are extracted using two methods. One is based on the Stroke Width Transform and another based on run lengths. The HSW is combined with two simple region features, aspect and occupancy ratios, and then a linear SVM is used as classifier. One advantage of our method over the state of the art is that it is script-independent and can also be used to verify detected text regions with the purpose of reducing false positives. Our experiments on generated datasets of Latin, CJK, Hiragana and Katakana characters show that the HSW is able to correctly classify at least 90% of the character regions; a similar figure is obtained for non-character regions. This performance is also obtained when training the HSW with one script and testing with a different one, and even when characters are rotated. On the English and Kannada portions of the Chars74K dataset we obtained over 95% correctly classified character regions. The use of raycasting for text line grouping is also proposed. By combining it with our HSW-based character classifier, a text detector based on Maximally Stable Extremal Regions (MSER) was implemented. The text detector was evaluated on our own dataset of road scenes from the German Autobahn, where 65% precision and 72% recall (an f-score of 69%) were obtained. Using the HSW as a text verifier increases precision while slightly reducing recall. Our HSW feature allows the building of a script-independent, low-parameter-count classifier for character and non-character regions.

@MastersThesis{ 2014valdenegro,
abstract  = {Extraction of text information from visual sources is an
important component of many modern applications, for
example, extracting the text from traffic signs on a road
scene in an autonomous vehicle. For natural images or road
scenes this is an unsolved problem. In this thesis the use
of histogram of stroke widths (HSW) for character and
non-character region classification is presented. Stroke
widths are extracted using two methods. One is based on the
Stroke Width Transform and another based on run lengths.
The HSW is combined with two simple region features,
aspect and occupancy ratios, and then a linear SVM is
used as classifier. One advantage of our method over the
state of the art is that it is script-independent and can
also be used to verify detected text regions with the
purpose of reducing false positives. Our experiments on
generated datasets of Latin, CJK, Hiragana and Katakana
characters show that the HSW is able to correctly classify
at least 90% of the character regions; a similar figure is
obtained for non-character regions. This performance is
also obtained when training the HSW with one script and
testing with a different one, and even when characters are
rotated. On the English and Kannada portions of the
Chars74K dataset we obtained over 95% correctly classified
character regions. The use of raycasting for text line
grouping is also proposed. By combining it with our
HSW-based character classifier, a text detector based on
Maximally Stable Extremal Regions (MSER) was implemented.
The text detector was evaluated on our own dataset of road
scenes from the German Autobahn, where 65% precision and 72%
recall (an f-score of 69%) were obtained. Using the HSW as
a text verifier increases precision while slightly reducing
recall. Our HSW feature allows the building of a
script-independent, low-parameter-count classifier for
character and non-character regions.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {WS2012 - Fraunhofer IAIS Pl{\"o}ger, Kraetzschmar,
Eickeler supervising},
author  = {Matias Valdenegro},
month = {September},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Fast Text Detection for Road Scenes},
year = {2014}
}
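
The run-length variant of the histogram of stroke widths is easy to sketch: stroke widths are taken as the lengths of consecutive foreground runs in each row of a binary character mask, histogrammed and normalized, then concatenated with the aspect and occupancy ratios to form the feature vector fed to the linear SVM. The bin count and the toy mask below are illustrative assumptions.

```python
import numpy as np

def run_lengths(row: np.ndarray) -> np.ndarray:
    """Lengths of consecutive foreground runs in one binary row."""
    padded = np.concatenate(([0], row.astype(int), [0]))
    edges = np.flatnonzero(np.diff(padded))
    return edges[1::2] - edges[::2]

def hsw_feature(mask: np.ndarray, n_bins: int = 16) -> np.ndarray:
    """Normalized histogram of stroke widths plus the two simple region
    features used alongside it: aspect ratio and occupancy ratio."""
    widths = np.concatenate([run_lengths(r) for r in mask])
    hist, _ = np.histogram(widths, bins=n_bins, range=(1, mask.shape[1]))
    hist = hist / max(hist.sum(), 1)
    h, w = mask.shape
    return np.concatenate([hist, [w / h, mask.sum() / mask.size]])

# A crude 'I'-like region: a vertical bar of constant stroke width 2.
mask = np.zeros((12, 8), dtype=bool)
mask[:, 3:5] = True
print(hsw_feature(mask).round(2))
```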

• R. K. Venkat, “Smart Person Counter in Mid-Ranging Environments,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2014.
[BibTeX] [Abstract]

Growing usage of train networks is pushing this vital public transport infrastructure to its physical capacity limitations in major cities globally. Capacity planning and people re-routing are increasingly becoming ubiquitous nowadays to manage such increasingly crowded environments. With the current trends of growing technology, robotics and automation are an attractive option for solving this problem. A logical starting point to address the task of capacity planning is to understand the extent of the issue: for example, building automated algorithms for counting the number of people in such scenarios. Keeping this in mind, this thesis concentrates on achieving a robust people counting system which can handle real-world scenarios posing very intricate and challenging situations. A few of the challenging situations are tracking multiple people walking across the sensor’s field of view (FOV) and counting them correctly within an acceptable time for the system to work in real time, tracking people moving very close to each other, tracking people with a variety of walking behaviors and tracking people walking past obstacles such as pillars etc. Out of the above mentioned challenges, this thesis mainly concentrates on the following two problems: 1. Counting multiple people (8-10 in number appearing simultaneously in a single observation) in real time; 10 is approximately the maximum number of people which the PrimeSense sensor’s FOV can accommodate in a single observation within its working range. 2. Tracking closely moving people, in which case their corresponding blobs merge and the system no longer detects them as belonging to two people. The first problem of counting multiple people appearing simultaneously in real time is handled by building the person counting system on top of already existing person detection systems and an existing particle filter. A background image subtraction step is added to improve the people detection rate. The latter problem of closely moving people is handled using an event graph-based approach with a step of identity mapping, for which our approach uses the profile shape of people’s head and shoulders, which has been shown to be useful in identifying individuals. The evaluation results show that the person counting system is robust enough to handle 8-10 people walking across the FOV in a single observation with an average execution time of 240 ms and a maximum (worst case) execution time of 470 ms. Additionally, the system is robust to scenarios where people walk very close to each other, maintaining an accurate person count. An empirical evaluation of the approach in a major Sydney public train station (N=522), and results demonstrating the methods in the complexities of this challenging environment, are also presented. Furthermore, these results demonstrate that the novel methods contribute significantly to the person counting system, are real-world viable and hence lay a foundation for the idea of people congestion awareness towards the goal of achieving efficient capacity planning.

@MastersThesis{ 2014venkat,
abstract  = {Growing usage of train networks is pushing this vital
public transport infrastructure to its physical capacity
limitations in major cities globally. Capacity planning and
people re-routing are increasingly becoming ubiquitous
nowadays to manage such increasingly crowded environments.
With the current trends of growing technology, robotics and
automation are an attractive option for solving this
problem. A logical starting point to address the task of
capacity planning is to understand the extent of the issue:
for example, building automated algorithms for counting
the number of people in such scenarios. Keeping this in
mind, this thesis concentrates on achieving a robust people
counting system which can handle real-world scenarios
posing very intricate and challenging situations. A few of
the challenging situations are tracking multiple people
walking across the sensor's field of view (FOV) and counting
them correctly within an acceptable time for the system to
work in real time, tracking people moving very close to
each other, tracking people with a variety of walking
behaviors and tracking people walking past obstacles such
as pillars etc. Out of the above mentioned challenges, this
thesis mainly concentrates on the following two problems: 1.
Counting multiple people (8-10 in number appearing
simultaneously in a single observation) in real time; 10
is approximately the maximum number of people which the
PrimeSense sensor's FOV can accommodate in a single
observation within its working range. 2. Tracking closely
moving people, in which case their corresponding blobs
merge and the system no longer detects them as belonging to
two people. The first problem of counting multiple people
appearing simultaneously in real time is handled by
building the person counting system on top of already
existing person detection systems and an existing particle
filter. A background image subtraction step is added to
improve the people detection rate. The latter problem of
closely moving people is handled using an event graph-based
approach with a step of identity mapping for which our
approach uses the profile shape of people's head and
shoulders which has been shown to be useful in identifying
individuals. The evaluation results show that the person
counting system is robust enough to handle 8-10 people
walking across the FOV in a single observation with an
average execution time of 240 ms and a maximum (worst
case) execution time of 470 ms. Additionally, the system is
robust to scenarios where people walk very close to each
other, maintaining an accurate person count. An empirical
evaluation of the approach in a major Sydney public train
station (N=522), and results demonstrating the methods in
the complexities of this challenging environment, are also
presented. Furthermore, these results demonstrate that
the novel methods contribute significantly to the person
counting system, are real-world viable and hence lay a
foundation for the idea of people congestion awareness
towards the goal of achieving efficient capacity planning.
},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {SS11 FH-BRS - UTS,Sydney Pl{\"o}ger, Kraetzschmar,
Kirchner supervising},
author  = {Ravi Kumar Venkat},
month = {May},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Smart Person Counter in Mid-Ranging Environments},
year = {2014}
}

• C. Röhrig, A. Heller, D. Hess, and F. Künemund, “Global Localization and Position Tracking of Automatic Guided Vehicles using passive RFID Technology,” in ISR/Robotik 2014; 41st International Symposium on Robotics, 2014, pp. 1-8.
[BibTeX]
@InProceedings{ roehrig,
author  = {R{\"o}rig, C. and Heller, A. and Hess, D. and
K{\"u}nemund, F.},
booktitle  = {ISR/Robotik 2014; 41st International Symposium on
Robotics},
month = {June},
pages = {1-8},
title = {{Global Localization and Position Tracking of Automatic
Guided Vehicles using passive RFID Technology}},
year = {2014}
}

• P. Ramanujam, “Robust Navigation of a Mobile Manipulator in a Dynamic Environment,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2014.
[BibTeX] [Abstract]

Obstacle avoidance is one of the essential components to achieve autonomy in robot navigation. Algorithms for obstacle avoidance have been developed over the past two decades, and the recent advances made in these algorithms deal not only with static but also dynamic obstacles. However, in the case of highly cluttered environments, these algorithms stop navigating the robot and wait for the environment to clear, which is undesirable in crowded environments. Another situation during which these algorithms break is when the external sensors fail or are unable to perceive the environment. The condition of the robot in such scenarios is similar to a human navigating in the dark. The thesis is motivated by the above arguments and addresses the problem similar to how humans cope with trying to find a path in a heavily crowded environment or in the dark. The sense of touch and pressure is used to recover from such scenarios. In order to achieve this, the concept of \emph{compliance} has been adapted from the field of robot manipulation and applied to robot navigation, whereby the collision is considered as an external force being exerted on the system. The thesis proposes a coherent framework to combine compliance in navigation with existing navigation algorithms, thereby allowing robots to deal with real as well as virtual forces of the environment. In order to realize the framework, a kinetic analysis of a four-wheeled omni-directional robot was performed. As a result, the robot can be torque controlled given a certain path to be followed. This provides an estimate of the force exerted by the robot for its motion. A disturbance observer based on the robot’s momentum was implemented in order to detect undesirable collisions. The experiments show encouraging results for detecting obstacles at low velocity.

@MastersThesis{ 2014ramanujam,
abstract  = {Obstacle avoidance is one of the essential components to
achieve autonomy in robot navigation. Algorithms for
obstacle avoidance have been developed over the past two
decades, and the recent advances made in these algorithms
deal not only with static but also dynamic obstacles. However,
in the case of highly cluttered environments, these
algorithms stop navigating the robot and wait for the
environment to clear, which is undesirable in crowded
environments. Another situation during which these
algorithms break is when the external sensors fail or are
unable to perceive the environment. The condition of the
robot in such scenarios is similar to a human navigating in
the dark.
The thesis is motivated by the above arguments and
addresses the problem similar to how humans cope with
trying to find a path in a heavily crowded environment or
in the dark. The sense of touch and pressure is used to
recover from such scenarios. In order to achieve this, the
concept of \emph{compliance} has been adapted from the
field of robot manipulation and applied to robot
navigation, whereby the collision is considered as an
external force being exerted on the system.
The thesis proposes a coherent framework to combine
compliance in navigation with existing navigation
algorithms, thereby allowing robots to deal with real as
well as virtual forces of the environment. In order to
realize the framework, a kinetic analysis of a four-wheeled
omni-directional robot was performed. As a result, the
robot can be torque controlled given a certain path to be
followed. This provides an estimate of the force exerted by
the robot for its motion. A disturbance observer based on
the robot's momentum was implemented in order to detect
undesirable collisions. The experiments show encouraging
results for detecting obstacles at low velocity. },
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {SS11 FH-BRS Pl{\"o}ger, Prassler, Blumenthal},
author  = {P. Ramanujam},
month = {August},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Robust Navigation of a Mobile Manipulator in a Dynamic
Environment},
year = {2014}
}
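
A momentum-based disturbance observer of the kind used here for collision detection can be written in a few lines: the residual r integrates the mismatch between the measured generalized momentum and the momentum expected from the commanded forces, and converges to the external force with bandwidth K. The sketch below omits Coriolis and gravity terms, which is a simplification reasonable for a planar omnidirectional base, and uses illustrative inertia and gain values.

```python
import numpy as np

class MomentumObserver:
    """Discrete momentum-based residual r that tracks external forces."""

    def __init__(self, M: np.ndarray, K: float, dt: float):
        self.M, self.K, self.dt = M, K, dt
        self.integral = np.zeros(M.shape[0])
        self.r = np.zeros(M.shape[0])

    def update(self, u: np.ndarray, qdot: np.ndarray) -> np.ndarray:
        p = self.M @ qdot                      # measured generalized momentum
        self.integral += (u + self.r) * self.dt
        self.r = self.K * (p - self.integral)  # residual -> external force
        return self.r

# Simulated collision: a constant 5 N external force appears at t = 1 s.
dt, M = 0.001, np.diag([20.0, 20.0])           # 20 kg planar base
observer = MomentumObserver(M, K=50.0, dt=dt)
qdot = np.zeros(2)
for k in range(2000):
    f_ext = np.array([5.0, 0.0]) if k * dt >= 1.0 else np.zeros(2)
    u = np.array([2.0, 0.0])                   # commanded force
    qdot = qdot + np.linalg.solve(M, u + f_ext) * dt
    r = observer.update(u, qdot)
print("estimated external force:", r.round(2))  # approx. [5, 0]
```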

• S. Junoh, “Development of a Cognitive UAV for Medical Assistance Application: Integration of a Soar Agent in ROS,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2014.
[BibTeX] [Abstract]

A scenario where a set of small-scale UAVs is used to deliver medical goods to assist injured or sick people in remote locations is proposed. A typical example would be a person having been bitten by a snake in a remote area and desperately needing snake antivenom. The UAV emergency system would then prepare a UAV with the snake antivenom and the UAV would then fly to the injured person and deliver the medicine. As this medical assistance system should operate at all times and would often be based in remote locations, it is only feasible when it involves minimum human interaction. This is one of the perfect scenarios in which a UAV system benefits from a high degree of autonomy. The UAV has to handle the delivery of the medicine even in adverse conditions without relying on human intervention. Fortunately, over the past decades, there has been plenty of progress in the research of autonomy allowing us nowadays to design vehicles with an ever increasing degree of autonomy. One research effort in the field of autonomy is cognitive architectures, with Soar being among the most prominent. Soar tries to model the human cognitive process and ability in a software architecture. These efforts (since 1983) resulted in a flexible cognitive architecture which is available as Open Source software. In this thesis, the use of Soar in the context of the medical assistance UAV is investigated. While all autonomy functionalities are handled by the Soar framework, general capabilities and the middleware of the UAV are modeled using the widely used robot framework ROS. ROS has been providing many modules which allow for flying and navigating small-scale UAVs and therefore enabling a fast prototype development of the UAV software. In this thesis, both the system and the cognitive functionalities of the autonomous medical assistance UAV are designed. Consequently, a major part of this thesis is the integration of Soar into ROS (ROSified Soar). Starting from an overall architecture design including the UAV systems and the cognitive agent, a software implementation was derived. Finally, the UAV model was tested in a simulation environment (ROS/Gazebo) where the UAV performed a delivery mission. The simulation included the straightforward regular delivery as well as missions where stressors were applied which forced the Soar agent to react and make alternative decisions. Preliminary simulation data reveal that this approach has the potential for creating such a medical assistance UAV system. This also means that the synergy between Soar and ROS has been achieved and hence has shown the usefulness of this integration for the use of UAVs deployed for complex missions.

@MastersThesis{ 2014junoh,
abstract  = {A scenario where a set of small-scale UAVs is used to
deliver medical goods to assist injured or sick people in
remote locations is proposed. A typical example would be a
person having been bitten by a snake in a remote area and
desperately needing snake antivenom. The UAV emergency
system would then prepare a UAV with the snake antivenom
and the UAV would then fly to the injured person and
deliver the medicine. As this medical assistance system
should operate at all times and would often be based in
remote locations, it is only feasible when it involves
minimum human interaction. This is one of the perfect
scenarios in which a UAV system benefits from a high degree
of autonomy. The UAV has to handle the delivery of the
medicine even in adverse conditions without relying on
human intervention. Fortunately, over the past decades,
there has been plenty of progress in the research of
autonomy, allowing us nowadays to design vehicles with an
ever increasing degree of autonomy. One research effort in
the field of autonomy is cognitive architectures, with Soar
being among the most prominent. Soar tries to model the
human cognitive process and ability in a software
architecture. These efforts (since 1983) resulted in a
flexible cognitive architecture which is available as Open
Source software. In this thesis, the use of Soar in the
context of the medical assistance UAV is investigated.
While all autonomy functionalities are handled by the Soar
framework, the general capabilities and the middleware of
the UAV are modeled using the widely used robot framework
ROS. ROS provides many modules which allow for flying and
navigating small-scale UAVs, therefore enabling fast
prototype development of the UAV software. In this thesis,
both the system and the cognitive functionalities of the
autonomous medical assistance UAV are designed.
Consequently, a major part of this thesis is the
integration of Soar into ROS (ROSified Soar). Starting from
an overall architecture design including the UAV systems
and the cognitive agent, a software implementation was
derived. Finally, the UAV model was tested in a simulation
environment (ROS/Gazebo) where the UAV performed a delivery
mission. The simulation included the straightforward
regular delivery as well as missions where stressors were
applied which forced the Soar agent to react and make
alternative decisions. Preliminary simulation data reveal
that this approach has the potential for creating such a
medical assistance UAV system. This also means that the
synergy between Soar and ROS has been achieved and has
shown the usefulness of this integration for the use of
UAVs deployed for complex missions.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {SS09 Silver Atena Electronic Systems Engineering GmbH -
Autonomy Kreatzschmar, Heni, Stenger supervising},
author  = {Shahmi Junoh},
month = {November},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Development of a Cognitive UAV for Medical Assistance
Application: Integration of a Soar Agent in ROS},
year = {2014}
}
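
A minimal sketch of the bridge pattern the thesis describes: sensor data flows in over a ROS topic, a decision is taken, and a command is published. The Soar agent itself is replaced here by a stub decide() function, and the node and topic names are invented for the example; in the actual work, the decision would come from Soar's output-link after a decision cycle.

```python
#!/usr/bin/env python
import rospy
from std_msgs.msg import String
from geometry_msgs.msg import Twist

def decide(status):
    # Placeholder for the Soar agent: an illustrative reactive policy,
    # not the cognitive behavior implemented in the thesis.
    cmd = Twist()
    if status == "obstacle":
        cmd.angular.z = 0.5   # turn away
    else:
        cmd.linear.x = 1.0    # continue toward the delivery point
    return cmd

def main():
    rospy.init_node("soar_bridge")  # node name is an assumption
    pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)

    def on_status(msg):
        # mimics writing sensor data to the agent's input-link,
        # running a decision cycle, then reading the output-link
        pub.publish(decide(msg.data))

    rospy.Subscriber("uav_status", String, on_status)
    rospy.spin()

if __name__ == "__main__":
    main()
```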

• N. Giftsun, “‘Stack of Tasks’ Controller for Mobile Manipulators with a Path Planner,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2014.
[BibTeX] [Abstract]

Mobile Manipulators are usually redundant systems providing the flexibility of handling task space constraints. Prioritized Task Function based control schemes are quite popular in the community of Humanoid Robots, which are equipped with roughly five times the degrees of freedom of a Mobile Manipulator. These control schemes don’t require a closed loop inverse kinematic solution and can avoid conflicts between multiple tasks. ‘Stack of Tasks’ (SOT) is one such framework, supported by a state-of-the-art solver to handle equality and inequality constraints efficiently. Though task function based control strategies are quite attractive in terms of handling multiple tasks, they are only locally optimal. The controller needs the help of a global planner in avoiding the local minima which are inherent in all Jacobian based controllers. This thesis focuses on developing a generic SOT controller for Mobile Manipulators capable of handling both motion-generation and path following tasks in real time scenarios. A study is done on the solver and the planning modules to bridge the planning component and the SOT controller. The performance of the controller is evaluated by experiments both in simulation and on a real PR2 robot.

@MastersThesis{ 2014giftsun,
abstract  = {Mobile Manipulators are usually redundant systems
providing the flexibility of handling task space
constraints. Prioritized Task Function based control
schemes are quite popular in the community of Humanoid
Robots, which are equipped with roughly five times the
degrees of freedom of a Mobile Manipulator. These control
schemes don't require a closed loop inverse kinematic
solution and can avoid conflicts between multiple tasks.
'Stack of Tasks' (SOT) is one such framework, supported by
a state-of-the-art solver to handle equality and inequality
constraints efficiently. Though task function based control
strategies are quite attractive in terms of handling
multiple tasks, they are only locally optimal. The
controller needs the help of a global planner in avoiding
the local minima which are inherent in all Jacobian based
controllers. This thesis focuses on developing a generic
SOT controller for Mobile Manipulators capable of handling
both motion-generation and path following tasks in real
time scenarios. A study is done on the solver and the
planning modules to bridge the planning component and the
SOT controller. The performance of the controller is
evaluated by experiments both in simulation and on a real
PR2 robot.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {SS2011 LAAS-CNRS - Factory in a Day Ploeger, Lamiraux,
Kahl supervising},
author  = {N. Giftsun},
month = {October},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {'Stack of Tasks' Controller for Mobile Manipulators with
a Path Planner},
year = {2014}
}
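
The prioritized task resolution underlying ‘Stack of Tasks’ style controllers can be illustrated in a few lines. Below is a minimal two-level sketch with equality tasks only (the actual SOT solver also handles inequality constraints, which this does not); the Jacobians and dimensions are toy assumptions.

```python
import numpy as np

def prioritized_velocities(J1, dx1, J2, dx2):
    """Two-level prioritized task resolution (equality tasks only).

    Task 1 (J1, dx1) is satisfied exactly when feasible; task 2
    (J2, dx2) is satisfied as well as possible in the nullspace of
    task 1, so it can never disturb task 1 -- the core idea behind
    'Stack of Tasks' style controllers for redundant robots.
    """
    J1_pinv = np.linalg.pinv(J1)
    dq1 = J1_pinv @ dx1
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1     # nullspace projector
    # solve task 2 restricted to motions invisible to task 1
    dq2 = np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ dq1)
    return dq1 + N1 @ dq2

# e.g. an 8-DoF mobile manipulator: 3 base + 5 arm joints (toy values)
J1 = np.random.randn(3, 8)   # primary end-effector task
J2 = np.random.randn(2, 8)   # secondary posture/path task
dq = prioritized_velocities(J1, np.ones(3), J2, np.zeros(2))
```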

• T. C. Hassan, “Dynamic Facial Expression Estimation by means of Model Fitting,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2014.
[BibTeX] [Abstract]

Analysis of facial expressions is an integral part of human behavioural research. The Facial Action Coding System (FACS) manual guides researchers in identifying and coding facial expressions in terms of basic facial movements called Action Units (AUs). However, coding faces manually, based on FACS, is a tedious process. Automating the FACS-based analysis of faces in images and image sequences would save a great amount of time, and thereby accelerate behavioural research. Automatic facial AU analysis would also be of value in developing technologies for affect-based human-computer (robot) interaction. This thesis deals with the problem of fully automatic estimation of AUs in image sequences. A model-based approach is pursued. The shape of the human face is represented in the form of an interconnected mesh of vertices. Linear models describe the shape in terms of deformation vectors controlled by a set of parameters. These deformation vectors correspond to changes in facial shape resulting from individual AUs. The parameters that control these deformations denote the intensity at which the AUs are expressed. Existing methods for model fitting can be used to determine the AU model parameters. However, these methods follow a frame-by-frame strategy and do not incorporate the dynamics of underlying motion. This causes two main problems. Firstly, the trajectories of estimated parameters are noisy. Secondly, ambiguities in AU parameter estimates cannot be resolved correctly. As a result, the AU estimation performance is poor. State estimation methods allow dynamic models of parameter evolution to be combined with noisy observations of textures or model vertices given by the model-fitting methods. In this thesis, the use of state estimation methods to improve AU estimation performance is investigated.

@MastersThesis{ 2014hassan,
abstract  = {Analysis of facial expressions is an integral part of
human behavioural research. The Facial Action Coding System
(FACS) manual guides researchers in identifying and coding
facial expressions in terms of basic facial movements
called Action Units (AUs). However, coding faces manually,
based on FACS, is a tedious process. Automating the
FACS-based analysis of faces in images and image sequences
would save a great amount of time, and thereby accelerate
behavioural research. Automatic facial AU analysis would
also be of value in developing technologies for
affect-based human-computer (robot) interaction. This
thesis deals with the problem of fully automatic estimation
of AUs in image sequences. A model-based approach is
pursued. The shape of the human face is represented in the
form of an interconnected mesh of vertices. Linear models
describe the shape in terms of deformation vectors
controlled by a set of parameters. These deformation
vectors correspond to changes in facial shape resulting
from individual AUs. The parameters that control these
deformations denote the intensity at which the AUs are
expressed. Existing methods for model fitting can be used
to determine the AU model parameters. However, these
methods follow a frame-by-frame strategy and do not
incorporate the dynamics of underlying motion. This causes
two main problems. Firstly, the trajectories of estimated
parameters are noisy. Secondly, ambiguities in AU parameter
estimates cannot be resolved correctly. As a result, the AU
estimation performance is poor. State estimation methods
allow dynamic models of parameter evolution to be combined
with noisy observations of textures or model vertices given
by the model-fitting methods. In this thesis, the use of
state estimation methods to improve AU estimation
performance is investigated.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {SS12 Fraunhofer IIS Prassler, Pl\"{o}ger, F\"{u}ller,
Seuss supervising},
author  = {Teena Chakkalayil Hassan},
month = {June},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Dynamic Facial Expression Estimation by means of Model
Fitting},
year = {2014}
}
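
The state estimation idea from this abstract, combining a dynamic model of parameter evolution with noisy frame-by-frame observations, can be illustrated with a plain Kalman filter over a single AU intensity. The constant-velocity model and the noise values below are assumptions for the sketch, not the models used in the thesis.

```python
import numpy as np

def kalman_smooth_au(z, q=1e-4, r=1e-2, dt=1.0):
    """Causal Kalman filtering of a noisy AU-intensity trajectory.

    z : (T,) frame-by-frame AU intensity estimates from model fitting.
    A constant-velocity dynamic model supplies the temporal coupling
    missing from frame-by-frame fitting; q and r trade model trust
    against data trust (illustrative defaults).
    """
    A = np.array([[1.0, dt], [0.0, 1.0]])   # state: [intensity, rate]
    H = np.array([[1.0, 0.0]])
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.array([z[0], 0.0])
    P = np.eye(2)
    out = np.empty(len(z))
    for t, zt in enumerate(z):
        x = A @ x                       # predict
        P = A @ P @ A.T + Q
        S = H @ P @ H.T + R             # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
        x = x + (K @ (zt - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        out[t] = x[0]
    return out
```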

• O. Alaqtash, “Creating a Focused HTN Planning Problem using DL Inferencing,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2014.
[BibTeX] [Abstract]

Automated planning allows systems to automatically generate plans in order to execute tasks or achieve goals. To accomplish this, planners use knowledge of the actions that can be performed, when they can be carried out as well as how these actions affect the world. This knowledge represents the planning domain. Moreover, planners need a description of the world. A planning problem consists of these two items as well as the task to be achieved. Hierarchical Task Network (HTN) planning is a popular automated planning approach. It encodes domain information that defines recipes on how to carry out a task. This results in a pruned search space and provides quality plans. However, the search space remains large and this can lead to intractability. Many real-world planning problems are still difficult to solve in a reasonable time by autonomous agents. The reason for this is that the planning problems are not preprocessed to include only the relevant parts of the domain and the description of the world. This preprocessing is carried out by a human for systems without full autonomy. Researchers continue to optimize planning approaches and planners, but very few have tackled the problem of identifying relevant parts of the planning problem. The work of Hartanto (2011) is among the first to propose a solution which involves modeling the planning domain in the Web Ontology Language (OWL). However, his solution has a number of deficits. Most importantly, his modeling of the planning domain is not flexible enough to be reused. In this work, we adopt Hartanto’s approach of representing the planning domain in OWL to enable inference and create a focused planning problem. To do this, we model the planning domain and the description of the world in a decidable fragment of OWL, namely OWL 2 DL, and create a component that queries this knowledge by generating a set of SPARQL-DL queries to identify the relevant parts of the description of the world and the domain. The solution is able to generate a focused planning problem in OWL 2 DL, which can later be transformed into the JSHOP2 syntax for the planner to use. The results of our experiments showed a reduction of between 58% and 81% of the original size of the planning problem. Such a decrease in the problem size should lead to a much faster plan generation process.

@MastersThesis{ 2014alaqtash,
abstract  = {Automated planning allows systems to automatically
generate plans in order to execute tasks or achieve goals.
To accomplish this, planners use knowledge of the actions
that can be performed, when they can be carried out as well
as how these actions affect the world. This knowledge
represents the planning domain. Moreover, planners need a
description of the world. A planning problem consists of
these two items as well as the task to be achieved.
Hierarchical Task Network (HTN) planning is a popular
automated planning approach. It encodes domain information
that defines recipes on how to carry out a task. This
results in a pruned search space and provides quality
plans. However, the search space remains large and this can
lead to intractability. Many real-world planning problems
are still difficult to solve in a reasonable time by
autonomous agents. The reason for this is that the planning
problems are not preprocessed to include only the relevant
parts of the domain and the description of the world. This
preprocessing is carried out by a human for systems without
full autonomy. Researchers continue to optimize planning
approaches and planners, but very few have tackled the
problem of identifying relevant parts of the planning
problem. The work of Hartanto (2011) is among the first to
propose a solution which involves modeling the planning
domain in the Web Ontology Language (OWL). However, his
solution has a number of deficits. Most importantly, his
modeling of the planning domain is not flexible enough to
be reused. In this work, we adopt Hartanto's approach of
representing the planning domain in OWL to enable inference
and create a focused planning problem. To do this, we model
the planning domain and the description of the world in a
decidable fragment of OWL, namely OWL 2 DL, and create a
component that queries this knowledge by generating a set
of SPARQL-DL queries to identify the relevant parts of the
description of the world and the domain. The solution is
able to generate a focused planning problem in OWL 2 DL,
which can later be transformed into the JSHOP2 syntax for
the planner to use. The results of our experiments showed a
reduction of between 58% and 81% of the original size of
the planning problem. Such a decrease in the problem size
should lead to a much faster plan generation process.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {WS12/13 H-BRS - Creating a Focused HTN Planning Problem
using DL Inferencing Kr{\"a}tzschmar, Pl{\"o}ger, Awaad
supervising},
author  = {O. Alaqtash},
month = {December},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Creating a Focused HTN Planning Problem using DL
Inferencing},
year = {2014}
}
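
A hedged sketch of the focusing step: querying an OWL world model for only the individuals relevant to the current task. It uses rdflib with plain SPARQL rather than SPARQL-DL, and the ontology file, prefix, class, and property names are invented for illustration.

```python
from rdflib import Graph

# Hypothetical example: keep only the world facts whose types the
# current task ("Transport") declares relevant, echoing the thesis
# idea of querying the OWL model to build a focused planning problem.
g = Graph()
g.parse("world_model.owl")   # assumed OWL 2 DL world description

RELEVANT = """
PREFIX plan: <http://example.org/planning#>
SELECT ?obj ?type WHERE {
    plan:Transport plan:requiresType ?type .
    ?obj a ?type .
}
"""

focused = [(str(row.obj), str(row.type)) for row in g.query(RELEVANT)]
print(len(focused), "relevant individuals kept for the HTN problem")
```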

### 2013

• M. W. Butt, “A computational visual attention system for diver’s hand signs and gestures recognition,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2013.
[BibTeX]
@MastersThesis{ butt2013a-computational,
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {2013 Fintrope, Ploeger supervising},
author  = {Mohsin Wahab Butt},
date-modified  = {2016-08-28 08:13:31 +0000},
month = {August},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {A computational visual attention system for diver's hand
signs and gestures recognition},
year = {2013}
}

• R. Liu, A. Koch, and A. Zell, “Mapping UHF RFID tags with a mobile robot using a 3D sensor model,” in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2013, pp. 1589-1594.
[BibTeX]
@InProceedings{ liu_2013,
author  = {Liu, R. and Koch, A. and Zell, A.},
booktitle  = {2013 IEEE/RSJ International Conference on Intelligent
Robots and Systems},
month = {Nov.},
pages = {1589-1594},
title = {{Mapping UHF RFID tags with a mobile robot using a 3D
sensor model}},
year = {2013}
}

• S. Schneider, “Design of a declarative language for task-oriented grasping and tool-use with dextrous robotic hands,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2013.
[BibTeX] [Abstract]

Apparently simple manipulation tasks for a human such as transportation or tool use are challenging to replicate in an autonomous service robot. Nevertheless, dextrous manipulation is an important aspect for a robot in many daily tasks. While it is possible to manufacture special-purpose hands for one specific task in industrial settings, a general-purpose service robot in households must have flexible hands which can adapt to many tasks. Intelligently using tools enables the robot to perform tasks more efficiently and even beyond the designed capabilities. In this work a declarative domain-specific language, called Grasp Domain Definition Language (GDDL), is presented that allows the specification of grasp planning problems independently of a specific grasp planner. This design goal resembles the idea of the Planning Domain Definition Language (PDDL). The specification of GDDL requires a detailed analysis of the research in grasping in order to identify best practices in different domains that contribute to a grasp. These domains describe, for instance, physical as well as semantic properties of objects and hands. Grasping always has a purpose which is captured in the task domain definition. It enables the robot to grasp an object in a task-dependent manner. Suitable representations in these domains have to be identified and formalized, for which a domain-driven software engineering approach is applied. This kind of modeling allows the specification of constraints which guide the composition of domain entity specifications. The domain-driven approach fosters reuse of domain concepts while the constraints enable the validation of models already during design time. A proof of concept implementation of GDDL into the GraspIt! grasp planner is developed. Preliminary results of this thesis have been published and presented at the IEEE International Conference on Robotics and Automation (ICRA).

@MastersThesis{ 2013schneider,
abstract  = {Apparently simple manipulation tasks for a human such as
transportation or tool use are challenging to replicate in
an autonomous service robot. Nevertheless, dextrous
manipulation is an important aspect for a robot in many
daily tasks. While it is possible to manufacture
special-purpose hands for one specific task in industrial
settings, a general-purpose service robot in households
must have flexible hands which can adapt to many tasks.
Intelligently using tools enables the robot to perform
tasks more efficiently and even beyond the designed
capabilities. In this work a declarative domain-specific
language, called Grasp Domain Definition Language (GDDL),
is presented that allows the specification of grasp
planning problems independently of a specific grasp
planner. This design goal resembles the idea of the
Planning Domain Definition Language (PDDL). The
specification of GDDL requires a detailed analysis of the
research in grasping in order to identify best practices in
different domains that contribute to a grasp. These domains
describe, for instance, physical as well as semantic
properties of objects and hands. Grasping always has a
purpose which is captured in the task domain definition. It
enables the robot to grasp an object in a task-dependent
manner. Suitable representations in these domains have to
be identified and formalized, for which a domain-driven
software engineering approach is applied. This kind of
modeling allows the specification of constraints which
guide the composition of domain entity specifications. The
domain-driven approach fosters reuse of domain concepts
while the constraints enable the validation of models
already during design time. A proof of concept
implementation of GDDL into the GraspIt! grasp planner is
developed. Preliminary results of this thesis have been
published and presented at the IEEE International
Conference on Robotics and Automation (ICRA).},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {WS09/10 H-BRS - RoboCup Kraetzschmar, Pl{\"o}ger,
Hochgeschwender},
author  = {Sven Schneider},
month = {May},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Design of a declarative language for task-oriented
grasping and tool-use with dextrous robotic hands},
year = {2013}
}

• F. Rouatbi, “Two SVM-based methods for the classification of airborne LiDAR data,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2013.
[BibTeX] [Abstract]

Airborne Light Detection And Ranging (LiDAR) is a remote sensing method used to collect high resolution information of the earth’s surface. This technology uses laser light beams emitted from a LiDAR system mounted on an airborne platform to scan the landscape topology and the different objects above the bare-earth such as buildings, vegetation, cars and roads. The collected data is an elevation model in the form of a point cloud which is useful for a wide range of applications such as 3D modeling, change detection analysis and objects recognition. In order to extract relevant information from the scanned areas, the generated point cloud needs to be classified. In this thesis, we propose two different methods for the classification of airborne LiDAR data into ground, trees, and buildings. The first method is a point-based classification approach which works on each point independently and uses geometrical features derived from the local neighborhood to make a decision about the class attributes. Therefore, a classification system composed of a cascade of three independent binary classifiers (tree, ground and buildings) has been implemented. Each classifier is based on a support vector machine (SVM) which recognizes the points of a particular class and removes them from the dataset. First, the tree classifier is used to detect the tree points as they have the most discriminating features. Then, the remaining non-tree points are passed to the ground classifier which recognizes the ground points and eliminates them from the dataset. Finally, the building classifier is applied in order to separate the points belonging to the buildings from other small objects which have no predefined class such as cars. The second method is an object-based classification approach which segments the data and performs a classification of the resulting segments. The surface growing algorithm was used to segment the proximal points having geometrical similarities into a set of disjoint objects. The ground objects are first identified based on the size and the median height of the points in each segment. Then, a SVM-based classifier is used to distinguish the objects corresponding to the buildings. Therefore, a set of object-based features derived from the geometrical attributes of the points within the same segment has been used. Both methods have been tested on a labeled dataset composed of more than one million LiDAR points and applied to a set of real data.

@MastersThesis{ 2013rouatbi,
abstract  = {Airborne Light Detection And Ranging (LiDAR) is a remote
sensing method used to collect high resolution information
of the earth's surface. This technology uses laser light
beams emitted from a LiDAR system mounted on an airborne
platform to scan the landscape topology and the different
objects above the bare-earth such as buildings, vegetation,
cars and roads. The collected data is an elevation model in
the form of a point cloud which is useful for a wide range
of applications such as 3D modeling, change detection
analysis and objects recognition. In order to extract
relevant information from the scanned areas, the generated
point cloud needs to be classified. In this thesis, we
propose two different methods for the classification of
airborne LiDAR data into ground, trees, and buildings. The
first method is a point-based classification approach which
works on each point independently and uses geometrical
features derived from the local neighborhood to make a
decision about the class attributes. Therefore, a
classification system composed of a cascade of three
independent binary classifiers (tree, ground and buildings)
has been implemented. Each classifier is based on a support
vector machine (SVM) which recognizes the points of a
particular class and removes them from the dataset. First,
the tree classifier is used to detect the tree points as
they have the most discriminating features. Then, the
remaining non-tree points are passed to the ground
classifier which recognizes the ground points and
eliminates them from the dataset. Finally, the building
classifier is applied in order to separate the points
belonging to the buildings from other small objects which
have no predefined class such as cars. The second method is
an object-based classification approach which segments the
data and performs a classification of the resulting
segments. The surface growing algorithm was used to segment
the proximal points having geometrical similarities into a
set of disjoint objects. The ground objects are first
identified based on the size and the median height of the
points in each segment. Then, a SVM-based classifier is
used to distinguish the objects corresponding to the
buildings. Therefore, a set of object-based features
derived from the geometrical attributes of the points
within the same segment has been used. Both methods have
been tested on a labeled dataset composed of more than one
million LiDAR points and applied to a set of real data.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {SS08 FH-BRS - Fraunhofer FKIE Two SVM-based methods for
the classification of airborne LiDAR data Ploeger, Koch
supervising},
author  = {Fahmi Rouatbi},
month = {November},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Two SVM-based methods for the classification of airborne
LiDAR data},
year = {2013}
}
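
The point-based cascade from the first method maps naturally onto three binary classifiers applied in sequence. Below is a minimal scikit-learn sketch, assuming per-point geometric features have already been computed upstream; the class names are from the abstract, while kernels and parameters are left at defaults rather than those of the thesis.

```python
import numpy as np
from sklearn.svm import SVC

class LidarCascade:
    """Cascade of binary SVMs: tree -> ground -> building.

    Each stage recognizes one class on per-point geometric features
    and removes the recognized points before the next stage, as in
    the point-based pipeline described in the abstract.
    """
    def __init__(self):
        self.stages = [("tree", SVC()), ("ground", SVC()), ("building", SVC())]

    def fit(self, X, y):
        remaining = np.ones(len(X), dtype=bool)
        for name, clf in self.stages:
            clf.fit(X[remaining], (y[remaining] == name).astype(int))
            remaining &= (y != name)   # drop this class for later stages
        return self

    def predict(self, X):
        pred = np.array(["other"] * len(X), dtype=object)
        undecided = np.ones(len(X), dtype=bool)
        for name, clf in self.stages:
            hit = undecided & (clf.predict(X) == 1)
            pred[hit] = name
            undecided &= ~hit
        return pred
```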

• M. Pathare, “Household Device Recognition & Information Retrieval using a Smart-phone,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2013.
[BibTeX] [Abstract]

Present household devices are not like their predecessors, having a single on/off switch; instead they have numerous settings and functionalities. Users need to have the manual of such devices handy, in case they forget how to adjust the device parameters or how to carry out certain ‘not so common tasks’ on the device. Sometimes the device interfaces can be difficult to understand, not only for elders but also for common people. Having the device’s operational information at our finger tips is a major requirement, and the information should also be easily accessible. This issue is addressed in our work. A smart-phone can be a very handy tool in such situations. We have developed an application on a smart-phone that can recognize known household devices, retrieve their operational information, and present it to the user in an interactive way. Smaller, vision-deprived robots can also benefit from our application. They can outsource the device recognition task to our application and will receive feedback about the device and its details. Vision based techniques are used to find the device that the user needs information on. A feature-based approach is used to recognize the device. The device’s operational information is stored in a local database and is retrieved after its successful recognition. This information contains the functionality of the controls present on the device and the tasks that can be performed with it. Users can view this information interactively. Robots can access the location information of the recognized device in the scene image. After evaluation, it was found that our application is robust in recognizing devices under illumination, scale and rotation variations and also under slight occlusions. No false positive detections were observed during tests, due to geometrical and text based validation checks. For five devices in the database, the application takes about 20 seconds to recognize a device, including focusing and image capturing time. The observed mean square error in estimating the distance to the device and the horizontal angle to the device measured from the optical axis were 0.3097cm and 1.393$^\circ$ respectively. Our application proves useful to users, as well as robots, in getting to know a device’s operational information. An end-to-end prototype was also developed, showing a robot mounted with a smart-phone running our application, searching for a device, recognizing it, localizing it and grasping it successfully.

@MastersThesis{ 2013pathare,
abstract  = {Present household devices are not like their predecessors,
having a single on/off switch; instead they have numerous
settings and functionalities. Users need to have the manual
of such devices handy, in case they forget how to adjust
the device parameters or how to carry out certain 'not so
common tasks' on the device. Sometimes the device
interfaces can be difficult to understand, not only for
elders but also for common people.
Having the device's operational information at our finger
tips is a major requirement, and the information should
also be easily accessible. This issue is addressed in our
work. A smart-phone can be a very handy tool in such
situations. We have developed an application on a
smart-phone that can recognize known household devices,
retrieve their operational information, and present it to
the user in an interactive way. Smaller, vision-deprived
robots can also benefit from our application. They can
outsource the device recognition task to our application
and will receive feedback about the device and its details.
Vision based techniques are used to find the device that
the user needs information on. A feature-based approach is
used to recognize the device. The device's operational
information is stored in a local database and is retrieved
after its successful recognition. This information contains
the functionality of the controls present on the device and
the tasks that can be performed with it. Users can view
this information interactively. Robots can access the
location information of the recognized device in the scene
image. After evaluation, it was found that our application
is robust in recognizing devices under illumination, scale
and rotation variations and also under slight occlusions.
No false positive detections were observed during tests,
due to geometrical and text based validation checks. For
five devices in the database, the application takes about
20 seconds to recognize a device, including focusing and
image capturing time. The observed mean square error in
estimating the distance to the device and the horizontal
angle to the device measured from the optical axis were
0.3097cm and 1.393$^\circ$ respectively. Our application
proves useful to users, as well as robots, in getting to
know a device's operational information. An end-to-end
prototype was also developed, showing a robot mounted with
a smart-phone running our application, searching for a
device, recognizing it, localizing it and grasping it
successfully.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {WS10 H-BRS Pl{\"o}ger,Breuer supervising},
author  = {Mandar Pathare},
month = {April},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Household Device Recognition \& Information Retrieval
using a Smart-phone},
year = {2013}
}
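
The abstract only states that a feature-based approach with geometric validation is used; the OpenCV sketch below shows one common instantiation (ORB features, ratio test, RANSAC homography) as a stand-in, not the exact pipeline of the work.

```python
import cv2
import numpy as np

def find_device(template_path, scene_path, min_matches=15):
    """Feature-based recognition sketch (ORB + ratio test + homography).

    Returns a homography locating the device template in the scene,
    or None when too few good matches survive -- a rough analogue of
    the geometric validation the thesis uses to avoid false positives.
    """
    tmpl = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    scene = cv2.imread(scene_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create()
    k1, d1 = orb.detectAndCompute(tmpl, None)
    k2, d2 = orb.detectAndCompute(scene, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
            if m.distance < 0.75 * n.distance]        # Lowe ratio test
    if len(good) < min_matches:
        return None                                   # device not found
    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```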

• Z. B. Kasim, “HMM-Based Diagnosis of Known Exogenous Interventions in Mobile Manipulators,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2013.
[BibTeX] [Abstract]

A mobile manipulator lives in a dynamic environment where changes occur over time. The changes are external, in that they occur outside the mobile manipulator’s control; however, they affect the results of a task assigned to the mobile manipulator. These external events are called exogenous interventions. The main purpose of this thesis is to provide a model based, probabilistic method as an approach for known exogenous interventions in mobile manipulators. Since the mobile platform and manipulator can work in parallel, we proposed two variations of the continuous observation hidden Markov model, namely the classical hidden Markov model and the parallel hidden Markov model. We elaborated several use cases for exogenous interventions and with these we collected data from real and simulated mobile manipulators. Due to the characteristics of exogenous interventions, which are rare, random and versatile, we also obtained statistics for exogenous interventions to build up the model. We then tested the models in a simulated environment. The result shows that the diagnosis of exogenous interventions is precise and sensitive. With correct modeling, the hidden Markov model is proven to be able to diagnose the exogenous interventions correctly.

@MastersThesis{ 2013kasim,
abstract  = {A mobile manipulator lives in a dynamic environment where
changes occur over time. The changes are external, in that
they occur outside the mobile manipulator's control;
however, they affect the results of a task assigned to the
mobile manipulator. These external events are called
exogenous interventions. The main purpose of this thesis is
to provide a model based, probabilistic method as an
approach for known exogenous interventions in mobile
manipulators. Since the mobile platform and manipulator can
work in parallel, we proposed two variations of the
continuous observation hidden Markov model, namely the
classical hidden Markov model and the parallel hidden
Markov model. We elaborated several use cases for exogenous
interventions and with these we collected data from real
and simulated mobile manipulators. Due to the
characteristics of exogenous interventions, which are rare,
random and versatile, we also obtained statistics for
exogenous interventions to build up the model. We then
tested the models in a simulated environment. The result
shows that the diagnosis of exogenous interventions is
precise and sensitive. With correct modeling, the hidden
Markov model is proven to be able to diagnose the exogenous
interventions correctly.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {SS09 Pl\"oger, von der Hude, K\"ustenmacher supervising},
author  = {Zinnirah Binti Kasim},
month = {July},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {HMM-Based Diagnosis of Known Exogenous Interventions in
Mobile Manipulators},
year = {2013}
}
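
One way to realize the continuous-observation HMM diagnosis described above is to train one Gaussian HMM per known intervention and label a window of sensor data by the best-scoring model. A sketch with hmmlearn; the state count, the 6-D observation vector, and the random training data are placeholders, not the models of the thesis.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train(traces, n_states=3):
    """Fit one continuous-observation HMM to a list of sensor traces."""
    X = np.vstack(traces)
    lengths = [len(t) for t in traces]
    m = GaussianHMM(n_components=n_states, covariance_type="diag")
    m.fit(X, lengths)
    return m

def diagnose(models, window):
    # pick the condition whose HMM explains the window best
    return max(models, key=lambda name: models[name].score(window))

# toy data: 6-D observations (e.g. base twist + wrench), two conditions
models = {"nominal": train([np.random.randn(100, 6)]),
          "collision": train([np.random.randn(100, 6) + 2.0])}
print(diagnose(models, np.random.randn(20, 6)))
```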

### 2012

• T. Mathew, “A Computer Game based Motivation System for Human Muscle Strength Testing,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2012.
[BibTeX] [Abstract]

The objective of this thesis is to implement a computer game based motivation system for maximal strength testing on the Biodex System 3 Isokinetic Dynamometer. The prototype game has been designed to improve the peak torque produced in an isometric knee extensor strength test. An extensive analysis is performed on a torque data set from a previous study. The torque responses for five second long maximal voluntary contractions of the knee extensor are analyzed to understand the torque response characteristics of different subjects. The parameters identified in the data analysis are used in the implementation of the ‘Shark and School of Fish’ game. The behavior of the game for different torque responses is analyzed on a different torque data set from the previous study. The evaluation shows that the game rewards and motivates continuously over a repetition to reach the peak torque value. The evaluation also shows that the game rewards the user more if he overcomes a baseline torque value within the first second and then gradually increases the torque to reach peak torque.

@MastersThesis{ 2012mathew,
abstract  = {The objective of this thesis is to implement a computer
game based motivation system for maximal strength testing
on the Biodex System 3 Isokinetic Dynamometer. The
prototype game has been designed to improve the peak torque
produced in an isometric knee extensor strength test. An
extensive analysis is performed on a torque data set from a
previous study. The torque responses for five second long
maximal voluntary contractions of the knee extensor are
analyzed to understand the torque response characteristics
of different subjects. The parameters identified in the
data analysis are used in the implementation of the 'Shark
and School of Fish' game. The behavior of the game for
different torque responses is analyzed on a different
torque data set from the previous study. The evaluation
shows that the game rewards and motivates continuously over
a repetition to reach the peak torque value. The evaluation
also shows that the game rewards the user more if he
overcomes a baseline torque value within the first second
and then gradually increases the torque to reach peak
torque.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {Institute of Aerospace Medicine, German Aerospace Center
- A Computer Game based Motivation System for Human Muscle
Strength Testing; Herpers, Rittweger supervising},
author  = {Tintu Mathew},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {A Computer Game based Motivation System for Human Muscle
Strength Testing},
year = {2012}
}

• S. Sharma, “Unified Approach to Motion Planning by Coordinating Mobility and Manipulability for the KUKA youBot,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2012.
[BibTeX] [Abstract]

A recent trend in robotics is the shift from using fixed base manipulators to mobile manipulators. Mobile manipulators have become of high interest to industry and the service robotics sector because of the increased flexibility and effectiveness they offer due to added mobility. However, the combination of the mobility provided by a mobile platform and the manipulation capabilities provided by a robot arm leads to complex analytical problems for research. These problems can be studied very well on the KUKA youBot, a mobile manipulator designed for education and research applications. This thesis aims to achieve seamless integration and synchronization of mobility and manipulation capabilities for performing mobile manipulation tasks with the KUKA youBot. To do this, we propose a novel approach to perform unified motion planning for the KUKA youBot based on the Inverse Kinematics Bi-directional Rapidly Exploring Random Trees (IKBiRRT) algorithm using Workspace Goal Regions. A closed form inverse kinematics solution for the unified kinematics of the KUKA youBot is used to implement the planner and resolve redundancies. A motion planning framework is developed that is capable of updating the elements in the world model in an online manner. Experiments are performed both in simulation and on the robot. To test the coordination between arm and base motions, the mobile manipulator plans a collision free path so that it goes under a table or bar to simulate a limbo movement. A new approach is described to represent the workspace capabilities of the youBot in a map. This map containing the useful workspace of the mobile manipulator is termed the feasibility map. The full six dimensional continuous workspace of the youBot has been mapped to a reduced subspace in two dimensions without loss of information. The feasibility map describes the reachability and redundancy due to mobility of the youBot in its workspace. The reachability is represented in a reachability map and is constrained by joint limits and singularities, whereas a redundancy map describes the redundancy range at a particular point in the workspace. Applications of the feasibility map in motion planning, grasp planning and online obstacle avoidance scenarios are discussed.

@MastersThesis{ 2012sharma,
abstract  = {A recent trend in robotics is the shift from using fixed
base manipulators to mobile manipulators. Mobile
manipulators have become of high interest to industry and
service robotics sector because of the increased
flexibility and effectiveness they offer due to added
mobility. However, the combination of the mobility provided
by a mobile platform and the manipulation capabilities
provided by a robot arm leads to complex analytical
problems for research. These problems can be studied very
well on the KUKA youBot, a mobile manipulator designed for
education and research applications. This thesis aims to
achieve seamless integration and synchronization of
mobility and manipulation capabilities for performing
mobile manipulation tasks with the KUKA youBot. To do this,
we propose a novel approach to perform unified motion
planning for the KUKA youBot based on Inverse Kinematics
Bi-directional Rapidly Exploring Random Trees (IKBiRRT)
algorithm using Workspace Goal Regions. A closed form
inverse kinematics solution for the unified kinematics of
the KUKA youBot is used to implement the planner and
resolve redundancies. A motion planning framework is
developed that is capable of updating the elements in the
world model in an online manner. Experiments are performed
both in simulation and on the robot. To test the
coordination between arm and base motions, the mobile
manipulator plans a collision free path so that it goes
under a table or bar to simulate a limbo movement. A new
approach is described to represent the workspace
capabilities of the youBot in a map. This map containing
the useful workspace of the mobile manipulator is termed
the feasibility map. The full six dimensional continuous
workspace of the youBot has been mapped to a reduced
subspace in two dimensions without loss of information. The
feasibility map describes the reachability and redundancy
due to mobility of the youBot, in its workspace. The
reachability is represented in a reachability map and is
constrained by joint limits and singularities whereas a
redundancy map describes the redundancy range at a
particular point in the workspace. Applications of the
feasibility map in motion planning, grasp planning and
online obstacle avoidance scenarios are discussed.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {WS 09 HBRS, KUKA Laboratories GmbH - BRICS Kraetzschmar,
Scheurer},
author  = {Shashank Sharma},
month = {September},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Unified Approach to Motion Planning by Coordinating
Mobility and Manipulability for the KUKA youBot},
year = {2012}
}
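
The IKBiRRT planner named in the abstract extends the basic RRT idea with a second tree grown from IK-sampled goal regions. For orientation, here is a minimal single-tree RRT in a 2-D configuration space; everything youBot-specific (closed-form IK, workspace goal regions, redundancy resolution) is deliberately omitted, and all parameters are toy values.

```python
import numpy as np

def rrt(start, goal, collision_free, bounds, step=0.2, iters=2000):
    """Minimal 2-D RRT, a scaled-down cousin of the IKBiRRT planner.

    collision_free(a, b) -> bool must report whether the straight
    segment between configurations a and b is valid.
    """
    nodes = [np.asarray(start, float)]
    parents = [0]
    for _ in range(iters):
        q = np.random.uniform(bounds[0], bounds[1])      # random sample
        near = min(range(len(nodes)),
                   key=lambda i: np.linalg.norm(nodes[i] - q))
        d = q - nodes[near]
        new = nodes[near] + step * d / (np.linalg.norm(d) + 1e-9)
        if collision_free(nodes[near], new):
            nodes.append(new)
            parents.append(near)
            if np.linalg.norm(new - goal) < step:        # goal region hit
                path, i = [new], len(nodes) - 1
                while i != 0:                            # backtrack to root
                    i = parents[i]
                    path.append(nodes[i])
                return path[::-1]
    return None

# obstacle-free toy call
path = rrt((0, 0), np.array([5, 5]), lambda a, b: True, ((-1, -1), (6, 6)))
```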

• M. Thosar, “A Naive Approach For Learning The Semantics Of Effects Of An Action,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2012.
[BibTeX] [Abstract]

Classical artificial intelligence emphasizes that cognitive abilities such as inferring conclusions from the sensor information acquired over time, devising suitable plans, reasoning about the environment etc. are grounded in a mental representation of the world, and the resulting intelligent behavior is an outcome of correct reasoning using these representations. One use of these abilities is to reason about the effects of an action, where an agent reasons how the environment changes after executing an action. In this work, we have proposed an approach to learn the effects of an action. The effects of an action can be learned either by simply stating what changes in the environment given the current state and action description, or by stating how these changes are related to the action. The latter forms our approach. We have developed a system called EVOLT (EVOLving Theories) which, for a given set of state descriptions (before and after an action is executed) and a set of action parameters, represented in a predefined format, learns a theory consisting of a formal description of the effects of an action in stages, such that the theory relates the common changes in the state to the action parameters according to the given mathematical/logical operator definitions, expressed in a logic language; this is referred to as the semantics of the action in this setting. The main contribution of this work is twofold: first, we have proposed an extended Inductive Logic Programming (ILP) technique in a way that the extended ILP does not rely on labeled examples, unlike conventional ILP. Moreover, it is provided with background knowledge which is general in nature and is not limited to a single action-effect theory learning task, but can be reused for a variety of action-effect theory learning tasks. Second, the representation formalism proposed in this task. The formalism offers a uniform representation for the input data in the sense that the input data is distributed in one or more groups according to some property, such that the property wraps the data systematically according to a representation rule. Due to this uniformity, the programmer does not have to worry about what and how much data is enlisted in each group while programming. Also, using this representation formalism, a variety of inductive learning problems can be targeted. The experimental evaluation is done to examine the applicability and behavior of EVOLT in different environmental settings. The other aim was to gain insights during the experimentation which can further be used to improve the system. As the system is in its early developmental stages, we have discussed the strengths and weaknesses of the system realized during the development and evaluation. The results of the evaluation were quite reasonable, which has formed the basis for the future work we have discussed at the end of this report.

@MastersThesis{ 2012thosar,
abstract  = {Classical artificial intelligence emphasizes that
cognitive abilities such as inferring conclusions from the
sensor information acquired over time, devising suitable
plans, reasoning about the environment etc. are grounded in
a mental representation of the world, and the resulting
intelligent behavior is an outcome of correct reasoning
using these representations. One use of these abilities is
to reason about the effects of an action, where an agent
reasons how the environment changes after executing an
action. In this work, we have proposed an approach to learn
the effects of an action. The effects of an action can be
learned either by simply stating what changes in the
environment given the current state and action description,
or by stating how these changes are related to the action.
The latter forms our approach. We have developed a system
called EVOLT (EVOLving Theories) which, for a given set of
state descriptions (before and after an action is executed)
and a set of action parameters, represented in a predefined
format, learns a theory consisting of a formal description
of the effects of an action in stages, such that the theory
relates the common changes in the state to the action
parameters according to the given mathematical/logical
operator definitions, expressed in a logic language; this
is referred to as the semantics of the action in this
setting. The main contribution of this work is twofold:
first, we have proposed an extended Inductive Logic
Programming (ILP) technique in a way that the extended ILP
does not rely on labeled examples, unlike conventional ILP.
Moreover, it is provided with background knowledge which is
general in nature and is not limited to a single
action-effect theory learning task, but can be reused for a
variety of action-effect theory learning tasks. Second, the
representation formalism proposed in this task. The
formalism offers a uniform representation for the input
data in the sense that the input data is distributed in one
or more groups according to some property, such that the
property wraps the data systematically according to a
representation rule. Due to this uniformity, the programmer
does not have to worry about what and how much data is
enlisted in each group while programming. Also, using this
representation formalism, a variety of inductive learning
problems can be targeted. The experimental evaluation is
done to examine the applicability and behavior of EVOLT in
different environmental settings. The other aim was to gain
insights during the experimentation which can further be
used to improve the system. As the system is in its early
developmental stages, we have discussed the strengths and
weaknesses of the system realized during the development
and evaluation. The results of the evaluation were quite
reasonable, which has formed the basis for the future work
we have discussed at the end of this report.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {WS08/09 Kahl, Mueller, Ploeger supervising},
author  = {M. Thosar},
month = {November},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {A Naive Approach For Learning The Semantics Of Effects Of
An Action},
year = {2012}
}
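
The first, simpler notion of effect learning mentioned in the abstract, "simply stating what changes in the environment", reduces to a set difference over state descriptions. A toy sketch with invented predicate strings; EVOLT itself goes further and relates these changes to the action parameters.

```python
def effect_diff(before, after):
    """Added/removed facts between two state descriptions."""
    return {"added": after - before, "removed": before - after}

# hypothetical pre/post states around a move(a, b) action
pre = {"at(robot, a)", "holding(nothing)"}
post = {"at(robot, b)", "holding(nothing)"}
print(effect_diff(pre, post))
# {'added': {'at(robot, b)'}, 'removed': {'at(robot, a)'}}
```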

• R. Liu, A. Koch, and A. Zell, “Path following with passive UHF RFID received signal strength in unknown environments,” in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012, pp. 2250-2255.
[BibTeX]
@InProceedings{ liu_2012,
author  = {Liu, R. and Koch, A. and Zell, A.},
booktitle  = {2012 IEEE/RSJ International Conference on Intelligent
Robots and Systems},
month = {Oct.},
pages = {2250-2255},
title = {{Path following with passive UHF RFID received signal
strength in unknown environments}},
year = {2012}
}

• M. Füller, “Multi-step motion planning of climbing robots in 3D environments under kinodynamic constraints,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2012.
[BibTeX] [Abstract]

New locomotion techniques for service robots aim to overcome the difficulties with obstacle avoidance and dynamic settings in household environments by changing the robot’s working plane to the wall and ceiling. Besides the question of the mechanical implementation of these kinds of robots, the motion planning of the robot requires another view and special attention. A spider-type robot is assumed in this thesis that is able to move along given footholds in the environment. The motion planning task is to determine the required steps to move to the goal along available docking points at the walls and the ceiling. A previous study of the author developed a multi-step motion planner for this kind of robot. The planner is able to plan multi-step motions while being aware of kinematic constraints and object collisions. One aspect that was identified during that work but not implemented is the awareness of dynamic constraints of such a robot. The motion of a legged robot – especially of climbing ones – requires special attention to the dynamic limitations. These dynamic limitations are for example maximum possible joint torques during the climbing motions. This work is the next step toward a more realistic multi-step motion planner for 3D indoor environments along walls and ceilings. It extends the existing kinematic multi-step motion planner presented by Füller (2011) with additional awareness of dynamic constraints. The planner is further integrated into a real-world physics simulation environment. The final multi-step planner can be used to evaluate the required components for a real implementation of such a robot. It can identify problems in the planning process due to limits on the robot design and the used components before and during the development of such a robot. Additionally, the thesis presents a possible approach for an extension of a pure kinematic sample-based motion planner toward dynamic awareness.

@MastersThesis{ 2012fueller,
author  = {Matthias F{\"u}ller},
title = {Multi-step motion planning of climbing robots in 3D
environments under kinodynamic constraints},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
year = {2012},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
month = {March},
abstract  = {New locomotion techniques for service robots aim to
overcome the difficulties with obstacle avoidance and
dynamic settings in household environments by changing the
robot's working plane to the wall and ceiling. Besides the
question of the mechanical implementation of these kinds of
robots, the motion planning of the robot requires another
view and special attention. A spider-type robot is assumed
in this thesis that is able to move along given footholds
in the environment. The motion planning task is to
determine the required steps to move to the goal along
available docking points at the walls and the ceiling.
A previous study of the author developed a multi-step
motion planner for this kind of robot. The planner is able
to plan multi-step motions while being aware of kinematic
constraints and object collisions. One aspect that was
identified during that work but not implemented is the
awareness of dynamic constraints of such a robot. The
motion of a legged robot - especially of climbing ones -
requires special attention to the dynamic limitations.
These dynamic limitations are for example maximum possible
joint torques during the climbing motions. This work is the
next step toward a more realistic multi-step motion planner
for 3D indoor environments along walls and ceilings. It
extends the existing kinematic multi-step motion planner
presented by F{\"u}ller (2011) with additional awareness of
dynamic constraints. The planner is further integrated into
a real-world physics simulation environment. The final
multi-step planner can be used to evaluate the required
components for a real implementation of such a robot.
It can identify problems in the planning process due to
limits on the robot design and the used components before
and during the development of such a robot. Additionally,
the thesis presents a possible approach for an extension of
a pure kinematic sample-based motion planner toward dynamic
awareness.},
annote  = {W09, Prassler, Forsman supervising}
}
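
A small slice of the kinodynamic awareness discussed above can be shown as a static torque-limit check on a candidate climbing posture, via tau = J^T w. The two-link planar leg, its Jacobian, and the limit value below are toy assumptions; the thesis treats the full dynamic case during the motion.

```python
import numpy as np

def step_is_feasible(J, wrench, tau_max):
    """Static torque check for one candidate posture.

    tau = J^T w gives the joint torques needed to sustain an external
    wrench (e.g. the gravity load) in the current posture; a step is
    rejected when any joint exceeds its limit. This covers only the
    static part of the kinodynamic constraints.
    """
    tau = J.T @ wrench
    return np.all(np.abs(tau) <= tau_max)

# toy 2-link planar leg at q = (30 deg, 60 deg), unit link lengths
q1, q2 = np.radians(30), np.radians(60)
J = np.array([[-np.sin(q1) - np.sin(q1 + q2), -np.sin(q1 + q2)],
              [ np.cos(q1) + np.cos(q1 + q2),  np.cos(q1 + q2)]])
print(step_is_feasible(J, np.array([0.0, -9.81 * 2.0]), tau_max=25.0))
```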

• R. Dwiputra, “Dynamic Modeling of KUKA youBot Manipulator,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2012.
[BibTeX] [Abstract]

Models and simulation tools are crucial in robotic research. Experimentation with models is cost and time efficient due to its flexibility to be automated, conditioned, and accelerated. In this master thesis, a model of the KUKA youBot manipulator was developed with Modelica. Modelica is a multi-domain modeling language description which is capable of bridging the mechanical, electrical, hydraulic, and thermodynamic domains in a single model. With Modelica, the model developed incorporated dynamic parameters, motor specifications, and the control system. The advantages of a robot model only hold true when its accuracy has been validated with the real robot. Before further research involving the model is executed, it should be ensured that the model behaviours reflect the actual system behaviours. Therefore, experiments have been performed in parallel with the model development to assure the model’s accuracy. The experiments performed in this master thesis are: 1. Validation of the robot controller model; 2. Friction model approximation; and 3. Validation of the robot model for different motions and modes. The results show that the model reflects the actual system behaviour to a certain extent. The experiment results and the model can be further used for experiments involving the control system, load identification, and trajectory generation algorithms.

@MastersThesis{ 2012dwiputra,
author  = {Rhama Dwiputra},
title = {Dynamic Modeling of KUKA youBot Manipulator},
year = {2012},
month = {October},
abstract  = {Models and simulation tools are crucial in robotic
research. Experimentation with models is cost and time
efficient due to its flexibility to be automated,
conditioned, and accelerated. In this master thesis, a
model of the KUKA youBot manipulator was developed with
Modelica. Modelica is a multi-domain modeling language
description which is capable of bridging the mechanical,
electrical, hydraulic, and thermodynamic domains in a single
model. With Modelica, the model developed incorporated
dynamic parameters, motor specifications, and control
system.
The advantages of a robot model only hold true when its
accuracy has been validated with the real robot. Before
further research involving the model is executed, it should
be ensured that the model behaviours reflect the actual
system behaviours. Therefore, experiments have been
performed in parallel with the model development to assure
the model's accuracy. The experiments performed in this
master thesis are: 1. Validation of the robot controller
model; 2. Friction model approximation; and 3. Validation
of the robot model for different motions and modes. The
results show that the model reflects the actual system
behaviour to a certain extent. The experiment results and
the model can be further used for experiment involving
control system, load identification, and trajectory
generation algorithm.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {[Semester you started the MAS Program] [Project
Affiliation] - [Project Name] [Last name of 1st
Supervisor], [Last name of 2nd supervisor], [last name of
third supervisor (if applicable)] supervising},
school  = {Bonn-Rhein-Sieg University of Applied Sciences}
}
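
The friction model approximation mentioned in the abstract is not spelled out here; a common choice for such experiments is a Coulomb-plus-viscous model, tau_f = Fc*sign(w) + Fv*w, fitted by least squares. The sketch below shows that generic procedure on synthetic data; it is an assumption for illustration, not the thesis’ actual model.

```python
# Hedged sketch: fitting a Coulomb-plus-viscous friction model
# tau_f = Fc * sign(w) + Fv * w  to measured joint data (synthetic here).
import numpy as np

rng = np.random.default_rng(0)
w = np.linspace(-2.0, 2.0, 100)                  # joint velocities [rad/s]
tau = 0.35 * np.sign(w) + 0.12 * w + 0.02 * rng.standard_normal(w.size)

# Linear least squares in the parameters [Fc, Fv].
A = np.column_stack([np.sign(w), w])
(Fc, Fv), *_ = np.linalg.lstsq(A, tau, rcond=None)
print(f"Coulomb: {Fc:.3f} Nm, viscous: {Fv:.3f} Nm s/rad")
```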

• M. Arasi, “Particle Filter Based Approach for the Diagnosis of Unknown External Faults in a Mobile Manipulator,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2012.
[BibTeX] [Abstract]

Monitoring complex systems (e.g. a mobile manipulator) for the sake of estimation and fault diagnosis is a topic of considerable prominence in the scientific literature. Autonomous systems often stop performing their tasks because of the occurrence of unexpected faults. Here, an unexpected or unknown fault is defined as a fault that occurs in the system’s dynamic environment and cannot be observed by the system’s sensors because of their limitations. Fault diagnosis is a prerequisite for robust operation of a system in a hazardous or changing environment. Fault diagnosis comprises different procedures, namely fault detection, fault isolation and fault identification. In recent years, many researchers have investigated diagnosis problems, and many of them proposed the particle filter (PF) as a solution. The particle filter is a Bayesian approach used in many applications, such as fault diagnosis, to approximate the belief distribution of the state of the system as soon as an observation is available. It has been successfully applied to complete system models running under internal faults. In recent works on systems dealing with unknown faults, researchers assume that the approach (i.e. the particle filter) is working on an un-modeled or imperfectly modeled system. This assumption leads only to the detection of unknown faults but cannot identify the main cause of the failure. However, this information is not enough, especially for the recovery procedure. Since it is important and useful for a fault diagnosis approach to incorporate the disturbances and unexpected faults that occur in a system, this is the main issue handled in this thesis. In brief, developing an efficient fault diagnosis approach for unknown external faults is suggested as the solution to the aforementioned problems. In addition, the robustness, correctness and efficiency of the proposed approach have been investigated in diagnosing the state of the system. Our fault diagnosis approach is based on the particle filter (PF). As evidence that the approach works, we use a set of test case scenarios for a simulated mobile manipulator in OpenRAVE [22]. On the same set of test case scenarios, we applied the Classical Particle Filter (CPF) and the Gaussian Particle Filter (GPF). The approach is applied to a hybrid dynamic system in the applications of reliable navigation for a mobile robot and efficient object transfer from an initial to a target position for a robot manipulator. In our approach, we have assumed that during the occurrence of an unknown fault the system’s hardware and software keep functioning perfectly. Extensive simulations have been carried out to test the performance of the approach under different numbers of particles. The experimental results show the effectiveness of the approach on the given set of use case scenarios. The results show that both approaches, CPF and GPF, are able to diagnose unknown external faults, but the GPF performs considerably better, even with fewer particles. The proposed diagnosis approach is able not only to diagnose the fault, but also to estimate valuable information needed for the recovery process, such as the collision position for a navigating mobile robot.

@MastersThesis{ 2012arasi,
abstract  = {Monitoring complex systems (e.g. a mobile manipulator) for
the sake of estimation and fault diagnosis is a topic of
considerable prominence in the scientific literature. Autonomous
systems often stop performing their tasks because of the
occurrence of unexpected faults. Here, an unexpected or unknown
fault is defined as a fault that occurs in the system's dynamic
environment and cannot be observed by the system's sensors because
of their limitations. Fault diagnosis is a prerequisite for robust
operation of a system in a hazardous or changing environment.
Fault diagnosis comprises different procedures, namely fault
detection, fault isolation and fault identification. In recent
years, many researchers have investigated diagnosis problems, and
many of them proposed the particle filter (PF) as a solution. The
particle filter is a Bayesian approach used in many applications,
such as fault diagnosis, to approximate the belief distribution of
the state of the system as soon as an observation is available. It
has been successfully applied to complete system models running
under internal faults. In recent works on systems dealing with
unknown faults, researchers assume that the approach (i.e. the
particle filter) is working on an un-modeled or imperfectly
modeled system. This assumption leads only to the detection of
unknown faults but cannot identify the main cause of the failure.
However, this information is not enough, especially for the
recovery procedure. Since it is important and useful for a fault
diagnosis approach to incorporate the disturbances and unexpected
faults that occur in a system, this is the main issue handled in
this thesis. In brief, developing an efficient fault diagnosis
approach for unknown external faults is suggested as the solution
to the aforementioned problems. In addition, the robustness,
correctness and efficiency of the proposed approach have been
investigated in diagnosing the state of the system. Our fault
diagnosis approach is based on the particle filter (PF). As
evidence that the approach works, we use a set of test case
scenarios for a simulated mobile manipulator in OpenRAVE [22]. On
the same set of test case scenarios, we applied the Classical
Particle Filter (CPF) and the Gaussian Particle Filter (GPF). The
approach is applied to a hybrid dynamic system in the applications
of reliable navigation for a mobile robot and efficient object
transfer from an initial to a target position for a robot
manipulator. In our approach, we have assumed that during the
occurrence of an unknown fault the system's hardware and software
keep functioning perfectly. Extensive simulations have been
carried out to test the performance of the approach under
different numbers of particles. The experimental results show the
effectiveness of the approach on the given set of use case
scenarios. The results show that both approaches, CPF and GPF, are
able to diagnose unknown external faults, but the GPF performs
considerably better, even with fewer particles. The proposed
diagnosis approach is able not only to diagnose the fault, but
also to estimate valuable information needed for the recovery
process, such as the collision position for a navigating mobile
robot.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {WS08/09 FH-BRS - Particle Filter Based Approach for the
Diagnosis of Unknown External Faults in a Mobile
Manipulator Pl{\"o}ger, M{\"u}ller, K{\"u}stenmacher
supervising},
author  = {Musherah Arasi},
month = {October},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Particle Filter Based Approach for the Diagnosis of
Unknown External Faults in a Mobile Manipulator},
year = {2012}
}
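
For readers unfamiliar with the classical particle filter (CPF) referenced above, the following minimal sketch shows the standard predict-weight-resample cycle on a one-dimensional state. The motion and measurement models are invented stand-ins, not the thesis’ fault-diagnosis models.

```python
# Minimal classical particle filter sketch (1-D state), assumed models/noise.
import numpy as np

rng = np.random.default_rng(1)
N = 500
particles = rng.normal(0.0, 1.0, N)   # samples of the initial belief
weights = np.full(N, 1.0 / N)

def step(particles, weights, u, z, motion_std=0.1, meas_std=0.2):
    # Predict: propagate each particle through the (assumed) motion model.
    particles = particles + u + rng.normal(0.0, motion_std, particles.size)
    # Update: weight each particle by the measurement likelihood p(z | x).
    weights = weights * np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < particles.size / 2:
        idx = rng.choice(particles.size, particles.size, p=weights)
        particles = particles[idx]
        weights = np.full(particles.size, 1.0 / particles.size)
    return particles, weights

particles, weights = step(particles, weights, u=0.5, z=0.6)
print("posterior mean:", np.sum(particles * weights))
```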

• P. Banerjee, “Design and Implementation of a Software Development Toolkit to provide Perception Functionalities for Robots,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2012.
[BibTeX] [Abstract]

Robotics is a vast, interdisciplinary field of study that demands expertise in multiple disciplines such as computer science, mathematics, electronics, and mechanics. Leaving the hardware aside, even robotic software development involves the integration of multiple processing components focused on different challenges such as control, manipulation, navigation, perception, and planning. In typical development scenarios, groups of researchers focus on different sub-domains of expertise individually, and the sub-domain applications are then integrated into a complete robotic application. To ease the software development process and reduce the required breadth of knowledge across sub-domains, Software Development Toolkits (SDKs) can be used. Such SDKs provide a set of high-level Application Programming Interfaces (APIs) for different robotic domains. The APIs can be used to trigger predefined processing steps, so that only working knowledge of the corresponding functionalities is required to use them. This enables the development of a complete robotic application with comprehensive knowledge of certain sub-domains and only abstract working knowledge of the rest. For example, a planning-domain expert can implement the software logic for task planning and simply reuse APIs from the perception and manipulation domains to trigger the perception and manipulation processes governed by the implemented planning module. However, in the current state of the art of open-source robotic software, Software Development Toolkits for robotic manipulation, control and perception are very limited. This work aims to develop an SDK for 3D robotic perception that provides dedicated interfaces to trigger high-level perception functionalities such as "object detection" and "6D pose estimation". The interface is designed considering two classes of SDK users, domain-users and domain-experts. Based on the available domain knowledge, developers can use high-level APIs to parametrize and trigger a processing step, or low-level APIs to re-configure the software module and set up the algorithms and parameters to be used for processing.

@MastersThesis{ 2012banerjee,
abstract  = {Robotics is a vast, interdisciplinary field of study that
demands expertise in multiple disciplines such as computer
science, mathematics, electronics, and mechanics. Leaving the
hardware aside, even robotic software development involves the
integration of multiple processing components focused on different
challenges such as control, manipulation, navigation, perception,
and planning. In typical development scenarios, groups of
researchers focus on different sub-domains of expertise
individually, and the sub-domain applications are then integrated
into a complete robotic application. To ease the software
development process and reduce the required breadth of knowledge
across sub-domains, Software Development Toolkits (SDKs) can be
used. Such SDKs provide a set of high-level Application
Programming Interfaces (APIs) for different robotic domains. The
APIs can be used to trigger predefined processing steps, so that
only working knowledge of the corresponding functionalities is
required to use them. This enables the development of a complete
robotic application with comprehensive knowledge of certain
sub-domains and only abstract working knowledge of the rest. For
example, a planning-domain expert can implement the software logic
for task planning and simply reuse APIs from the perception and
manipulation domains to trigger the perception and manipulation
processes governed by the implemented planning module. However, in
the current state of the art of open-source robotic software,
Software Development Toolkits for robotic manipulation, control
and perception are very limited. This work aims to develop an SDK
for 3D robotic perception that provides dedicated interfaces to
trigger high-level perception functionalities such as "object
detection" and "6D pose estimation". The interface is designed
considering two classes of SDK users, domain-users and
domain-experts. Based on the available domain knowledge,
developers can use high-level APIs to parametrize and trigger a
processing step, or low-level APIs to re-configure the software
module and set up the algorithms and parameters to be used for
processing.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {WS-09,GPS EU-BRICS, Prassler, Blumenthal, Zakharov},
author  = {Pinaki Banerjee},
month = {February},
year = {2012},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Design and Implementation of a Software Development
Toolkit to provide Perception Functionalities for Robots}
}
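
The two-level API idea (high-level calls for domain-users, low-level re-configuration for domain-experts) can be sketched as follows. All class, method and pipeline-step names here are hypothetical illustrations, not the SDK’s actual interface.

```python
# Hypothetical sketch of the two API levels described above; names invented.
class PerceptionSDK:
    """High-level API for domain-users: trigger a predefined processing step."""

    def __init__(self, pipeline=None):
        # Default pipeline that a domain-user never has to touch.
        self.pipeline = pipeline or ["segment_planes", "cluster", "match_model"]

    def detect_objects(self, point_cloud):
        for step in self.pipeline:
            point_cloud = self._run(step, point_cloud)
        return point_cloud

    # Low-level API for domain-experts: re-configure algorithms and parameters.
    def set_pipeline(self, steps):
        self.pipeline = steps

    def _run(self, step, data):
        print(f"running {step}")   # placeholder for the actual algorithm
        return data

sdk = PerceptionSDK()
sdk.detect_objects(point_cloud=[])            # domain-user call
sdk.set_pipeline(["segment_planes", "icp"])   # domain-expert re-configuration
```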

• E. Dayangac, “Vision-based 6DoF Input Device Development with Ground Truth Evaluation,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2012.
[BibTeX] [Abstract]

Different visual tracking approaches exist for determining the position and orientation of an object or camera; among them, feature-based and template-based methods are prominent. Visual tracking has been used for applications ranging from visual odometry, augmented reality, 3D object modeling, medical imaging and visual simultaneous localization and mapping to surveillance. The work presented in this thesis includes the design of a visual tracking system that recovers the full 6 degree-of-freedom pose of a camera, as well as the development of an evaluation methodology to analyze the performance of the tracking system. A marker-based tracking and visualization system is implemented for CAVE-type virtual reality environments. The system consists of a wired CCD camera as the only sensor and multiple unique markers. The markers are placed into a video picture and projected into the scene using a projector. Apart from the calibration of the projector and camera, the system does not need any additional calibration between the scene and the tracking system. The system is affordable compared to other existing tracking systems. The drawbacks of the system are the high power consumption and cabling requirements of the CCD camera, and the visibility of the markers within the scene. Rapid development in imaging hardware, however, should make the system even more affordable and useful in the near future. Since LCD projectors are used, the visualized markers lie in the visible range of the human eye under the current implementation; the system still needs a solution based on a light module at an invisible wavelength. Camera pose estimation is very common in the field of computer vision, which makes the implementation of the tracking system described above comparatively straightforward; its performance analysis, however, is a challenging task. The extensive evaluation of a tracking algorithm is still an open issue in research, even though there have been some attempts to address the problem. In this thesis, we categorize evaluation methods for visual tracking systems. We conclude that the existing approaches solve the problem only partially and are insufficient for discovering the weaknesses and strengths of an algorithm. Knowledge gained from such experiments does not clearly show the failures of the algorithms in a way that helps improve the results, because their main purpose was to compare particular approaches. Furthermore, an evaluation methodology is proposed and a framework is implemented. The framework includes the camera and visualization system together with a 5DoF computer-controlled robotic arm and an optical tracking system. The arm is used to move the camera systematically, while the optical tracking system is used to get a rough estimate of the pose of the robot base. A new system architecture and a distributed software architecture are presented in the report. The framework is capable of repeatedly collecting static image sequences around a 3D point list, and images featuring different textures can be collected in a short time. During run-time, it performs on-line image processing and stitches the reference data. The data analysis is done manually afterwards, which is one of the limitations of the system. Due to the limitations of the robotic arm, the motion pattern is restricted and the collected images are static. Consequently, the implemented marker-based tracking system and the optical tracking system are only partially evaluated.

@MastersThesis{ 2012dayangac,
abstract  = {Different visual tracking approaches exist for determining
the position and orientation of an object or camera; among them,
feature-based and template-based methods are prominent. Visual
tracking has been used for applications ranging from visual
odometry, augmented reality, 3D object modeling, medical imaging
and visual simultaneous localization and mapping to surveillance.
The work presented in this thesis includes the design of a visual
tracking system that recovers the full 6 degree-of-freedom pose of
a camera, as well as the development of an evaluation methodology
to analyze the performance of the tracking system.
A marker-based tracking and visualization system is implemented
for CAVE-type virtual reality environments. The system consists of
a wired CCD camera as the only sensor and multiple unique markers.
The markers are placed into a video picture and projected into the
scene using a projector. Apart from the calibration of the
projector and camera, the system does not need any additional
calibration between the scene and the tracking system. The system
is affordable compared to other existing tracking systems. The
drawbacks of the system are the high power consumption and cabling
requirements of the CCD camera, and the visibility of the markers
within the scene. Rapid development in imaging hardware, however,
should make the system even more affordable and useful in the near
future. Since LCD projectors are used, the visualized markers lie
in the visible range of the human eye under the current
implementation; the system still needs a solution based on a light
module at an invisible wavelength.
Camera pose estimation is very common in the field of computer
vision, which makes the implementation of the tracking system
described above comparatively straightforward; its performance
analysis, however, is a challenging task.
The extensive evaluation of a tracking algorithm is still an open
issue in research, even though there have been some attempts to
address the problem. In this thesis, we categorize evaluation
methods for visual tracking systems. We conclude that the existing
approaches solve the problem only partially and are insufficient
for discovering the weaknesses and strengths of an algorithm.
Knowledge gained from such experiments does not clearly show the
failures of the algorithms in a way that helps improve the
results, because their main purpose was to compare particular
approaches.
Furthermore, an evaluation methodology is proposed and a framework
is implemented. The framework includes the camera and
visualization system together with a 5DoF computer-controlled
robotic arm and an optical tracking system. The arm is used to
move the camera systematically, while the optical tracking system
is used to get a rough estimate of the pose of the robot base. A
new system architecture and a distributed software architecture
are presented in the report.
The framework is capable of repeatedly collecting static image
sequences around a 3D point list, and images featuring different
textures can be collected in a short time. During run-time, it
performs on-line image processing and stitches the reference data.
The data analysis is done manually afterwards, which is one of the
limitations of the system. Due to the limitations of the robotic
arm, the motion pattern is restricted and the collected images are
static.
Consequently, the implemented marker-based tracking system and the
optical tracking system are only partially evaluated.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {SS08/09,Herpers, Hinkenjann, Saitov supervising},
author  = {Enes Dayangac},
month = {April},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Vision-based 6DoF Input Device Development with Ground
Truth Evaluation},
year = {2012}
}
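
A marker-based 6DoF pose recovery of the kind described above is commonly solved as a Perspective-n-Point problem. The sketch below uses OpenCV’s solvePnP on one square marker; the marker size, intrinsics and image points are made-up values, and the thesis’ own pipeline may differ.

```python
# Hedged sketch: recovering a 6-DoF camera pose from one square marker
# with OpenCV's solvePnP; marker size and intrinsics are invented values.
import numpy as np
import cv2

s = 0.10  # assumed marker edge length [m]
object_pts = np.array([[0, 0, 0], [s, 0, 0], [s, s, 0], [0, s, 0]], dtype=np.float32)
image_pts = np.array([[320, 240], [400, 238], [402, 318], [318, 320]], dtype=np.float32)
K = np.array([[600, 0, 320], [0, 600, 240], [0, 0, 1]], dtype=np.float32)
dist = np.zeros(5)  # assume calibrated, distortion-free images

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)     # marker-to-camera rotation matrix
cam_pos = -R.T @ tvec          # camera position expressed in the marker frame
print(ok, cam_pos.ravel())
```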

• N. Akhtar, “Improving reliability of mobile manipulators against unknown external faults,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2012.
[BibTeX] [Abstract]

A robot (e.g. a mobile manipulator) that interacts with its environment to perform its tasks often faces situations in which it is unable to achieve its goals despite the perfect functioning of its sensors and actuators. These situations occur when the behavior of the objects manipulated by the robot deviates from its expected course because of unforeseeable circumstances. These deviations are experienced by the robot as unknown external faults which result in failures of its actions. In this work we present an approach that increases the reliability of mobile manipulators against unknown external faults. The approach focuses on manipulator actions which involve releasing an object. It is formulated as a three-step scheme that takes an example simulation and the definition of a planning operator as its inputs. The example simulation shows the expected/desired behavior of the object upon execution of the action corresponding to the planning operator. In its first step, the scheme finds a description of the behavior of the objects in the example simulation in terms of logical atoms. We refer to these logical atoms collectively as the description vocabulary. The description of the simulation is used in the second step to find limits on the parameters of the manipulated object. Using randomly chosen values of the parameters within these limits, this step creates different examples of the releasing state of the object. These examples are labelled as desired or undesired according to the behavior of the object in the simulation; the description vocabulary is also used in labeling the examples. In the third step, an algorithm (N-Bins) uses the labelled examples to suggest a state of the object in which releasing it avoids the occurrence of any unknown external faults. The proposed N-Bins algorithm can also be used for binary classification problems. Therefore, in our experiments we also test its prediction ability along with analyzing the results of our approach. The results show that under the circumstances peculiar to our approach, the N-Bins algorithm shows reasonable prediction accuracy where other state-of-the-art classification algorithms fail to do so. Thus, N-Bins also extends the ability of a robot to predict the behavior of an object in order to avoid unknown external faults. In this work we use the simulation environment OpenRAVE, which uses the physics engine ODE to simulate the dynamics of rigid bodies.

@MastersThesis{ 2012akhtar,
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
author  = {Naveed Akhtar},
date-modified  = {2012-02-18 17:07:45 +0100},
keywords  = {external faults, mobile manipulators, binary
classification},
month = {February},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Improving reliability of mobile manipulators against
unknown external faults},
year = {2012},
abstract  = {A robot (e.g. a mobile manipulator) that interacts with its
environment to perform its tasks often faces situations in which
it is unable to achieve its goals despite the perfect functioning
of its sensors and actuators. These situations occur when the
behavior of the objects manipulated by the robot deviates from its
expected course because of unforeseeable circumstances. These
deviations are experienced by the robot as unknown external faults
which result in failures of its actions. In this work we present
an approach that increases the reliability of mobile manipulators
against unknown external faults. The approach focuses on
manipulator actions which involve releasing an object. It is
formulated as a three-step scheme that takes an example simulation
and the definition of a planning operator as its inputs. The
example simulation shows the expected/desired behavior of the
object upon execution of the action corresponding to the planning
operator. In its first step, the scheme finds a description of the
behavior of the objects in the example simulation in terms of
logical atoms. We refer to these logical atoms collectively as the
description vocabulary. The description of the simulation is used
in the second step to find limits on the parameters of the
manipulated object. Using randomly chosen values of the parameters
within these limits, this step creates different examples of the
releasing state of the object. These examples are labelled as
desired or undesired according to the behavior of the object in
the simulation; the description vocabulary is also used in
labeling the examples. In the third step, an algorithm (N-Bins)
uses the labelled examples to suggest a state of the object in
which releasing it avoids the occurrence of any unknown external
faults.
The proposed N-Bins algorithm can also be used for binary
classification problems. Therefore, in our experiments we also
test its prediction ability along with analyzing the results of
our approach. The results show that under the circumstances
peculiar to our approach, the N-Bins algorithm shows reasonable
prediction accuracy where other state-of-the-art classification
algorithms fail to do so. Thus, N-Bins also extends the ability of
a robot to predict the behavior of an object in order to avoid
unknown external faults. In this work we use the simulation
environment OpenRAVE, which uses the physics engine ODE to
simulate the dynamics of rigid bodies.
},
annote  = {WS 09/10, H-BRS, Pl{\"o}ger, Asteroth, Kuestenmacher
supervising }
}
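
The abstract does not detail the N-Bins algorithm, so the following sketch is speculative: one plausible reading is to discretize an object parameter into bins, count desired versus undesired examples per bin, and suggest the bin dominated by desired outcomes. Treat it as an illustration of the flavor of the approach only; the actual algorithm in the thesis may differ substantially.

```python
# Speculative sketch of a bin-based suggestion step in the spirit of N-Bins;
# all data here is synthetic and the real algorithm may differ.
import numpy as np

def fit_bins(x, labels, n_bins=10):
    """Histogram desired (1) vs. undesired (0) examples over one parameter."""
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    pos = np.bincount(idx[labels == 1], minlength=n_bins)
    neg = np.bincount(idx[labels == 0], minlength=n_bins)
    return edges, pos / np.maximum(pos + neg, 1)  # per-bin desired ratio

rng = np.random.default_rng(2)
release_height = rng.uniform(0.0, 0.5, 200)        # object parameter [m]
desired = (release_height < 0.2).astype(int)       # synthetic labels
edges, ratio = fit_bins(release_height, desired)
best = np.argmax(ratio)
print(f"suggested release height around {edges[best]:.2f}-{edges[best+1]:.2f} m")
```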

### 2011

• M. Shahzad, “Detection and tracking of pointing hand gestures for AR applications,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2011.
[BibTeX] [Abstract]

In the fields of augmented reality (AR) user interfaces and service robotics, directly using the user’s hand provides a natural way to interact with computers. Among other gestures, pointing gestures are among the most commonly used in everyday life; they enable a natural and convenient way for humans to interact with computers. In this thesis, a real-time approach for detecting such natural pointing hand gestures for user interaction is presented. It uses data provided by a stereoscopic camera system capable of producing dense disparity maps and an estimate of 3D point cloud information. The developed approach allows the user to conveniently interact, using natural pointing hand gestures, with different components/modules of the biological laboratory Biolab. Biolab is an International Standard Payload Rack (ISPR) of the European Space Agency (ESA) and is operated by the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt e.V. – DLR) during space missions. It is an integral part of the European science laboratory, Columbus, which is part of the International Space Station (ISS). The approach uses skin color and 3D data to extract probable hand regions in an image. These regions are further refined to detect the pointing hand contour using a minimum-distance approach, by projecting the 3D contour data onto a virtual plane representing the Biolab. Feature points (hand center and fingertip) are found in the detected contour to estimate the pointing direction, and a 3D virtual beam is fitted between them. The fingertip projection is used to switch between two semantically modeled Biolab representations on two different levels. The virtual plane and the estimated 3D beam are used for the identification of pointing targets on both levels. The developed approach works in real time without any markers or other sensors attached to the pointing hand. It is person-independent, has the potential to cope with different skin colors and complex backgrounds, and does not require any manual initialization procedure. Moreover, it does not constrain the user to wear special clothing, and it detects the pointing hand and identifies the pointed targets even if the user is not wearing a full-sleeve shirt. The algorithm is thoroughly evaluated with 8 different persons under two different lighting conditions in sitting and standing postures. The pointing targets are correctly identified, as can be verified by inspection, for natural pointing hand gestures performed by different test subjects. The identification results presented later in the thesis illustrate the effectiveness of the approach.

@MastersThesis{ 2011shahzad,
author  = {M. Shahzad},
abstract  = {In the fields of augmented reality (AR) user interfaces
and service robotics, directly using the user's hand provides a
natural way to interact with computers. Among other gestures,
pointing gestures are among the most commonly used in everyday
life; they enable a natural and convenient way for humans to
interact with computers. In this thesis, a real-time approach for
detecting such natural pointing hand gestures for user interaction
is presented. It uses data provided by a stereoscopic camera
system capable of producing dense disparity maps and an estimate
of 3D point cloud information. The developed approach allows the
user to conveniently interact, using natural pointing hand
gestures, with different components/modules of the biological
laboratory Biolab. Biolab is an International Standard Payload
Rack (ISPR) of the European Space Agency (ESA) and is operated by
the German Aerospace Center (Deutsches Zentrum f{\"u}r Luft- und
Raumfahrt e.V. - DLR) during space missions. It is an integral
part of the European science laboratory, Columbus, which is part
of the International Space Station (ISS). The approach uses skin
color and 3D data to extract probable hand regions in an image.
These regions are further refined to detect the pointing hand
contour using a minimum-distance approach, by projecting the 3D
contour data onto a virtual plane representing the Biolab. Feature
points (hand center and fingertip) are found in the detected
contour to estimate the pointing direction, and a 3D virtual beam
is fitted between them. The fingertip projection is used to switch
between two semantically modeled Biolab representations on two
different levels. The virtual plane and the estimated 3D beam are
used for the identification of pointing targets on both levels.
The developed approach works in real time without any markers or
other sensors attached to the pointing hand. It is
person-independent, has the potential to cope with different skin
colors and complex backgrounds, and does not require any manual
initialization procedure. Moreover, it does not constrain the user
to wear special clothing, and it detects the pointing hand and
identifies the pointed targets even if the user is not wearing a
full-sleeve shirt. The algorithm is thoroughly evaluated with 8
different persons under two different lighting conditions in
sitting and standing postures. The pointing targets are correctly
identified, as can be verified by inspection, for natural pointing
hand gestures performed by different test subjects. The
identification results presented later in the thesis illustrate
the effectiveness of the approach.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {WS08 DLR - Detection and tracking of pointing hand
gestures for AR applications Herpers, Pl{\"o}ger, Mittag},
month = {March},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Detection and tracking of pointing hand gestures for AR
applications},
year = {2011}
}
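
The geometric core of the target identification step, casting a 3D beam from the hand center through the fingertip and intersecting it with the virtual plane, reduces to a ray-plane intersection. A minimal sketch with invented coordinates:

```python
# Hedged sketch: 3-D pointing ray from hand centre through fingertip,
# intersected with a virtual target plane; all values are invented.
import numpy as np

hand = np.array([0.10, 0.00, 0.60])      # hand centre in camera frame [m]
tip = np.array([0.15, 0.02, 0.80])       # fingertip in camera frame [m]
plane_p = np.array([0.0, 0.0, 2.0])      # a point on the virtual plane
plane_n = np.array([0.0, 0.0, 1.0])      # plane normal

d = (tip - hand) / np.linalg.norm(tip - hand)          # pointing direction
t = np.dot(plane_p - hand, plane_n) / np.dot(d, plane_n)
target = hand + t * d                    # pointed-at location on the plane
print(target)
```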

• R. Liu, P. Vorst, A. Koch, and A. Zell, “Path following for indoor robots with RFID received signal strength,” in SoftCOM 2011, 19th International Conference on Software, Telecommunications and Computer Networks, 2011, pp. 1-7.
[BibTeX]
@InProceedings{ liu_2011,
author  = {Liu, R. and Vorst, P. and Koch, A. and Zell, A.},
booktitle  = {SoftCOM 2011, 19th International Conference on Software,
Telecommunications and Computer Networks},
month = {Sept.},
pages = {1-7},
title = {{Path following for indoor robots with RFID received
signal strength}},
year = {2011}
}

• P. Vorst, A. Koch, and A. Zell, “Efficient self-adjusting, similarity-based location fingerprinting with passive UHF RFID,” in 2011 IEEE International Conference on RFID-Technologies and Applications, 2011, pp. 160-167.
[BibTeX]
@InProceedings{ vorst,
author  = {Vorst, P. and Koch, A. and Zell, A.},
booktitle  = {2011 IEEE International Conference on RFID-Technologies
and Applications},
month = {Sept.},
pages = {160-167},
title = {{Efficient self-adjusting, similarity-based location
fingerprinting with passive UHF RFID}},
year = {2011}
}

• C. A. Mueller, “3D Object Shape Categorization in Domestic Environments,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2011.
[BibTeX] [Abstract]

In service robotics, hardly any task can be accomplished without the involvement of objects, as in searching, fetching or delivering tasks. Service robots are supposed to efficiently capture object-related information in real-world scenes, for instance in the presence of clutter and noise, while also being flexible and scalable enough to memorize a large set of objects. Besides object perception tasks like object recognition, where the object’s identity is analyzed, object categorization is an important visual object perception cue that associates unknown object instances, based on e.g. their appearance or shape, with a corresponding category. We present a pipeline from the detection of object candidates in a domestic scene, through their description, to the final shape categorization of the detected candidates. In order to detect object-related information in cluttered domestic environments, an object detection method is proposed that copes with multiple plane and object occurrences, as in cluttered scenes with shelves. Furthermore, a surface reconstruction method based on a Growing Neural Gas (GNG), in combination with a shape distribution-based descriptor, is proposed to reflect the shape characteristics of object candidates. Beneficial properties provided by the GNG, such as smoothing and denoising effects, support a stable description of the object candidates, which in turn leads to more stable learning of categories. Based on the presented descriptor, a dictionary approach combined with a supervised shape learner is presented to learn prediction models of shape categories. Experimental results are shown for different shapes related to object shape categories commonly found in domestic environments, such as cup, can, box, bottle, bowl, plate and ball. A classification accuracy of about 90% and a sequential execution time of less than two seconds for the categorization of an unknown object are achieved, which demonstrates the soundness of the proposed system design. Additional results are shown on object tracking and false positive handling to enhance the robustness of the categorization. An initial approach towards incremental shape category learning is also proposed, which learns a new category based on the set of previously learned shape categories.

@MastersThesis{ 2011mueller,
author  = {Christian Atanas Mueller},
title = {3D Object Shape Categorization in Domestic Environments},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
year = {2011},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
month = {December},
abstract  = {In service robotics, hardly any task can be accomplished
without the involvement of objects, as in searching, fetching or
delivering tasks. Service robots are supposed to efficiently
capture object-related information in real-world scenes, for
instance in the presence of clutter and noise, while also being
flexible and scalable enough to memorize a large set of objects.
Besides object perception tasks like object recognition, where the
object's identity is analyzed, object categorization is an
important visual object perception cue that associates unknown
object instances, based on e.g. their appearance or shape, with a
corresponding category. We present a pipeline from the detection
of object candidates in a domestic scene, through their
description, to the final shape categorization of the detected
candidates. In order to detect object-related information in
cluttered domestic environments, an object detection method is
proposed that copes with multiple plane and object occurrences, as
in cluttered scenes with shelves. Furthermore, a surface
reconstruction method based on a Growing Neural Gas (GNG), in
combination with a shape distribution-based descriptor, is
proposed to reflect the shape characteristics of object
candidates. Beneficial properties provided by the GNG, such as
smoothing and denoising effects, support a stable description of
the object candidates, which in turn leads to more stable learning
of categories. Based on the presented descriptor, a dictionary
approach combined with a supervised shape learner is presented to
learn prediction models of shape categories. Experimental results
are shown for different shapes related to object shape categories
commonly found in domestic environments, such as cup, can, box,
bottle, bowl, plate and ball. A classification accuracy of about
90% and a sequential execution time of less than two seconds for
the categorization of an unknown object are achieved, which
demonstrates the soundness of the proposed system design.
Additional results are shown on object tracking and false positive
handling to enhance the robustness of the categorization. An
initial approach towards incremental shape category learning is
also proposed, which learns a new category based on the set of
previously learned shape categories.},
annote  = {[Winter term 2008][BRSU] - [RoboCup@Home][Ploeger],
[Kraetzschmar], [Hochgeschwender] supervising}
}
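
The shape distribution-based descriptor mentioned above belongs to the family of D2-style descriptors: a normalized histogram of distances between randomly sampled point pairs. The sketch below shows that generic construction only; the thesis combines it with GNG-reconstructed surfaces, which is not reproduced here.

```python
# Generic D2-style shape distribution: histogram of pairwise point distances.
import numpy as np

def d2_descriptor(points, n_pairs=2000, n_bins=32, seed=3):
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    dists = np.linalg.norm(points[i] - points[j], axis=1)
    dists /= dists.max()                     # normalize for scale invariance
    hist, _ = np.histogram(dists, bins=n_bins, range=(0.0, 1.0))
    return hist / hist.sum()                 # normalized histogram descriptor

cloud = np.random.default_rng(4).normal(size=(500, 3))  # stand-in object points
print(d2_descriptor(cloud)[:8])
```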

• M. Hoffmann, “A simulation environment for distributed stream analysis,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2011.
[BibTeX] [Abstract]

This work is part of the European research project LIFT – Using Local Inference in Massively Distributed Systems. The goal of LIFT is to apply local inference on the nodes of a massively distributed system in order to analyze the system’s global phenomena in near real-time. Nowadays, distributed systems such as large data centers or huge distributed networks grow larger and larger. Under these circumstances, the problem of monitoring and analyzing these systems becomes more and more complex and almost unmanageable. The goal of this work is to develop a simulation framework for the LIFT context. This framework supports and eases the implementation, testing and evaluation of different data stream filters and global phenomena (or state) determination algorithms. The simulation environment intends to make the platform-independent prototyping process for distributed stream analysis manageable. The adaptation of an existing multi-agent simulation environment is shown, together with techniques for overcoming its limitations to enable the simulation of massively distributed systems. Additionally, a framework is developed which supports the rapid prototyping of local filters. The adapted simulation environment is evaluated by discussing its computational and parameter-dependent scalability. Finally, a filter model, the privacy-preserving spatial filter model, is introduced and applied to a real-world scenario for validation.

@MastersThesis{ 2011hoffmannmarius,
author  = {Marius Hoffmann},
title = {A simulation environment for distributed stream analysis},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
year = {2011},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
month = {September},
annote  = {[SS09][LIFT] - [Using Local Inference in Massively
Distributed Systems][PD Dr. Michael Mock (IAIS)][Prof. Dr.
Paul G. Pl\"oger (H-BRS)]},
abstract  = {This work is part of the European research project LIFT --
Using Local Inference in Massively Distributed Systems. The goal
of LIFT is to apply local inference on the nodes of a massively
distributed system in order to analyze the system's global
phenomena in near real-time. Nowadays, distributed systems such as
large data centers or huge distributed networks grow larger and
larger. Under these circumstances, the problem of monitoring and
analyzing these systems becomes more and more complex and almost
unmanageable. The goal of this work is to develop a simulation
framework for the LIFT context. This framework supports and eases
the implementation, testing and evaluation of different data
stream filters and global phenomena (or state) determination
algorithms. The simulation environment intends to make the
platform-independent prototyping process for distributed stream
analysis manageable. The adaptation of an existing multi-agent
simulation environment is shown, together with techniques for
overcoming its limitations to enable the simulation of massively
distributed systems. Additionally, a framework is developed which
supports the rapid prototyping of local filters. The adapted
simulation environment is evaluated by discussing its
computational and parameter-dependent scalability. Finally, a
filter model, the privacy-preserving spatial filter model, is
introduced and applied to a real-world scenario for validation.}
}
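
A core ingredient of such local inference is a filter that keeps communication local: a node only reports to the coordinator when its observation drifts past a threshold from the last reported value. The sketch below illustrates that generic idea; it is not the LIFT project’s actual filter model.

```python
# Hedged sketch of a generic "local filter" node: it transmits only when
# the local value drifts beyond a threshold from the last reported value.
class LocalFilter:
    def __init__(self, threshold):
        self.threshold = threshold
        self.last_reported = None

    def observe(self, value):
        """Return the value to transmit, or None to stay silent."""
        if self.last_reported is None or abs(value - self.last_reported) > self.threshold:
            self.last_reported = value
            return value
        return None

node = LocalFilter(threshold=0.5)
stream = [1.0, 1.1, 1.2, 2.0, 2.1, 0.9]
sent = [v for v in (node.observe(x) for x in stream) if v is not None]
print(sent)   # only the observations that crossed the threshold
```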

• Q. Fu, “The CUDA Acceleration of SIFT algorithm on GPU,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2011.
[BibTeX] [Abstract]

Since the SIFT algorithm was invented in 1999, it has been applied successfully in several fields such as object recognition, robotic mapping and navigation, image stitching, 3D modeling, gesture recognition, video tracking, and match moving. Because of its stability and distinctiveness, SIFT has become a highly popular research topic and tool in computer vision. However, speed is the biggest obstacle to applying SIFT in real time, since the algorithm processes the original image several times, e.g. for convolutions and for computing the Difference of Gaussians; if the image is large, the workload grows rapidly. It can take seconds to perform SIFT on an image with a resolution of 1024*768. In this thesis, a GPU is used to accelerate SIFT. By taking advantage of parallel computing and proper management of the memory on the graphics card, the algorithm is accelerated by a factor of about 4, and the results show that the image size does not influence the time cost as strongly as in the CPU implementation.

@MastersThesis{ 2011fu,
abstract  = {Since the SIFT algorithm was invented in 1999, it has been
applied successfully in several fields such as object recognition,
robotic mapping and navigation, image stitching, 3D modeling,
gesture recognition, video tracking, and match moving. Because of
its stability and distinctiveness, SIFT has become a highly
popular research topic and tool in computer vision. However, speed
is the biggest obstacle to applying SIFT in real time, since the
algorithm processes the original image several times, e.g. for
convolutions and for computing the Difference of Gaussians; if the
image is large, the workload grows rapidly. It can take seconds to
perform SIFT on an image with a resolution of 1024*768. In this
thesis, a GPU is used to accelerate SIFT. By taking advantage of
parallel computing and proper management of the memory on the
graphics card, the algorithm is accelerated by a factor of about
4, and the results show that the image size does not influence the
time cost as strongly as in the CPU implementation.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {WS 2006/07, Kraetzschmar, Pl\"{o}ger supervising},
author  = {Quiang Fu},
month = {February},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {The CUDA Acceleration of SIFT algorithm on GPU },
year = {2011}
}
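
The Difference-of-Gaussians (DoG) stack mentioned above is the part of SIFT that dominates the cost and parallelizes naturally on a GPU, since every pixel and scale is independent. A CPU-side sketch of one octave (plain NumPy/SciPy, not the thesis’ CUDA code):

```python
# Sketch of the Difference-of-Gaussians step that dominates SIFT's cost;
# on a GPU each pixel/scale is computed in parallel, here plain NumPy/SciPy.
import numpy as np
from scipy.ndimage import gaussian_filter

image = np.random.default_rng(5).random((768, 1024)).astype(np.float32)
sigmas = [1.6 * (2 ** (k / 3)) for k in range(4)]        # one octave of scales
blurred = [gaussian_filter(image, s) for s in sigmas]
dog = [b2 - b1 for b1, b2 in zip(blurred, blurred[1:])]  # DoG stack
print(len(dog), dog[0].shape)
```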

• G. Giorgana, “Facial Expression Recognition from Video Sequences using Spatial and Spatiotemporal Face Descriptors,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2011.
[BibTeX] [Abstract]

In recent decades, more natural and efficient channels of communication between humans and robots have been investigated. Most of the effort has gone toward the analysis of spoken language, but existing speech-based solutions are still highly error-prone. As a result, researchers have started to consider human facial expressions as an important means of improving spoken communication. In this thesis we describe three effective methods for the extraction of features that can be used for the recognition of human facial expressions: 2D Gabor filters, Local Binary Patterns (LBP) and Local Binary Patterns from Three Orthogonal Planes (LBP-TOP). Moreover, we investigate the effect of using different sets of parameters in the three approaches. Depending on the technique, the recognition of the expressions is done on still images or complete video sequences. The Cohn-Kanade AU-Coded Facial Expression Database is used throughout this work. All experiments in this report are carried out using the AdaBoost.MH algorithm. We describe how this ensemble learner can cope with our multi-class problem while also reducing the number of features. Furthermore, the system is evaluated for two different kinds of weak learners, namely multi-threshold decision stumps and single-threshold decision stumps. Another question we address in this project is the effect of histogram equalization of the face images; the impact of this technique on the three methods is analyzed. Of the three explored approaches, LBP-TOP takes into account the facial motion that occurs due to facial expressions, whereas 2D Gabor filters and LBP only describe the instantaneous face appearance, ignoring temporal information. Taking this into account, we compare the approaches and show the advantages of including motion cues in the analysis. Finally, we present experimental evidence that the three techniques are suitable for recognizing facial expressions from live video.

@MastersThesis{ 2011giorgana,
author  = {Geovanny Giorgana},
title = {Facial Expression Recognition from Video Sequences using
Spatial and Spatiotemporal Face Descriptors},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
year = {2011},
address  = {Grantham Allee 20, 53757 St. Augustin, Germany},
month = {February},
abstract  = {In recent decades, more natural and efficient channels of
communication between humans and robots have been investigated.
Most of the effort has gone toward the analysis of spoken
language, but existing speech-based solutions are still highly
error-prone. As a result, researchers have started to consider
human facial expressions as an important means of improving spoken
communication.
In this thesis we describe three effective methods for the
extraction of features that can be used for the recognition of
human facial expressions: 2D Gabor filters, Local Binary Patterns
(LBP) and Local Binary Patterns from Three Orthogonal Planes
(LBP-TOP). Moreover, we investigate the effect of using different
sets of parameters in the three approaches. Depending on the
technique, the recognition of the expressions is done on still
images or complete video sequences. The Cohn-Kanade AU-Coded
Facial Expression Database is used throughout this work.
All experiments in this report are carried out using the
AdaBoost.MH algorithm. We describe how this ensemble learner can
cope with our multi-class problem while also reducing the number
of features. Furthermore, the system is evaluated for two
different kinds of weak learners, namely multi-threshold decision
stumps and single-threshold decision stumps.
Another question we address in this project is the effect of
histogram equalization of the face images; the impact of this
technique on the three methods is analyzed.
Of the three explored approaches, LBP-TOP takes into account the
facial motion that occurs due to facial expressions, whereas 2D
Gabor filters and LBP only describe the instantaneous face
appearance, ignoring temporal information. Taking this into
account, we compare the approaches and show the advantages of
including motion cues in the analysis. Finally, we present
experimental evidence that the three techniques are suitable for
recognizing facial expressions from live video.},
annote  = {WS07/08 H-BRS - RoboCup@home Pl\"{o}ger, Kraetzschmar
supervising}
}
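
As a reference for one of the three descriptors, the basic 8-neighbour LBP operator thresholds each pixel’s neighbours against the center pixel and packs the results into a byte; histograms of these codes form the feature. A minimal vectorized sketch (image and parameters are illustrative):

```python
# Sketch of the basic 8-neighbour LBP operator, vectorized over the image.
import numpy as np

def lbp(img):
    c = img[1:-1, 1:-1]
    # eight neighbours, clockwise from top-left, each thresholded by the centre
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (n >= c).astype(np.uint8) << bit
    return code

face = (np.random.default_rng(6).random((64, 64)) * 255).astype(np.uint8)
hist = np.bincount(lbp(face).ravel(), minlength=256)   # LBP histogram feature
print(hist.sum())
```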

• F. Hegger, “3D People Detection in Domestic Environments,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2011.
[BibTeX] [Abstract]

The ability to detect people has become a crucial subtask, especially for robotic systems aimed at applications in public or domestic environments. Robots already provide their services e.g. in real home improvement markets and guide people to a desired product. In such a scenario, many robot-internal tasks would benefit from knowing the number and positions of people in the vicinity. The navigation component, for example, could treat them as dynamically moving objects and also predict their next motion directions in order to compute a much safer path. Or the robot could specifically approach customers and offer its services. This requires detecting a person, or even a group of people, within a reasonable range in front of the robot. Challenges of such a real-world task include changing lighting conditions, a dynamic environment and different people shapes. In this thesis, a 3D people detection approach based on point cloud data provided by the Microsoft Kinect is implemented and integrated on a mobile service robot. A top-down/bottom-up segmentation is applied to increase the system’s flexibility and provide the capability to detect people even if they are partially occluded. A feature set is proposed to detect people in various pose configurations and motions using a machine learning technique. The system can detect people up to a distance of 5 meters. The experimental evaluation compared different machine learning techniques and showed that standing people can be detected at a rate of 87.29% and sitting people at 74.94% using a Random Forest classifier. Certain objects caused several false detections; to eliminate those, a verification step is proposed which further evaluates the person’s shape in 2D space. The detection component has been implemented as a sequential (frame rate of 10 Hz) and a parallel application (frame rate of 16 Hz). Finally, the component has been embedded into a complete people-search task which explores the environment, finds all people and approaches each detected person.

@MastersThesis{ 2011hegger,
abstract  = {The ability to detect people has become a crucial subtask,
especially for robotic systems aimed at applications in public or
domestic environments. Robots already provide their services e.g.
in real home improvement markets and guide people to a desired
product. In such a scenario, many robot-internal tasks would
benefit from knowing the number and positions of people in the
vicinity. The navigation component, for example, could treat them
as dynamically moving objects and also predict their next motion
directions in order to compute a much safer path. Or the robot
could specifically approach customers and offer its services. This
requires detecting a person, or even a group of people, within a
reasonable range in front of the robot. Challenges of such a
real-world task include changing lighting conditions, a dynamic
environment and different people shapes. In this thesis, a 3D
people detection approach based on point cloud data provided by
the Microsoft Kinect is implemented and integrated on a mobile
service robot. A top-down/bottom-up segmentation is applied to
increase the system's flexibility and provide the capability to
detect people even if they are partially occluded. A feature set
is proposed to detect people in various pose configurations and
motions using a machine learning technique. The system can detect
people up to a distance of 5 meters. The experimental evaluation
compared different machine learning techniques and showed that
standing people can be detected at a rate of 87.29% and sitting
people at 74.94% using a Random Forest classifier. Certain objects
caused several false detections; to eliminate those, a
verification step is proposed which further evaluates the person's
shape in 2D space. The detection component has been implemented as
a sequential (frame rate of 10 Hz) and a parallel application
(frame rate of 16 Hz). Finally, the component has been embedded
into a complete people-search task which explores the environment,
finds all people and approaches each detected person.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {[Winter term 2008] [BRSU] - [RoboCup@Home] [Ploeger],
[Kraetzschmar], [Hochgeschwender] supervising},
author  = {Frederik Hegger},
month = {October},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {3D People Detection in Domestic Environments},
year = {2011}
}
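
The classification stage described above can be sketched with a Random Forest over simple per-cluster features. Everything below (feature choice, numbers, data) is a synthetic stand-in for illustration, not the thesis’ actual feature set.

```python
# Hedged sketch: classifying person-candidate clusters with a Random Forest;
# the features and data are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
# toy features per cluster: [height, width, depth, point count]
people = np.column_stack([rng.normal(1.7, 0.1, 100), rng.normal(0.5, 0.05, 100),
                          rng.normal(0.3, 0.05, 100), rng.normal(800, 50, 100)])
other = np.column_stack([rng.normal(1.0, 0.4, 100), rng.normal(0.8, 0.3, 100),
                         rng.normal(0.6, 0.2, 100), rng.normal(400, 200, 100)])
X = np.vstack([people, other])
y = np.array([1] * 100 + [0] * 100)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[1.65, 0.5, 0.3, 780]]))   # -> likely "person"
```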

• M. U. Awais, “An adaptive search engine for Power Point objects,” Master Thesis, Grantham-Allee 20, 53757 St. Augustin, Germany, 2011.
[BibTeX] [Abstract]

In large organizations with an active information technology infrastructure, there is almost no group discussion that does not begin with a formal presentation. In such organizations, the volume of these presentations increases rapidly. In order to reuse these presentations, a knowledge management application is needed which intelligently manages this large amount of data; intelligent search should be one part of this application. This report presents a search methodology which is adaptive and intelligent. Using this methodology, a search engine is developed to search Power Point objects. The engine may be used in conjunction with Microsoft Power Point and other software for preparing presentations. To make the search intelligent, two techniques are introduced: the first is latent semantic analysis; the second is incorporating user feedback in such a way that the desires and expectations of users can vary the ranking of results. An innovative design is presented which carefully interrelates both of these ideas to produce the adaptive behavior of the search system. For testing the application, a new idea for testing a search system is presented: using artificial agents that automate user behaviors.

@MastersThesis{ 2011awais,
author  = {M. U. Awais},
abstract  = {In large organizations with an active information
technology infrastructure, there is almost no group discussion
that does not begin with a formal presentation. In such
organizations, the volume of these presentations increases
rapidly. In order to reuse these presentations, a knowledge
management application is needed which intelligently manages this
large amount of data; intelligent search should be one part of
this application. This report presents a search methodology which
is adaptive and intelligent. Using this methodology, a search
engine is developed to search Power Point objects. The engine may
be used in conjunction with Microsoft Power Point and other
software for preparing presentations. To make the search
intelligent, two techniques are introduced: the first is latent
semantic analysis; the second is incorporating user feedback in
such a way that the desires and expectations of users can vary the
ranking of results. An innovative design is presented which
carefully interrelates both of these ideas to produce the adaptive
behavior of the search system. For testing the application, a new
idea for testing a search system is presented: using artificial
agents that automate user behaviors.},
address  = {Grantham-Allee 20, 53757 St. Augustin, Germany},
annote  = {[2008] [Text mining] - [An adaptive search engine for
Power Point objects] [Herpers], [Ploeger], [Willebrand]
supervising},
author  = {Awais, M. U.},
month = {March},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {An adaptive search engine for Power Point objects},
year = {2011}
}
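
The abstract names two ingredients: latent semantic analysis and feedback-driven re-ranking. A minimal sketch of how they can be combined is below; the slide texts, the feedback counts, and the 0.1 boost weight are illustrative assumptions, not the thesis design.

```python
# Assumed sketch: LSA retrieval plus a naive user-feedback boost.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

slides = [
    "quarterly sales figures and revenue chart",
    "team organization and project roles",
    "revenue forecast for the next fiscal year",
]
vec = TfidfVectorizer()
lsa = TruncatedSVD(n_components=2, random_state=0)
doc_vecs = lsa.fit_transform(vec.fit_transform(slides))

query_vec = lsa.transform(vec.transform(["revenue"]))
scores = cosine_similarity(query_vec, doc_vecs)[0]

clicks = np.array([0.0, 0.0, 1.0])         # hypothetical feedback counts
ranking = np.argsort(-(scores + 0.1 * clicks))
print("ranked slide indices:", ranking)
```

Because the feedback term is added to the latent-space similarity, slides that users repeatedly select drift upward in the ranking, which is the adaptive behavior the abstract describes.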

### 2010

• F. K. Heinstein, “Improving the Processing and Visualization of Ultrasound Imaging on GPU with CUDA,” Master Thesis, 2010.
[BibTeX] [Abstract]

Phased Array Ultrasound Technologies (PAUT) techniques are one of the Non-Destructive Testing (NDT) methods where we have the possibility to perform inspections with ultrasonic beams of various angles and focal lengths using a single array of transducers. There are software applications that precisely control the delay of both the emission pulse and the receive signal for each element in an array of transducers. However, the processing of the receive beamforming requires a lot of computing power during the image reconstruction phase. The tremendous growth in Graphics Processing Unit (GPU) performance and flexibility has led to an increased interest in performing general-purpose computation on the GPU. GPUs provide a vast number of simple, data-parallel, deeply multithreaded cores and high memory bandwidth. GPU architectures are becoming increasingly programmable, offering the potential for dramatic speedups for a variety of general-purpose applications compared to contemporary general-purpose processors (CPUs). That is why with this master thesis we want to explore the area of general-purpose computing on the GPU, by looking at how the GPU can be utilized to accelerate the processing of ultrasound images. We will use CUDA (Compute Unified Device Architecture), a language from NVIDIA close to the C programming language, as the programming model for this GPU implementation. CUDA is a novel technology of general-purpose computing on the GPU allowing users to develop general GPU programs easily. In this thesis I will explore the effectiveness of the GPU for ultrasound imaging and describe some specific coding idioms that improve performance on the GPU. GPU performance will be compared to both a single-threaded version executed on a single-core CPU and a multi-threaded OpenMP version executed on a multi-core CPU.

@MastersThesis{ heinstein2010,
abstract  = {Phased Array Ultrasound Technologies (PAUT) techniques are
one of the Non-Destructive Testing (NDT) methods where we
have the possibility to perform inspections with ultrasonic
beams of various angles and focal lengths using a single
array of transducers. There are software applications that
precisely control the delay of both the emission pulse and
the receive signal for each element in an array of
transducers. However, the processing of the receive
beamforming requires a lot of computing power during the
image reconstruction phase.
The tremendous growth in Graphics Processing Unit (GPU)
performance and flexibility has led to an increased
interest in performing general-purpose computation on the
GPU. GPUs provide a vast number of simple, data-parallel,
deeply multithreaded cores and high memory bandwidth. GPU
architectures are becoming increasingly programmable,
offering the potential for dramatic speedups for a variety
of general-purpose applications compared to contemporary
general-purpose processors (CPUs). That is why with this
master thesis we want to explore the area of
general-purpose computing on the GPU, by looking at how the
GPU can be utilized to accelerate the processing of
ultrasound images. We will use CUDA (Compute Unified Device
Architecture), a language from NVIDIA close to the C
programming language, as the programming model for this GPU
implementation. CUDA is a novel technology of
general-purpose computing on the GPU allowing users to
develop general GPU programs easily.
In this thesis I will explore the effectiveness of the GPU
for ultrasound imaging and describe some specific coding
idioms that improve performance on the GPU. GPU performance
will be compared to both a single-threaded version executed
on a single-core CPU and a multi-threaded OpenMP version
executed on a multi-core CPU.},
author  = {Heinstein, Fotso Kamgne},
keywords  = {Non-Destructive Testing (NDT), Phased Array Ultrasound
Technologies (PAUT), Ultrasound Images, High
Performance Computing (HPC), General-Purpose Graphics
Processing Units (GPGPU), CUDA, OpenMP, Parallel},
month = {July},
owner = {108012516},
school  = {Hochschule Bonn-Rhein-Sieg},
timestamp  = {2010.09.08},
title = {Improving the Processing and Visualization of Ultrasound
Imaging on GPU with CUDA},
year = {2010}
}
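
The computational bottleneck the abstract identifies, receive delay-and-sum beamforming, is worth seeing concretely. Below is a minimal NumPy reference sketch of the step the thesis offloads to the GPU with CUDA; the array geometry, sampling rate, and placeholder RF data are illustrative assumptions.

```python
# Assumed delay-and-sum receive beamforming (illustrative parameters).
import numpy as np

fs, c = 40e6, 1540.0                  # sampling rate [Hz], speed of sound [m/s]
n_elem, n_samp = 64, 2048
pitch = 0.3e-3                        # element spacing [m]
rf = np.random.randn(n_elem, n_samp)  # placeholder per-element RF data
elem_x = (np.arange(n_elem) - n_elem / 2) * pitch

def beamform_point(x, z):
    """Sum element signals after compensating the round trip to (x, z)."""
    dist = z + np.hypot(x - elem_x, z)            # transmit + receive path
    idx = np.clip((dist / c * fs).astype(int), 0, n_samp - 1)
    return rf[np.arange(n_elem), idx].sum()

image = np.array([[beamform_point(x, z)
                   for x in np.linspace(-5e-3, 5e-3, 32)]
                  for z in np.linspace(5e-3, 25e-3, 32)])
print(image.shape)
```

Every pixel is computed independently from the same read-only RF data, which is exactly the data-parallel structure that maps one pixel to one GPU thread and explains the speedups the thesis investigates.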

• C. Hekimian-Williams, B. Grant, X. Liu, Z. Zhang, and P. Kumar, “Accurate localization of RFID tags using phase difference,” in 2010 IEEE International Conference on RFID (IEEE RFID 2010), 2010, pp. 89-96.
[BibTeX]
@InProceedings{ hekimian,
author  = {Hekimian-Williams, C. and Grant, B. and Liu, X. and Zhang,
Z. and Kumar, P.},
booktitle  = {2010 IEEE International Conference on RFID (IEEE RFID
2010)},
month = {April},
pages = {89-96},
title = {{Accurate localization of RFID tags using phase
difference}},
year = {2010}
}

• U. Köckemann, “A Relational Robotics Workbench for Learning Experiments with Autonomous Robots,” Master Thesis, 2010.
[BibTeX] [Abstract]

Even though relational learning approaches such as Inductive Logic Programming and Statistical Relational Learning have great potential benefits in the area of robotic learning, there have been only a few applications. We believe one of the reasons to be the large effort involved in performing relational learning experiments in real robotic domains. To resolve these issues and facilitate research in this direction we propose the Relational Robotics Workbench, a tool that aids researchers in conducting Inductive Logic Programming and Statistical Relational Learning experiments in robotic domains. By maximizing the re-usability of all components, such as learning and inference components on the relational level or algorithms used for robot control, the Relational Robotics Workbench significantly reduces the amount of work in creating relational learning experiments. Furthermore, the learning experiments themselves are made explicit, allowing the robot's setup, the execution of plans (to gather training and test data), the setup and execution of learning algorithms, and the evaluation of the final hypotheses to be stored in an easy-to-read configuration file. Functionality provided by the Workbench includes: Inductive Logic Programming with Aleph, Statistical Relational Learning with Alchemy, Logic Programming inference with Prolog, and Statistical Relational inference with Alchemy. We will provide an analysis of the requirements of the proposed Workbench and derive a four-layered architecture to fulfill them. Along with the description of the architecture, we will provide a full learning experiment from the XPERO project as a running example. To further demonstrate the usefulness of the Workbench, we will then perform another series of experiments in the ego-motion scenario of the XPERO project. The experiments include descriptions of some additional algorithms implemented in the Workbench, such as a simple scheme to automatically extract learning examples in order to apply supervised learning methods in an unsupervised fashion. Results for all learning experiments will be presented and examples of the theories that were learned in each scenario will be discussed.

@MastersThesis{ koeckemann10relational,
author  = {K{\"o}ckemann, Uwe},
title = {A Relational Robotics Workbench for Learning Experiments
with Autonomous Robots},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
year = {2010},
abstract  = {Even though relational learning approaches such as
Inductive Logic Programming and Statistical Relational
Learning have great potential benefits in the area of
robotic learning, there have been only a few applications.
We believe one of the reasons to be the large effort
involved in performing relational learning experiments in
real robotic domains. To resolve these issues and
facilitate research in this direction we propose the
\textit{Relational Robotics Workbench}, a tool that aids
researchers in conducting Inductive Logic Programming and
Statistical Relational Learning experiments in robotic
domains.
By maximizing the re-usability of all components, such as
learning and inference components on the relational level
or algorithms used for robot control, the
\textit{Relational Robotics Workbench} significantly
reduces the amount of work in creating relational learning
experiments. Furthermore, the learning experiments
themselves are made explicit, allowing the robot's setup,
the execution of plans (to gather training and test data),
the setup and execution of learning algorithms, and the
evaluation of the final hypotheses to be stored in an
easy-to-read configuration file. Functionality provided by
the \textit{Workbench} includes: Inductive Logic
Programming with Aleph, Statistical Relational Learning
with Alchemy, Logic Programming inference with Prolog, and
Statistical Relational inference with Alchemy.
We will provide an analysis of the requirements of the
proposed \textit{Workbench} and derive a four-layered
architecture to fulfill them. Along with the description of
the architecture, we will provide a full learning
experiment from the XPERO project as a running example. To
further demonstrate the usefulness of the
\textit{Workbench}, we will then perform another series of
experiments in the ego-motion scenario of the XPERO
project. The experiments include descriptions of some
additional algorithms implemented in the Workbench, such as
a simple scheme to automatically extract learning examples
in order to apply supervised learning methods in an
unsupervised fashion. Results for all learning experiments
will be presented and examples of the theories that were
learned in each scenario will be discussed.}
}
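
To make the "experiments as explicit configuration" idea concrete, here is a purely hypothetical sketch; the keys, tool names, and runner below are assumptions for illustration, not the Workbench's actual file format.

```python
# Hypothetical Workbench-style experiment made explicit as one config;
# a real runner would dispatch to Aleph/Alchemy/Prolog wrappers.
experiment = {
    "robot_setup": {"simulator": "xpero_sim", "scenario": "ego_motion"},
    "gather":      {"plan": "drive_random", "episodes": 20},
    "learn":       {"tool": "aleph", "target": "moved/2"},
    "evaluate":    {"test_split": 0.3, "metric": "accuracy"},
}

def run(cfg):
    for stage in ("robot_setup", "gather", "learn", "evaluate"):
        print(f"{stage}: {cfg[stage]}")  # placeholder for the tool call

run(experiment)
```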

• N. Khayat, “Monitoring and Analysis of Workflows in Learning Environments,” Master Thesis, Grantham Allee 20, 53757 St. Augustin, Germany, 2010.
[BibTeX] [Abstract]

With the prosperous spread of new technologies, Technology-Enabled Learning (TEL) environments are playing a more significant role as a means of delivering an intelligent learning process to the learners. In the context of the European SCY project, a collaborative, learner-centric TEL environment is being developed. Using the SCY system, the learners execute a mission, represented as a workflow, which can be considered a provided plan for the learners to follow. In this thesis, we have developed a monitoring and analysis system to be integrated with the SCY system. With the developed system, reference workflows and workflow executions will be analysed to extract meaningful patterns. The extracted patterns will be used to provide the teachers and pedagogical experts with knowledge insights about the mission executions. Meaningful behavioral attributes are extracted automatically from those patterns, which express the behavior model of the learner.

@MastersThesis{ 2010khayat,
abstract  = {With the prosperous spread of new technologies,
Technology-Enabled Learning (TEL) environments are playing
a more significant role as a means of delivering an
intelligent learning process to the learners.
In the context of the European SCY project, a
collaborative, learner-centric TEL environment is being
developed. Using the SCY system, the learners execute a
mission, represented as a workflow, which can be considered
a provided plan for the learners to follow.
In this thesis, we have developed a monitoring and analysis
system to be integrated with the SCY system. With the
developed system, reference workflows and workflow
executions will be analysed to extract meaningful patterns.
The extracted patterns will be used to provide the teachers
and pedagogical experts with knowledge insights about the
mission executions. Meaningful behavioral attributes are
extracted automatically from those patterns, which express
the behavior model of the learner.},
address  = {Grantham Allee 20, 53757 St. Augustin, Germany},
annote  = {WS07/08 Fraunhofer IAIS - SCY "Science Created by You"
Mock, Pl{\"o}ger supervising},
author  = {Noury Khayat},
month = {May},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Monitoring and Analysis of Workflows in Learning
Environments},
year = {2010}
}
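
As a purely hypothetical illustration of the analysis described here, the sketch below mines frequent activity bigrams from logged executions and derives one crude behavioral attribute per learner; the activity names and the attribute definition are assumptions, not the thesis design.

```python
# Hypothetical pattern mining over logged mission executions.
from collections import Counter

reference = ["read", "hypothesize", "experiment", "conclude"]
executions = [
    ["read", "experiment", "hypothesize", "experiment", "conclude"],
    ["read", "hypothesize", "experiment", "conclude"],
    ["experiment", "experiment", "conclude"],
]

# Frequent two-step patterns across all executions.
bigrams = Counter(tuple(t[i:i + 2]) for t in executions
                  for i in range(len(t) - 1))
print(bigrams.most_common(3))

# Crude behavioral attribute: fraction of reference steps covered.
for trace in executions:
    print(len(set(trace) & set(reference)) / len(reference))
```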

• U. Kayani, “ESNs with sparse output connections,” Master Thesis, Grantham Allee 20, 53757 St. Augustin, Germany, 2010.
[BibTeX]

@MastersThesis{ 2010kayani,
address  = {Grantham Allee 20, 53757 St. Augustin, Germany},
annote  = {[Summer Semester 2006] [Project Affiliation] - [DEGENA]
[Dr. Kobialka] and [Prof. Dr. Ploeger]},
author  = {Umer Kayani},
date-modified  = {2016-09-04 11:54:23 +0000},
month = {January},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {ESNs with sparse output connections},
year = {2010}
}

• N. Kharecha, “Robotic Perception for Object Grasping: A ToF Camera based Approach for Environment Sensing,” Master Thesis, Sankt Augustin, Germany, 2010.
[BibTeX] [Abstract]

If a 3D model of the grasping body is available, the object surface can be evaluated for a given gripper using force-closure criteria. In real-life household scenarios, such models are not available. To determine a grasp for an arbitrary object, the skill to interpret object geometry is required. As grasping of the object occurs in the 3D world, range sensors are considered the better choice of perception. Different ranging devices provide range data but are either time-consuming to capture (row-by-row laser scanning) or sparse (stereo camera). Given these requirements of grasp determination for model-free objects and the limitations of different ranging devices, the task of this thesis is to develop a Time-of-Flight camera based robotic perception system to determine grasps for unknown objects. A Time-of-Flight camera provides full-scale range information at video frame rate, but the depth measurements are affected by different parameters. Different distance correction schemes are evaluated to suppress the noise level. To interpret the viewed scene, segmentation of the table-top scenario is done to hypothesize different object segments. A single-part grasping body is considered to be placed on top of the surface, and an elongated representation of the footprint of the object segment is presented. As an early stage of implementation, a very primitive grasp pose is computed for the given elongated representation of the object segment.

@MastersThesis{ 2010kharecha,
author  = {Kharecha, Nimeshkumar},
title = {Robotic Perception for Object Grasping: A ToF Camera based
Approach for Environment Sensing},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
year = {2010},
month = {May},
abstract  = {If a 3D model of the grasping body is available, the
object surface can be evaluated for a given gripper using
force-closure criteria. In real-life household scenarios,
such models are not available. To determine a grasp for an
arbitrary object, the skill to interpret object geometry is
required. As grasping of the object occurs in the 3D world,
range sensors are considered the better choice of
perception. Different ranging devices provide range data
but are either time-consuming to capture (row-by-row laser
scanning) or sparse (stereo camera).
Given these requirements of grasp determination for
model-free objects and the limitations of different ranging
devices, the task of this thesis is to develop a
Time-of-Flight camera based robotic perception system to
determine grasps for unknown objects. A Time-of-Flight
camera provides full-scale range information at video frame
rate, but the depth measurements are affected by different
parameters. Different distance correction schemes are
evaluated to suppress the noise level. To interpret the
viewed scene, segmentation of the table-top scenario is
done to hypothesize different object segments. A
single-part grasping body is considered to be placed on top
of the surface, and an elongated representation of the
footprint of the object segment is presented. As an early
stage of implementation, a very primitive grasp pose is
computed for the given elongated representation of the
object segment.},
keywords  = {ToF camera, computational geometry, grasp determination}
}
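
The "elongated representation of the footprint" can be read as a principal-axis analysis of the segment projected onto the table plane. The sketch below is an assumed illustration, not the thesis implementation; the synthetic segment and the grasp convention (gripper closing across the long axis) are made up.

```python
# Assumed sketch: footprint elongation and a primitive top grasp via PCA.
import numpy as np

rng = np.random.default_rng(1)
# Synthetic table-top object segment (z is up), elongated along x.
pts = rng.normal(size=(300, 3)) * [0.08, 0.02, 0.01] + [0.5, 0.0, 0.05]

footprint = pts[:, :2] - pts[:, :2].mean(axis=0)   # project onto table plane
cov = footprint.T @ footprint / len(footprint)
eigval, eigvec = np.linalg.eigh(cov)
major = eigvec[:, np.argmax(eigval)]               # elongation direction

centroid = pts.mean(axis=0)
yaw = np.arctan2(major[1], major[0]) + np.pi / 2   # close gripper across it
print("grasp at", centroid.round(3), "yaw [deg]:", round(np.degrees(yaw), 1))
```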

• J. Kang, “Interactive Medical Image Segmentation via User-guided Adaptation of Implicit 3D Surfaces,” Master Thesis, Grantham Allee 20, 53757 St. Augustin, Germany, 2010.
[BibTeX] [Abstract]

Medical image segmentation plays a very important role in medical image analysis and is considered a very challenging problem. In this work, a new interactive medical image segmentation framework via user-guided adaptation of implicit 3D surfaces is proposed. The surfaces are mathematically described by a radial basis function that interpolates some user-defined points. This framework allows two kinds of user interactions: Firstly, via mouse clicks the user can define points on the boundary of objects in 3D images. For each point, a 3D surface which interpolates the user-determined points is computed. The user can add or remove points with instant visual feedback until the desired accuracy is reached. Secondly, the user can start an automatic adaptation. In this step, the surface computed in step one automatically evolves by maximizing a certain energy function with only external energy. Strong image edges are interpolated without losing the interpolation property of the user-defined points. Our framework is validated by segmenting lung tumors on CT data. Some good segmentation results by this framework show its potential for practical usage.

@MastersThesis{ 2010kang,
abstract  = {Medical image segmentation plays a very important role in
medical image analysis and is considered a very challenging
problem. In this work, a new interactive
medical image segmentation framework via user-guided
adaptation of implicit 3D surfaces is proposed. The
surfaces are mathematically described by a radial basis
function that interpolates some user-defined points. This
framework allows two kinds of user interactions: Firstly,
via mouse clicks the user can define points on the boundary
of objects in 3D images. For each point, a 3D surface which
interpolates the user-determined points is computed. The
user can add or remove points with instant visual feedback
until the desired accuracy is reached. Secondly, the user
can start an automatic adaptation. In this step, the surface
computed in step one automatically evolves by maximizing a
certain energy function with only external energy. Strong
image edges are interpolated without losing the
interpolation property of the user-defined points. Our
framework is validated by segmenting lung tumors on CT
data. Some good segmentation results by this framework show
its potential for practical usage.},
address  = {Grantham Allee 20, 53757 St. Augustin, Germany},
annote  = {WS06/07 Philips Hamburg Research Center Opfer,
Kraetzschmar},
author  = {Jingang Kang},
month = {June},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Interactive Medical Image Segmentation via User-guided
Adaptation of Implicit 3D Surfaces},
year = {2010}
}
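
The interpolation step is compact enough to sketch directly: an implicit function f with f = 0 on the user-clicked points, built from a radial basis function. The kernel choice (biharmonic, phi(r) = r) and the hand-placed off-surface constraint points below are assumptions for illustration; the thesis' exact formulation may differ.

```python
# Assumed RBF implicit surface: f interpolates 0 at surface points and
# 1 at off-surface points, so f(x) = 0 defines the surface.
import numpy as np

surface = np.array([[1., 0, 0], [0, 1, 0], [0, 0, 1],
                    [-1, 0, 0], [0, -1, 0], [0, 0, -1]])
centers = np.vstack([surface, 1.5 * surface])   # constraint points outside
values = np.r_[np.zeros(len(surface)), np.ones(len(surface))]

def phi(r):
    return r  # biharmonic kernel (an assumption)

# Solve the dense interpolation system A w = values.
A = phi(np.linalg.norm(centers[:, None] - centers[None], axis=-1))
w = np.linalg.solve(A, values)

def f(x):
    """Implicit function; f(x) = 0 on the reconstructed surface."""
    return phi(np.linalg.norm(x - centers, axis=-1)) @ w

print(f(np.array([1., 0, 0])))  # ~0: an interpolated surface point
print(f(np.array([2., 0, 0])))  # larger: outside the surface
```

Adding or removing a clicked point only means re-solving this small linear system, which is what makes the instant visual feedback described above feasible.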

### 2009

• D. Joho, C. Plagemann, and W. Burgard, “Modeling RFID signal strength and tag detection for localization and mapping,” in 2009 IEEE International Conference on Robotics and Automation, 2009, pp. 3160-3165.
[BibTeX]
@InProceedings{ joho,
author  = {Joho, D. and Plagemann, C. and Burgard, W.},
booktitle  = {2009 IEEE International Conference on Robotics and
Automation},
month = {May},
pages = {3160-3165},
title = {{Modeling RFID signal strength and tag detection for
localization and mapping}},
year = {2009}
}

### 2008

• M. Bouet and A. L. dos Santos, “RFID tags: Positioning principles and localization techniques,” in 2008 1st IFIP Wireless Days, 2008, pp. 1-5.
[BibTeX]
@InProceedings{ rfid_localization,
author  = {Bouet, M. and dos Santos, A. L.},
booktitle  = {2008 1st IFIP Wireless Days},
month = {November},
pages = {1-5},
title = {{RFID tags: Positioning principles and localization
techniques}},
year = {2008}
}

• A. Zakharov, “Robust navigation in everyday environment,” Master Thesis, Grantham Allee 20, 53757 St. Augustin, Germany, 2008.
[BibTeX] [Abstract]

Autonomous navigation is one of the challenging tasks in robotics. In order to navigate safely, a robot needs robust and reliable sensing mechanisms. Though there are a variety of range sensors for obstacle detection on the market, there is no straightforward answer as to which sensor system and data processing mechanism to apply in order to get reliable and low-cost robot navigation for everyday environments. There is no ultimate range sensor; each type of sensor has its own advantages and disadvantages. The goal of this project is to design a navigation system which is a combination of affordable range sensors (hardware) and data processing and enhancement algorithms (software) which would eliminate the sensors' shortcomings.

@MastersThesis{ 2008zakharov,
abstract  = {Autonomous navigation is one of the challenging tasks in
robotics. In order to navigate safely, a robot needs robust
and reliable sensing mechanisms. Though there are a variety
of range sensors for obstacle detection on the market,
there is no straightforward answer as to which sensor
system and data processing mechanism to apply in order to
get reliable and low-cost robot navigation for everyday
environments. There is no ultimate range sensor; each type
of sensor has its own advantages and disadvantages. The
goal of this project is to design a navigation system which
is a combination of affordable range sensors (hardware) and
data processing and enhancement algorithms (software) which
would eliminate the sensors' shortcomings.},
address  = {Grantham Allee 20, 53757 St. Augustin, Germany},
annote  = {SS06 [Prassler], [Shakhimardanov]},
author  = {Alexey Zakharov},
month = {June},
school  = {Bonn-Rhein-Sieg University of Applied Sciences},
title = {Robust navigation in everyday environment},
year = {2008}
}
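
A toy sketch (assumed, not from the thesis) of the stated idea that software can mask individual sensor shortcomings: sonar sees glass but produces specular outliers, infrared is crisp but misses transparent obstacles, so the fusion median-filters the sonar stream and keeps the nearer reading for safety.

```python
# Hypothetical fusion of two affordable range sensor streams [m].
import numpy as np

def fuse(sonar, infrared, window=3):
    """Conservative fused range: de-spiked sonar vs. infrared minimum."""
    pad = window // 2
    smooth = np.array([np.median(sonar[max(0, i - pad):i + pad + 1])
                       for i in range(len(sonar))])  # suppress sonar spikes
    return np.minimum(smooth, infrared)              # keep nearer obstacle

sonar = np.array([2.0, 2.1, 6.5, 2.0, 1.9])  # one specular outlier
ir = np.array([2.1, 2.0, 2.2, 5.0, 2.0])     # misses a glass pane at idx 3
print(fuse(sonar, ir))
```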

### 2007

• S. A. Weis, “RFID (Radio Frequency Identification): Principles and Applications,” System, vol. 2, iss. 3, 2007.
[BibTeX]
@Article{ rfid_tags,
author  = {Weis, S. A.},
journal  = {System},
number  = {3},
title = {{RFID (Radio Frequency Identification): Principles and
Applications}},
volume  = {2},
year = {2007}
}

### 2005

• S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics, MIT Press, 2005.
[BibTeX]
@Book{ thrun,
author  = {Thrun, S. and Burgard, W. and Fox, D.},
publisher  = {MIT Press},
title = {{Probabilistic Robotics}},
year = {2005}
}

### 2004

• I. Rekleitis, “A Particle Filter Tutorial for Mobile Robot Localization,” Centre for Intelligent Machines, McGill University, 3480 University St., Montreal, Québec, CANADA H3A 2A7, TR-CIM-04-02, 2004.
[BibTeX]
@TechReport{ rekleitis04,
author  = {Rekleitis, Ioannis},
date-modified  = {2017-12-01 11:48:12 +0000},
institution  = {Centre for Intelligent Machines, McGill University},
month = {January},
number  = {TR-CIM-04-02},
title = {A Particle Filter Tutorial for Mobile Robot Localization},
year = {2004}
}

• D. Hähnel, W. Burgard, D. Fox, K. Fishkin, and M. Philipose, “Mapping and localization with RFID technology,” in Robotics and Automation, 2004. Proceedings. ICRA ’04. 2004 IEEE International Conference on, 2004, pp. 1015-1020.
[BibTeX]
@InProceedings{ haehnel,
author  = {H{\"a}hnel, D. and Burgard, W. and Fox, D. and Fishkin, K.
and Philipose, M.},
booktitle  = {Robotics and Automation, 2004. Proceedings. ICRA '04. 2004
IEEE International Conference on},
month = {April},
pages = {1015-1020},
title = {{Mapping and localization with RFID technology}},
volume  = {1},
year = {2004}
}

• A. I. Eliazar and R. Parr, “Learning Probabilistic Motion Models for Mobile Robots,” in Proceedings of the Twenty-first International Conference on Machine Learning, 2004.
[BibTeX]
@InProceedings{ eliazar,
author  = {Eliazar, A. I. and Parr, R.},
booktitle  = {Proceedings of the Twenty-first International Conference
on Machine Learning},
title = {{Learning Probabilistic Motion Models for Mobile Robots}},
year = {2004}
}

### 2001

• D. Fox, S. Thrun, W. Burgard, and F. Dellaert, “Particle Filters for Mobile Robot Localization,” in Sequential Monte Carlo Methods in Practice, A. Doucet, N. de Freitas, and N. Gordon, Eds., New York, NY: Springer New York, 2001, p. 401–428.
[BibTeX] [Abstract]

This chapter investigates the utility of particle filters in the context of mobile robotics. In particular, we report results of applying particle filters to the problem of mobile robot localization, which is the problem of estimating a robot’s pose relative to a map of its environment. The localization problem is a key one in mobile robotics, because it plays a fundamental role in various successful mobile robot systems; see e.g., (Cox and Wilfong 1990, Fukuda, Ito, Oota, Arai, Abe, Tanake and Tanaka 1993, Hinkel and Knieriemen 1988, Leonard, Durrant-Whyte and Cox 1992, Rencken 1993, Simmons, Goodwin, Haigh, Koenig and O’Sullivan 1997, Weiß, Wetzler and von Puttkamer 1994) and various chapters in (Borenstein, Everett and Feng 1996) and (Kortenkamp, Bonasso and Murphy 1998). Occasionally, it has been referred to as “the most fundamental problem to providing a mobile robot with autonomous capabilities” (Cox 1991).

@InBook{ fox2001,
abstract  = {This chapter investigates the utility of particle filters
in the context of mobile robotics. In particular, we report
results of applying particle filters to the problem of
mobile robot localization, which is the problem of
estimating a robot's pose relative to a map of its
environment. The localization problem is a key one in
mobile robotics, because it plays a fundamental role in
various successful mobile robot systems; see e.g., (Cox and
Wilfong 1990, Fukuda, Ito, Oota, Arai, Abe, Tanake and
Tanaka 1993, Hinkel and Knieriemen 1988, Leonard,
Durrant-Whyte and Cox 1992, Rencken 1993, Simmons, Goodwin,
Haigh, Koenig and O'Sullivan 1997, Wei{\ss}, Wetzler and
von Puttkamer 1994) and various chapters in (Borenstein,
Everett and Feng 1996) and (Kortenkamp, Bonasso and Murphy
1998). Occasionally, it has been referred to as ``the most
fundamental problem to providing a mobile robot with
autonomous capabilities'' (Cox 1991).},
address  = {New York, NY},
author  = {Fox, D. and Thrun, S. and Burgard, W. and Dellaert, F.},
booktitle  = {Sequential Monte Carlo Methods in Practice},
editor  = {Doucet, A. and de Freitas, N. and Gordon, N.},
pages = {401--428},
publisher  = {Springer New York},
title = {{Particle Filters for Mobile Robot Localization}},
year = {2001}
}
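
Since this chapter is the particle-filter reference for the localization work above, a minimal 1-D Monte Carlo localization loop may help: predict with a noisy motion model, weight each particle by the measurement likelihood, and resample. All numbers are illustrative, not from the chapter.

```python
# Minimal 1-D Monte Carlo localization sketch (illustrative parameters).
import numpy as np

rng = np.random.default_rng(2)
wall, n = 10.0, 1000
particles = rng.uniform(0, wall, n)            # unknown initial pose
true_pose = 2.0

for _ in range(10):
    true_pose += 0.5                           # robot moves 0.5 m
    particles += 0.5 + rng.normal(0, 0.05, n)  # noisy motion model
    z = wall - true_pose + rng.normal(0, 0.1)  # noisy range to the wall
    w = np.exp(-0.5 * ((wall - particles - z) / 0.1) ** 2)
    w /= w.sum()
    particles = rng.choice(particles, n, p=w)  # importance resampling

print(f"estimate: {particles.mean():.2f}, truth: {true_pose:.2f}")
```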