Technical Reports

2018

  • D. Vukcevic, “Extending a constrained hybrid dynamics solver for energy-optimal robot motions in the presence of static friction,” Hochschule Bonn-Rhein-Sieg, 2018. doi:10.18418/978-3-96043-063-6
    [BibTeX]
    @techreport{Vukcevic2018,
    Author = {Vukcevic, Djordje},
    Doi = {10.18418/978-3-96043-063-6},
    Institution = {Hochschule Bonn-Rhein-Sieg},
    Title = {Extending a constrained hybrid dynamics solver for energy-optimal robot motions in the presence of static friction},
    Year = {2018}}

2017

  • L. O. Arriaga Camargo, “Scene understanding through Deep Learning,” Hochschule Bonn-Rhein-Sieg, 2017. doi:10.18418/978-3-96043-045-2
    [BibTeX]
    @techreport{ArriagaCamargo2017,
    Author = {Arriaga Camargo, Luis Octavio},
    Doi = {10.18418/978-3-96043-045-2},
    Institution = {Hochschule Bonn-Rhein-Sieg},
    Title = {{Scene understanding through Deep Learning}},
    Year = {2017}}

  • T. C. Hassan, “Recognizing Emotions Conveyed through Facial Expressions,” Hochschule Bonn-Rhein-Sieg, 2017. doi:10.18418/978-3-96043-047-6
    [BibTeX]
    @techreport{Hassan2017a,
    Author = {Hassan, Teena Chakkalayil},
    Doi = {10.18418/978-3-96043-047-6},
    Institution = {Hochschule Bonn-Rhein-Sieg},
    Title = {{Recognizing Emotions Conveyed through Facial Expressions}},
    Year = {2017}}

  • T. C. Hassan, “Taxonomy and Technology Mapping of Mobility Assistance Systems,” Hochschule Bonn-Rhein-Sieg, 2017. doi:10.18418/978-3-96043-046-9
    [BibTeX]
    @techreport{Hassan2017,
    Author = {Hassan, Teena Chakkalayil},
    Doi = {10.18418/978-3-96043-046-9},
    Institution = {Hochschule Bonn-Rhein-Sieg},
    Title = {{Taxonomy and Technology Mapping of Mobility Assistance Systems}},
    Year = {2017}}

2015

  • M. A. Valdenegro Toro, “Fast Radial Symmetry Detection for Traffic Sign Recognition,” Department of Computer Science, Bonn-Rhein-Sieg University of Applied Sciences, Sankt Augustin, Germany, 2015.
    [BibTeX] [Abstract] [Download PDF]

    Advanced driver assistance systems (ADAS) are technology systems and devices designed as an aid to the driver of a vehicle. One of the critical components of any ADAS is the traffic sign recognition module. For this module to achieve real-time performance, some preprocessing of the input images must be done, which consists of a traffic sign detection (TSD) algorithm that reduces the possible hypothesis space. The performance of the TSD algorithm is critical. One of the best algorithms used for TSD is the Radial Symmetry Detector (RSD), which can detect both circular [7] and polygonal traffic signs [5]. This algorithm runs in real time on high-end personal computers, but its computational performance must be improved in order to run in real time on embedded computer platforms. To improve the computational performance of the RSD, we propose a multiscale approach and the removal of a Gaussian smoothing filter used in this algorithm. We evaluate the performance in terms of computation times, detection rates, and false positive rates on a synthetic image dataset and on the German Traffic Sign Detection Benchmark (GTSDB) [29]. We observed significant speedups compared to the original algorithm. Our Improved Radial Symmetry Detector is up to 5.8 times faster than the original at detecting circles, up to 3.8 times faster at triangle detection, 2.9 times faster at square detection, and 2.4 times faster at octagon detection. All of these measurements were observed with better detection and false positive rates than the original RSD. When evaluated on the GTSDB, we observed smaller speedups, in the range of 1.6 to 2.3 times faster for circle and regular polygon detection; for circle detection we observed a lower detection rate than the original algorithm, while for regular polygon detection we always observed better detection rates. False positive rates were high, in the range of 80% to 90%. We conclude that our Improved Radial Symmetry Detector is a significant improvement over the Radial Symmetry Detector, both for circle and regular polygon detection. We expect that our improved algorithm will lead the way to real-time traffic sign detection and recognition on embedded computer platforms.

    @techreport{ValdenegroToro2015a,
    Abstract = {Advanced driver assistance systems (ADAS) are technology systems and devices designed as an aid to the driver of a vehicle. One of the critical components of any ADAS is the traffic sign recognition module. For this module to achieve real-time performance, some preprocessing of the input images must be done, which consists of a traffic sign detection (TSD) algorithm that reduces the possible hypothesis space. The performance of the TSD algorithm is critical. One of the best algorithms used for TSD is the Radial Symmetry Detector (RSD), which can detect both circular [7] and polygonal traffic signs [5]. This algorithm runs in real time on high-end personal computers, but its computational performance must be improved in order to run in real time on embedded computer platforms. To improve the computational performance of the RSD, we propose a multiscale approach and the removal of a Gaussian smoothing filter used in this algorithm. We evaluate the performance in terms of computation times, detection rates, and false positive rates on a synthetic image dataset and on the German Traffic Sign Detection Benchmark (GTSDB) [29]. We observed significant speedups compared to the original algorithm. Our Improved Radial Symmetry Detector is up to 5.8 times faster than the original at detecting circles, up to 3.8 times faster at triangle detection, 2.9 times faster at square detection, and 2.4 times faster at octagon detection. All of these measurements were observed with better detection and false positive rates than the original RSD. When evaluated on the GTSDB, we observed smaller speedups, in the range of 1.6 to 2.3 times faster for circle and regular polygon detection; for circle detection we observed a lower detection rate than the original algorithm, while for regular polygon detection we always observed better detection rates. False positive rates were high, in the range of 80% to 90%. We conclude that our Improved Radial Symmetry Detector is a significant improvement over the Radial Symmetry Detector, both for circle and regular polygon detection. We expect that our improved algorithm will lead the way to real-time traffic sign detection and recognition on embedded computer platforms.},
    Address = {Sankt Augustin, Germany},
    Author = {Valdenegro Toro, Matias Alejandro},
    Institution = {Department of Computer Science, Bonn-Rhein-Sieg University of Applied Sciences},
    Month = {August},
    Title = {{Fast Radial Symmetry Detection for Traffic Sign Recognition}},
    Url = {https://opus.bib.hochschule-bonn-rhein-sieg.de/files/1592/brsu_techreport_04_2015_Matias_Valdenegro_pdf1-4.pdf},
    Year = {2015}}

  • M. A. Valdenegro Toro, “Fast Text Detection for Road Scenes,” Department of Computer Science, Bonn-Rhein-Sieg University of Applied Sciences, Sankt Augustin, Germany, 2015.
    [BibTeX] [Abstract] [Download PDF]

    Extraction of text information from visual sources is an important component of many modern applications, for example, extracting the text from traffic signs in a road scene in an autonomous vehicle. For natural images or road scenes this is an unsolved problem. In this thesis the use of histograms of stroke widths (HSW) for character and non-character region classification is presented. Stroke widths are extracted using two methods: one based on the Stroke Width Transform and another based on run lengths. The HSW is combined with two simple region features, the aspect and occupancy ratios, and a linear SVM is then used as the classifier. One advantage of our method over the state of the art is that it is script-independent and can also be used to verify detected text regions in order to reduce false positives. Our experiments on generated datasets of Latin, CJK, Hiragana, and Katakana characters show that the HSW is able to correctly classify at least 90% of the character regions; a similar figure is obtained for non-character regions. This performance is also obtained when training the HSW with one script and testing with a different one, and even when characters are rotated. On the English and Kannada portions of the Chars74K dataset we obtained over 95% correctly classified character regions. The use of raycasting for text line grouping is also proposed. By combining it with our HSW-based character classifier, a text detector based on Maximally Stable Extremal Regions (MSER) was implemented. The text detector was evaluated on our own dataset of road scenes from the German Autobahn, where 65% precision and 72% recall with an F-score of 69% were obtained. Using the HSW as a text verifier increases precision while slightly reducing recall. Our HSW feature allows the building of a script-independent, low-parameter-count classifier for character and non-character regions.

    @techreport{ValdenegroToro2015,
    Abstract = {Extraction of text information from visual sources is an important component of many modern applications, for example, extracting the text from traffic signs in a road scene in an autonomous vehicle. For natural images or road scenes this is an unsolved problem. In this thesis the use of histograms of stroke widths (HSW) for character and non-character region classification is presented. Stroke widths are extracted using two methods: one based on the Stroke Width Transform and another based on run lengths. The HSW is combined with two simple region features, the aspect and occupancy ratios, and a linear SVM is then used as the classifier. One advantage of our method over the state of the art is that it is script-independent and can also be used to verify detected text regions in order to reduce false positives. Our experiments on generated datasets of Latin, CJK, Hiragana, and Katakana characters show that the HSW is able to correctly classify at least 90% of the character regions; a similar figure is obtained for non-character regions. This performance is also obtained when training the HSW with one script and testing with a different one, and even when characters are rotated. On the English and Kannada portions of the Chars74K dataset we obtained over 95% correctly classified character regions. The use of raycasting for text line grouping is also proposed. By combining it with our HSW-based character classifier, a text detector based on Maximally Stable Extremal Regions (MSER) was implemented. The text detector was evaluated on our own dataset of road scenes from the German Autobahn, where 65% precision and 72% recall with an F-score of 69% were obtained. Using the HSW as a text verifier increases precision while slightly reducing recall. Our HSW feature allows the building of a script-independent, low-parameter-count classifier for character and non-character regions.},
    Address = {Sankt Augustin, Germany},
    Author = {Valdenegro Toro, Matias Alejandro},
    Institution = {Department of Computer Science, Bonn-Rhein-Sieg University of Applied Sciences},
    Month = {August},
    Title = {{Fast Text Detection for Road Scenes}},
    Url = {https://opus.bib.hochschule-bonn-rhein-sieg.de/files/1593/brsu_techreport_05_2015_Matias_Valdenegro_pdf1-4.pdf},
    Year = {2015}}

2014

  • S. Alexandrov, “Geometric Segmentation of Point Cloud Data by Spectral Analysis,” Department of Computer Science, Bonn-Rhein-Sieg University of Applied Sciences, Sankt Augustin, Germany, 2014.
    [BibTeX] [Abstract] [Download PDF]

    A principal step towards solving diverse perception problems is segmentation. Many algorithms benefit from initially partitioning input point clouds into objects and their parts. In accordance with the cognitive sciences, the segmentation goal may be formulated as splitting point clouds into locally smooth convex areas enclosed by sharp concave boundaries. This goal is based on purely geometrical considerations and does not incorporate any constraints, or semantics, of the scene and objects being segmented, which makes it very general and widely applicable. In this work we perform geometrical segmentation of point cloud data according to the stated goal. The data is mapped onto a graph and the task of graph partitioning is considered. We formulate an objective function and derive a discrete optimization problem based on it. Finding the globally optimal solution is an NP-complete problem; in order to circumvent this, spectral methods are applied. Two algorithms that implement the divisive hierarchical clustering scheme are proposed. They derive a graph partition by analyzing the eigenvectors obtained through spectral relaxation. The specifics of our application domain are used to automatically introduce cannot-link constraints into the clustering problem. The algorithms function in a completely unsupervised manner and make no assumptions about the shapes of the objects and structures that they segment. Three publicly available datasets with cluttered real-world scenes and an abundance of box-like, cylindrical, and free-form objects are used to demonstrate convincing performance. Preliminary results of this thesis have been contributed to the International Conference on Intelligent Autonomous Systems (IAS-13).

    @techreport{Alexandrov2014,
    Abstract = {A principal step towards solving diverse perception problems is segmentation. Many algorithms benefit from initially partitioning input point clouds into objects and their parts. In accordance with the cognitive sciences, the segmentation goal may be formulated as splitting point clouds into locally smooth convex areas enclosed by sharp concave boundaries. This goal is based on purely geometrical considerations and does not incorporate any constraints, or semantics, of the scene and objects being segmented, which makes it very general and widely applicable. In this work we perform geometrical segmentation of point cloud data according to the stated goal. The data is mapped onto a graph and the task of graph partitioning is considered. We formulate an objective function and derive a discrete optimization problem based on it. Finding the globally optimal solution is an NP-complete problem; in order to circumvent this, spectral methods are applied. Two algorithms that implement the divisive hierarchical clustering scheme are proposed. They derive a graph partition by analyzing the eigenvectors obtained through spectral relaxation. The specifics of our application domain are used to automatically introduce cannot-link constraints into the clustering problem. The algorithms function in a completely unsupervised manner and make no assumptions about the shapes of the objects and structures that they segment. Three publicly available datasets with cluttered real-world scenes and an abundance of box-like, cylindrical, and free-form objects are used to demonstrate convincing performance. Preliminary results of this thesis have been contributed to the International Conference on Intelligent Autonomous Systems (IAS-13).},
    Address = {Sankt Augustin, Germany},
    Author = {Alexandrov, Sergey},
    Institution = {Department of Computer Science, Bonn-Rhein-Sieg University of Applied Sciences},
    Month = {September},
    Title = {{Geometric Segmentation of Point Cloud Data by Spectral Analysis}},
    Url = {https://opus.bib.hochschule-bonn-rhein-sieg.de/files/22/brsu_techreport_02_2014_Sergey_Alexandrov_pdf_1_4.pdf},
    Year = {2014}}

  • S. Schneider, “Design of a declarative language for task-oriented grasping and tool-use with dextrous robotic hands,” Department of Computer Science, Bonn-Rhein-Sieg University of Applied Sciences, Sankt Augustin, Germany, 2014.
    [BibTeX] [Abstract] [Download PDF]

    Apparently simple manipulation tasks for a human, such as transportation or tool use, are challenging to replicate in an autonomous service robot. Nevertheless, dextrous manipulation is an important aspect of many daily tasks for a robot. While it is possible to manufacture special-purpose hands for one specific task in industrial settings, a general-purpose service robot in households must have flexible hands which can adapt to many tasks. Intelligently using tools enables the robot to perform tasks more efficiently and even beyond its designed capabilities. In this work a declarative domain-specific language, called the Grasp Domain Definition Language (GDDL), is presented that allows the specification of grasp planning problems independently of a specific grasp planner. This design goal resembles the idea of the Planning Domain Definition Language (PDDL). The specification of GDDL requires a detailed analysis of the research in grasping in order to identify best practices in the different domains that contribute to a grasp. These domains describe, for instance, physical as well as semantic properties of objects and hands. Grasping always has a purpose, which is captured in the task domain definition. It enables the robot to grasp an object in a task-dependent manner. Suitable representations in these domains have to be identified and formalized, for which a domain-driven software engineering approach is applied. This kind of modeling allows the specification of constraints which guide the composition of domain entity specifications. The domain-driven approach fosters reuse of domain concepts, while the constraints enable the validation of models already at design time. A proof-of-concept implementation of GDDL in the GraspIt! grasp planner is developed. Preliminary results of this thesis have been published and presented at the IEEE International Conference on Robotics and Automation (ICRA).

    @techreport{Schneider2014b,
    Abstract = {Apparently simple manipulation tasks for a human, such as transportation or tool use, are challenging to replicate in an autonomous service robot. Nevertheless, dextrous manipulation is an important aspect of many daily tasks for a robot. While it is possible to manufacture special-purpose hands for one specific task in industrial settings, a general-purpose service robot in households must have flexible hands which can adapt to many tasks. Intelligently using tools enables the robot to perform tasks more efficiently and even beyond its designed capabilities. In this work a declarative domain-specific language, called the Grasp Domain Definition Language (GDDL), is presented that allows the specification of grasp planning problems independently of a specific grasp planner. This design goal resembles the idea of the Planning Domain Definition Language (PDDL). The specification of GDDL requires a detailed analysis of the research in grasping in order to identify best practices in the different domains that contribute to a grasp. These domains describe, for instance, physical as well as semantic properties of objects and hands. Grasping always has a purpose, which is captured in the task domain definition. It enables the robot to grasp an object in a task-dependent manner. Suitable representations in these domains have to be identified and formalized, for which a domain-driven software engineering approach is applied. This kind of modeling allows the specification of constraints which guide the composition of domain entity specifications. The domain-driven approach fosters reuse of domain concepts, while the constraints enable the validation of models already at design time. A proof-of-concept implementation of GDDL in the GraspIt! grasp planner is developed. Preliminary results of this thesis have been published and presented at the IEEE International Conference on Robotics and Automation (ICRA).},
    Address = {Sankt Augustin, Germany},
    Author = {Schneider, Sven},
    Institution = {Department of Computer Science, Bonn-Rhein-Sieg University of Applied Sciences},
    Keywords = {Grasping, Grasp Planner, Grasp Domain Definition Language, GDDL, Domain-Specific Language},
    Month = {September},
    School = {Bonn-Rhein-Sieg University of Applied Sciences},
    Title = {{Design of a declarative language for task-oriented grasping and tool-use with dextrous robotic hands}},
    Url = {https://opus.bib.hochschule-bonn-rhein-sieg.de/files/17/brsu_techreport_01_2014_Sven_Schneider_pdf1_4_online.pdf},
    Year = {2014}}

2012

  • N. Akhtar, “Improving reliability of mobile manipulators against unknown external faults,” Department of Computer Science, Bonn-Rhein-Sieg University of Applied Sciences, Sankt Augustin, Germany, March 2012.
    [BibTeX] [Abstract] [Download PDF]

    A robot (e.g. a mobile manipulator) that interacts with its environment to perform its tasks often faces situations in which it is unable to achieve its goals despite the perfect functioning of its sensors and actuators. These situations occur when the behavior of the object(s) manipulated by the robot deviates from its expected course because of unforeseeable circumstances. These deviations are experienced by the robot as unknown external faults. In this work we present an approach that increases the reliability of mobile manipulators against unknown external faults. This approach focuses on those actions of manipulators which involve releasing an object. The proposed approach, which is triggered after the detection of a fault, is formulated as a three-step scheme that takes a definition of a planning operator and an example simulation as its inputs. The planning operator corresponds to the action that fails because of the fault occurrence, whereas the example simulation shows the desired/expected behavior of the objects for the same action. In its first step, the scheme finds a description of the expected behavior of the objects in terms of logical atoms (i.e. a description vocabulary). The description of the simulation is used by the second step to find limits of the parameters of the manipulated object. These parameters are the variables that define the releasing state of the object. Using randomly chosen values of the parameters within these limits, this step creates different examples of the releasing state of the object. Each of these examples is labelled as desired or undesired according to the behavior exhibited by the object (in the simulation) when the object is released in the state corresponding to the example. The description vocabulary is also used in labeling the examples autonomously. In the third step, an algorithm (N-Bins) uses the labelled examples to suggest the state for the object in which releasing it avoids the occurrence of unknown external faults. The proposed N-Bins algorithm can also be used for binary classification problems. Therefore, in our experiments with the proposed approach we also test its prediction ability along with the analysis of the results of our approach. The results show that under the circumstances peculiar to our approach, the N-Bins algorithm shows reasonable prediction accuracy where other state-of-the-art classification algorithms fail to do so. Thus, N-Bins also extends the ability of a robot to predict the behavior of the object in order to avoid unknown external faults. In this work we use the simulation environment OpenRAVE, which uses the physics engine ODE to simulate the dynamics of rigid bodies.

    @techreport{Akhtar2012,
    Abstract = {A robot (e.g. a mobile manipulator) that interacts with its environment to perform its tasks often faces situations in which it is unable to achieve its goals despite the perfect functioning of its sensors and actuators. These situations occur when the behavior of the object(s) manipulated by the robot deviates from its expected course because of unforeseeable circumstances. These deviations are experienced by the robot as unknown external faults. In this work we present an approach that increases the reliability of mobile manipulators against unknown external faults. This approach focuses on those actions of manipulators which involve releasing an object. The proposed approach, which is triggered after the detection of a fault, is formulated as a three-step scheme that takes a definition of a planning operator and an example simulation as its inputs. The planning operator corresponds to the action that fails because of the fault occurrence, whereas the example simulation shows the desired/expected behavior of the objects for the same action. In its first step, the scheme finds a description of the expected behavior of the objects in terms of logical atoms (i.e. a description vocabulary). The description of the simulation is used by the second step to find limits of the parameters of the manipulated object. These parameters are the variables that define the releasing state of the object. Using randomly chosen values of the parameters within these limits, this step creates different examples of the releasing state of the object. Each of these examples is labelled as desired or undesired according to the behavior exhibited by the object (in the simulation) when the object is released in the state corresponding to the example. The description vocabulary is also used in labeling the examples autonomously. In the third step, an algorithm (N-Bins) uses the labelled examples to suggest the state for the object in which releasing it avoids the occurrence of unknown external faults. The proposed N-Bins algorithm can also be used for binary classification problems. Therefore, in our experiments with the proposed approach we also test its prediction ability along with the analysis of the results of our approach. The results show that under the circumstances peculiar to our approach, the N-Bins algorithm shows reasonable prediction accuracy where other state-of-the-art classification algorithms fail to do so. Thus, N-Bins also extends the ability of a robot to predict the behavior of the object in order to avoid unknown external faults. In this work we use the simulation environment OpenRAVE, which uses the physics engine ODE to simulate the dynamics of rigid bodies.},
    Address = {Sankt Augustin, Germany},
    Author = {Akhtar, Naveed},
    Institution = {Department of Computer Science, Bonn-Rhein-Sieg University of Applied Sciences},
    Keywords = {binary classification, external faults, mobile manipulators},
    Number = {March},
    Title = {{Improving reliability of mobile manipulators against unknown external faults}},
    Url = {https://opus.bib.hochschule-bonn-rhein-sieg.de/files/7/Naveedthesis.pdf},
    Year = {2012},
    Bdsk-Url-1 = {https://opus.bib.hochschule-bonn-rhein-sieg.de/files/7/Naveedthesis.pdf}}

  • C. A. Mueller, “3D Object Shape Categorization in Domestic Environments,” Department of Computer Science, Bonn-Rhein-Sieg University of Applied Sciences, Sankt Augustin, Germany, February, 2012.
    [BibTeX] [Abstract] [Download PDF]

    In service robotics, hardly any task can be performed without the involvement of objects, as in searching, fetching or delivering tasks. Service robots are supposed to efficiently capture object-related information in real-world scenes while, for instance, coping with clutter and noise, and they must also be flexible and scalable enough to memorize a large set of objects. Besides object perception tasks such as object recognition, where the object’s identity is analyzed, object categorization is an important visual object perception cue that associates unknown object instances, based e.g. on their appearance or shape, with a corresponding category. We present a pipeline from the detection of object candidates in a domestic scene, through their description, to the final shape categorization of the detected candidates. In order to detect object-related information in cluttered domestic environments, an object detection method is proposed that copes with multiple plane and object occurrences, as in cluttered scenes with shelves. Further, a surface reconstruction method based on Growing Neural Gas (GNG), in combination with a shape distribution-based descriptor, is proposed to reflect the shape characteristics of object candidates. Beneficial properties provided by the GNG, such as smoothing and denoising effects, support a stable description of the object candidates, which also leads to a more stable learning of categories. Based on the presented descriptor, a dictionary approach combined with a supervised shape learner is presented to learn prediction models of shape categories. Experimental results are shown for different shapes related to domestically occurring object shape categories such as cup, can, box, bottle, bowl, plate and ball. A classification accuracy of about 90% and a sequential execution time of less than two seconds for the categorization of an unknown object are achieved, which demonstrates the soundness of the proposed system design. Additional results are shown on object tracking and false-positive handling to enhance the robustness of the categorization. An initial approach towards incremental shape category learning is also proposed, which learns a new category based on the set of previously learned shape categories.

    @techreport{Muller2012,
    Abstract = {In service robotics, hardly any task can be performed without the involvement of objects, as in searching, fetching or delivering tasks. Service robots are supposed to efficiently capture object-related information in real-world scenes while, for instance, coping with clutter and noise, and they must also be flexible and scalable enough to memorize a large set of objects. Besides object perception tasks such as object recognition, where the object's identity is analyzed, object categorization is an important visual object perception cue that associates unknown object instances, based e.g. on their appearance or shape, with a corresponding category. We present a pipeline from the detection of object candidates in a domestic scene, through their description, to the final shape categorization of the detected candidates. In order to detect object-related information in cluttered domestic environments, an object detection method is proposed that copes with multiple plane and object occurrences, as in cluttered scenes with shelves. Further, a surface reconstruction method based on Growing Neural Gas (GNG), in combination with a shape distribution-based descriptor, is proposed to reflect the shape characteristics of object candidates. Beneficial properties provided by the GNG, such as smoothing and denoising effects, support a stable description of the object candidates, which also leads to a more stable learning of categories. Based on the presented descriptor, a dictionary approach combined with a supervised shape learner is presented to learn prediction models of shape categories. Experimental results are shown for different shapes related to domestically occurring object shape categories such as cup, can, box, bottle, bowl, plate and ball. A classification accuracy of about 90\% and a sequential execution time of less than two seconds for the categorization of an unknown object are achieved, which demonstrates the soundness of the proposed system design. Additional results are shown on object tracking and false-positive handling to enhance the robustness of the categorization. An initial approach towards incremental shape category learning is also proposed, which learns a new category based on the set of previously learned shape categories.},
    Address = {Sankt Augustin, Germany},
    Author = {Mueller, Christian Atanas},
    Institution = {Department of Computer Science, Bonn-Rhein-Sieg University of Applied Sciences},
    Number = {February},
    Title = {{3D Object Shape Categorization in Domestic Environments}},
    Url = {https://opus.bib.hochschule-bonn-rhein-sieg.de/files/9/brsu_techreport_01_2012_Christian_Atanas_Mueller.pdf},
    Year = {2012},
    Bdsk-Url-1 = {https://opus.bib.hochschule-bonn-rhein-sieg.de/files/9/brsu_techreport_01_2012_Christian_Atanas_Mueller.pdf}}

  • A. Kuestenmacher, “Methods for Failure Detection for Mobile Manipulation,” Department of Computer Science, Bonn-Rhein-Sieg University of Applied Sciences, Sankt Augustin, Germany, March, 2012.
    [BibTeX] [Abstract] [Download PDF]

    The work presented in this paper focuses on the comparison of well-known and new fault-diagnosis algorithms in the robot domain. The main challenge for fault diagnosis is to allow the robot to effectively cope not only with internal hardware and software faults but also with external disturbances and errors from dynamic and complex environments. Based on a study of the literature covering fault-diagnosis algorithms, I selected four of these methods, based on both linear and non-linear models, analysed them and implemented them in a mathematical robot model representing a four-wheeled omnidirectional robot. In experiments, I tested the ability of the algorithms to detect and identify abnormal behaviour and to optimize the model parameters for the given training data. The final goal was to point out the strengths of each algorithm and to determine which method would best suit the demands of fault diagnosis for a particular robot.

    @techreport{Kustenmacher2012,
    Abstract = {The work presented in this paper focuses on the comparison of well-known and new fault-diagnosis algorithms in the robot domain. The main challenge for fault diagnosis is to allow the robot to effectively cope not only with internal hardware and software faults but also with external disturbances and errors from dynamic and complex environments. Based on a study of the literature covering fault-diagnosis algorithms, I selected four of these methods, based on both linear and non-linear models, analysed them and implemented them in a mathematical robot model representing a four-wheeled omnidirectional robot. In experiments, I tested the ability of the algorithms to detect and identify abnormal behaviour and to optimize the model parameters for the given training data. The final goal was to point out the strengths of each algorithm and to determine which method would best suit the demands of fault diagnosis for a particular robot.},
    Address = {Sankt Augustin, Germany},
    Author = {Kuestenmacher, Anastassia},
    Institution = {Department of Computer Science, Bonn-Rhein-Sieg University of Applied Sciences},
    Number = {March},
    Title = {{Methods for Failure Detection for Mobile Manipulation}},
    Url = {https://opus.bib.hochschule-bonn-rhein-sieg.de/files/11/brsu_techreport_04_2012_Anastassia_Kuestenmacher.pdf},
    Year = {2012},
    Bdsk-Url-1 = {https://opus.bib.hochschule-bonn-rhein-sieg.de/files/11/brsu_techreport_04_2012_Anastassia_Kuestenmacher.pdf}}

  • F. Hegger, “3D People Detection in Domestic Environments,” Department of Computer Science, Bonn-Rhein-Sieg University of Applied Sciences, Sankt Augustin, Germany, March, 2012.
    [BibTeX] [Abstract] [Download PDF]

    The ability to detect people has become a crucial subtask, especially for robotic systems which aim at applications in public or domestic environments. Robots already provide their services, e.g., in real home improvement markets, guiding people to a desired product. In such a scenario, many internal robot tasks would benefit from knowledge of the number and positions of people in the vicinity. Navigation, for example, could treat them as dynamically moving objects and also predict their next motion directions in order to compute a much safer path. Or the robot could specifically approach customers and offer its services. This requires detecting a person or even a group of people within a reasonable range in front of the robot. Challenges of such a real-world task include changing lighting conditions, a dynamic environment and varying human shapes. In this thesis, a 3D people detection approach based on point cloud data provided by the Microsoft Kinect is implemented and integrated on a mobile service robot. A top-down/bottom-up segmentation is applied to increase the system’s flexibility and provide the capability to detect people even if they are partially occluded. A feature set is proposed to detect people in various pose configurations and motions using a machine learning technique. The system can detect people up to a distance of 5 meters. The experimental evaluation compared different machine learning techniques and showed that standing people can be detected at a rate of 87.29% and sitting people at 74.94% using a Random Forest classifier. Certain objects caused several false detections; to eliminate those, a verification step is proposed which further evaluates the person’s shape in 2D space. The detection component has been implemented both as a sequential application (frame rate of 10 Hz) and as a parallel application (frame rate of 16 Hz). Finally, the component has been embedded into a complete people-search task which explores the environment, finds all people and approaches each detected person.

    @techreport{Hegger2012,
    Abstract = {The ability to detect people has become a crucial subtask, especially for robotic systems which aim at applications in public or domestic environments. Robots already provide their services, e.g., in real home improvement markets, guiding people to a desired product. In such a scenario, many internal robot tasks would benefit from knowledge of the number and positions of people in the vicinity. Navigation, for example, could treat them as dynamically moving objects and also predict their next motion directions in order to compute a much safer path. Or the robot could specifically approach customers and offer its services. This requires detecting a person or even a group of people within a reasonable range in front of the robot. Challenges of such a real-world task include changing lighting conditions, a dynamic environment and varying human shapes. In this thesis, a 3D people detection approach based on point cloud data provided by the Microsoft Kinect is implemented and integrated on a mobile service robot. A top-down/bottom-up segmentation is applied to increase the system's flexibility and provide the capability to detect people even if they are partially occluded. A feature set is proposed to detect people in various pose configurations and motions using a machine learning technique. The system can detect people up to a distance of 5 meters. The experimental evaluation compared different machine learning techniques and showed that standing people can be detected at a rate of 87.29\% and sitting people at 74.94\% using a Random Forest classifier. Certain objects caused several false detections; to eliminate those, a verification step is proposed which further evaluates the person's shape in 2D space. The detection component has been implemented both as a sequential application (frame rate of 10 Hz) and as a parallel application (frame rate of 16 Hz). Finally, the component has been embedded into a complete people-search task which explores the environment, finds all people and approaches each detected person.},
    Address = {Sankt Augustin, Germany},
    Author = {Hegger, Frederik},
    Institution = {Department of Computer Science, Bonn-Rhein-Sieg University of Applied Sciences},
    Keywords = {3D segmentation,local surface normals,random forest},
    Number = {March},
    Title = {{3D People Detection in Domestic Environments}},
    Url = {https://opus.bib.hochschule-bonn-rhein-sieg.de/files/8/brsu_techreport_02_2012_Frederik_Hegger.pdf},
    Year = {2012},
    Bdsk-Url-1 = {https://opus.bib.hochschule-bonn-rhein-sieg.de/files/8/brsu_techreport_02_2012_Frederik_Hegger.pdf}}

2011

  • A. Bochem, “Active Tracking with Accelerated Image Processing in Hardware,” Department of Computer Science, Bonn-Rhein-Sieg University of Applied Sciences, Sankt Augustin, Germany, January, 2011.
    [BibTeX] [Abstract] [Download PDF]

    This thesis presents the implementation and validation of image processing problems in hardware in order to estimate the performance and precision gains. It compares an implementation of the addressed problem on a Field Programmable Gate Array (FPGA) with a software implementation for a General Purpose Processor (GPP) architecture. For both solutions, the implementation cost of development is an important aspect in the validation. The analysis of the flexibility and extensibility that can be achieved by a modular implementation of the FPGA design was another major aspect. This work builds upon approaches from previous work, which included the detection of Binary Large OBjects (BLOBs) in static images and continuous video streams [13, 15]. One problem addressed in this work is the tracking of the detected BLOBs in continuous image material. This has been implemented for both the FPGA platform and the GPP architecture, and both approaches have been compared with respect to performance and precision. This research project is motivated by the MI6 project of the Computer Vision research group at the Bonn-Rhein-Sieg University of Applied Sciences. The intent of the MI6 project is the tracking of a user in an immersive environment. The proposed solution is to attach a light-emitting device to the user and track the created light dots on the projection surface of the immersive environment. Having the center points of those light dots allows the estimation of the user’s position and orientation. One major issue that makes Computer Vision problems computationally expensive is the large amount of data that has to be processed in real time. Therefore, one major target for the implementation was to achieve a processing speed of more than 30 frames per second, which allows the system to provide feedback to the user with a response time faster than human visual perception. One problem that comes with the idea of using a light-emitting device to represent the user is the precision error. Depending on the resolution of the tracked projection surface of the immersive environment, a pixel might cover an area of several cm², so a precision error of only a few pixels might lead to an offset of several cm in the estimated user position. In this research work, a detection and tracking system for BLOBs on a Cyclone II FPGA from Altera has been developed and validated. The system supports different input devices for image acquisition and can perform detection and tracking for five to eight BLOBs. A further extension of the design has been evaluated and is possible with some constraints. Additional modules for compressing the image data based on run-length encoding, and for sub-pixel precision of the computed BLOB center points, have been designed. For comparison with the FPGA approach to BLOB tracking, a similar multi-threaded implementation in software has been realized. The system can transmit the detection or tracking results over two available communication interfaces, USB and RS232. The analysis of the hardware solution showed a precision for BLOB detection and tracking similar to that of the software approach. One problem is the sharp increase in allocated resources when extending the system to process more BLOBs. With one of the applied target platforms, the DE2-70 board from Altera, the BLOB detection could be extended to process up to thirty BLOBs. The implementation of the tracking approach in hardware required much more effort than the software solution; designing solutions to such high-level problems in hardware is, in this case, more expensive than a software implementation. The search and match steps of the tracking approach could be realized more efficiently and reliably in software. The additional pre-processing modules for sub-pixel precision and run-length encoding helped to increase the system’s performance and precision.

    @techreport{Bochem2011,
    Abstract = {This thesis presents the implementation and validation of image processing problems in hardware in order to estimate the performance and precision gains. It compares an implementation of the addressed problem on a Field Programmable Gate Array (FPGA) with a software implementation for a General Purpose Processor (GPP) architecture. For both solutions, the implementation cost of development is an important aspect in the validation. The analysis of the flexibility and extensibility that can be achieved by a modular implementation of the FPGA design was another major aspect. This work builds upon approaches from previous work, which included the detection of Binary Large OBjects (BLOBs) in static images and continuous video streams [13, 15]. One problem addressed in this work is the tracking of the detected BLOBs in continuous image material. This has been implemented for both the FPGA platform and the GPP architecture, and both approaches have been compared with respect to performance and precision. This research project is motivated by the MI6 project of the Computer Vision research group at the Bonn-Rhein-Sieg University of Applied Sciences. The intent of the MI6 project is the tracking of a user in an immersive environment. The proposed solution is to attach a light-emitting device to the user and track the created light dots on the projection surface of the immersive environment. Having the center points of those light dots allows the estimation of the user's position and orientation. One major issue that makes Computer Vision problems computationally expensive is the large amount of data that has to be processed in real time. Therefore, one major target for the implementation was to achieve a processing speed of more than 30 frames per second, which allows the system to provide feedback to the user with a response time faster than human visual perception. One problem that comes with the idea of using a light-emitting device to represent the user is the precision error. Depending on the resolution of the tracked projection surface of the immersive environment, a pixel might cover an area of several cm$^2$, so a precision error of only a few pixels might lead to an offset of several cm in the estimated user position. In this research work, a detection and tracking system for BLOBs on a Cyclone II FPGA from Altera has been developed and validated. The system supports different input devices for image acquisition and can perform detection and tracking for five to eight BLOBs. A further extension of the design has been evaluated and is possible with some constraints. Additional modules for compressing the image data based on run-length encoding, and for sub-pixel precision of the computed BLOB center points, have been designed. For comparison with the FPGA approach to BLOB tracking, a similar multi-threaded implementation in software has been realized. The system can transmit the detection or tracking results over two available communication interfaces, USB and RS232. The analysis of the hardware solution showed a precision for BLOB detection and tracking similar to that of the software approach. One problem is the sharp increase in allocated resources when extending the system to process more BLOBs. With one of the applied target platforms, the DE2-70 board from Altera, the BLOB detection could be extended to process up to thirty BLOBs. The implementation of the tracking approach in hardware required much more effort than the software solution; designing solutions to such high-level problems in hardware is, in this case, more expensive than a software implementation. The search and match steps of the tracking approach could be realized more efficiently and reliably in software. The additional pre-processing modules for sub-pixel precision and run-length encoding helped to increase the system's performance and precision.},
    Address = {Sankt Augustin, Germany},
    Author = {Bochem, Alexander},
    Institution = {Department of Computer Science, Bonn-Rhein-Sieg University of Applied Sciences},
    Number = {January},
    Publisher = {University of Applied Sciences Bonn-Rhein-Sieg},
    Title = {{Active Tracking with Accelerated Image Processing in Hardware}},
    Url = {https://opus.bib.hochschule-bonn-rhein-sieg.de/files/10/brsu_techreport_01_2011_Alexander_Bochem.pdf},
    Year = {2011},
    Bdsk-Url-1 = {https://opus.bib.hochschule-bonn-rhein-sieg.de/files/10/brsu_techreport_01_2011_Alexander_Bochem.pdf}}

  • N. Akhtar, “Fault reasoning based on Naive Physics,” Department of Computer Science, Bonn-Rhein-Sieg University of Applied Sciences, Sankt Augustin, Germany, April, 2011.
    [BibTeX] [Abstract] [Download PDF]

    A system that interacts with its environment can be much more robust if it is able to reason about the faults that occur in its environment despite the perfect functioning of its internal components. For robots, which interact with the same environment as human beings, this robustness can be obtained by incorporating human-like reasoning abilities in them. In this work we use naive physics to enable reasoning about external faults in robots. We propose an approach for diagnosing external faults that uses qualitative reasoning on naive physics concepts. These concepts are mainly individual properties of objects that define their state qualitatively. The reasoning process uses physical laws to generate possible states of the concerned object(s) which could result in a detected external fault. Since effective reasoning about any external fault requires information about the relevant properties and physical laws, we associate different properties and laws with the different types of faults that can be detected by a robot. The underlying ontology of this association is proposed on the basis of studies conducted (by other researchers) on the reasoning of physics novices about everyday physical phenomena. We also formalize some definitions of object properties in a small framework represented in first-order logic. These definitions represent the naive concepts behind the properties and are intended to be independent of objects and circumstances. The definitions in the framework illustrate our proposal of using different biased definitions of properties for different types of faults. In this work, we also present a brief review of important contributions in the area of naive/qualitative physics. This review helps in understanding the limitations of naive/qualitative physics in general. We also apply our approach to simple scenarios to assess its limitations in particular. Since this work was done independently of any particular real robotic system, it can be seen as a theoretical proof of concept for the usefulness of naive physics for external fault reasoning in robotics.

    @techreport{Akhtar2011b,
    Abstract = {A system that interacts with its environment can be much more robust if it is able to reason about the faults that occur in its environment despite the perfect functioning of its internal components. For robots, which interact with the same environment as human beings, this robustness can be obtained by incorporating human-like reasoning abilities in them. In this work we use naive physics to enable reasoning about external faults in robots. We propose an approach for diagnosing external faults that uses qualitative reasoning on naive physics concepts. These concepts are mainly individual properties of objects that define their state qualitatively. The reasoning process uses physical laws to generate possible states of the concerned object(s) which could result in a detected external fault. Since effective reasoning about any external fault requires information about the relevant properties and physical laws, we associate different properties and laws with the different types of faults that can be detected by a robot. The underlying ontology of this association is proposed on the basis of studies conducted (by other researchers) on the reasoning of physics novices about everyday physical phenomena. We also formalize some definitions of object properties in a small framework represented in first-order logic. These definitions represent the naive concepts behind the properties and are intended to be independent of objects and circumstances. The definitions in the framework illustrate our proposal of using different biased definitions of properties for different types of faults. In this work, we also present a brief review of important contributions in the area of naive/qualitative physics. This review helps in understanding the limitations of naive/qualitative physics in general. We also apply our approach to simple scenarios to assess its limitations in particular. Since this work was done independently of any particular real robotic system, it can be seen as a theoretical proof of concept for the usefulness of naive physics for external fault reasoning in robotics.},
    Address = {Sankt Augustin, Germany},
    Author = {Akhtar, Naveed},
    Institution = {Department of Computer Science, Bonn-Rhein-Sieg University of Applied Sciences},
    Number = {April},
    Title = {{Fault reasoning based on Naive Physics}},
    Url = {https://opus.bib.hochschule-bonn-rhein-sieg.de/files/5/brsu_techreport_02_2011_Naveed_Akhtar_with_keywords_1.pdf},
    Year = {2011},
    Bdsk-Url-1 = {https://opus.bib.hochschule-bonn-rhein-sieg.de/files/5/brsu_techreport_02_2011_Naveed_Akhtar_with_keywords_1.pdf}}

2009

  • D. Holz, “Autonomous Exploration and Inspection,” Department of Computer Science, Bonn-Rhein-Sieg University of Applied Sciences, Sankt Augustin, Germany, Technical Report, December 2009.
    [BibTeX] [Abstract] [Download PDF]

    Autonomous mobile robots need internal environment representations, or models of their environment, in order to act in a goal-directed manner, plan actions and navigate effectively. Especially in situations where a robot cannot be provided with a manually constructed model, or in environments that change over time, the robot needs the ability to autonomously construct and maintain models on its own. To construct a model of an environment, multiple sensor readings have to be acquired and integrated into a single representation. Where the robot has to take these sensor readings is determined by an exploration strategy. The strategy allows the robot to sense all environmental structures and to construct a complete model of its workspace. Given a complete environment model, the task of inspection is to guide the robot to all modeled environmental structures in order to detect changes and to update the model if necessary. Informally stated, exploration and inspection provide the means for the robot to acquire as much information as possible by itself. Both exploration and inspection are highly integrated problems. In addition to the respective strategies, they require several abilities of a robotic system and comprise various problems from the field of mobile robotics, including Simultaneous Localization and Mapping (SLAM), motion planning and control, as well as reliable collision avoidance. The goal of this thesis is to develop and implement a complete system and a set of algorithms for robotic exploration and inspection. That is, instead of focusing on specific strategies, robotic exploration and inspection are addressed as the integrated problems that they are. Given this set of algorithms, a real mobile service robot has to be able to autonomously explore its workspace, construct a model of it and use this model in subsequent tasks, e.g. for navigating in the workspace or inspecting the workspace itself. The algorithms need to be reliable, robust against environment dynamics and internal failures, and applicable online in real time on a real mobile robot. The resulting system should allow a mobile service robot to navigate effectively and reliably in a domestic environment and avoid all kinds of collisions. In the context of mobile robotics, domestic environments combine the characteristics of being cluttered, dynamic and populated by humans and domestic animals. SLAM is addressed in terms of incremental range image registration, which provides efficient means to construct internal environment representations online while moving through the environment. Two registration algorithms are presented that can be applied to two-dimensional and three-dimensional data, together with several extensions and an incremental registration procedure. The algorithms are used to construct two different types of environment representations: memory-efficient sparse point maps and probabilistic reflection maps. For effective navigation in the robot’s workspace, different path planning algorithms are presented for the two types of environment representations. Furthermore, two motion controllers are described that allow a mobile robot to follow planned paths and to approach a target position and orientation. Finally, this thesis presents different exploration and inspection strategies that use the aforementioned algorithms to move the robot to previously unexplored or uninspected terrain and update the internal environment representations accordingly. These strategies are augmented with algorithms for detecting changes in the environment and for segmenting internal models into individual rooms. The resulting system performed very successfully in the 2008 and 2009 RoboCup@Home competitions.

    @techreport{Holz2009,
    Abstract = {Autonomous mobile robots need internal environment representations or models of their environment in order to act in a goal-directed manner, plan actions and navigate effectively. Especially in those situations where a robot can not be provided with a manually constructed model or in environments that change over time, the robot needs to possess the ability of autonomously constructing models and maintaining these models on its own. To construct a model of an environment multiple sensor readings have to be acquired and integrated into a single representation. Where the robot has to take these sensor readings is determined by an exploration strategy. The strategy allows the robot to sense all environmental structures and to construct a complete model of its workspace. Given a complete environment model, the task of inspection is to guide the robot to all modeled environmental structures in order to detect changes and to update the model if necessary. Informally stated, exploration and inspection provide the means for acquiring as much information as possible by the robot itself. Both exploration and inspection are highly integrated problems. In addition to the according strategies, they require for several abilities of a robotic system and comprise various problems from the field of mobile robotics including Simultaneous localization and Mapping (SLAM), motion planning and control as well as reliable collision avoidance. The goal of this thesis is to develop and implement a complete system and a set of algorithms for robotic exploration and inspection. That is, instead of focussing on specific strategies, robotic exploration and inspection are addressed as the integrated problems that they are. Given the set of algorithms a real mobile service robot has to be able to autonomously explore its workspace, construct a model of its workspace and use this model in subsequent tasks e.g. for navigating in the workspace or inspecting the workspace itself. 
The algorithms need to be reliable, robust against environment dynamics and internal failures, and applicable online in real time on a real mobile robot. The resulting system should allow a mobile service robot to navigate effectively and reliably in a domestic environment and avoid all kinds of collisions. In the context of mobile robotics, domestic environments combine the characteristics of being cluttered, dynamic and populated by humans and domestic animals. SLAM is addressed in terms of incremental range image registration, which provides efficient means to construct internal environment representations online while moving through the environment. Two registration algorithms are presented that can be applied to two-dimensional and three-dimensional data, together with several extensions and an incremental registration procedure. The algorithms are used to construct two different types of environment representations: memory-efficient sparse points and probabilistic reflection maps. For effective navigation in the robot's workspace, different path planning algorithms are presented for the two types of environment representations. Furthermore, two motion controllers are described that allow a mobile robot to follow planned paths and to approach a target position and orientation. Finally, this thesis presents different exploration and inspection strategies that use the aforementioned algorithms to move the robot to previously unexplored or uninspected terrain and update the internal environment representations accordingly. These strategies are augmented with algorithms for detecting changes in the environment and for segmenting internal models into individual rooms. The resulting system performed very successfully in the 2008 and 2009 RoboCup@Home competitions.},
    Address = {Sankt Augustin, Germany},
    Author = {Holz, Dirk},
    Institution = {Department of Computer Science, Bonn-Rhein-Sieg University of Applied Sciences},
    Number = {December},
    Title = {{Autonomous Exploration and Inspection}},
    Type = {Technical Report},
    Url = {https://opus.bib.hochschule-bonn-rhein-sieg.de/files/4/brsu_techreport_01_2009_holz.pdf},
    Year = {2009},
    Bdsk-Url-1 = {https://opus.bib.hochschule-bonn-rhein-sieg.de/files/4/brsu_techreport_01_2009_holz.pdf}}

2008

  • A. Juarez, “A Computational Model of Robotic Surprise,” Department of Computer Science, Bonn-Rhein-Sieg University of Applied Sciences, Sankt Augustin, Germany, January 2008.
    [BibTeX] [Abstract] [Download PDF]

    The research of autonomous artificial agents that adapt to and survive in changing, possibly hostile environments has gained momentum in recent years. Many such agents incorporate mechanisms to learn and acquire new knowledge from their environment, a feature that becomes fundamental to enabling the desired adaptation and to accounting for the challenges that the environment poses. The issue of how to trigger such learning, however, has not been studied as thoroughly as its significance suggests. The solution explored here is based on the use of surprise (the reaction to unexpected events) as the mechanism that triggers learning. This thesis introduces a computational model of surprise that enables the robotic learner to experience surprise and start the acquisition of knowledge to explain it. A measure of surprise that combines elements from information and probability theory is presented. This measure offers a response to surprising situations faced by the robot that is proportional to the degree of unexpectedness of the event. The concepts of short- and long-term memory are investigated as factors that influence the resulting surprise. Short-term memory enables the robot to habituate to new, repeated surprises and to “forget” old ones, allowing them to become surprising again. Long-term memory contains knowledge that is known a priori or that has been previously learned by the robot. Such knowledge influences the surprise mechanism by applying a subsumption principle: if the available knowledge is able to explain the surprising event, any trigger of surprise is suppressed. The computational model of robotic surprise has been successfully applied to the domain of a robotic learner, specifically one that learns by experimentation. A brief introduction to the context of this application is provided, as well as a discussion of related issues such as the relationship of the surprise mechanism to other components of the robot's conceptual architecture, the challenges presented by the specific learning paradigm used, and other components of the motivational structure of the agent.

    @techreport{Juarez2008,
    Abstract = {The research of autonomous artificial agents that adapt to and survive in changing, possibly hostile environments has gained momentum in recent years. Many such agents incorporate mechanisms to learn and acquire new knowledge from their environment, a feature that becomes fundamental to enabling the desired adaptation and to accounting for the challenges that the environment poses. The issue of how to trigger such learning, however, has not been studied as thoroughly as its significance suggests. The solution explored here is based on the use of surprise (the reaction to unexpected events) as the mechanism that triggers learning. This thesis introduces a computational model of surprise that enables the robotic learner to experience surprise and start the acquisition of knowledge to explain it. A measure of surprise that combines elements from information and probability theory is presented. This measure offers a response to surprising situations faced by the robot that is proportional to the degree of unexpectedness of the event. The concepts of short- and long-term memory are investigated as factors that influence the resulting surprise. Short-term memory enables the robot to habituate to new, repeated surprises and to ``forget'' old ones, allowing them to become surprising again. Long-term memory contains knowledge that is known a priori or that has been previously learned by the robot. Such knowledge influences the surprise mechanism by applying a subsumption principle: if the available knowledge is able to explain the surprising event, any trigger of surprise is suppressed. The computational model of robotic surprise has been successfully applied to the domain of a robotic learner, specifically one that learns by experimentation. A brief introduction to the context of this application is provided, as well as a discussion of related issues such as the relationship of the surprise mechanism to other components of the robot's conceptual architecture, the challenges presented by the specific learning paradigm used, and other components of the motivational structure of the agent.},
    Address = {Sankt Augustin, Germany},
    Author = {Juarez, Alex},
    Institution = {Department of Computer Science, Bonn-Rhein-Sieg University of Applied Sciences},
    Number = {January},
    Title = {{A Computational Model of Robotic Surprise}},
    Url = {https://opus.bib.hochschule-bonn-rhein-sieg.de/files/1/brsu_techreport_01_2008_juarez.pdf},
    Year = {2008},
    Bdsk-Url-1 = {https://opus.bib.hochschule-bonn-rhein-sieg.de/files/1/brsu_techreport_01_2008_juarez.pdf}}

  • I. Awaad and B. Leon, “XPERSIF: A Software Integration Framework & Architecture for Robotic Learning by Experimentation,” Department of Computer Science, Bonn-Rhein-Sieg University of Applied Sciences, Sankt Augustin, Germany, February 2008.
    [BibTeX] [Abstract] [Download PDF]

    The integration of independently developed applications into an efficient system, particularly in a distributed setting, is the core issue addressed in this work. Cooperation between researchers across various field boundaries in order to solve complex problems has become commonplace. Due to the multidisciplinary nature of such efforts, individual applications are developed independently of the integration process. The integration of individual applications into a fully functioning architecture is a complex and multifaceted task. This thesis extends a component-based architecture, previously developed by the authors, to allow the integration of various software applications which are deployed in a distributed setting. The test bed for the framework is the EU project XPERO, the goal of which is robot learning by experimentation. The task at hand is the integration of the required applications, such as planning of experiments, perception of parametrized features, robot motion control and knowledge-based learning, into a coherent cognitive architecture. This allows a mobile robot to use the methods involved in experimentation in order to learn about its environment. To meet the challenge of developing this architecture within a distributed, heterogeneous environment, the authors specified, defined, developed, implemented and tested a component-based architecture called XPERSIF. The architecture comprises loosely coupled, autonomous components that offer services through their well-defined interfaces and form a service-oriented architecture. The Ice middleware is used in the communication layer. Its deployment facilitates the necessary refactoring of concepts. One fully specified and detailed use case is the successful integration of the XPERSim simulator, which constitutes one of the kernel components of XPERO. The results of this work demonstrate that the proposed architecture is robust and flexible, and can be successfully scaled to allow the complete integration of the necessary applications, thus enabling robot learning by experimentation. The design supports composability, allowing components to be grouped together in order to provide an aggregate service. Distributed simulation enabled real-time tele-observation of the simulated experiment. Results show that incorporating the XPERSim simulator has substantially enhanced the speed of research and the information flow within the cognitive learning loop.

    @techreport{Awaad2008,
    Abstract = {The integration of independently developed applications into an efficient system, particularly in a distributed setting, is the core issue addressed in this work. Cooperation between researchers across various field boundaries in order to solve complex problems has become commonplace. Due to the multidisciplinary nature of such efforts, individual applications are developed independently of the integration process. The integration of individual applications into a fully functioning architecture is a complex and multifaceted task. This thesis extends a component-based architecture, previously developed by the authors, to allow the integration of various software applications which are deployed in a distributed setting. The test bed for the framework is the EU project XPERO, the goal of which is robot learning by experimentation. The task at hand is the integration of the required applications, such as planning of experiments, perception of parametrized features, robot motion control and knowledge-based learning, into a coherent cognitive architecture. This allows a mobile robot to use the methods involved in experimentation in order to learn about its environment. To meet the challenge of developing this architecture within a distributed, heterogeneous environment, the authors specified, defined, developed, implemented and tested a component-based architecture called XPERSIF. The architecture comprises loosely coupled, autonomous components that offer services through their well-defined interfaces and form a service-oriented architecture. The Ice middleware is used in the communication layer. Its deployment facilitates the necessary refactoring of concepts. One fully specified and detailed use case is the successful integration of the XPERSim simulator, which constitutes one of the kernel components of XPERO. The results of this work demonstrate that the proposed architecture is robust and flexible, and can be successfully scaled to allow the complete integration of the necessary applications, thus enabling robot learning by experimentation. The design supports composability, allowing components to be grouped together in order to provide an aggregate service. Distributed simulation enabled real-time tele-observation of the simulated experiment. Results show that incorporating the XPERSim simulator has substantially enhanced the speed of research and the information flow within the cognitive learning loop.},
    Address = {Sankt Augustin, Germany},
    Author = {Awaad, Iman and Leon, Beatriz},
    Date-Added = {2015-08-26 07:28:08 +0000},
    Date-Modified = {2015-08-26 07:28:08 +0000},
    Institution = {Department of Computer Science, Bonn-Rhein-Sieg University of Applied Sciences},
    Month = {February},
    Title = {{XPERSIF: A Software Integration Framework \& Architecture for Robotic Learning by Experimentation}},
    Url = {https://opus.bib.hochschule-bonn-rhein-sieg.de/files/2/brsu_techreport_02_2008_awaad_leon.pdf},
    Year = {2008},
    Bdsk-Url-1 = {https://opus.bib.hochschule-bonn-rhein-sieg.de/files/2/brsu_techreport_02_2008_awaad_leon.pdf}}