Over the years, our research has led to a number of publications at conferences and in journals. Publishing our ideas and results is the primary way we obtain fruitful discussions and feedback from the worldwide research community.
2023
Kirchheim, Konstantin; Gonschorek, Tim; Ortmeier, Frank
Out-of-Distribution Detection with Logical Reasoning Conference paper
In: IEEE (Ed.): IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023.
@inproceedings{kirchheim2023logical,
title = {Out-of-Distribution Detection with Logical Reasoning},
author = {Konstantin Kirchheim and Tim Gonschorek and Frank Ortmeier},
editor = {IEEE},
year = {2023},
date = {2023-10-27},
booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
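The paper's core idea, checking neural network predictions against logical constraints, can be sketched compactly. The following illustration is not the authors' implementation; the attribute names and rules are hypothetical examples:

# Illustrative sketch (not the paper's implementation): flag an input as
# out-of-distribution when predicted attributes violate a logical constraint.
from typing import Dict

def violates_constraints(attrs: Dict[str, bool]) -> bool:
    """Check hypothetical domain rules over predicted binary attributes."""
    if attrs["has_wings"] and attrs["has_fins"]:     # nothing has wings and fins
        return True
    if attrs["is_bird"] and not attrs["has_wings"]:  # every bird has wings
        return True
    return False

def is_out_of_distribution(attr_probs: Dict[str, float], threshold: float = 0.5) -> bool:
    """Binarize attribute probabilities and test them against the rules."""
    attrs = {name: p >= threshold for name, p in attr_probs.items()}
    return violates_constraints(attrs)

# A prediction of 'bird' without wings is logically inconsistent -> flagged.
print(is_out_of_distribution({"is_bird": 0.9, "has_wings": 0.1, "has_fins": 0.2}))  # True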
Häring, Ivo; Kumar, Sunil; Mopuru, Reddy; Walz, Teo Puig; Dhanani, Mayur; Sandela, Nikhilesh; Finger, Jörg; Vogelbacher, Georg; Höflinger, Fabian; Jain, Aishvarya Kumar; Richter, Alexander; Kirchheim, Konstantin
Overall Markov diagram design and simulation example for scalable safety analysis of autonomous vehicles Conference paper
In: 33rd European Safety and Reliability Conference (ESREL), 2023.
@inproceedings{haering2023overall,
title = {Overall Markov diagram design and simulation example for scalable safety analysis of autonomous vehicles},
author = {Ivo H\"{a}ring and Sunil Kumar and Reddy Mopuru and Teo Puig Walz and Mayur Dhanani and Nikhilesh Sandela and J\"{o}rg Finger and Georg Vogelbacher and Fabian H\"{o}flinger and Aishvarya Kumar Jain and Alexander Richter and Konstantin Kirchheim},
year = {2023},
date = {2023-08-19},
urldate = {2023-08-19},
booktitle = {33rd European Safety and Reliability Conference (ESREL)},
abstract = {Markov models are a promising tool regarding the assessment of availability, safety, security, and reliability of autonomous driving functions. The paper addresses challenges regarding the overall system functional and static modeling and related overall Markov diagram design options. To this end, the model space is presented, extending the main functions consisting of extended sensory system, decision and control, and vehicle platform manipulation. Sample transition models from literature are used. It is shown how to color-label overall Markov system product states in terms of the level of their criticality, independent of the multiplicity of failures. This is used to model the effect of structural and functional redundancies, e.g., of redundant sensors and sensors of different technology. The modeling approach allows to compare the effect of redundancy options on a systemic level, as well as to identify the need for further aggregation or subdivision of Markov states or refinement of the transition modeling and simulation approach. For instance, in terms of providing statistical assessment of historic events or by using simulation results of specific autonomous driving scenarios, e.g., interaction with vulnerable road users in case of darkness, bad weather, and partial sensor degradation. The paper presents Markov modeling results with a focus on modeling of redundancies of sensors.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
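To make the color-labelled state idea above concrete, the sketch below models a two-sensor redundant subsystem as a discrete-time Markov chain; the three-state structure and the failure probability are assumed for illustration and are not taken from the paper:

# Minimal sketch with assumed numbers: states 0 = both sensors OK (green),
# 1 = one sensor failed (yellow, degraded), 2 = both failed (red, critical).
import numpy as np

p = 0.01  # assumed per-step failure probability of a single sensor
P = np.array([
    [(1 - p) ** 2, 2 * p * (1 - p), p ** 2],  # from state 0
    [0.0,          1 - p,           p],       # from state 1
    [0.0,          0.0,             1.0],     # state 2 is absorbing
])
criticality = {0: "green", 1: "yellow", 2: "red"}  # color-labelled states

dist = np.array([1.0, 0.0, 0.0])  # start with both sensors working
for _ in range(100):              # propagate the state distribution 100 steps
    dist = dist @ P

for state, prob in enumerate(dist):
    print(f"state {state} ({criticality[state]}): P = {prob:.4f}")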
Häring, Ivo; Sandela, Nikhilesh; Walz, Teo Puig; Vogelbacher, Georg; Richter, Alexander; Jain, Aishvarya Kumar; Dhanani, Mayur; Mopuru, Sunil; Kirchheim, Konstantin; Höflinger, Fabian
Dynamically resolving and abstracting Markov models for system resilience analysis Conference paper
In: 33rd European Safety and Reliability Conference (ESREL), 2023.
@inproceedings{haering2023dynamically,
title = {Dynamically resolving and abstracting Markov models for system resilience analysis},
author = {Ivo H\"{a}ring and Nikhilesh Sandela and Teo Puig Walz and Georg Vogelbacher and Alexander Richter and Aishvarya Kumar Jain and Mayur Dhanani and Sunil Mopuru and Konstantin Kirchheim and Fabian H\"{o}flinger},
year = {2023},
date = {2023-08-19},
booktitle = {33rd European Safety and Reliability Conference (ESREL)},
abstract = {Regarding the modeling of quasi-static systems with minor failures for failure prediction and maintenance, Markov models have proven to be very successful. Finite discrete state models can be considered best practice in this domain, often even assumed to be homogeneous. The question arises whether Markov models are also capable of modeling the resilience of systems subject to major disruptions, where large fractions of the system and its functionality fail. To this end, analytical propositions are made that define model extensions. An initial scalable system is defined, including expected refinements and abstractions. In further phases, major disruptions occur. The disruptions can cause branching points opening routes to model extensions or abstractions. Also independent of disruptions, new states and transitions are introduced or merged to adapt the model granularity. Overall system behavior can be interpreted in terms of system improvement with or without new system states or functionalities and corresponding transitions, reaching the ex-ante system state as before the disruption, reaching a deteriorated system state, or finally various degraded and failed overall system states. Definitions such as states, absorbing states and critical transitions are reinterpreted or extended to allow for dynamically resolving or abstracting the Markov model. The main results are extended definitions and derivations when compared to traditional Markov models. Based on the analytical expressions, an example is provided where the formalism could be applied with advantage for autonomous driving safety assessment by considering increasing or decreasing levels of resolution of subsystems or subfunctions.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Kirchheim, Konstantin
Towards Deep Anomaly Detection with Structured Knowledge Representations Conference paper
In: Springer LNCS (Ed.): Workshop on AI Safety Engineering 2023, 2023.
@inproceedings{kirchheim2023towards,
title = {Towards Deep Anomaly Detection with Structured Knowledge Representations},
author = {Konstantin Kirchheim},
editor = {Springer LNCS},
year = {2023},
date = {2023-07-31},
urldate = {2023-07-31},
booktitle = {Workshop on AI Safety Engineering 2023},
abstract = {Machine Learning (ML) models tend to only make reliable predictions for inputs that are similar to the training data. Consequently, anomaly detection, which can be used to detect unusual inputs, is critical for ensuring the safety of machine learning agents operating in open environments. In this work, we identify and discuss several limitations of current anomaly detection methods, such as their weak performance on tasks that require abstract reasoning, the inability to integrate background knowledge, and the opaqueness that undermines their trustworthiness in critical applications. Furthermore, we propose an architecture for anomaly detection models that aims to integrate structured knowledge representations to address these limitations. Our hypothesis is that this approach can improve performance and robustness, reduce the required resources (such as data and computation), and provide a higher degree of transparency. As a result, our work contributes to increased safety of machine learning systems.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Sambandham, Venkatesh Thirugnana; Kirchheim, Konstantin; Ortmeier, Frank
Evaluating and Increasing Segmentation Robustness in CARLA Conference paper
In: Springer LNCS (Ed.): Workshop on AI Safety Engineering 2023, 2023.
@inproceedings{sambandham2023evaluating,
title = {Evaluating and Increasing Segmentation Robustness in CARLA},
author = {Venkatesh Thirugnana Sambandham and Konstantin Kirchheim and Frank Ortmeier},
editor = {Springer LNCS},
year = {2023},
date = {2023-07-31},
urldate = {2023-07-31},
booktitle = {Workshop on AI Safety Engineering 2023},
abstract = {Model robustness is a crucial property in safety-critical applications such as autonomous driving and medical diagnosis. In this paper, we use the CARLA simulation environment to evaluate the robustness of various architectures for semantic segmentation to adverse environmental changes. Contrary to previous work, the environmental changes that we test the models against are not applied to existing images, but rendered directly in the simulation, enabling more realistic robustness tests. Surprisingly, we find that Transformers provide only slightly increased robustness compared to some CNNs. Furthermore, we demonstrate that training on a small set of adverse samples can significantly improve the robustness of most models.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Dix, Marcel; Manca, Gianluca; Okafor, Kenneth Chigozie; Borrison, Reuben; Kirchheim, Konstantin; Sharma, Divyasheel; KR, Chandrika; Maduskar, Deepti; Ortmeier, Frank
Measuring the Robustness of ML Models Against Data Quality Issues in Industrial Time Series Data Conference paper
In: IEEE (Ed.): IEEE INDIN International Conference, 2023.
@inproceedings{dix2023measuring,
title = {Measuring the Robustness of ML Models Against Data Quality Issues in Industrial Time Series Data},
author = {Marcel Dix and Gianluca Manca and Kenneth Chigozie Okafor and Reuben Borrison and Konstantin Kirchheim and Divyasheel Sharma and Chandrika KR and Deepti Maduskar and Frank Ortmeier},
editor = {IEEE},
year = {2023},
date = {2023-06-02},
urldate = {2023-06-02},
booktitle = {IEEE INDIN International Conference},
abstract = {The performance of machine learning models can be significantly impacted by variations in data quality. Typically, conventional model testing does not examine how robust the model would be in the face of potential data quality deterioration. In an industrial use case, however, data quality is a pertinent issue, as sensors are susceptible to a variety of technical and external issues that may result in poor data quality over time. In order to develop robust machine learning models, industrial data scientists must understand the sensitivity of their models against data quality issues, through the application of an appropriate and comprehensive testing solution. In this work, we propose a generic framework for systematically analyzing the impact of data quality issues on the performance of machine learning models by intentionally applying gradual perturbations to the original time series data. The evaluation is performed using a benchmark industrial process consisting of multivariate time series from sensors in a complex chemical process.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
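The perturbation idea in the abstract can be realized in a few lines. The sketch below uses additive Gaussian sensor noise and a scikit-learn-style estimator; the noise model and severity grid are assumptions for illustration, not the paper's framework:

# Sketch: gradually degrade a time series and track model performance.
import numpy as np

def add_sensor_noise(X: np.ndarray, severity: float, rng: np.random.Generator) -> np.ndarray:
    # Simulate deteriorating data quality with noise of growing scale.
    return X + rng.normal(0.0, severity * X.std(), size=X.shape)

def robustness_curve(model, X: np.ndarray, y: np.ndarray, severities=(0.0, 0.1, 0.2, 0.5)):
    # 'model' is any fitted estimator with a score(X, y) method.
    rng = np.random.default_rng(0)
    return {s: model.score(add_sensor_noise(X, s, rng), y) for s in severities}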
Kerautret, Bertrand; Kirchheim, Konstantin; Lopresti, Daniel; Ngo, P.; Tomaszewska, P.
Promoting Reproducibility of Research Results in International Events Working paper
2023.
@workingpaper{kerautret2023promoting,
title = {Promoting Reproducibility of Research Results in International Events},
author = {Bertrand Kerautret and Konstantin Kirchheim and Daniel Lopresti and P. Ngo and P. Tomaszewska},
editor = {RRPR 2022: Fourth Workshop on Reproducible Research in Pattern Recognition, Springer},
year = {2023},
date = {2023-06-02},
urldate = {2023-06-02},
abstract = {Following the fourth edition of the workshop on Reproducible Research in Pattern Recognition (RRPR) at the International Conference on Pattern Recognition (ICPR), this paper reports the main discussions that were held during and after the workshop. In particular, the integration of reproducible research inside an international conference was the first main axis of reflection. Further discussions addressed the ways of initiating or imposing reproducible research, as well as the problem of performance comparisons of published research papers that emerges due to the fact that the reported results are often based on different implementations and datasets.},
keywords = {},
pubstate = {published},
tppubtype = {workingpaper}
}
2022
Petermann, Lukas
Comparison of Real-Time Plane Detection Algorithms on Intel RealSense Thesis
2022.
@mastersthesis{Petermann22,
title = {Comparison of Real-Time Plane Detection Algorithms on Intel RealSense},
author = {Lukas Petermann},
url = {https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2023/01/BA_Petermann.pdf},
year = {2022},
date = {2022-11-29},
urldate = {2022-11-29},
abstract = {Planar structures account for a significant portion of indoor man-made environments. With advances in the field of Augmented Reality (AR), the automatic detection of planar surfaces has become essential for recent AR applications. Often, these applications operate under a strict temporal constraint, also referred to as real-time. Naturally, this time restriction applies to the integrated plane detection algorithm as well. Technology that provides real-time plane detection already exists. However, for different reasons, these devices are often not suitable for the average consumer. This motivates the utilization of consumer off-the-shelf hardware. Additionally, an appropriate plane detection algorithm is needed. Decades of research have yielded a wide variety of approaches. As these methods are predominantly evaluated in scientific settings, their real-world applicability remains an open question. Moreover, the inherent incomparability of most plane detection algorithms renders a selection non-trivial.
This work evaluates the real-world applicability of real-time plane detection algorithms. After considering current state-of-the-art plane detection algorithms, we select four algorithms, namely RSPD, OPS, 3D-KHT, and OBRG. In a similar approach, we select the 2D-3D-S dataset and compose the novel FIN dataset. We introduce a definition of real-time and perform experiments on both datasets. Subsequently, we compare the respective results. The results show that 3D-KHT is the only real-time applicable plane detection algorithm in a realistic environment.},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
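For orientation, the classic RANSAC baseline below illustrates what a plane detection algorithm computes; it is none of the four detectors (RSPD, OPS, 3D-KHT, OBRG) evaluated in the thesis, only a minimal reference point:

# Fit a single plane n·x + d = 0 to a point cloud with RANSAC.
import numpy as np

def ransac_plane(points: np.ndarray, iters: int = 200, tol: float = 0.01, seed: int = 0):
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # skip degenerate (collinear) samples
            continue
        normal /= norm
        d = -normal @ sample[0]
        inliers = np.abs(points @ normal + d) < tol  # point-to-plane distance test
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model[0], best_model[1], best_inliers

# Example: noisy points on the plane z = 0 are recovered as inliers.
rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, (500, 3))
pts[:, 2] = 0.001 * rng.standard_normal(500)
normal, d, mask = ransac_plane(pts)
print(normal, d, mask.sum())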
Ghanem, Christian; Kirchheim, Konstantin; Eckl, Markus
Social Work Research Map – ein niederschwelliger Zugang zu internationalen Publikationen der Sozialen Arbeit Article
In: Soziale Passagen, 2022, ISSN: 1867-0199.
@article{ghanem2022sworm,
title = {Social Work Research Map \textendash ein niederschwelliger Zugang zu internationalen Publikationen der Sozialen Arbeit},
author = {Christian Ghanem and Konstantin Kirchheim and Markus Eckl},
url = {https://link.springer.com/article/10.1007/s12592-022-00430-8},
doi = {10.1007/s12592-022-00430-8},
issn = {1867-0199},
year = {2022},
date = {2022-11-01},
urldate = {2022-11-01},
journal = {Soziale Passagen},
abstract = {Internationalization is a politically charged topic in German higher education policy. In the teaching, research, and practice of social work, a stronger orientation towards international discourses is likewise being called for. Due to rapidly growing research output, it is becoming increasingly difficult to gain a systematic overview of the discipline's body of knowledge. This article describes the development of the interactive website SWORM ("Social Work Research Map", www.sworm.org), which is intended to ease access to scientific publications in social work. To this end, a database of almost 25,000 journal articles from 23 relevant journals was created. Using automated analysis methods (quantitative text analysis/topic modeling), the abstracts were examined and structured into 40 thematic clusters. Various visualization techniques and filter functions allow users to search the database independently according to their individual research interests. Individual search results can be saved, and a recommender system based on artificial intelligence suggests similar publications. The development of SWORM is an example of the use of computer science methods in social work and illustrates their potential for structuring large volumes of text and making them accessible to people. At the same time, it becomes clear that applying such methods remains a high hurdle for social scientists and that the use of artificial intelligence raises ethical questions.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Kirchheim, Konstantin; Ortmeier, Frank
On Outlier Exposure with Generative Models Workshop
NeurIPS ML Safety Workshop, 2022.
@workshop{kirchheim2022generative,
title = {On Outlier Exposure with Generative Models},
author = {Konstantin Kirchheim and Frank Ortmeier},
year = {2022},
date = {2022-11-01},
urldate = {2022-11-01},
booktitle = {NeurIPS ML Safety Workshop},
abstract = {While Outlier Exposure reliably increases the performance of Out-of-Distribution detectors, it requires a set of available outliers during training. In this paper, we propose Generative Outlier Exposure (GOE), which alleviates the need for available outliers by using generative models to sample synthetic outliers from low-density regions of the data distribution. The approach requires no modification of the generator, works on image and text data, and can be used with pre-trained models. We demonstrate the effectiveness of generated outliers on several image and text datasets, including ImageNet.},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
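GOE builds on the standard Outlier Exposure objective, spelled out below in PyTorch: cross-entropy on in-distribution data plus a term pushing predictions on (here: generated) outliers towards the uniform distribution. How the synthetic outliers are sampled from low-density regions of the generator is the paper's contribution and is not shown in this sketch:

# Sketch of the Outlier Exposure loss that GOE plugs generated outliers into.
import torch.nn.functional as F

def outlier_exposure_loss(logits_in, targets_in, logits_out, lam: float = 0.5):
    ce = F.cross_entropy(logits_in, targets_in)  # in-distribution term
    # Cross-entropy to the uniform distribution (up to a constant) on outliers.
    uniformity = -F.log_softmax(logits_out, dim=1).mean()
    return ce + lam * uniformity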
Sambandham, Venkatesh Thirugnana; Kirchheim, Konstantin; Mukhopadhaya, Sayan; Ortmeier, Frank
Towards Transformer-based Homogenization of Satellite Imagery for Landsat-8 and Sentinel-2 Workshop
ESST2022: Transformers Workshop for Environmental Science, 2022.
@workshop{sambandham2022transformer,
title = {Towards Transformer-based Homogenization of Satellite Imagery for Landsat-8 and Sentinel-2},
author = {Venkatesh Thirugnana Sambandham and Konstantin Kirchheim and Sayan Mukhopadhaya and Frank Ortmeier},
editor = {arXiv preprint},
url = {https://arxiv.org/abs/2210.07654},
doi = {10.48550/arXiv.2210.07654},
year = {2022},
date = {2022-09-22},
booktitle = {ESST2022: Transformers Workshop for Environmental Science},
abstract = { Landsat-8 (NASA) and Sentinel-2 (ESA) are two prominent multi-spectral imaging satellite projects that provide publicly available data. The multi-spectral imaging sensors of the satellites capture images of the earth's surface in the visible and infrared region of the electromagnetic spectrum. Since the majority of the earth's surface is constantly covered with clouds, which are not transparent at these wavelengths, many images do not provide much information. To increase the temporal availability of cloud-free images of a certain area, one can combine the observations from multiple sources. However, the sensors of satellites might differ in their properties, making the images incompatible. This work provides a first glance at the possibility of using a transformer-based model to reduce the spectral and spatial differences between observations from both satellite projects. We compare the results to a model based on a fully convolutional UNet architecture. Somewhat surprisingly, we find that, while deep models outperform classical approaches, the UNet significantly outperforms the transformer in our experiments. },
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
Kirchheim, Konstantin; Filax, Marco; Ortmeier, Frank
Multi-Class Hypersphere Anomaly Detection Conference paper
In: 26th International Conference on Pattern Recognition, 2022.
@inproceedings{kirchheim2022multi,
title = {Multi-Class Hypersphere Anomaly Detection},
author = {Konstantin Kirchheim and Marco Filax and Frank Ortmeier},
year = {2022},
date = {2022-08-28},
urldate = {2022-08-28},
booktitle = {26th International Conference on Pattern Recognition},
abstract = {Machine learning-based classification algorithms typically operate under assumptions that assert that the underlying data generating distribution is stationary and draws from a finite set of categories. In some scenarios, these assumptions might not hold, but identifying violating inputs - here referred to as anomalies - is a challenging task. Recent publications propose deep learning-based approaches that perform anomaly detection and classification jointly by (implicitly) learning a mapping that projects data points to a lower-dimensional space, such that the images of points of one class reside inside of a hypersphere, while others are mapped outside of it. In this work, we propose Multi-Class Hypersphere Anomaly Detection (MCHAD), a new hypersphere learning algorithm for anomaly detection in classification settings, as well as a generalization of existing hypersphere learning methods that allows incorporating example anomalies into the training. Extensive experiments on competitive benchmark tasks, as well as theoretical arguments, provide evidence for the effectiveness of our method. Our code is publicly available. },
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
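The hypersphere learning idea can be sketched as a loss over learnable class centers that pulls each embedding inside its own class sphere and pushes it away from all other centers; this is a simplified illustration, not the exact MCHAD objective. At test time, the distance to the nearest center can serve as the anomaly score:

# Simplified multi-class hypersphere loss over learnable class centers.
import torch
import torch.nn as nn

class HypersphereLoss(nn.Module):
    def __init__(self, n_classes: int, embed_dim: int, radius: float = 1.0, margin: float = 5.0):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(n_classes, embed_dim))
        self.radius, self.margin = radius, margin

    def forward(self, z: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        d = torch.cdist(z, self.centers)                          # (batch, n_classes)
        pull = torch.relu(d.gather(1, y[:, None]) - self.radius)  # into own sphere
        mask = torch.ones_like(d).scatter_(1, y[:, None], 0.0)
        push = torch.relu(self.margin - d) * mask                 # away from other centers
        return pull.mean() + push.mean()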
Kirchheim, Konstantin; Filax, Marco; Ortmeier, Frank
On Challenging Aspects of Reproducibility in Deep Anomaly Detection Workshop
RRPR 2022: Fourth Workshop on Reproducible Research in Pattern Recognition, Springer, 2022.
@workshop{kirchheim2022aspects,
title = {On Challenging Aspects of Reproducibility in Deep Anomaly Detection},
author = {Konstantin Kirchheim and Marco Filax and Frank Ortmeier},
editor = {Springer},
year = {2022},
date = {2022-08-28},
urldate = {2022-08-28},
booktitle = {RRPR 2022: Fourth Workshop on Reproducible Research in Pattern Recognition},
publisher = {Springer},
abstract = {This companion paper focuses on challenging aspects of reproducibility that emerge in anomaly detection with Deep Neural Networks. We provide motivating examples based on our work and present mitigation strategies. Furthermore, we document a trade-off between the complexity of experiments and the strength of the empirical evidence obtained through them, both of which impact different types of reproducibility. Ultimately, we argue that the reproducibility of inferences should be prioritized over the reproducibility of exact numerical results.},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
Nielebock, Sebastian; Blockhaus, Paul; Krüger, Jacob; Ortmeier, Frank
Automated Change Rule Inference for Distance-Based API Misuse Detection Article Forthcoming
In: arXiv preprint, Forthcoming.
@article{Nielebock2022ChangeRule,
title = {Automated Change Rule Inference for Distance-Based API Misuse Detection},
author = {Sebastian Nielebock and Paul Blockhaus and Jacob Kr\"{u}ger and Frank Ortmeier},
editor = {arXiv preprint},
url = {https://arxiv.org/pdf/2207.06665
https://doi.org/10.5281/zenodo.6598541},
doi = {10.48550/arXiv.2207.06665},
year = {2022},
date = {2022-07-14},
journal = {arXiv preprint},
abstract = {Developers build on Application Programming Interfaces (APIs) to reuse existing functionalities of code libraries. Despite the benefits of reusing established libraries (e.g., time savings, high quality), developers may diverge from the API's intended usage; potentially causing bugs or, more specifically, API misuses. Recent research focuses on developing techniques to automatically detect API misuses, but many suffer from a high false-positive rate. In this article, we improve on this situation by proposing ChaRLI (Change RuLe Inference), a technique for automatically inferring change rules from developers' fixes of API misuses based on API Usage Graphs (AUGs). By subsequently applying graph-distance algorithms, we use change rules to discriminate API misuses from correct usages. This allows developers to reuse others' fixes of an API misuse at other code locations in the same or another project. We evaluated the ability of change rules to detect API misuses based on three datasets and found that the best mean relative precision (i.e., for testable usages) ranges from 77.1 % to 96.1 % while the mean recall ranges from 0.007 % to 17.7 % for individual change rules. These results underpin that ChaRLI and our misuse detection are helpful complements to existing API misuse detectors.},
keywords = {},
pubstate = {forthcoming},
tppubtype = {article}
}
Kirchheim, Konstantin; Filax, Marco; Ortmeier, Frank
PyTorch-OOD: A Library for Out-of-Distribution Detection Based on PyTorch Conference paper
In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pp. 4351-4360, IEEE/CVF, 2022.
@inproceedings{kirchheim2022pytorch,
title = {PyTorch-OOD: A Library for Out-of-Distribution Detection Based on PyTorch},
author = {Konstantin Kirchheim and Marco Filax and Frank Ortmeier},
url = {https://openaccess.thecvf.com/content/CVPR2022W/HCIS/html/Kirchheim_PyTorch-OOD_A_Library_for_Out-of-Distribution_Detection_Based_on_PyTorch_CVPRW_2022_paper.html},
year = {2022},
date = {2022-06-24},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
pages = {4351-4360},
publisher = {IEEE/CVF},
abstract = {Machine Learning models based on Deep Neural Networks behave unpredictably when presented with inputs that do not stem from the training distribution and sometimes make egregiously wrong predictions with high confidence. This property undermines the trustworthiness of systems depending on such models and potentially threatens the safety of their users. Out-of-Distribution (OOD) detection mechanisms can be used to prevent errors by detecting inputs that are so dissimilar from the training set that the model can not be expected to make reliable predictions. In this paper, we present PyTorch-OOD, a Python library for OOD detection based on PyTorch. Its primary goals are to accelerate OOD detection research and improve the reproducibility and comparability of experiments. PyTorch-OOD provides well-tested and documented implementations of OOD detection methods with a unified interface, as well as training and benchmark datasets, architectures, pre-trained models, and utility functions. The library is available online under the permissive Apache 2.0 license and can be installed via Python Package Index (PyPI). },
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
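What such a unified detector interface boils down to can be seen with the maximum-softmax-probability baseline, written here in plain PyTorch; this is an illustrative sketch, not the library's actual API, which is documented online:

# MSP baseline: higher score = more likely out-of-distribution.
import torch

@torch.no_grad()
def msp_score(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    probs = model(x).softmax(dim=1)
    return 1.0 - probs.max(dim=1).values  # 1 - max softmax probability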
Schillreff, Nadia; Scholle, Julian; Kirchheim, Konstantin; Ortmeier, Frank
High Speed RCS for Robot Task Sequencing Optimization Conference paper
In: VDE (Ed.): 54th International Symposium on Robotics (ISR Europe), 2022, ISBN: 978-3-8007-5891-3.
@inproceedings{schillref2022high,
title = {High Speed RCS for Robot Task Sequencing Optimization},
author = {Nadia Schillreff and Julian Scholle and Konstantin Kirchheim and Frank Ortmeier },
editor = {VDE},
url = {https://ieeexplore.ieee.org/abstract/document/9861809},
isbn = {978-3-8007-5891-3},
year = {2022},
date = {2022-06-21},
booktitle = {54th International Symposium on Robotics (ISR Europe)},
abstract = {Task sequencing optimization requires running robot controller simulations (RCS) a large number of times. However, available RCS solutions are designed to be used in real-time applications. In this work, we present a high speed RCS that is up to 5 orders of magnitude faster compared to the available solution, while retaining a high level of accuracy. We demonstrate the effectiveness of our solution in a comparative study on a KUKA robot. Furthermore, we provide first promising results for an RCS parameterized by machine learning algorithms. In conclusion, the presented RCS creates the foundation for the practical application of modern task sequencing optimization and could easily be adapted to a broad range of robot types from various manufacturers.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Filax, Marco; Gonschorek, Tim; Ortmeier, Frank
Semi-automatic Acquisition of Datasets for Retail Recognition Conference paper
In: Proceedings of the 30th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, 2022.
@inproceedings{Filax2022,
title = {Semi-automatic Acquisition of Datasets for Retail Recognition},
author = {Marco Filax and Tim Gonschorek and Frank Ortmeier},
url = {https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2022/09/wscg22.pdf},
year = {2022},
date = {2022-06-01},
urldate = {2022-06-01},
booktitle = {Proceedings of the 30th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision},
volume = {30},
abstract = {The acquisition of datasets is typically a laborious task. It is challenging, especially if the required annotations in every image in the dataset are vast. It is even more challenging if the inter-class variance, the visual difference between two distinct classes, is low. Retail product recognition constitutes an example of both issues. Products are densely packed on shelves, resulting in many objects within an image. Products share visual similarities, which makes them hard to distinguish.
In this work, we propose Annotron, a tool tackling the acquisition problem in this domain. Exploiting dataset structures, such as being organized in consecutive frames, we detect real-world objects through pre-trained detectors and reproject detections to generate candidate traces over time. Further, we aid labelers by computing potential matches of real-world objects and reference images based on their visual similarity: We cluster consecutive detections based on a large set of reference images using embeddings acquired from pre-trained networks.
Using the proposed tool reduces manual efforts drastically by diminishing the time spent on repetitive, error-prone tasks. We evaluate Annotron in the retail recognition domain. The domain is commonly considered fine-grained, which means that instance-level annotations are costly due to the described problems. We refine the given dataset, surpass the number of previously found stock-keeping units, and label over 446,500 individual bounding boxes.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Sambandham, Venkatesh Thirugnana
Deep Learning based Harmonization and Super-Resolution of Landsat-8 and Sentinel-2 images Thesis
Otto-von-Guericke-Universität Magdeburg, 2022.
@mastersthesis{sambandham2022deep,
title = {Deep Learning based Harmonization and Super-Resolution of Landsat-8 and Sentinel-2 images},
author = {Venkatesh Thirugnana Sambandham},
url = {https://cse.ovgu.de/files/thesis/sambandham2022deep.pdf},
year = {2022},
date = {2022-04-19},
school = {Otto-von-Guericke-Universit\"{a}t Magdeburg},
abstract = {Earth Observation using remotely sensed images from satellite sensors has been a fascinating topic of study in recent years. Landsat-8 by NASA and Sentinel-2 (A\&B) by ESA are two very prominent multi-spectral imaging satellite projects that provide open-source data. Images from these sensors are used in monitoring vegetation changes, urban development, catastrophe management, and many cutting-edge applications. However, these multi-spectral imaging sensors work in the visible to infrared region of the electromagnetic spectrum, which cannot penetrate clouds. Since the majority of the earth's surface is consistently covered with clouds, images from these sensors over a cloud-covered region cannot be used. The combined use of multi-spectral images from multiple image sources is a viable option to increase the temporal availability of cloud-free images. However, such an approach brings many uncertainties into the analysis due to differences in the configurations of the imaging sensors and their spatial resolution. Considerable research has already been done on reducing these differences. This thesis explores the possibility of using a Deep Learning based pipeline that reduces the spectral differences between the two image sources while improving the spatial resolution of the Landsat-8 images.
The dataset for this work is created using images from both sensors that were taken on the same day over the study area. The proposed pipeline has a comprehensive preprocessing step, which also includes the development of a band-pass adjustment function that reduces the spectral differences of the common bands between the two imaging sensors. The preprocessed inputs are then upsampled using a Convolutional Neural Network based super-resolution architecture. To train this architecture, the high-resolution Sentinel-2 images are used as ground truth, and the Landsat-8 images are brought to the resolution of Sentinel-2. The best architecture for this pipeline is a UNet-based super-resolution model that fuses the pan-chromatic band of Landsat-8 with the multi-spectral bands, thereby upsampling the multi-spectral bands.
The proposed pipeline improves the spatial details of the Landsat-8 images, and around 5% improvement in the SSIM metric is observed. A significant drop in the pixel-to-pixel NRMSE between the images was also observed. In addition, significant improvements in the correlation of the derived Normalized Difference Vegetation Index (NDVI) and Normalized Difference Water Index (NDWI) bands are observed. The robustness of the pipeline is demonstrated by performing a use-case scenario of field observation over a period of time. All tools, libraries, and data used in this work are from open sources, and the whole work is easily reproducible.},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
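The two derived indices used for evaluation follow standard definitions, written out below for reference; the band arrays are assumed to be co-registered reflectance values, and the NDWI shown is the McFeeters (green/NIR) variant, which the thesis does not specify:

# Standard spectral indices used in the evaluation.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    # Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    return (nir - red) / (nir + red + 1e-8)

def ndwi(green: np.ndarray, nir: np.ndarray) -> np.ndarray:
    # Normalized Difference Water Index: (Green - NIR) / (Green + NIR).
    return (green - nir) / (green + nir + 1e-8)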
Rao, Rajatha Nagaraja
Active Learning and Transfer Learning for the efficient Labelling and Semantic Segmentation in Aerial Imagery Thesis
Otto-von-Guericke-Universität Magdeburg, 2022.
@mastersthesis{rao2022active,
title = {Active Learning and Transfer Learning for the efficient Labelling and Semantic Segmentation in Aerial Imagery},
author = {Rajatha Nagaraja Rao},
url = {https://cse.ovgu.de/files/thesis/rao2022active.pdf},
year = {2022},
date = {2022-03-15},
school = {Otto-von-Guericke-Universit\"{a}t Magdeburg},
abstract = {Deep learning (DL) models are capable of performing semantic segmentation (SS) in aerial imagery, which helps us detect various semantic features such as buildings, roads, woodlands, water bodies and so on for use in several applications, such as social and economic analysis. Since DL models are data hungry, data collection and ground truth generation are pivotal to training these DL models in a supervised setting. However, the annotations of interest may not be readily available and must be generated manually by annotating aerial images that span hundreds of kilometers, which proves to be impractical on account of the extraordinarily high financial and temporal resources required.
This thesis work contributes towards minimizing the labelling effort by circumventing the manual labelling of the entire dataset through the use of both transfer learning (TL) and active learning (AL) techniques. In the TL approach, an extensive review of available aerial datasets is conducted and DL models are pre-trained with suitable open-source aerial image datasets of varying ground sampling distances (GSD). The last layers of the models are then fine-tuned using a considerably small number of manually annotated samples from the ObViewSly dataset. The AL approach deals with label scarcity by iteratively selecting the most informative samples from an enormous unlabelled data pool, which help the model learn better and grow more confident in its predictions. We have used the entropy-based query strategy to rank and retrieve these samples for labelling, which are then used for iteratively re-training the model. A novel technique called AL-guided TL (TL+AL) for SS in aerial imagery is proposed in this work that combines the effectiveness of both the AL and TL approaches. TL re-uses the learned representations from the source dataset and AL carefully selects important samples to be annotated, such that we ensure both efficiency in labelling and good model performance. Also, the Shuffle-Unet model is proposed as part of this work, which employs phase shift in place of maxpooling and upsampling operations.
AL, TL, and TL+AL were investigated here with the U-Net and the proposed Shuffle-Unet models, which were able to achieve an IoU score of 0.75 by judiciously annotating only 10% of the dataset. Through the entropy heatmaps, it was demonstrated that samples with regions covered by shadow are difficult for the model to learn. Variable GSDs lead to domain shift between the datasets used in TL. This domain shift is also addressed here, wherein the trained model is adapted to detect semantic features in the ObViewSly dataset, which has a lower GSD than the source dataset used for pre-training in TL. Hence, our approach attained good segmentation performance while incurring significantly low labelling costs.},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
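The entropy-based query strategy mentioned in the abstract reduces to ranking unlabelled samples by predictive entropy and picking the top k; the sketch below assumes per-sample class probabilities (for segmentation, per-pixel entropies would first be aggregated per image):

# Select the k most uncertain samples by predictive entropy.
import numpy as np

def entropy_query(probs: np.ndarray, k: int) -> np.ndarray:
    # probs: (n_samples, n_classes) softmax outputs.
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(entropy)[-k:][::-1]  # indices, most uncertain first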
2021
Prabhu, Kartik
Synth2Real: 3D-Furniture Reconstruction in Ersatz Environment Thesis
Otto-von-Guericke-Universität Magdeburg, 2021.
@mastersthesis{prabhu2021synth,
title = {Synth2Real : 3D-Furniture Reconstruction in Ersatz Environment},
author = {Kartik Prabhu},
url = {https://cse.ovgu.de/files/thesis/kprabhu2021synth.pdf},
year = {2021},
date = {2021-10-18},
school = {Otto-von-Guericke-Universit\"{a}t Magdeburg},
abstract = {The field of Deep Learning is growing exponentially and has tremendous applications in many domains. The key to training deep learning models is a large dataset. Real-world datasets are cost-intensive, time-consuming, and difficult to obtain, and are very limited. Synthetic data are easier to create and automate. This thesis aims to check whether images created using game engines (Unity) are photorealistic and valuable in 3D reconstruction tasks using Deep Neural Networks. To achieve this, we contribute the following: (a) a Unity-based application to create synthetic furniture images in an ersatz indoor environment; (b) the use of the new synthetic dataset as a benchmark for the 3D reconstruction task; (c) a comprehensive study on domain adaptation and domain randomization.
We conducted a user survey with the proposed synthetic dataset, a real dataset, and seven other proclaimed photorealistic synthetic datasets to check their photorealism. We see that the proposed dataset failed to stand out as a photorealistic dataset on its own but fared well against other automated datasets. We compare the performance of baseline models with transfer learning and mixed training to check the influence of the synthetic dataset. We examine domain randomization by creating a synthetic chair dataset with different parameters such as light, textures, camera position, etc. The domain gap between real and synthetic datasets is visualized using t-Distributed Stochastic Neighbor Embedding (t-SNE) and quantitatively measured with the Fr\'{e}chet Inception Distance (FID) score. Finally, these experimental results demonstrate that the proposed dataset enhances the performance of Deep Neural Network models for the 3D reconstruction task.},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
Nielebock, Sebastian; Blockhaus, Paul; Krüger, Jacob; Ortmeier, Frank
An Experimental Analysis of Graph-Distance Algorithms for Comparing API Usages Conference paper
In: IEEE (Ed.): Proceedings of the 21st IEEE International Working Conference on Source Code Analysis and Manipulation (SCAM) - RENE Track, 2021.
@inproceedings{Nielebock2021APIDistance,
title = {An Experimental Analysis of Graph-Distance Algorithms for Comparing API Usages},
author = {Sebastian Nielebock and Paul Blockhaus and Jacob Kr\"{u}ger and Frank Ortmeier},
editor = {IEEE},
url = {https://arxiv.org/abs/2108.12511
https://doi.org/10.5281/zenodo.5255402},
doi = {10.1109/SCAM52516.2021.00034},
year = {2021},
date = {2021-09-28},
booktitle = {Proceedings of the 21st IEEE International Working Conference on Source Code Analysis and Manipulation (SCAM) - RENE Track},
abstract = {Modern software development heavily relies on the reuse of functionalities through Application Programming Interfaces (APIs). However, client developers can have issues identifying the correct usage of a certain API, causing misuses accompanied by software crashes or usability bugs. Therefore, researchers have aimed at identifying API misuses automatically by comparing client code usages to correct API usages. Some techniques rely on certain API-specific graph-based data structures to improve the abstract representation of API usages. Such techniques need to compare graphs, for instance, by computing distance metrics based on the minimal graph edit distance or the largest common subgraphs, whose computations are known to be NP-hard problems. Fortunately, there exist many abstractions for simplifying graph distance computation. However, their applicability for comparing graph representations of API usages has not been analyzed. In this paper, we provide a comparison of different distance algorithms of API-usage graphs regarding correctness and runtime. Particularly, correctness relates to the algorithms' ability to identify similar correct API usages, but also to discriminate similar correct and false usages as well as non-similar usages. For this purpose, we systematically identified a set of eight graph-based distance algorithms and applied them on two datasets of real-world API usages and misuses. Interestingly, our results suggest that existing distance algorithms are not reliable for comparing API usage graphs. To improve on this situation, we identified and discuss the algorithms' issues, based on which we formulate hypotheses to initiate research on overcoming them.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
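One of the distance notions compared in the paper is the graph edit distance, whose exact computation is NP-hard and therefore only feasible for very small graphs; networkx ships an exact implementation. The toy "API usage graphs" below are hypothetical:

# Compare a correct usage graph against one with a missing call.
import networkx as nx

def usage_graph(edges):
    g = nx.DiGraph()
    for u, v in edges:
        g.add_node(u, call=u)  # store the API call as a node attribute
        g.add_node(v, call=v)
        g.add_edge(u, v)
    return g

correct = usage_graph([("open", "read"), ("read", "close")])
misuse = usage_graph([("open", "read")])  # 'close' is missing

# node_match ensures nodes only match if they represent the same API call.
dist = nx.graph_edit_distance(correct, misuse, node_match=lambda a, b: a["call"] == b["call"])
print(dist)  # 2.0: one node deletion plus one edge deletion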
Heumüller, Robert; Nielebock, Sebastian; Ortmeier, Frank
Exploit Those Code Reviews! Bigger Data for Deeper Learning Conference paper
In: ACM (Ed.): Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE), pp. 1505-1509, ACM, 2021.
@inproceedings{Heumueller2021etcr_1,
title = {Exploit Those Code Reviews! Bigger Data for Deeper Learning},
author = {Robert Heum\"{u}ller and Sebastian Nielebock and Frank Ortmeier},
editor = {ACM},
url = {https://doi.org/10.5281/ZENODO.5079076
https://doi.org/10.5281/ZENODO.4739592},
doi = {10.1145/3468264.3473110},
year = {2021},
date = {2021-08-20},
booktitle = {Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE)},
pages = {1505-1509},
publisher = {ACM},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Nielebock, Sebastian; Heumüller, Robert; Schott, Kevin Michael; Ortmeier, Frank
Guided Pattern Mining for API Misuse Detection by Change-Based Code Analysis Article
In: Springer Automated Software Engineering (AUSE), vol. 28, no. 15, 2021.
@article{NielebockAPIFilterSearch2021,
title = {Guided Pattern Mining for API Misuse Detection by Change-Based Code Analysis},
author = {Sebastian Nielebock and Robert Heum\"{u}ller and Kevin Michael Schott and Frank Ortmeier},
editor = {Springer},
url = {https://arxiv.org/abs/2008.00277},
doi = {10.1007/s10515-021-00294-x},
year = {2021},
date = {2021-08-17},
journal = {Springer Automated Software Engineering (AUSE)},
volume = {28},
number = {15},
abstract = {Lack of experience, inadequate documentation, and sub-optimal API design frequently cause developers to make mistakes when re-using third-party implementations. Such API misuses can result in unintended behavior, performance losses, or software crashes. Therefore, current research aims to automatically detect such misuses by comparing the way a developer used an API to previously inferred patterns of the correct API usage. While research has made significant progress, these techniques have not yet been adopted in practice. In part, this is due to the lack of a process capable of seamlessly integrating with software development processes. Particularly, existing approaches do not consider how to collect relevant source code samples from which to infer patterns. In fact, an inadequate collection can cause API usage pattern miners to infer irrelevant patterns which leads to false alarms instead of finding true API misuses. In this paper, we target this problem (a) by providing a method that increases the likelihood of finding relevant and true-positive patterns concerning a given set of code changes and agnostic to a concrete static, intra-procedural mining technique and (b) by introducing a concept for just-in-time API misuse detection which analyzes changes at the time of commit. Particularly, we introduce different, lightweight code search and filtering strategies and evaluate them on two real-world API misuse datasets to determine their usefulness in finding relevant intra-procedural API usage patterns. Our main results are (1) commit-based search with subsequent filtering effectively decreases the amount of code to be analyzed, (2) in particular method-level filtering is superior to file-level filtering, (3) project-internal and project-external code search find solutions for different types of misuses and thus are complementary, (4) incorporating prior knowledge of the misused API into the search has a negligible effect. },
keywords = {},
pubstate = {published},
tppubtype = {article}
}
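The commit-based search with method-level filtering described in the abstract above can be sketched in a few lines. The following Python fragment is a hypothetical illustration only (the helper names api_calls and filter_corpus are invented here, and real tooling would parse Java rather than use a regex): it keeps only those corpus methods that share at least one API call with the changed method, so a pattern miner sees fewer but more relevant samples.

import re

# Hypothetical minimal sketch of method-level filtering for API usage
# pattern mining: keep only corpus methods that share API calls with
# the changed method, so the miner sees fewer but more relevant samples.

CALL_RE = re.compile(r"\b([A-Za-z_][A-Za-z0-9_]*)\s*\(")

def api_calls(method_source: str) -> set:
    """Very rough approximation of the API calls in a method body."""
    return set(CALL_RE.findall(method_source))

def filter_corpus(changed_method: str, corpus: dict) -> dict:
    """Return only methods that share at least one call with the change."""
    relevant = api_calls(changed_method)
    return {name: src for name, src in corpus.items()
            if relevant & api_calls(src)}

if __name__ == "__main__":
    changed = "void f() { cipher.init(key); cipher.doFinal(data); }"
    corpus = {
        "a": "void a() { cipher.init(k); cipher.update(d); }",
        "b": "void b() { list.add(x); }",
    }
    print(sorted(filter_corpus(changed, corpus)))  # -> ['a']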
Filax, Marco; Ortmeier, Frank
On the Influence of Viewpoint Change for Metric Learning Konferenzbeitrag
In: Proceedings of the 17th International Conference on Machine Vision Applications, 2021.
@inproceedings{Filax21b,
title = {On the Influence of Viewpoint Change for Metric Learning},
author = {Marco Filax and Frank Ortmeier},
url = {https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2021/07/Filax21b.pdf},
year = {2021},
date = {2021-07-25},
booktitle = {Proceedings of the 17th International Conference on Machine Vision Applications},
abstract = {Physical objects imaged through a camera change their visual representation based on various factors, e.g., illumination, occlusion, or viewpoint changes. Thus, it is the inevitable goal in computer vision systems to use mathematical representations of these objects robust to various changes and yet sufficient to determine even minor differences to distinguish objects. However, finding these powerful representations is challenging if the amount of data is limited, such as in few-shot learning problems. In this work, we investigate the influence of viewpoint changes in modern recognition systems in the context of metric learning problems, in which fine-grained differences differentiate objects based on their learned numeric representation. Our results demonstrate that restricting the degrees of freedom, especially by fixing the virtual viewpoint using synthetic frontal views, elevates the overall performance. We expect our observation of increased performance using rectified patches to be persistent and reproducible in other scenarios.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
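As a loose illustration of the "synthetic frontal views" mentioned in the abstract above, the following Python sketch (using OpenCV; the corner coordinates are invented) rectifies an obliquely viewed, planar product patch to a frontal view via a homography before it would be embedded:

import cv2
import numpy as np

# Hedged sketch of viewpoint rectification: warp an obliquely viewed,
# planar product patch to a synthetic frontal view via a homography.

img = np.zeros((480, 640, 3), dtype=np.uint8)         # placeholder image
src = np.float32([[120, 80], [430, 120], [450, 400], [100, 360]])  # oblique quad
w, h = 256, 256
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])    # frontal target

H = cv2.getPerspectiveTransform(src, dst)
frontal = cv2.warpPerspective(img, H, (w, h))
print(frontal.shape)                                   # (256, 256, 3)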
Kirchheim, Konstantin; Gonschorek, Tim; Ortmeier, Frank
Addressing Randomness in Evaluation Protocols for Out-of-Distribution Detection Workshop
The 2nd Workshop on Artificial Intelligence for Anomalies and Novelties in conjunction with IJCAI, 2021.
@workshop{kirchheim21addressing,
title = {Addressing Randomness in Evaluation Protocols for Out-of-Distribution Detection},
author = {Konstantin Kirchheim and Tim Gonschorek and Frank Ortmeier },
url = {https://arxiv.org/abs/2203.00382},
year = {2021},
date = {2021-07-14},
urldate = {2021-07-14},
booktitle = {The 2nd Workshop on Artificial Intelligence for Anomalies and Novelties in conjunction with IJCAI},
keywords = {},
pubstate = {published},
tppubtype = {workshop}
}
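The workshop paper's concern, that a single evaluation run hides seed-induced variance, can be illustrated with a minimal Python sketch (synthetic detector scores; not the paper's protocol) that repeats an OOD evaluation over several seeds and reports the AUROC as mean and standard deviation:

import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical sketch: repeat an OOD evaluation over several random seeds
# and report mean and standard deviation of the AUROC instead of a single
# number, in the spirit of seed-aware evaluation protocols.

def evaluate(seed: int) -> float:
    rng = np.random.default_rng(seed)
    # Placeholder scores: in-distribution (label 0) vs. OOD (label 1).
    scores_in = rng.normal(0.0, 1.0, 1000)
    scores_out = rng.normal(1.0, 1.0, 1000)
    y = np.concatenate([np.zeros(1000), np.ones(1000)])
    s = np.concatenate([scores_in, scores_out])
    return roc_auc_score(y, s)

aurocs = [evaluate(seed) for seed in range(10)]
print(f"AUROC: {np.mean(aurocs):.3f} +/- {np.std(aurocs):.3f}")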
Nielebock, Sebastian; Blockhaus, Paul; Krüger, Jacob; Ortmeier, Frank
AndroidCompass: A Dataset of Android Compatibility Checks in Code Repositories Konferenzbeitrag
In: IEEE, (Hrsg.): Proceedings of the 2021 IEEE/ACM 18th International Conference on Mining Software Repositories (MSR) - Data Showcase Track, 2021, ISBN: 978-1-7281-8710-5.
@inproceedings{NielebockAndroidCompass2021,
title = {AndroidCompass: A Dataset of Android Compatibility Checks in Code Repositories},
author = {Sebastian Nielebock and Paul Blockhaus and Jacob Kr\"{u}ger and Frank Ortmeier},
editor = {IEEE},
url = {https://arxiv.org/abs/2103.09620
https://doi.org/10.5281/zenodo.4428340
https://www.youtube.com/watch?v=M3ruWediurs},
doi = {10.1109/MSR52588.2021.00069},
isbn = {978-1-7281-8710-5},
year = {2021},
date = {2021-05-17},
booktitle = {Proceedings of the 2021 IEEE/ACM 18th International Conference on Mining Software Repositories (MSR) - Data Showcase Track},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
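For intuition, a compatibility check of the kind AndroidCompass collects looks like the Java condition below; the following Python sketch (an invented, simplified scan, not the dataset's actual extraction pipeline) finds such checks with a regular expression:

import re

# Illustrative sketch (not the paper's actual extraction pipeline):
# find Android compatibility checks of the form
#   if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) { ... }
# in Java source code.

CHECK_RE = re.compile(
    r"Build\.VERSION\.SDK_INT\s*(==|!=|<=|>=|<|>)\s*"
    r"(Build\.VERSION_CODES\.\w+|\d+)")

java = """
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
    channel = new NotificationChannel(id, name, importance);
} else if (Build.VERSION.SDK_INT < 21) {
    // legacy fallback
}
"""

for op, version in CHECK_RE.findall(java):
    print(op, version)   # '>= Build.VERSION_CODES.O' and '< 21'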
Heumüller, Robert
Learning to Boost the Efficiency of Modern Code Review Artikel
In: 2021 IEEE/ACM 43rd International Conference on Software Engineering: Companion Proceedings (ICSE-Companion), 2021.
@article{heum2021,
title = {Learning to Boost the Efficiency of Modern Code Review},
author = {Robert Heum\"{u}ller},
editor = {IEEE/ACM},
url = {https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2021/04/paper-camera-ready.pdf},
doi = {10.1109/ICSE-Companion52605.2021.00126},
year = {2021},
date = {2021-05-07},
journal = {2021 IEEE/ACM 43rd International Conference on Software Engineering: Companion Proceedings (ICSE-Companion)},
abstract = {Modern Code Review (MCR) is a standard in all kinds of organizations that develop software.
MCR pays for itself through perceived and proven benefits in quality assurance and knowledge transfer.
However, the time invested in MCR is generally substantial.
The goal of this thesis is to boost the efficiency of MCR by developing AI techniques that can partially replace or assist human reviewers.
The envisioned techniques differ from existing MCR-related AI models in that we interpret these challenges as graph-learning problems.
This should allow us to use state-of-the-art algorithms from that domain to learn coding and reviewing standards directly from existing projects.
The required training data will be mined from online repositories and the experiments will be designed to use standard, quantitative evaluation metrics.
This research proposal defines the motivation, research questions, and solution components for the thesis, and gives an overview of the relevant related work.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Filax, Marco; Gonschorek, Tim; Ortmeier, Frank
Grocery Recognition in the Wild: A New Mining Strategy for Metric Learning Artikel
In: Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, 2021.
@article{Filax21,
title = {Grocery Recognition in the Wild: A New Mining Strategy for Metric Learning},
author = {Marco Filax and Tim Gonschorek and Frank Ortmeier},
url = {https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2021/02/paper.pdf},
doi = {10.5220/0010322304980505},
year = {2021},
date = {2021-02-18},
journal = {Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications},
abstract = {Recognizing grocery products at scale is an open issue for computer-vision systems due to their subtle visual differences. Typically, the problem is addressed as a classification problem, e.g., by learning a CNN, for which all classes that are to be distinguished need to be known at training time. We instead observe that the products within stores change over time: new products are put on shelves, and the appearance of existing products changes. In this work, we demonstrate the use of deep metric learning for grocery recognition, whereby the classes encountered at inference time are unknown during training. We also propose a new triplet mining strategy that uses all known classes during training while preserving the ability to perform cross-folded validation. We demonstrate the applicability of the proposed mining strategy using different, publicly available real-world grocery datasets. The proposed approach preserves the ability to distinguish previously unseen groceries while increasing the precision by up to 5 percent.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
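The metric-learning core of the paper above can be sketched compactly. The following Python fragment (toy embeddings instead of a trained network; the paper's cross-fold mining strategy is not reproduced) shows a standard triplet loss over L2-normalized embeddings:

import torch
import torch.nn.functional as F

# Minimal sketch of deep metric learning with a triplet loss, assuming a
# generic embedding network; the paper's specific mining strategy is not
# reproduced here.

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge loss on distances between L2-normalized embeddings."""
    d_ap = F.pairwise_distance(anchor, positive)
    d_an = F.pairwise_distance(anchor, negative)
    return F.relu(d_ap - d_an + margin).mean()

# Toy embeddings standing in for network outputs on product images.
emb = lambda x: F.normalize(x, dim=1)
a = emb(torch.randn(8, 128))   # anchor products
p = emb(torch.randn(8, 128))   # same product, different view
n = emb(torch.randn(8, 128))   # different product
print(triplet_loss(a, p, n))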
2020
Jäger, Georg; Kirchheim, Konstantin; Schrödel, Frank; Zug, Sebastian
Multi-Dimensional Failure Modeling for Shared Data in Cooperative Systems Artikel
In: 21st IFAC World Congress, Bd. 53, Nr. 2, S. 15461-15468, 2020.
@article{JAGER202015461,
title = {Multi-Dimensional Failure Modeling for Shared Data in Cooperative Systems},
author = {Georg J\"{a}ger and Konstantin Kirchheim and Frank Schr\"{o}del and Sebastian Zug},
editor = {Elsevier},
url = {https://www.sciencedirect.com/science/article/pii/S2405896320330408},
doi = {10.1016/j.ifacol.2020.12.2369},
year = {2020},
date = {2020-12-31},
journal = {21st IFAC World Congress},
volume = {53},
number = {2},
pages = {15461-15468},
abstract = {Autonomous systems will share data to enrich their environmental model and provide cooperative functionality. However, as shared data might be imprecise or inaccurate, its failure characteristics have to be analyzed by the receiving system before using the data. A corresponding failure model for describing failure characteristics was proposed by J\"{a}ger et al. (2018), but is limited to one-dimensional sensory data. In this work, we extend the failure model to support multi-dimensional feature data as well. As an example, we evaluate the approach by modeling the failure characteristics of a lane detection system of a simulated car. By comparing it to state-of-the-art failure modeling techniques, we can show that the model accurately predicts failure amplitudes of previously unseen tracks even when trained on limited data.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
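The prediction task in the abstract above, learning failure amplitudes from multi-dimensional feature data and testing on unseen tracks, can be mimicked with a generic regressor. The Python sketch below uses synthetic numbers and a random forest as a stand-in; it is not the failure model of Jäger et al.:

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hedged sketch: learn to predict the failure amplitude of shared feature
# data, here a lane-detection offset error, from multi-dimensional context
# features, and evaluate on data from an unseen "track".

rng = np.random.default_rng(0)
X_train = rng.uniform(size=(500, 3))          # e.g., curvature, speed, light
y_train = 0.05 + 0.3 * X_train[:, 0] + 0.01 * rng.normal(size=500)
X_test = rng.uniform(size=(100, 3))           # "unseen track"
y_test = 0.05 + 0.3 * X_test[:, 0] + 0.01 * rng.normal(size=100)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("MAE:", np.mean(np.abs(pred - y_test)))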
Kühne, Maximilian
Automated Collision-free Tasksequencing for Industrial Robots Abschlussarbeit
Otto-von-Guericke-University Magdeburg, 2020.
@mastersthesis{Kuehne2020,
title = {Automated Collision-free Tasksequencing for Industrial Robots},
author = {Maximilian K\"{u}hne},
url = {https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2021/02/MA_MaximilianKuehne.pdf},
year = {2020},
date = {2020-11-18},
school = {Otto-von-Guericke-University Magdeburg},
abstract = {Industrial robot manipulators usually offer, through the number of their joints, certain degrees of freedom that enable them to reach any point within their workspace, even points behind obstacles. If a workspace is free of such obstacles, however, there may be many different poses (configurations) in which the manipulator can reach a given point. Considering the task the robot is to perform on a workpiece usually adds a further degree of freedom. This results in a large set of possible solutions, so automated optimization of the task planning can require an enormous amount of computation time. If the optimization must additionally ensure that the robot does not collide with itself, the workpiece, or other objects, the required effort increases further. Therefore, in most cases the search space is simplified and the collision check is only performed at the end.
In this master's thesis, a sequence optimization is to be carried out purely in the configuration space of the robot. All paths are to be determined offline and collision-free with the help of sampling-based path planners such as RRT. The resulting problems regarding computation times are to be mitigated by parallelization. This approach raises the
following questions:
(i) How can parallelization be integrated?
(ii) What influence does the shape of the workpiece have on the optimization?
(iii) Can the machining of a workpiece be planned better automatically than by a human?},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
Klockmann, Maximilian
MWMA-SLAM: Manhattan-World-Multi-Agent-SLAM Abschlussarbeit
Otto-von-Guericke-University Magdeburg, 2020.
@mastersthesis{Klockmann2020,
title = {MWMA-SLAM: Manhattan-World-Multi-Agent-SLAM},
author = {Maximilian Klockmann},
url = {https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2020/10/ma_klockmann-1.pdf},
year = {2020},
date = {2020-10-20},
school = {Otto-von-Guericke-University Magdeburg},
abstract = {Rescue forces regularly have to find their way in unknown environments in order to save victims, and maps are rarely available. The operations command has to coordinate the individual teams via radio and can only estimate their positions. Multi-agent SLAM algorithms can support the rescue teams, but they must be data-efficient to work under the network conditions on site; often, the network itself has to be set up first.
A variety of multi-agent SLAM algorithms exists for on-site mapping. These typically operate on RGBD images or point clouds, and the computed rotations reach a high accuracy with less than 1.9 degrees of deviation. Some SLAM algorithms use planes as input [35, 8]; however, these algorithms only work locally and were not developed as multi-agent SLAM.
Due to their simple mathematical representation, planes have a high potential for saving data: in coordinate form, only a normal and the distance to the origin, i.e., four decimal numbers, need to be transmitted. This thesis investigates whether planes are also suitable for a multi-agent SLAM. The plane-based MWMA-SLAM was designed, in which the local maps are transmitted as planes and matched against each other on a server. Besides the suitability of planes, the amount of data required for transmission is also examined.
The algorithm was tested with both synthetic and real data. The tests showed that the use of planes works: in over 97% of the cases, a good rotation with less than 0.2 degrees of deviation could be computed, and a good translation with less than 1 mm of deviation could be computed in 90% of the cases on average. The MAE of the rotation is 0.6 degrees.
Overall, the concept of a plane-based multi-agent SLAM has been shown to work. The results show a rotation accuracy similar to other multi-agent SLAM algorithms; for the translation, however, the computation still needs to be adjusted to improve the results. With a size of at most 576 bytes for the transmission of the individual test environments, ...},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
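Two quantitative claims of the thesis are easy to verify in isolation: a plane in Hesse normal form costs only four numbers (16 bytes as float32), and the relative rotation between agents can be recovered from matched plane normals. The following Python sketch (simplified; the thesis's matching pipeline is not reproduced) demonstrates both, using the Kabsch/SVD alignment of normals:

import numpy as np

# Hedged sketch: (1) a plane in Hesse normal form needs only four numbers
# (normal + distance), i.e. 16 bytes as float32, and (2) the relative
# rotation between two agents can be estimated from matched plane normals
# via SVD (Kabsch).

def serialize_plane(normal, d):
    return np.asarray([*normal, d], dtype=np.float32).tobytes()  # 16 bytes

def rotation_from_normals(normals_a, normals_b):
    """Rotation R with normals_b ~ R @ normals_a (rows are unit normals)."""
    H = normals_a.T @ normals_b
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T

# Three matched planes seen by two agents, differing by a 10 deg rotation.
A = np.eye(3)
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
B = A @ R_true.T
print(len(serialize_plane(A[0], 1.5)))                    # 16
print(np.allclose(rotation_from_normals(A, B), R_true))   # True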
Heumüller, Robert; Nielebock, Sebastian; Krüger, Jacob; Ortmeier, Frank
Publish or Perish, but do not Forget your Software Artifacts Artikel
In: Empirical Software Engineering, 2020.
@article{sap2020,
title = {Publish or Perish, but do not Forget your Software Artifacts},
author = {Robert Heum\"{u}ller and Sebastian Nielebock and Jacob Kr\"{u}ger and Frank Ortmeier},
editor = {Springer},
url = {https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2020/07/2020-emse-paper-publish-or-perish.pdf},
doi = {10.1007/s10664-020-09851-6},
year = {2020},
date = {2020-10-08},
journal = {Empirical Software Engineering},
abstract = {Open-science initiatives have gained substantial momentum in computer science, and particularly in software-engineering research.
A critical aspect of open-science is the public availability of artifacts (e.g., tools), which facilitate the replication, reproduction, extension, and verification of results.
While we experienced that many artifacts are not publicly available, we are not aware of empirical evidence supporting this subjective claim.
In this article, we report an empirical study on software artifact papers (SAPs) published at the International Conference on Software Engineering (ICSE), in which we investigated whether and how researchers have published their software artifacts, and whether this had scientific impact.
Our dataset comprises 789 ICSE research track papers, including 604 SAPs (76.6%), from the years 2007 to 2017.
While showing a positive trend towards artifact availability, our results are still sobering.
Even in 2017, only 58.5% of the papers that stated to have developed a software artifact made that artifact publicly available.
As we did find a small, but statistically significant, positive correlation between linking to artifacts in a paper and its scientific impact in terms of citations, we hope to motivate the research community to share more artifacts.
With our insights, we aim to support the advancement of open science by discussing our results in the context of existing initiatives and guidelines.
In particular, our findings advocate the need for clearly communicating artifacts and the use of non-commercial, persistent archives to provide replication packages.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
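The correlation reported in the abstract above is of the point-biserial kind: a binary "paper links to its artifact" indicator against citation counts. The Python sketch below recomputes such a correlation on synthetic numbers (not the study's data):

import numpy as np
from scipy.stats import pointbiserialr

# Illustrative re-computation, on synthetic numbers, of the kind of
# correlation the article reports: a binary "links to artifact" indicator
# against per-paper citation counts.

rng = np.random.default_rng(1)
links = rng.integers(0, 2, 200)                 # 0/1 artifact link
citations = rng.poisson(10 + 4 * links)         # slightly higher if linked
r, p = pointbiserialr(links, citations)
print(f"point-biserial r = {r:.2f}, p = {p:.4f}")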
Schillreff, Nadia; Ortmeier, Frank
Reduced Error Model for Learning-based Calibration of Serial Manipulators Konferenzbeitrag
In: SciTePress, (Hrsg.): Proceedings of the 17th International Conference on Informatics in Control, Automation and Robotics - Volume 1: ICINCO, 2020, ISBN: 978-989-758-442-8.
@inproceedings{Schillreff2020,
title = {Reduced Error Model for Learning-based Calibration of Serial Manipulators},
author = {Nadia Schillreff and Frank Ortmeier},
editor = {SciTePress},
doi = {10.5220/0009835804780483},
isbn = {978-989-758-442-8},
year = {2020},
date = {2020-07-07},
booktitle = {Proceedings of the 17th International Conference on Informatics in Control, Automation and Robotics - Volume 1: ICINCO},
abstract = {In this work, a reduced error model for learning-based kinematic calibration of a serial manipulator is compared with a complete error model. To ensure high accuracy, this approach combines geometrical influences (structural inaccuracies) and non-geometrical influences, such as configuration-dependent elastic deformations, without explicitly defining all underlying physical processes that contribute to positioning inaccuracies, by using a polynomial regression method. The proposed approach is evaluated on a dataset obtained using a 7-DOF manipulator KUKA LBR iiwa 7. The experimental results show a reduction of the mean Cartesian error down to 0.16 mm even for the reduced error model.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
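The regression idea in the abstract above can be sketched with standard tooling: learn a polynomial mapping from joint configurations to the Cartesian positioning error and subtract the predicted error from the nominal pose. The following Python fragment uses synthetic data and scikit-learn as a stand-in for the paper's method:

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Hedged sketch: learn a polynomial mapping from joint configurations to
# the Cartesian positioning error, then subtract the predicted error from
# the nominal forward-kinematics position. All numbers are invented.

rng = np.random.default_rng(0)
q = rng.uniform(-np.pi, np.pi, size=(300, 7))      # 7-DOF joint angles
err = 0.0005 * np.sin(q[:, :3]).sum(axis=1, keepdims=True) \
      * np.ones((1, 3))                            # toy 3D error [m]

model = make_pipeline(PolynomialFeatures(degree=3), Ridge(alpha=1e-6))
model.fit(q, err)

q_new = rng.uniform(-np.pi, np.pi, size=(1, 7))
predicted_error = model.predict(q_new)             # subtract from nominal pose
print(predicted_error)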
Matschek, Janine; Gonschorek, Tim; Hanses, Magnus; Elkmann, Norbert; Ortmeier, Frank; Findeisen, Rolf
Learning References with Gaussian Processes in Model Predictive Control Applied to Robot Assisted Surgery Konferenzbeitrag Geplante Veröffentlichung
In: IFAC, (Hrsg.): Geplante Veröffentlichung.
@inproceedings{matschek2020,
title = {Learning References with Gaussian Processes in Model Predictive Control Applied to Robot Assisted Surgery},
author = {Janine Matschek and Tim Gonschorek and Magnus Hanses and Norbert Elkmann and Frank Ortmeier and Rolf Findeisen},
editor = {IFAC},
year = {2020},
date = {2020-05-13},
keywords = {},
pubstate = {forthcoming},
tppubtype = {inproceedings}
}
Nielebock, Sebastian; Heumüller, Robert; Krüger, Jacob; Ortmeier, Frank
Cooperative API Misuse Detection Using Correction Rules Konferenzbeitrag
In: ACM, (Hrsg.): Proceedings of the 42nd IEEE/ACM International Conference on Software Engineering - New Ideas and Emerging Results Track, ICSE-NIER, ACM, 2020.
@inproceedings{Nielebock2020,
title = {Cooperative API Misuse Detection Using Correction Rules},
author = {Sebastian Nielebock and Robert Heum\"{u}ller and Jacob Kr\"{u}ger and Frank Ortmeier},
editor = {ACM},
url = {https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2020/02/cooperative-api-misuse-detection-1.pdf
https://bitbucket.org/SNielebock/icse-2020-nier-cooperative-api-misuse/src/master/},
doi = {10.1145/3377816.3381735},
year = {2020},
date = {2020-05-01},
booktitle = {Proceedings of the 42nd IEEE/ACM International Conference on Software Engineering - New Ideas and Emerging Results Track, ICSE-NIER},
publisher = {ACM},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Nielebock, Sebastian; Heumüller, Robert; Krüger, Jacob; Ortmeier, Frank
Using API-Embedding for API-Misuse Repair Konferenzbeitrag
In: ACM, (Hrsg.): Proceedings of the 1st International Workshop on Automated Program Repair (APR 2020) in conjunction with 42nd International Conference on Software Engineering (ICSE 2020), Seoul, South Korea, 2020.
@inproceedings{Nielebock2020A,
title = {Using API-Embedding for API-Misuse Repair},
author = {Sebastian Nielebock and Robert Heum\"{u}ller and Jacob Kr\"{u}ger and Frank Ortmeier},
editor = {ACM},
url = {https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2020/04/api-embeddings-for-repair-Nielebock-et-al-APR2020.pdf},
doi = {10.1145/3387940.3392171},
year = {2020},
date = {2020-05-01},
booktitle = {Proceedings of the 1st International Workshop on Automated Program Repair (APR 2020) in conjunction with 42nd International Conference on Software Engineering (ICSE 2020), Seoul, South Korea},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Krüger, Jacob; Nielebock, Sebastian; Heumüller, Robert
How Can I Contribute? A Qualitative Analysis of Community Websites of 25 Unix-Like Distributions Konferenzbeitrag
In: ACM, (Hrsg.): Proceedings of the 24th International Conference on Evaluation and Assessment in Software Engineering, (EASE) - Short Papers Track, S. 324–329, Trondheim, Norway, 2020, ISBN: 9781450377317.
@inproceedings{Krueger2020,
title = {How Can I Contribute? A Qualitative Analysis of Community Websites of 25 Unix-Like Distributions},
author = {Jacob Kr\"{u}ger and Sebastian Nielebock and Robert Heum\"{u}ller},
editor = {ACM},
url = {https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2020/02/docsAnalysis.pdf
https://doi.org/10.5281/zenodo.3665429},
doi = {10.1145/3383219.3383256},
isbn = {9781450377317},
year = {2020},
date = {2020-04-17},
booktitle = {Proceedings of the 24th International Conference on Evaluation and Assessment in Software Engineering, (EASE) - Short Papers Track},
journal = {Proceedings of the 24th International Conference on Evaluation and Assessment in Software Engineering, (EASE)},
pages = {324\textendash329},
address = {Trondheim, Norway},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Kirchheim, Konstantin
Self-Assessment of Visual Recognition Systems based on Attribution Abschlussarbeit
Otto-von-Guericke-University Magdeburg, 2020.
@mastersthesis{Kirchheim2019,
title = {Self-Assessment of Visual Recognition Systems based on Attribution},
author = {Konstantin Kirchheim},
url = {https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2020/01/MA_2019_KonstantinKirchheim.pdf},
year = {2020},
date = {2020-03-01},
urldate = {2019-12-09},
school = {Otto-von-Guericke-University Magdeburg},
abstract = {Convolutional Neural Networks achieve state-of-the-art results in various visual recognition tasks like object classification and object detection. While CNNs perform surprisingly well, it is difficult to retrace why they arrive at a certain prediction. Additionally, they have been shown to be prone to certain errors. As CNNs are increasingly deployed in physical systems, for example in self-driving vehicles, undetected errors could result in catastrophic consequences. Approaches to prevent this include the usage of attribution-based explanation methods to facilitate an understanding of the system's decisions in hindsight, as well as the detection of recognition errors at runtime, called self-assessment. Some state-of-the-art self-assessment approaches aim to detect anomalies in the activation patterns of neurons in a CNN.
This work explores the usage of attribution based explanations for self-assessment of CNNs. We build multiple self-assessment models and evaluate their performance in various settings. In our experiments, we find that, while self-assessment based on attribution does not outperform self-assessment based on neural activity on its own, it always surpasses random guessing. Furthermore, we find that self-assessment models using neural activation patterns as well as neural attribution can in some cases outperform models which do not consider attribution patterns. Thus, we conclude that it might be possible to improve self-assessment models by including the explanation of the model into the assessment process.},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
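The attribution features at the heart of the thesis can be produced with a few lines of PyTorch. The sketch below (an untrained toy CNN; gradient-times-input as one common attribution method) computes an attribution map and a per-channel statistic that a secondary self-assessment model could consume:

import torch
import torch.nn as nn

# Hedged sketch of attribution-based self-assessment: compute a simple
# gradient-x-input attribution map for a CNN prediction; such maps (or
# statistics of them) could then be fed to a secondary model that tries
# to predict whether the primary prediction is wrong.

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                      nn.Linear(8, 10))
model.eval()

x = torch.randn(1, 3, 32, 32, requires_grad=True)
logits = model(x)
logits[0, logits.argmax()].backward()
attribution = (x.grad * x).detach()            # gradient x input
features = attribution.abs().mean(dim=(2, 3))  # e.g., per-channel statistic
print(features.shape)                          # torch.Size([1, 3])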
Fritzsche, Holger; Ataide, Elmer Jeto Gomes; Bi, Afshan; Kalva, Rohit; Tripathi, Sandeep; Boese, Axel; Friebe, Michael; Gonschorek, Tim
Innovative Hospital Management: Tracking of Radiological Protection Equipment Artikel
In: International Journal of Biomedical and Clinical Engineering (IJBCE), Bd. 9, Nr. 1, S. 33–47, 2020.
@article{fritzsche2020innovative,
title = {Innovative Hospital Management: Tracking of Radiological Protection Equipment},
author = {Holger Fritzsche and Elmer Jeto Gomes Ataide and Afshan Bi and Rohit Kalva and Sandeep Tripathi and Axel Boese and Michael Friebe and Tim Gonschorek},
url = {https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2020/04/Innovative-Hospital-Management_-Tracking-of-Radiological-Protection-Equipment.pdf},
doi = {10.4018/IJBCE.2020010103},
year = {2020},
date = {2020-01-01},
journal = {International Journal of Biomedical and Clinical Engineering (IJBCE)},
volume = {9},
number = {1},
pages = {33--47},
publisher = {IGI Global},
abstract = {The healthcare industry relies on a constant supply of medical equipment, e.g., radiation protection wear, which must be inspected regularly to ensure safety and quality. As this equipment keeps moving from department to department, it has to be located for the annual inspection and must be properly documented after the quality check. Conventionally, barcodes, QR codes, and manual entry of the required data are used as a tracking method, which requires tedious human effort without delivering the expected results for registration, tracking, and maintenance. A fully or semi-automated computerized system would be desirable in this case. Radio frequency identification (RFID) systems, which consist of tag, reader, and database, can be used for tracking. This article presents a new RFID-based system dedicated to the quality assurance of radiological protection wear, specifically lead aprons. This process facilitates the service management of hospitals.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Ataide, Elmer; Fritzsche, Holger; Filax, Marco; Chittamuri, Dinesh; Potluri, Lakshmi; Friebe, Michael
ENT Endoscopic Surgery and Mixed Reality: Application Development and Integration Buchkapitel mit eigenem Titel
In: Biomedical and Clinical Engineering for Healthcare Advancement, S. 17-29, IGI Global, 2020.
@incollection{Ataide20,
title = {ENT Endoscopic Surgery and Mixed Reality: Application Development and Integration},
author = {Elmer Ataide and Holger Fritzsche and Marco Filax and Dinesh Chittamuri and Lakshmi Potluri and Michael Friebe},
url = {https://www.igi-global.com/gateway/chapter/239074},
year = {2020},
date = {2020-01-01},
booktitle = {Biomedical and Clinical Engineering for Healthcare Advancement},
pages = {17-29},
publisher = {IGI Global},
keywords = {},
pubstate = {published},
tppubtype = {incollection}
}
2019
Fuentealba, Patricio; Illanes, Alfredo; Ortmeier, Frank
Independent Analysis of Decelerations and Resting Periods through CEEMDAN and Spectral-Based Feature Extraction Improves Cardiotocographic Assessment Artikel
In: Applied Sciences, Bd. 9, Nr. 24, S. 5421, 2019, ISSN: 2076-3417.
@article{fuentealba2019independent,
title = {Independent Analysis of Decelerations and Resting Periods through CEEMDAN and Spectral-Based Feature Extraction Improves Cardiotocographic Assessment},
author = {Patricio Fuentealba and Alfredo Illanes and Frank Ortmeier},
url = {https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2019/12/fuentealba2019independent.pdf},
doi = {10.3390/app9245421},
issn = {2076-3417},
year = {2019},
date = {2019-12-11},
journal = {Applied Sciences},
volume = {9},
number = {24},
pages = {5421},
publisher = {Multidisciplinary Digital Publishing Institute},
abstract = {Fetal monitoring is commonly based on the joint recording of the fetal heart rate (FHR) and uterine contraction signals obtained with a cardiotocograph (CTG). Unfortunately, CTG analysis is difficult, and the interpretation problems are mainly associated with the analysis of FHR decelerations. From that perspective, several approaches have been proposed to improve its analysis; however, the results obtained are not satisfactory enough for their implementation in clinical practice. Current clinical research indicates that a correct CTG assessment requires a good understanding of the fetal compensatory mechanisms. In previous works, we have shown that the complete ensemble empirical mode decomposition with adaptive noise, in combination with time-varying autoregressive modeling, may be useful for the analysis of those characteristics. In this work, based on this methodology, we propose to analyze the FHR deceleration episodes separately. The main hypothesis is that the proposed feature extraction strategy applied separately to the complete signal, deceleration episodes, and resting periods (between contractions), improves the CTG classification performance compared with the analysis of only the complete signal. Results reveal that by considering the complete signal, the classification performance achieved a quality of 81.7%; including the information extracted from the resting periods improved it to 83.2%.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Fuentealba, Patricio; Illanes, Alfredo; Ortmeier, Frank
A Study on the Classification Performance of Cardiotocographic Data vs Class Formation Criteria Konferenzbeitrag Geplante Veröffentlichung
In: Geplante Veröffentlichung.
@inproceedings{fuentealba2019study,
title = {A Study on the Classification Performance of Cardiotocographic Data vs Class Formation Criteria},
author = {Patricio Fuentealba and Alfredo Illanes and Frank Ortmeier},
year = {2019},
date = {2019-11-24},
abstract = {Fetal monitoring during labor is commonly based on the joint recording of the fetal heart rate (FHR) and uterine contraction data obtained by a Cardiotocograph (CTG). Currently, the interpretation of such data is difficult because it involves a visual analysis of highly complex signals. For this reason, several approaches based on signal processing and classification have been proposed. Most of the CTG classification approaches use class formation criteria based on the pH value, which is considered a gold-standard measure for postpartum evaluation. However, at birth, the association of a precise value of pH with the neonatal outcome is still inconclusive, which makes the classification training a difficult task. This work focuses on studying the CTG classification performance in relation to the used class formation criterion. For this purpose, first, the FHR signal is decomposed by using the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) method. Second, we extract a set of signal features based on CEEMDAN and conventional time-domain features proposed in the literature, which are computed in different FHR signal lengths just before delivery. Then, the features classification performance is evaluated according to a set of class formation criteria based on different pH values used as thresholds. Results reveal that the classification performance significantly depends on the selected pH value for the class formation, with the best performance achieved by a class formation based on a pH threshold of 7.05.},
keywords = {},
pubstate = {forthcoming},
tppubtype = {inproceedings}
}
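The class-formation question in the abstract above can be mimicked on synthetic data: relabel the same feature set with different pH thresholds and compare the cross-validated performance per threshold. The Python sketch below (invented data, a plain logistic regression instead of the paper's classifiers) shows the shape of such an experiment:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

# Illustrative sketch (synthetic data): the same features are relabeled
# with different pH thresholds, and the cross-validated classification
# performance is compared per threshold.

rng = np.random.default_rng(0)
features = rng.normal(size=(400, 10))
ph = 7.1 + 0.15 * features[:, 0] + 0.05 * rng.normal(size=400)

for threshold in (7.00, 7.05, 7.10, 7.15):
    labels = (ph < threshold).astype(int)       # 1 = suspicious outcome
    if labels.min() == labels.max():
        continue                                # degenerate class split
    scores = cross_val_predict(LogisticRegression(), features, labels,
                               cv=5, method="predict_proba")[:, 1]
    print(f"pH < {threshold:.2f}: AUROC = {roc_auc_score(labels, scores):.3f}")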
Fuentealba, Patricio; Illanes, Alfredo; Ortmeier, Frank
Cardiotocographic Signal Feature Extraction through CEEMDAN and Time-Varying Autoregressive Spectral-Based Analysis for Fetal Welfare Assessment Artikel
In: IEEE Access, Bd. 7, Nr. 1, S. 159754 - 159772, 2019.
@article{fuentealba2019thyroid,
title = {Cardiotocographic Signal Feature Extraction through CEEMDAN and Time-Varying Autoregressive Spectral-Based Analysis for Fetal Welfare Assessment},
author = {Patricio Fuentealba and Alfredo Illanes and Frank Ortmeier},
url = {https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2019/11/fuentealba2019cardiotocographic.pdf},
doi = {10.1109/ACCESS.2019.2950798},
year = {2019},
date = {2019-10-31},
journal = {IEEE Access},
volume = {7},
number = {1},
pages = {159754 - 159772},
publisher = {IEEE},
abstract = {Cardiotocograph (CTG) is a widely used tool for fetal surveillance during labor, which provides the joint recording of fetal heart rate (FHR) and uterine contraction data. Unfortunately, the CTG interpretation is difficult because it involves a visual analysis of highly complex signals. Recent clinical research indicates that a correct CTG assessment requires a good understanding of the fetal compensatory mechanisms modulated by the autonomic nervous system. Certainly, this modulation reflects variations in the FHR, whose characteristics can involve significant information about the fetal condition. The main contribution of this work is to investigate these characteristics by a new approach combining two signal processing methods: the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and time-varying autoregressive (TV-AR) modeling. The idea is to study the CEEMDAN intrinsic mode functions (IMFs) in both the time-domain and the spectral-domain in order to extract information that can help to assess the fetal condition. For this purpose, first, the FHR signal is decomposed, and then for each IMF, the TV-AR spectrum is computed in order to study their spectral dynamics over time. In this paper, we first explain the foundations of our proposed features. Then, we evaluate their performance in CTG classification by using three machine learning classifiers. The proposed approach has been evaluated on real CTG data extracted from the CTU-UHB database. Results show that by using only conventional FHR features, the classification performance achieved 78.0%. Then, by including the proposed CEEMDAN spectral-based features, it increased to 81.7%.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
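The TV-AR half of the methodology shared by this line of CTG papers is straightforward to sketch: fit a short autoregressive model in sliding windows and evaluate its spectrum per window. The following Python fragment (a synthetic FHR-like signal; a plain least-squares AR fit rather than the paper's exact estimator) illustrates the idea:

import numpy as np

# Hedged sketch of the TV-AR part of the methodology: fit a short AR
# model in sliding windows over a synthetic FHR-like signal and evaluate
# its spectrum per window, yielding a time-frequency picture.

def ar_fit(x, order):
    """Least-squares AR coefficients a with x[n] ~ sum_k a[k] x[n-k]."""
    X = np.column_stack([x[order - k - 1:len(x) - k - 1]
                         for k in range(order)])
    y = x[order:]
    return np.linalg.lstsq(X, y, rcond=None)[0]

def ar_spectrum(a, freqs, fs):
    z = np.exp(-2j * np.pi * freqs / fs)
    denom = 1 - sum(ak * z ** (k + 1) for k, ak in enumerate(a))
    return 1.0 / np.abs(denom) ** 2

fs = 4.0                                   # typical CTG sampling rate [Hz]
t = np.arange(0, 600, 1 / fs)
fhr = 140 + 5 * np.sin(2 * np.pi * 0.05 * t) + np.random.randn(t.size)

freqs = np.linspace(0.01, 1.0, 100)
for start in range(0, 960, 240):           # four 60 s windows
    window = fhr[start:start + 240] - fhr[start:start + 240].mean()
    a = ar_fit(window, order=6)
    spectrum = ar_spectrum(a, freqs, fs)
    print(f"window @ {start/fs:5.0f}s: peak at {freqs[spectrum.argmax()]:.2f} Hz")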
Fuentealba, Patricio; Illanes, Alfredo; Ortmeier, Frank
Cardiotocograph Data Classification Improvement by Using Empirical Mode Decomposition Konferenzbeitrag
In: 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), S. 5646–5649, IEEE 2019, ISBN: 978-1-5386-1311-5.
@inproceedings{fuentealba2019cardiotocograph,
title = {Cardiotocograph Data Classification Improvement by Using Empirical Mode Decomposition},
author = {Patricio Fuentealba and Alfredo Illanes and Frank Ortmeier},
url = {https://ieeexplore.ieee.org/document/8856673},
doi = {10.1109/EMBC.2019.8856673},
isbn = {978-1-5386-1311-5},
year = {2019},
date = {2019-10-07},
booktitle = {2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)},
pages = {5646--5649},
organization = {IEEE},
abstract = {This work proposes to study the fetal heart rate (FHR) signal based on information about its dynamics as a signal resulting from the modulation by the autonomic nervous system. The analysis is performed using the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) technique. The main idea is to extract a set of signal features based on that technique and also conventional time-domain features proposed in the literature in order to study their performance by using a support vector machine (SVM) as a classifier. As a hypothesis, we postulate that by including CEEMDAN based features, the classification performance should improve compared with the performance achieved by conventional features. The proposed method has been evaluated using real FHR data extracted from the open access CTU-UHB database. Results show that the classification performance improved from 67.6% using only conventional features, to 71.7% by incorporating CEEMDAN based features.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Fuentealba, Patricio; Illanes, Alfredo; Ortmeier, Frank
Foetal heart rate assessment by empirical mode decomposition and spectral analysis Artikel
In: Current Directions in Biomedical Engineering, Bd. 5, Nr. 1, S. 381–383, 2019.
@article{fuentealba2019foetal,
title = {Foetal heart rate assessment by empirical mode decomposition and spectral analysis},
author = {Patricio Fuentealba and Alfredo Illanes and Frank Ortmeier},
url = {https://www.degruyter.com/downloadpdf/j/cdbme.2019.5.issue-1/cdbme-2019-0096/cdbme-2019-0096.pdf},
doi = {10.1515/cdbme-2019-0096},
year = {2019},
date = {2019-09-18},
journal = {Current Directions in Biomedical Engineering},
volume = {5},
number = {1},
pages = {381--383},
publisher = {De Gruyter},
abstract = {This paper focuses on studying the time-variant dynamics involved in the foetal heart rate (FHR) response resulting from the autonomic nervous system modulation. It provides a comprehensive analysis of such dynamics by relating the spectral information involved in the FHR signal with foetal physiological characteristics. This approach is based on two signal processing methods: the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and time-varying autoregressive (TV-AR) modelling. First, CEEMDAN decomposes the signal into intrinsic mode functions (IMFs); the TV-AR modelling then allows analysing their spectral dynamics. Results reveal that the IMFs can carry significant spectral information (p-value < 0.05) that can help to assess the foetal condition.},
keywords = {},
pubstate = {published},
tppubtype = {article}
}
Gonschorek, Tim; Bergt, Philipp; Filax, Marco; Ortmeier, Frank; von Hoyningen-Hüne, Jan; Piper, Thorsten
SafeDeML: On Integrating the Safety Design into the System Model Konferenzbeitrag
In: Romanovsky, Alexander; Troubitsyna, Elena; Bitsch, Friedemann (Hrsg.): Computer Safety, Reliability, and Security, S. 271–285, Springer International Publishing, Cham, 2019, ISBN: 978-3-030-26601-1.
@inproceedings{10.1007/978-3-030-26601-1_19,
title = {SafeDeML: On Integrating the Safety Design into the System Model},
author = {Tim Gonschorek and Philipp Bergt and Marco Filax and Frank Ortmeier and Jan von Hoyningen-H\"{u}ne and Thorsten Piper},
editor = {Alexander Romanovsky and Elena Troubitsyna and Friedemann Bitsch},
url = {https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2020/04/GonschorekEtAl_SafeDeML.pdf
https://link.springer.com/chapter/10.1007/978-3-030-26601-1_19},
doi = {10.1007/978-3-030-26601-1_19},
isbn = {978-3-030-26601-1},
year = {2019},
date = {2019-09-18},
booktitle = {Computer Safety, Reliability, and Security},
pages = {271--285},
publisher = {Springer International Publishing},
address = {Cham},
abstract = {The definition of the safety design for a safety-critical system is a complex task. On the one hand, the system designer must ensure that all potentially hazardous hardware faults are addressed. This is often documented not in the model itself but in separate documents (e.g., Excel sheets). On the other hand, all defined safety mechanisms must be transferred back into the system model. We argue that the designer would benefit from a modeling extension that integrates the relevant safety design artifacts into the normal design workflow and supports the development of the safety design directly within the model.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Gonschorek, Tim; Bergt, Philipp; Filax, Marco; Ortmeier, Frank
Integrating Safety Design Artifacts into System Development Models Using SafeDeML Konferenzbeitrag
In: Papadopoulos, Yiannis; Aslansefat, Koorosh; Katsaros, Panagiotis; Bozzano, Marco (Hrsg.): Model-Based Safety and Assessment, S. 93–106, Springer International Publishing, Cham, 2019, ISBN: 978-3-030-32872-6.
@inproceedings{10.1007/978-3-030-32872-6_7,
title = {Integrating Safety Design Artifacts into System Development Models Using SafeDeML},
author = {Tim Gonschorek and Philipp Bergt and Marco Filax and Frank Ortmeier},
editor = {Yiannis Papadopoulos and Koorosh Aslansefat and Panagiotis Katsaros and Marco Bozzano},
url = {https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2020/04/SafeDeML_gonschorekEtAl.pdf
https://link.springer.com/chapter/10.1007/978-3-030-32872-6_7},
doi = {10.1007/978-3-030-32872-6_7},
isbn = {978-3-030-32872-6},
year = {2019},
date = {2019-09-18},
booktitle = {Model-Based Safety and Assessment},
pages = {93--106},
publisher = {Springer International Publishing},
address = {Cham},
abstract = {Applying a safety artifact language such as the Safety Design Modeling Language SafeDeML integrates the generation of the safety design into the system modeling stage, directly within the system architecture. In this paper, we present a modeling process and a prototype for the CASE tool Enterprise Architect for SafeDeML. The goal is to support the system designer in developing a standard-conformant (in this paper, ISO 26262) system and safety design containing all relevant safety artifacts within one model. Such integration offers several modeling guarantees like consistency checks or the computation of coverage and fault metrics. Since all relevant information and artifacts are contained within the model, SafeDeML and the prototype can help to decrease the effect of structural faults during the safety design and further support the safety assessment. To give the reader an idea of the complexity of the approach's application, we present an exemplary implementation of the safety design for a brake light system, a real case study from the ISO 26262 context.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
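The "consistency checks or computation of coverage and fault metrics" that the two SafeDeML papers promise can be pictured with a toy example. The Python sketch below uses an invented data model, not SafeDeML's metamodel: it flags hardware faults without an assigned safety mechanism and computes a simple coverage figure:

# Hedged sketch (invented data model, not SafeDeML's metamodel) of the
# kind of model-integrated check the approach enables: every identified
# hardware fault must be addressed by a safety mechanism, and a simple
# coverage metric is computed over the safety design.

faults = {
    "F1": {"effect": "stuck-at brake light", "mechanism": "M1"},
    "F2": {"effect": "sensor short circuit", "mechanism": "M2"},
    "F3": {"effect": "CPU bit flip", "mechanism": None},   # not yet addressed
}

unaddressed = [f for f, d in faults.items() if d["mechanism"] is None]
coverage = 100.0 * (len(faults) - len(unaddressed)) / len(faults)

print(f"fault coverage: {coverage:.1f}%")       # 66.7%
if unaddressed:
    print("faults without safety mechanism:", ", ".join(unaddressed))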
Heumüller, Robert; Nielebock, Sebastian; Ortmeier, Frank
SpecTackle - A Specification Mining Experimentation Platform Konferenzbeitrag
In: Proceedings of the 45th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), Kallithea, Chalkidiki, Greece, Euromicro 2019.
@inproceedings{Heumueller2019,
title = {SpecTackle - A Specification Mining Experimentation Platform},
author = {Robert Heum\"{u}ller and Sebastian Nielebock and Frank Ortmeier},
url = {https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2020/08/paper-spectackle.pdf},
year = {2019},
date = {2019-08-30},
booktitle = {Proceedings of the 45th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), Kallithea, Chalkidiki, Greece},
organization = {Euromicro},
abstract = {Nowadays, API Specification Mining is an important cornerstone of automated software engineering. In this paper, we
introduce SpecTackle, an IDE-based experimentation platform aiming to facilitate experimentation and validation of
specification mining algorithms and tools. SpecTackle strives toward (1) providing easy access to various specification
mining tools, (2) simplifying configuration and usage through a shared interface, and (3) in-code visualization of
pattern occurrences. The first version supports two heterogeneous mining tools, a third-party graph-based miner as well
as a custom sequence mining tool. In the long term, SpecTackle is envisioned to also provide ground-truth benchmark projects,
a unified pattern meta-model, and parameter optimization for mining tools.},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
Nielebock, Sebastian; Nykolaichuk, Mykhaylo; Ortmeier, Frank
Leitfaden "Ihre ersten Schritte auf dem Weg zu einem Datenschutzkonzept für Ihr Unternehmen - Das können Sie selbst tun!"
2019.
@misc{datenschutzkonzept_nielebock,
title = {Leitfaden "Ihre ersten Schritte auf dem Weg zu einem Datenschutzkonzept f\"{u}r Ihr Unternehmen - Das k\"{o}nnen Sie selbst tun!"},
author = {Sebastian Nielebock and Mykhaylo Nykolaichuk and Frank Ortmeier},
editor = {Mittelstand 4.0-Kompetenzzentrum Magdeburg c/o ZPVP GmbH },
url = {https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2019/06/Leitfaden_Erste-Schritte_zum_Datenschutzkonzept_final.pdf},
year = {2019},
date = {2019-06-03},
abstract = {You look up information on a company website, want to order from an online shop, or are a member of an association? Even if, at first glance, you do not disclose any personal data, you still leave a data trail with everything you do. This data trail needs to be protected, and conversely, of course, so does that of your customers.
On May 25, 2018, the so-called General Data Protection Regulation (GDPR) came into force, making the rules even stricter. We all know that the protection of our data is important to us. But what exactly does that mean? Which data accumulates in my company, for example? Which data do I have to protect, and how do I do that? If I have already taken protective measures, are they sufficient? Not only you are facing these questions, but so are many entrepreneurs. While large companies have their own IT and legal departments, the owners of small and medium-sized enterprises often face these questions alone. Often, even the basic knowledge is missing, not to mention time and leisure.
First, a piece of good news and a piece of less good news. Let us start with the less good one: data protection is complex and constantly evolving. To be really on the safe side, you will most likely need the advice and help of a data protection expert. This guide can help you talk to a data protection professional on an equal footing, because we equip you with the necessary vocabulary and convey the most important legal foundations of data protection. At the end of the process stands a data protection concept tailored to your company, which is nothing other than a set of measures for complying with data protection in your company. We also show you who exactly can lend you a hand with the data protection concept for your
company. Now for the good news: good data protection is important, but it does not have to be expensive. What matters is that you proceed systematically for your company and stay on top of the topic. This will make it easy for you to identify the biggest risks and to minimize them in a targeted manner. Once your eye is trained to spot potential security gaps, you will also be sensitized to future data protection measures. Data protection is complex and complicated, but it is not "rocket science". Anyone who has managed to build up and run a company can also take the first steps towards a data protection concept on their own. Let us take you by the hand. Data protection and data security are constantly evolving.
The best thing is simply to evolve with them ...},
keywords = {},
pubstate = {published},
tppubtype = {misc}
}
Filax, Marco; Gonschorek, Tim; Ortmeier, Frank
Data for Image Recognition Tasks: An Efficient Tool for Fine-Grained Annotations Konferenzbeitrag
In: Proceedings of the 8th International Conference on Pattern Recognition Applications and Methods, 2019.
@inproceedings{Filax2019,
title = {Data for Image Recognition Tasks: An Efficient Tool for Fine-Grained Annotations},
author = {Marco Filax and Tim Gonschorek and Frank Ortmeier},
url = {https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2021/02/filax2019.pdf
https://bitbucket.org/cse_admin/md_groceries
},
doi = {10.5220/0007688709000907},
year = {2019},
date = {2019-02-19},
booktitle = {Proceedings of the 8th International Conference on Pattern Recognition Applications and Methods},
abstract = {Using large datasets is essential for machine learning. In practice, training a machine learning algorithm requires hundreds of samples. Multiple off-the-shelf datasets from the scientific domain exist to benchmark new approaches. However, when machine learning algorithms transition to industry, e.g., for a particular image classification problem, hundreds of specific-purpose images are collected and annotated in laborious manual work.
In this paper, we present a novel system to decrease the effort of annotating those large image sets. We generate 2D bounding boxes from minimal 3D annotations using the known location and orientation of the camera: we annotate a particular object of interest in 3D once and project these annotations onto every frame of a video stream.
The proposed approach is designed to work with off-the-shelf hardware. We demonstrate its applicability with an example from the real world. We generated a more extensive dataset than available in other works for a particular industrial use case: fine-grained recognition of items within grocery stores. Further, we make our dataset, consisting of over 60,000 images, available to the interested vision community. Some images were taken under ideal conditions for training, while others were taken with the proposed approach in the wild.
},
keywords = {},
pubstate = {published},
tppubtype = {inproceedings}
}
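The projection step described in the abstract above is plain pinhole geometry: transform the eight corners of the 3D box annotation into the camera frame, project them, and take the enclosing rectangle as the 2D bounding box. The Python sketch below uses assumed intrinsics and pose (all numbers invented):

import numpy as np

# Hedged sketch of the projection idea: given camera pose and intrinsics,
# project the 8 corners of a 3D box annotation into the image and take
# the enclosing rectangle as the 2D bounding box for that frame.

K = np.array([[800.0, 0.0, 320.0],     # assumed pinhole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                          # camera rotation (world -> camera)
t = np.array([0.0, 0.0, 2.0])          # camera translation

# 3D box corners in world coordinates (a 20 cm cube around the origin).
c = 0.1
corners = np.array([[x, y, z] for x in (-c, c) for y in (-c, c) for z in (-c, c)])

cam = (R @ corners.T).T + t            # into camera frame
uv = (K @ cam.T).T
uv = uv[:, :2] / uv[:, 2:3]            # perspective divide

x_min, y_min = uv.min(axis=0)
x_max, y_max = uv.max(axis=0)
print(f"2D bbox: ({x_min:.1f}, {y_min:.1f}) - ({x_max:.1f}, {y_max:.1f})")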