My Science

Aleksander Lodwich

Thoughts on Autonomous Systems

Exploring high-level Perspectives on Self-Configuration Capabilities of Systems, 2016
self-configuring_systems.pdf, June 29th, 2016
Aleksander Lodwich
Abstract: Optimization of product performance repeatedly introduces the need to make products adaptive in a more general sense. This more general idea is often captured under the term “self-configuration”. Despite the importance of such a capability, research on this feature appears isolated by technical domain. It is not easy to tell quickly whether the approaches chosen in different technological domains introduce new ideas or whether the differences merely reflect domain idiosyncrasies. To make the key differences between systems with self-configuring capabilities easy to identify, I will explore higher-level concepts for understanding self-configuration, such as the Ω-units, in order to provide theoretical instruments for connecting different areas of technology and research.
DOI: 10.13140/RG.2.1.2945.6885

How to avoid ethically relevant Machine Consciousness, 2016
machine_consciousness.pdf, June 1st, 2016
Aleksander Lodwich
Abstract: This paper discusses the root cause of systems perceiving a self-experience and how to exploit adaptive and learning features without introducing ethically problematic system properties.
DOI: 10.13140/RG.2.1.1733.3361

Differences between Industrial Models of Autonomy and Systemic Models of Autonomy, 2016
LevelsOfAutonomy.pdf, May 25th 2016
Aleksander Lodwich
Abstract: This paper discusses the idea of levels of autonomy of systems – be they technical or organic – and compares the insights with the maturity and capability models employed by industries to describe their products.


Understanding Error Correction and its Role as Part of the Communication Channel in Environments composed of Self-Integrating Systems, 2016
Correcting Errors, December 20th, 2016
Aleksander Lodwich
Abstract: The rise in complexity of technical systems also raises the knowledge required to set them up and to maintain them. The cost to evolve such systems can be prohibitive. In the field of Autonomic Computing, technical systems should therefore have various self-healing capabilities allowing system owners to provide only partial, potentially inconsistent updates of the system. The self-healing or self-integrating system shall work out the remaining changes to communications and functionalities in order to accommodate change and yet still restore function. This issue becomes even more interesting in the context of the Internet of Things and the Industrial Internet, where previously unexpected device combinations can be assembled in order to provide a surprising new function. In order to pursue higher levels of self-integration capability, I propose to think of self-integration as sophisticated error-correcting communication. Therefore, this paper discusses an extended scope of error correction with the purpose of emphasizing error correction’s role as an integrated element of bi-directional communication channels in self-integrating, autonomic communication scenarios.
DOI: 10.13140/RG.2.2.33551.59041
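The channel-centric view of error correction taken in the paper builds on classical coding theory. As a point of reference, the snippet below is a generic textbook Hamming(7,4) decoder, not code from the paper itself:

```python
import numpy as np

# Generator and parity-check matrices of the systematic Hamming(7,4) code.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(nibble):
    """Encode 4 data bits into a 7-bit codeword."""
    return (np.array(nibble) @ G) % 2

def decode(received):
    """Correct at most one flipped bit, then return the 4 data bits."""
    r = np.array(received).copy()
    syndrome = (H @ r) % 2
    if syndrome.any():
        # For a single error the syndrome equals the column of H
        # at the error position, so the flip can be undone.
        for pos in range(7):
            if np.array_equal(H[:, pos], syndrome):
                r[pos] ^= 1
                break
    return r[:4]

word = [1, 0, 1, 1]
noisy = encode(word)
noisy[5] ^= 1                 # simulate a single-bit channel error
assert list(decode(noisy)) == word
```

Self-integration in the paper's extended sense would sit on top of such classical mechanisms, repairing semantic mismatches between devices rather than only flipped bits.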

Research on interoperability within development processes of Embedded Systems on an example, 2014
- A concept for tackling Frontloading in Model-based Engineering with AUTOSAR -
Master Thesis of Ferdinand Schäfer, University of Applied Sciences Karlsruhe,

OFISS: An Organic End-user-programmable Data-centric User Interface Framework as Frontend to Commandline-based Applications, 2010
Bachelor Thesis of Christopher Schölzel, TU Kaiserslautern, Prof. Dr. Breuel
Abstract: Many tools have made Graphical User Interface (GUI) programming easier for developers, but most of these tools do not pay much attention to transparency and extensibility at the toolkit level, leaving it in the hands of developers to provide customization support such as the possibility to create macros inside the applications themselves.
Applications written in the framework introduced in this thesis address this issue by taking the UNIX paradigm "Everything is a file." to a new level. Each screen object and each attribute of these objects is represented by files and folders on the filesystem and is therefore accessible and customizable by everybody, including the end user.
Such a system might be of great use in both professional and personal settings.
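As a rough illustration of the "everything is a file" idea, a widget can be mirrored as a directory whose attributes are plain files. This is a hypothetical sketch, not the actual OFISS API:

```python
import tempfile
from pathlib import Path

class FileBackedWidget:
    """Hypothetical sketch: every widget is a directory, every attribute
    a plain file, so any tool that reads and writes files can inspect
    or script the interface."""

    def __init__(self, root, name):
        self.path = Path(root) / name
        self.path.mkdir(parents=True, exist_ok=True)

    def __setitem__(self, attribute, value):
        (self.path / attribute).write_text(str(value))

    def __getitem__(self, attribute):
        return (self.path / attribute).read_text()

root = tempfile.mkdtemp()
button = FileBackedWidget(root, "ok_button")
button["label"] = "OK"
button["visible"] = "true"

# An end user (or a shell script) could now change the label with
# something like `echo Cancel > .../ok_button/label`.
assert (Path(root) / "ok_button" / "label").read_text() == "OK"
```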

Model-Based Engineering for Embedded Systems in Practice, 2014
Research Reports in Software Engineering and Management 2014:01 ISSN 1654-4870
Nadja Marko, Grischa Liebel, Daniel Sauter, Aleksander Lodwich, Matthias Tichy, Andrea Leitner, and Jörgen Hansson
Abstract: Model-Based Engineering (MBE) aims at increasing the effectiveness of engineering by using models as key artifacts in the development process. While empirical studies on the use and the effects of MBE in industry generally exist, there is only little work targeting the embedded systems domain. We contribute to the body of knowledge with a study on the use and the assessment of MBE in that particular domain. To this end, we collected quantitative data from 112 subjects, mostly professionals working with MBE, with the goal of assessing the current state of practice and the challenges the embedded systems domain is facing. Of the 112 subjects, the majority are experienced with MBE, working at large companies in the automotive, avionics, or healthcare domains. Additionally, mainly OEMs and first-tier suppliers are represented in the study. Our main findings are that MBE is used by a majority of all participants in the embedded systems domain, mainly for simulation, code generation, and documentation. Reported positive effects of MBE are higher quality and improved reusability. The main shortcomings are interoperability difficulties between MBE tools, high training effort for developers, and usability issues. The data also shows that there are no large differences between subgroups with respect to domain, position in the value chain, company size, and product size.


Beyond Interoperability in the Systems Engineering Process for the Industry 4.0 (To be published March 2017).
Lodwich, A. and Alvarez-Rodríguez, J. M., pages 161-182. Springer Verlag, Intelligent Systems Reference Library, 2016.
Abstract: Future complex products will be different from existing ones in several relevant ways. They will be more intelligent and connected, and they will have to be considerably leaner across software and hardware in order to handle safety, security and resource demands. The Industrial Internet, Industry 4.0 and the Internet of Things will greatly shift responsibility for products away from engineering departments towards the actual environment in which the products are employed. This situation will eventually transform even the most tested companies into intelligent building platforms where the responsibility for designing, producing and delivering is distributed among market parties in unrecognizable ways. The benefits of these upcoming changes will be higher utility of products for customers and new levels of production flexibility and efficiency. However, this new environment can only be attained if developmental and operational platforms and embedded products can rely on reusing the explicit knowledge used in their designs. The provision of technology for this new environment goes far beyond asking for tool interoperability. In this chapter a conceptual layer of interoperability is outlined, describing what kind of features a powerful new interoperability technology should support in order to fuel the desired changes in engineering and production paradigms.

Note on Communication Incompatibility Types, 2016
./notes_on_incompatibility.pdf, November 18th, 2016
Abstract: This note contains the description of eight basic communication incompatibility types and basic strategies to combat them. The purpose of this note is to remind software and system designers to design communication technologies which suffer from as few reasonably unresolvable incompatibilities as possible.

Bubbles: a data management approach to create an advanced industrial interoperability layer for critical systems development applying reuse techniques, 2016
./Bubbles-v2.06.pdf July 28th, 2016
Abstract: The development of critical systems is becoming more and more complex. The overall tendency is that development costs rise. In order to cut the cost of development, companies are forced to build systems from proven components and larger new systems from smaller older ones. The respective reuse activities involve a good number of people, tools and processes along different stages of the development lifecycle. Some development is directly planned for reuse. Planned reuse implies excellent knowledge management and firm governance of reusable items. According to the current state of the art, there are still practical problems in these two fields, mainly because governance and knowledge management are fragmented over the tools of the toolchain. In our experience, the practical effect of this fragmentation is that the involved ancestor and derivation relationships are often undocumented or not exploitable. Additionally, useful reuse almost always deals with heterogeneous content which must be transferred from older to newer development environments. In this process, interoperability proves to be either the biggest obstacle or the greatest help. In this paper, the authors connect the topics of interoperability and knowledge management and propose to seek ubiquitous reuse via advanced interoperability features. A single concept from a larger Technical Interoperability Concept (TIC), named the bubble, is presented. Bubbles are expected to overcome existing barriers to cost-efficient reuse in systems and software development lifecycles. That is why the present paper introduces and defines bubbles by showing how they simplify the application of repairs and changes and hence contribute to the expansion of reuse at reduced cost.

Challenges to OSLC Link Management in Safety Audited Industries (Talk)
OSLC Unconference Nov. 2nd, 2015, Ludwigsburg

Secure Collaboration in Virtual Organizations (Talk)
OSLC Unconference Nov. 2nd, 2015, Ludwigsburg

Pattern Recognition and Data Mining

Efficient Estimation of k for the Nearest Neighbors Class of Methods, unpublished, 2011
DOI: 10.13140/RG.2.1.2628.8245
Aleksander Lodwich, Faisal Shafait, Thomas M. Breuel
Abstract: The k Nearest Neighbors (kNN) method has received much attention in the past decades, where some theoretical bounds on its performance were identified and where practical optimizations were proposed for making it work fairly well in high-dimensional spaces and on large datasets. From countless experiments of the past it became widely accepted that the value of k has a significant impact on the performance of this method. However, the efficient optimization of this parameter has not received as much attention in the literature. Today, the most common approach is to cross-validate or bootstrap this value for all values in question. This approach forces distances to be recomputed many times, even if efficient methods are used. Hence, estimating the optimal k can become expensive even on modern systems. Frequently, this circumstance leads to a sparse manual search of k. In this paper we want to point out that a systematic and thorough estimation of the parameter k can be performed efficiently. The discussed approach relies on large matrices, but we want to argue that in practice a higher space complexity is often much less of a problem than repetitive distance computations.
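The core trick the abstract alludes to, computing and sorting the distance matrix only once and then reusing the sorted neighbor lists for every candidate k, can be sketched as follows (an illustrative reconstruction, not the paper's reference code):

```python
import numpy as np

def loo_accuracy_for_all_k(X, y, max_k):
    """Leave-one-out kNN accuracy for every k in 1..max_k, computing
    and sorting the pairwise distances only once."""
    # Pairwise squared Euclidean distances (a single O(n^2 d) pass).
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)          # exclude each point itself
    order = np.argsort(d, axis=1)        # neighbors sorted once
    neighbor_labels = y[order[:, :max_k]]
    accs = []
    for k in range(1, max_k + 1):
        votes = neighbor_labels[:, :k]
        # Majority vote per sample (ties broken toward the smaller label).
        pred = np.array([np.bincount(v).argmax() for v in votes])
        accs.append((pred == y).mean())
    return accs

# Usage on a synthetic two-cluster problem:
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(3, 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
accs = loo_accuracy_for_all_k(X, y, max_k=5)
```

The trade-off is exactly the one the abstract names: the full distance matrix costs O(n^2) memory, but every additional value of k is then nearly free.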

Report on Practical Bayes-True Data Generators For Evaluation of Machine Learning, Pattern Recognition and Data Mining Methods, technical report, 2009
Janick V. Frasch, Aleksander Lodwich, Thomas M. Breuel
Abstract: Benchmarking pattern recognition, machine learning and data mining methods commonly relies on real-world data sets. However, there exist a couple of reasons for not using real-world data. On the one hand, collecting real-world data can become difficult or impossible for many reasons; on the other hand, real-world variables are difficult to control, even in the problem domain. In the feature domain, where most statistical learning methods operate, control is even more difficult to achieve and hence rarely attempted. This is at odds with the scientific experimentation guidelines mandating the use of as directly controllable and as directly observable variables as possible. Because of this, synthetic data is a necessity for experiments with algorithms. In this report we present four algorithms that produce data with guaranteed global and intra-data statistical or geometrical properties. The data generators can be used for algorithm testing and fair performance evaluation of statistical learning methods.
DOI: 10.13140/RG.2.1.5045.4649
  • DFKI WGKS and WMCG generators for Python 2.6
  • Easy to use patterns container for Python 2.6
  • Quick data generators for Python 2.6
  • A Bayes-true data generator for evaluation of supervised and unsupervised learning methods, 2011
    Pattern Recognition Letters. 08/2011; 32(11):1523-1531. DOI: 10.1016/j.patrec.2011.04.010
    Janick V. Frasch, Aleksander Lodwich, Faisal Shafait, Thomas M. Breuel
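To see what "Bayes-true" means in the simplest possible case, consider two equal-prior Gaussians whose optimal error rate is known in closed form. This toy stand-in illustrates the evaluation idea only; it is not one of the four generators from the report:

```python
import math
import random

def bayes_error(mu0=0.0, mu1=2.0, sigma=1.0):
    """Closed-form error of the optimal rule x > (mu0+mu1)/2,
    for two equal-prior Gaussians with a shared sigma."""
    z = abs(mu1 - mu0) / (2 * sigma)
    return 0.5 * (1 - math.erf(z / math.sqrt(2)))

def sample(n, mu0=0.0, mu1=2.0, sigma=1.0, seed=0):
    """Draw n labelled points from the two-Gaussian mixture."""
    rng = random.Random(seed)
    pts = []
    for _ in range(n):
        label = int(rng.random() < 0.5)
        pts.append((rng.gauss(mu1 if label else mu0, sigma), label))
    return pts

data = sample(20000)
threshold = 1.0   # the optimal decision boundary for mu0=0, mu1=2
empirical = sum((x > threshold) != bool(lab) for x, lab in data) / len(data)
# The empirical error of the optimal classifier hovers near
# bayes_error() ~= 0.1587, so any learned classifier can be judged
# against a known optimum.
```

Because the optimum is known exactly, a learning method's gap to the Bayes rate can be measured instead of merely comparing methods against each other, which is the point of such generators.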

Evaluation of Robustness and Performance of Early Stopping Rules with Multi-Layer Perceptrons, 2009
Proceedings of International Joint Conference on Neural Networks, Atlanta, Georgia, USA, June 14-19, 2009
Aleksander Lodwich, Yves Rangoni, Thomas Breuel

Optimization of the Vagus Nerve Stimulation Parameters by the Means of Computational Intelligence, 2007
./MT_OptiVaNeS.pdf (original data can be requested from me)
DOI: 10.13140/RG.2.1.2784.4723
related: Animal model of the short-term cardiorespiratory effects of intermittent vagus nerve stimulation, 2008
Boubker Zaaimi, Reinhard Grebe, Fabrice Wallois

Optimization of the Vagus Nerve Stimulation Parameters with CI-Methods, unpublished, 2006
DOI: 10.13140/RG.2.1.2784.4723
Abstract: In order to improve Vagus Nerve Stimulation (VNS), the statistical increase in efficacy for certain parameter vectors was investigated. Such vectors are used in real cases. Although immediate tests on humans are not possible, rats were found to be an adequate substitute model. The rats were stimulated with a protocol of 81 parameter vectors and their physical response was recorded. Each rat is different, but it is assumed that for some vectors a similar response will occur. If enough rats were recorded, then a statistically safe prediction could be made about which parameter vectors will result in the effects most desired by the therapist. The primary goal of the project is to find similarities in the different response cases for some of the typical vectors and eventually provide deeper insights into the treatment. The project faces extremely large amounts of data from data acquisition. Methods of Computational Intelligence (CI) are used for the conversion of data into knowledge. After conventional data pre-processing, evolutionarily designed artificial neural networks are used to evaluate the available data and to help medical staff verify their medical hypotheses.
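The evolutionary design step mentioned above can be pictured with a minimal elitist search over a bounded parameter vector. The fitness function here is a hypothetical surrogate for a measured response, not the recorded rat data or the thesis's actual pipeline:

```python
import random

def evolve(fitness, dim, generations=200, pop=20, seed=1):
    """Minimal elitist evolutionary search over a parameter vector
    bounded to [0, 1] -- an illustrative sketch only."""
    rng = random.Random(seed)
    best = [rng.uniform(0.0, 1.0) for _ in range(dim)]
    sigma = 0.3
    for _ in range(generations):
        # Mutate the current best into a population of children.
        children = [[min(1.0, max(0.0, g + rng.gauss(0.0, sigma)))
                     for g in best] for _ in range(pop)]
        # Elitism: keep the best of children and the current parent.
        best = max(children + [best], key=fitness)
        sigma *= 0.98                 # anneal the mutation strength
    return best

# Hypothetical surrogate "response" that prefers one parameter vector.
target = [0.2, 0.8, 0.5]
response = lambda v: -sum((a - b) ** 2 for a, b in zip(v, target))
solution = evolve(response, dim=3)
```

In the thesis setting the fitness would instead be derived from the recorded physiological responses, and the evolved object is the network design rather than the stimulation vector directly.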

Optimierung der Vagusnervstimulationsparameter in der Behandlung refraktärer Epilepsie (Optimization of the Vagus Nerve Stimulation Parameters in the Treatment of Refractory Epilepsy), 2006

eKNN-DE 1.0 - Moderne Prototypenplattform für neuronale Lösungen - Integration mit externen Problemträgern (Modern Prototyping Platform for Neural Solutions - Integration with External Problem Carriers), 2005
./eKNN-DE_DA.pdf ./eKNN 1.0a.ppt

Schluss mit Spam (No More Spam)

The End