Selected publications
Innovation is built on the exchange of knowledge. Read the research publications and articles by our experts across a range of IT fields: management and automation, security and privacy, data analytics, and the Internet of Things.
Security in DevSecOps: Applying Tools and Machine Learning to Verification and Monitoring Steps
Security is one of the crucial concerns in DevOps-based software development and service delivery. With the adoption of Infrastructure as Code (IaC), even minor flaws can have fatal consequences, especially in sensitive domains such as healthcare and maritime applications. However, most existing solutions tackle either Static Application Security Testing (SAST) or run-time behavior analysis in isolation. In this paper, we propose a) IaC Scan Runner, an open-source solution developed in Python for inspecting a variety of state-of-the-art IaC languages at application design time, and b) LOMOS, a run-time anomaly detection tool. The two tools work in synergy and provide a valuable contribution to a DevSecOps tool set. The approach is demonstrated on various case studies showcasing the capabilities of the static analysis tool IaC Scan Runner combined with LOMOS, an AI-enabled log analysis framework.
Automated Approach to IaC Code Inspection Using Python-Based DevSecOps Tool
One of the main benefits of the DevOps ideology is the automation of activities related to the development, testing, integration and deployment of software, in line with the organization's goals. On the other hand, code quality, security and compliance with given standards remain highly relevant considerations. In this paper, we present an open-source Python-based tool with a web-based graphical interface that automates static code analysis and checks for Infrastructure as Code (IaC) scripts. The proposed tool is evaluated in several scenarios involving Terraform scripts.
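As a flavour of how such a scanner can be driven programmatically, the hypothetical sketch below uploads a zipped IaC project to a REST scan endpoint. The endpoint, parameters and response shape are illustrative assumptions rather than the tool's actual API, with tflint and checkov standing in for whichever checks are configured.

```python
# Hypothetical sketch: submitting an IaC archive to a REST scan service.
# Endpoint name, form fields and response shape are assumptions for
# illustration, not the actual IaC Scan Runner API.
import requests

SCAN_URL = "http://localhost:8080/scan"  # assumed local deployment

def scan_iac_archive(archive_path: str, checks: list[str]) -> dict:
    """Upload a zipped IaC project and return the scan report as JSON."""
    with open(archive_path, "rb") as archive:
        response = requests.post(
            SCAN_URL,
            files={"iac": archive},
            data={"checks": ",".join(checks)},
            timeout=60,
        )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    report = scan_iac_archive("project.zip", ["tflint", "checkov"])
    for finding in report.get("findings", []):
        print(finding.get("severity"), finding.get("message"))
```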
Embracing IaC Through the DevSecOps Philosophy: Concepts, Challenges, and a Reference Framework
We introduce the challenges of the DevSecOps philosophy and its applicability to the development and operation of trustworthy infrastructure-as-code, and we combine the solutions into a single framework covering all crucial steps. Finally, we discuss how the proposed framework addresses the challenges and introduce an initial design for it.
Application of Unsupervised Anomaly Detection Techniques to Moisture Content Data from Wood Constructions
Wood is considered one of the most important construction materials, as well as a natural material prone to degradation, with fungi being the main reason for wood failure in a temperate climate. Visual inspection of wood and other manual monitoring approaches are time-consuming, and the incipient stages of decay are not always visible. Such visual decay detection and manual monitoring could therefore be replaced by automated real-time monitoring systems. The capabilities of such systems can range from simple monitoring that periodically reports data to the automatic detection of anomalous measurements that may occur for various environmental or technical reasons. In this paper, we explore the application of Unsupervised Anomaly Detection (UAD) techniques to wood Moisture Content (MC) data. Specifically, the data were obtained from a wood construction that was monitored for four years using sensors at different positions. Our experimental results demonstrate the validity of these techniques for detecting both artificial and real anomalies in MC signals, encouraging further research to enable their deployment in real use cases.
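As a concrete flavour of UAD on MC signals, the sketch below applies one standard technique, an Isolation Forest over sliding windows, to a synthetic series with an injected spike; the paper's actual methods and feature engineering may differ.

```python
# A minimal sketch of unsupervised anomaly detection on a moisture-content
# time series, assuming regularly sampled sensor readings in `mc`.
import numpy as np
from sklearn.ensemble import IsolationForest

def detect_anomalies(mc: np.ndarray, window: int = 24) -> np.ndarray:
    # Build overlapping windows so each sample captures local signal shape.
    windows = np.lib.stride_tricks.sliding_window_view(mc, window)
    model = IsolationForest(n_estimators=100, contamination=0.01, random_state=0)
    labels = model.fit_predict(windows)  # -1 = anomaly, 1 = normal
    return np.where(labels == -1)[0]     # indices of anomalous windows

rng = np.random.default_rng(0)
mc = 20 + 2 * np.sin(np.linspace(0, 20, 2000)) + rng.normal(0, 0.2, 2000)
mc[700:705] = 45.0  # inject a sensor spike
print(detect_anomalies(mc))
```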
Auto-scaling Using TOSCA Infrastructure as Code
Autoscaling cloud infrastructures remains a challenging endeavour during orchestration, given the many possible risks, options, and associated costs. In this paper we discuss options for defining and enacting autoscaling using standard TOSCA templates and their policy definition specifications. The goal is to define infrastructure blueprints that are self-contained and executable by an orchestrator that can autonomously take over all scaling tasks while maintaining acceptable structural and non-functional quality levels.
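For illustration only, the decision logic that a TOSCA scaling policy declares, and an orchestrator enacts, can be reduced to a few hysteresis rules; the thresholds and names below are assumptions, and real policies live in the TOSCA template rather than in Python.

```python
# Illustrative sketch of the scaling decision an orchestrator could enact
# for a declared scaling policy; values are assumptions, not TOSCA syntax.
from dataclasses import dataclass

@dataclass
class ScalingPolicy:
    cpu_high: float = 0.80   # scale out above this average CPU load
    cpu_low: float = 0.30    # scale in below this average CPU load
    min_instances: int = 1
    max_instances: int = 10

def decide(policy: ScalingPolicy, instances: int, avg_cpu: float) -> int:
    """Return the new instance count under simple hysteresis rules."""
    if avg_cpu > policy.cpu_high and instances < policy.max_instances:
        return instances + 1
    if avg_cpu < policy.cpu_low and instances > policy.min_instances:
        return instances - 1
    return instances

print(decide(ScalingPolicy(), instances=3, avg_cpu=0.91))  # -> 4
```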
Examination and Comparison of TOSCA Orchestration Tools
The use of orchestration and automation has been growing in recent years. This is especially evident in cloud infrastructures, where the OASIS TOSCA orchestration standard can be used to provide independence and prevent vendor lock-in. In this paper we examine different TOSCA-compliant orchestration tools, test them with TOSCA templates and present a comparison between them. This comparison can help companies and developers decide which tool best suits their requirements.
Improving the Maintenance of Railway Switches through Proactive Approach
A strong need for faster mass transport with high capacities and frequent runs has put pressure on railway infrastructure. These changes require an improvement in maintenance activities, which need to be planned carefully to minimise their impact on the actual use of the infrastructure. It is therefore crucial to continuously monitor the technical equipment of the railroad tracks in the most efficient way. Railway switches operate in harsh environmental conditions, yet their reliability requirements are high due to safety and economic factors. Their maintenance depends on the data collected and the decisions on corrective actions. The more frequent the data collection, the shorter the decision cycle; and the more intuitive the data visualization, the better the confidence in the system. This paper presents tangible results of applying the MANTIS concepts to the use case of railway switches to introduce continuous monitoring and proactive maintenance of the railroad tracks. The presented results include the measurement method; the analysis of the measured data; the profiling and modelling of the physical behaviour of the switch, including the recognized need for separate models per season; and the visualization of the gathered data.
An Approach to Train and Evaluate the Cybersecurity Skills of Participants in Cyber Ranges based on Cyber-Risk Models
There is an urgent need for highly skilled cybersecurity professionals, and at the same time there is an awareness gap and a lack of integrated training modules on cybersecurity-related aspects at all school levels. To address this need and bridge the awareness gap, we propose a method to train and evaluate the cybersecurity skills of participants in cyber ranges based on cyber-risk models. Our method consists of five steps: create a cyber-risk model, identify risk treatments, set up the training scenario, run the training scenario, and evaluate the performance of participants. The target users of our method are the White Team and Green Team, who typically design and execute training scenarios in cyber ranges. The output of our method, however, is an evaluation report for the Blue Team and Red Team participants being trained in the cyber range. We have applied our method in three large-scale pilots from academia, transport, and energy. Our initial results indicate that the method is easy to use and comprehensible for training scenario developers (White/Green Team), produces cyber-risk models that facilitate real-time evaluation of participants in training scenarios, and provides useful feedback to the participants (Blue/Red Team) in terms of strengths and weaknesses regarding cybersecurity skills.
A Method for User Identification for Multitouch Displays
This paper describes MTi, a biometric method for user identification on multitouch displays. The method is based on features obtained solely from the coordinates of the five touchpoints of one of the user's hands, which makes MTi applicable to all multitouch displays large enough to accommodate a human hand and able to detect five or more touchpoints, without requiring additional hardware and regardless of the display's underlying sensing technology. MTi only requires that the user place a hand on the display with the fingers comfortably stretched apart. On a dataset of 34 users, our method reported 94.69% identification accuracy. The method's scalability was tested on a subset of the Bosphorus hand database (100 users, 94.33% identification accuracy) and a usability study was performed.
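As an illustration of the kind of features such a method can derive from five touchpoints, the sketch below uses normalised pairwise distances and nearest-template matching; the actual MTi feature set and classifier may differ.

```python
# Sketch: hand-geometry features from five touch coordinates. Pairwise
# distances are translation- and rotation-invariant; dividing by the largest
# distance adds scale invariance. Illustrative only, not the MTi pipeline.
import numpy as np
from scipy.spatial.distance import pdist

def touch_features(points: np.ndarray) -> np.ndarray:
    """points: (5, 2) array of (x, y) touch coordinates for one hand."""
    d = pdist(points)   # 10 pairwise distances for 5 points
    return d / d.max()  # normalise for scale invariance

def identify(sample: np.ndarray, templates: dict[str, np.ndarray]) -> str:
    """Nearest-template matching over enrolled users."""
    feats = touch_features(sample)
    return min(templates, key=lambda user: np.linalg.norm(feats - templates[user]))
```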
A Novel Approach to Manage Cloud Security SLA Incidents
Cloud computing plays an increasingly important role in the service provisioning domain, given the economic and technological benefits it offers. The popularity of cloud services is increasing, but so are their customers' concerns about security assurance and the transparency of the Cloud Service Providers (CSPs). This is especially relevant for critical services that are progressively moving to the cloud; examples include the integrated European air traffic control system and public administrations through governmental clouds. Recent efforts aim to specify security in the cloud by using security service level agreements (secSLAs). However, the paucity of approaches to actually control the fulfillment of secSLAs and to react in case of security breaches often results in distrust in cloud services. In this paper, we present a solution to monitor and enforce the fulfillment of secSLAs. Our framework is able to (a) detect occurrences that lead to unfulfilled commitments, and (b) provide mitigation for the harmful events that may or do compromise the validity of secSLAs.
A Slovene Translation of the System Usability Scale: The SUS-SI
The System Usability Scale (SUS) is a widely adopted and studied questionnaire for usability evaluation. It is technology independent and has been used to evaluate the perceived usability of a broad range of products, including hardware, software, and websites. In this paper we present a Slovene translation of the SUS (the SUS-SI) along with the procedure used in its translation and psychometric evaluation. The results indicated that the SUS-SI has properties similar to the English version. Slovene usability practitioners should be able to use the SUS-SI with confidence when conducting user research.
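Since the SUS-SI retains the structure of the original questionnaire, the standard SUS scoring applies unchanged; a small sketch:

```python
# Standard SUS scoring: odd-numbered items contribute (response - 1),
# even-numbered items contribute (5 - response), and the sum is scaled
# by 2.5 to a 0-100 range.
def sus_score(responses: list[int]) -> float:
    """responses: ten answers on a 1-5 scale, in questionnaire order."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum(r - 1 if i % 2 == 0 else 5 - r  # i=0 is item 1 (odd item)
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```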
Automated System for Ship Detection from Medium Resolution Satellite Optical Imagery
In this paper we present a ship detection pipeline for low-cost medium-resolution satellite optical imagery obtained from the ESA Sentinel-2 and Planet Labs Dove constellations. Such optical satellite imagery is readily available for any place on Earth, yet underutilized in the maritime domain compared to existing solutions based on synthetic-aperture radar (SAR) imagery. Our ship detection method builds on a state-of-the-art deep-learning object detector, developed and evaluated on a large-scale dataset that was collected and automatically annotated with the help of Automatic Identification System (AIS) data.
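For a flavour of the AIS-based auto-annotation step, the sketch below maps a reported ship position into pixel coordinates of a north-up geocoded tile; real pipelines must also reproject coordinates and interpolate AIS tracks to the image timestamp, which is omitted here.

```python
# Simplified sketch: projecting an AIS position (WGS84) into pixel
# coordinates of a north-up tile with square pixels of `pixel_deg` degrees.
def latlon_to_pixel(lat: float, lon: float,
                    origin_lat: float, origin_lon: float,
                    pixel_deg: float) -> tuple[int, int]:
    """Map (lat, lon) to (row, col) for a tile whose top-left corner
    is at (origin_lat, origin_lon)."""
    col = int((lon - origin_lon) / pixel_deg)
    row = int((origin_lat - lat) / pixel_deg)  # latitude decreases downwards
    return row, col

# Example: a ship at 44.10N, 14.20E on a tile starting at 44.50N, 14.00E
print(latlon_to_pixel(44.10, 14.20, 44.50, 14.00, 0.0001))  # -> (4000, 2000)
```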
Automatic Differentiation: a look through Tensor and Operational Calculus
In this paper we take a look at Automatic Differentiation through the eyes of Tensor and Operational Calculus. This work is best consumed as supplementary material for learning tensor and operational calculus by those already familiar with automatic differentiation. To that purpose, we provide a simple implementation of automatic differentiation, where the steps taken are explained in the language of tensor and operational calculus.
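For readers who want the AD side made concrete, here is a minimal forward-mode implementation with dual numbers; it is a generic textbook sketch, not the implementation from the paper.

```python
# Forward-mode automatic differentiation via dual numbers: each value
# carries its derivative component, propagated by the usual calculus rules.
import math

class Dual:
    def __init__(self, val: float, dot: float = 0.0):
        self.val, self.dot = val, dot  # value and derivative component

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def sin(x: "Dual") -> "Dual":
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)  # chain rule

def derivative(f, x: float) -> float:
    return f(Dual(x, 1.0)).dot  # seed dx/dx = 1

print(derivative(lambda x: x * x + sin(x), 2.0))  # 2x + cos(x) at x = 2
```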
Big-data analytics: a critical review and some future directions
The aim of the paper is to present a critical review of analytics and visualization technology for big data, and to propose future directions that overcome the shortcomings of current technologies. Current machine learning and data-mining algorithms operate mostly on predefined scales of aggregation, whereas with vast amounts of data the appropriate level of aggregation often cannot be defined ahead of time. We therefore identify a novel, extended architecture that operates on a flexible multi-resolution hypothesis space. The goal of such an architectural framework is to open the space of discoverable models to classes of data that today's approaches handle only as special cases. Furthermore, the multi-resolution approach to big-data analytics could enable scenarios such as semi-supervised and unsupervised anomaly detection, detecting complex relationships in heterogeneous data sources, and providing the ground for visualization of complex processes.
Co-Allocation with Collective Requests in Grid Systems
We present a new algorithm for resource allocation in large, heterogeneous grids. Its main advantage over existing co-allocation algorithms is that it supports collective requests with partial resource reservation, where the focus is on better grid utilisation. Alongside the requests that must be fulfilled by each resource, a collective request specifies the total amount of a required resource property without a strict assumption with regard to its distribution. As a consequence, the job becomes much more flexible in terms of its resource assignment and the co-allocation algorithm may therefore start the job earlier. This flexibility increases grid utilisation as it allows an optimisation of job placement that leads to a greater number of accepted jobs. The proposed algorithm is implemented as a module in the XtreemOS grid operating system. Its performance and complexity have been assessed through experiments on the Grid’5000 infrastructure. The results reveal that in most cases the algorithm returns optimal start times for jobs and acceptable, but sometimes suboptimal resource sets.
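The essence of a collective request can be sketched in a few lines: per-resource constraints are checked individually, while the collective total may be distributed arbitrarily across the selected set. The greedy sketch below is a simplification for illustration, not the XtreemOS co-allocation algorithm.

```python
# Simplified collective request: every node must meet the per-node CPU
# constraint, while the chosen set as a whole must reach a total RAM amount,
# with no assumption about how that total is distributed across nodes.
def co_allocate(resources: list[dict], per_node_min_cpu: int,
                collective_ram_gb: int) -> list[dict] | None:
    eligible = [r for r in resources if r["cpu"] >= per_node_min_cpu]
    # Greedily prefer nodes with the most RAM to keep the set small.
    eligible.sort(key=lambda r: r["ram_gb"], reverse=True)
    chosen, total = [], 0
    for r in eligible:
        chosen.append(r)
        total += r["ram_gb"]
        if total >= collective_ram_gb:
            return chosen
    return None  # request cannot be fulfilled

nodes = [{"id": "n1", "cpu": 4, "ram_gb": 8},
         {"id": "n2", "cpu": 8, "ram_gb": 32},
         {"id": "n3", "cpu": 2, "ram_gb": 64}]
print(co_allocate(nodes, per_node_min_cpu=4, collective_ram_gb=36))
```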
Constellation-Based Deep Ear Recognition
This chapter introduces COM-Ear, a deep constellation model for ear recognition. Different from competing solutions, COM-Ear encodes global as well as local characteristics of ear images and generates descriptive ear representations that ensure competitive recognition performance. The model is designed as a dual-path convolutional neural network (CNN), where one path processes the input in a holistic manner and the second captures local image characteristics from patches sampled from the input image. A novel pooling operation, called patch-relevant-information pooling, is also proposed and integrated into the COM-Ear model. The pooling operation helps to select features from the input patches that are locally important and to focus the attention of the network on image regions that are descriptive and important for representation purposes. The model is trained in an end-to-end manner using a combined cross-entropy and center loss. Extensive experiments were conducted on the recently introduced Extended Annotated Web Ears (AWEx) dataset.
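To make the dual-path idea concrete, here is an illustrative PyTorch skeleton, assuming quadrant patches and a simple fusion layer; it is not the COM-Ear architecture (which also includes the patch-relevant-information pooling described above).

```python
# Illustrative dual-path CNN skeleton: a holistic path over the whole image
# plus a weight-shared local path over patches, fused into one embedding.
import torch
import torch.nn as nn

def small_cnn(out_dim: int) -> nn.Module:
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, out_dim))

class DualPathEarNet(nn.Module):
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.holistic = small_cnn(embed_dim)  # whole-image path
        self.local = small_cnn(embed_dim)     # shared weights over patches
        self.fuse = nn.Linear(2 * embed_dim, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.holistic(x)
        # Four fixed quadrant patches stand in for the paper's patch
        # sampling; each is embedded by the shared local path and averaged.
        _, _, H, W = x.shape
        patches = [x[:, :, :H // 2, :W // 2], x[:, :, :H // 2, W // 2:],
                   x[:, :, H // 2:, :W // 2], x[:, :, H // 2:, W // 2:]]
        l = torch.stack([self.local(p) for p in patches]).mean(dim=0)
        return self.fuse(torch.cat([h, l], dim=1))

print(DualPathEarNet()(torch.randn(2, 3, 96, 96)).shape)  # torch.Size([2, 128])
```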
CyberWISER-Light: Supporting Cyber Risk Assessment with Automated Vulnerability Scanning
Most of us rely on information and communication technologies (ICT) in our professional lives as well as private day-to-day activities. Although this brings huge benefits in many areas, we rarely think about the threats introduced by our increasing dependence on ICT. As the number of security-related cyberspace incidents continues to increase, it is important to be aware of the potential impact that incidents such as identity misappropriation, information theft or disruption of critical services can have on individuals and businesses. SMEs, representing the highest proportion of European businesses, are the most vulnerable to cybercrime. The biggest obstacle in the process of limiting the growth of cybersecurity incidents is the lack of awareness of individuals, business decision makers and even IT professionals, which leads to insufficient risk management and inadequately resilient security information systems and networks.
DICE: quality-driven development of data-intensive cloud applications
Model-driven engineering (MDE) often features quality assurance (QA) techniques to help developers create software that meets reliability, efficiency, and safety requirements. In this paper, we consider the question of how quality-aware MDE should support data-intensive software systems. This is a difficult challenge, since existing models and QA techniques largely ignore properties of data such as volumes, velocities, or data location. Furthermore, QA requires the ability to characterize the behavior of technologies such as Hadoop/MapReduce, NoSQL, and stream-based processing, which are poorly understood from a modeling standpoint. To foster a community response to these challenges, we present the research agenda of DICE, a quality-aware MDE methodology for data-intensive cloud applications. DICE aims at developing a quality engineering tool chain offering simulation, verification, and architectural optimization for Big Data applications. We overview some key challenges involved in developing these tools and the underpinning models.
Digital Encyclopedia Of Slovenian Natural And Cultural Heritage – DEDI
This paper presents the web application called the Digital Encyclopaedia of Heritage (DEDI), a new milestone in the preservation and presentation of Slovenian cultural and natural heritage. It introduces novel concepts and aims for more interactive search capabilities and greater exposure of the heritage to the general public. DEDI also supports research, learning, cultural development etc., and, as such, strengthens the national identity. The most important feature of DEDI is its unified approach, where all four types of heritage are presented equally in one place. In this paper, special attention is given to the presentation of services developed for DEDI that could be used in other applications and projects – in essence, a presentation of the DEDI framework and the DEDI metadata model.
Employing Graphical Risk Models to Facilitate Cyber-Risk Monitoring - the WISER Approach
We present a method for developing machine-readable cyber-risk assessment algorithms based on graphical risk models, along with a framework that can automatically collect the input, execute the algorithms, and present the assessment results to a decision maker, thereby facilitating continuous monitoring of cyber-risk. The intended users of the method are professionals and practitioners interested in developing new algorithms for a specific organization, system or attack type, such as consultants or dedicated cyber-risk experts in larger organizations. The intended users of the assessment results are decision makers in charge of countermeasure selection from an overall business perspective.
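To illustrate what a machine-readable risk assessment algorithm can mean in practice, here is a deliberately tiny, hypothetical sketch in which a monitored indicator is mapped to a likelihood level and combined with impact through a risk matrix; the indicator, thresholds and matrix are invented for illustration and are not the WISER algorithms.

```python
# Hypothetical sketch: a graphical risk model reduced to executable rules.
# Indicator, threshold and matrix values are illustrative assumptions.
RISK_MATRIX = {("low", "low"): "low", ("low", "high"): "medium",
               ("high", "low"): "medium", ("high", "high"): "high"}

def likelihood(failed_logins_per_hour: float) -> str:
    return "high" if failed_logins_per_hour > 100 else "low"

def assess(failed_logins_per_hour: float, impact: str) -> str:
    return RISK_MATRIX[(likelihood(failed_logins_per_hour), impact)]

print(assess(250, impact="high"))  # -> "high"
```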
Experiences in Building a mOSAIC of Clouds
The diversity of Cloud computing services challenges application developers, as various non-standard interfaces are provided for these services. Few middleware solutions have been developed so far to support the design, deployment and execution of service-independent applications, or the management of resources from multiple Clouds. This paper focuses on one of these advanced middleware solutions, called mOSAIC. Written after the completion of its development, the paper presents an integrated overview of the mOSAIC approach and the use of its various software prototypes in a Cloud application development process, starting from the design concepts and arriving at various applications, as well as positioning it against similar initiatives.
Federated Authentication and Credential Translation in the EUDAT Collaborative Data Infrastructure
One of the challenges in a distributed data infrastructure is how users authenticate to the infrastructure, and how their authorisations are tracked. Each user community comes with its own established practices, all different, and users are put off if they need to use new, difficult tools. From the perspective of the infrastructure project, the level of assurance must be high enough, and it should not be necessary to reimplement an authentication and authorisation infrastructure (AAI). In the EUDAT project, we chose a mostly loosely coupled approach based on the outcomes of the Contrail and Unicore projects. We have preferred a practical approach, combining the outcomes of several projects that have each contributed a part of the puzzle. The present paper describes our experiences with the integration of these parts. Eventually, we aim to have a full framework that will enable us to easily integrate new user communities and new services.
Fog and Cloud in the Transportation, Marine and eHealth Domains
Amazing things have been achieved in a wide range of application domains by exploiting a multitude of small connected devices, collectively known as the Internet of Things. Managing these devices and their resources is a task for the underlying Fog technology, which enables the building of smart and efficient applications. Currently, the Fog is not implemented to the extent that we could submit application requirements to a Fog provider, select the returned resources and deploy an application on them. A widely adopted workaround is to deploy Cloud applications that exploit the functionality of IoT and Fog devices. Although Clouds provide virtually unlimited computational power, they can become a bottleneck and introduce unnecessary communication overhead when a huge number of devices needs to be controlled, read or written to. It is therefore reasonable to formulate use cases that exploit Edge and Fog functionality and to define a set of basic requirements for Fog providers.
ICT tools’ potential and opportunities: Energy hub approach
ICT tools will play a central role in the proliferation of new energy services in a liberalised energy market. A holistic view of communication and IT solutions focuses on the design of energy hub solutions that enable active prosumer participation in tertiary energy markets through virtual power plants (VPPs), as well as consumers' home management of power consumption.
Implicit Human-computer Interaction for Photo Collection Management
In the age of digital photography, the amount of photos we have in our personal collections has increased substantially along with the effort needed to manage these new, larger collections. This issue has already been addressed in various ways: from organization by meta-data analysis to image recognition and social network analysis. We introduce a new, more personal perspective on photowork that aims at understanding the user and his/her subjective relationship to the photos. It does so by means of implicit human–computer interaction, that is, by observing the user’s interaction with the photos. In order to study this interaction, we designed an experiment to see how people behave when manipulating photos on a tablet and how this implicitly conveyed information can be used to aid photo collection management.
Improving LiDAR Compression Efficiency on Small Packets
Several high-quality methods for compressing LiDAR data stored in the LAS format have emerged in recent years. They offer good compression for large datasets, but are less efficient on small data packets, which are needed in web applications. We focus on this problem by analysing two state-of-the-art implementations for large LAS datasets and then proposing improvements suitable for small data packets.
Influence of segmentation on deep iris recognition performance
Despite the rise of deep learning in numerous areas of computer vision and image processing, iris recognition has not benefited considerably from these trends so far. Most of the existing research on deep iris recognition focuses on new models for generating discriminative and robust iris representations and relies on methodologies akin to traditional iris recognition pipelines. Hence, the proposed models do not approach iris recognition in an end-to-end manner, but rather use standard heuristic iris segmentation (and unwrapping) techniques to produce normalized inputs for the deep learning models. However, because deep learning is able to model very complex data distributions and nonlinear data changes, an obvious question arises: how important is the use of traditional segmentation methods in a deep learning setting? To answer this question, we present an empirical analysis of the impact of iris segmentation on the performance of deep learning models, using a simple two-stage pipeline consisting of a segmentation and a recognition step. We evaluate how the accuracy of segmentation influences recognition performance, but also examine whether segmentation is needed at all. We use the CASIA Thousand and SBVPI datasets for the experiments and report several interesting findings.
Infrastructure-as-Code for Data-Intensive Architectures: A Model-Driven Development Approach
As part of the DevOps tactics, Infrastructure-as-Code (IaC) provides the ability to create, configure, and manage complex infrastructures by means of executable code. Writing IaC, however, is not an easy task, since it requires blending different infrastructure programming languages and abstractions, each specialized in a particular aspect of infrastructure creation, configuration, and management. Moreover, the larger and more complex the architectures become (e.g., data-intensive or microservice-based architectures), the more dire the need for IaC becomes. The goal of this paper is to exploit Model-Driven Engineering (MDE) to create language-agnostic models that are then automatically transformed into IaC. We focus on the domain of Data-Intensive Applications, as these typically exploit complex infrastructures which demand sophisticated and fine-grained configuration and re-configuration. We show that, through our approach, called DICER, it is possible to create complex IaC with significant time savings, in IaC design as well as in deployment and re-deployment.
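As a toy illustration of the model-to-IaC principle (not the actual DICER transformations, which operate on much richer models), the sketch below renders a minimal language-agnostic model into a TOSCA-style node_templates fragment.

```python
# Toy model-to-IaC transformation: a language-agnostic model (a dict) is
# rendered into a TOSCA-style fragment. Real MDE toolchains use proper
# metamodels and transformation engines; this only illustrates the idea.
MODEL = {"nodes": [{"name": "spark_master", "cpu": 4, "mem_gb": 16},
                   {"name": "spark_worker", "cpu": 8, "mem_gb": 32}]}

TEMPLATE = """  {name}:
    type: tosca.nodes.Compute
    capabilities:
      host:
        properties: {{ num_cpus: {cpu}, mem_size: {mem_gb} GB }}"""

def model_to_iac(model: dict) -> str:
    body = "\n".join(TEMPLATE.format(**node) for node in model["nodes"])
    return "node_templates:\n" + body

print(model_to_iac(MODEL))
```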
Introduction of the 3D GIS for Decision-Making and Response in the Event of Disasters
The Administration for Civil Protection and Disaster Relief, a constituent body of the Ministry of Defence, is renowned as one of the first in the world to introduce a Geographic Information System (GIS) for efficient support of rescue operations in cases of traffic accidents, floods, fires and other natural disasters. Based on an analysis of additional requirements for the visualisation and manipulation of geospatial data, the Administration issued a public tender in 2009 to replace the existing 2D geographic information system with a more flexible, customizable and distributed 3D GIS solution (Gaea+) providing open-standards-based data access, advanced 3D visualization, real-time data integration, and spatial analysis tools. This paper presents the overall architecture and core functionalities of the 3D GIS solution. The deployed system has already been demonstrated to several administrations from countries in the region, who were impressed by its ease of use and powerful functionalities.
Inverse Eigenvalue Problem for Euclidean Distance Matrices of Size 3
A matrix is a Euclidean distance matrix (EDM) if there exist points such that the matrix elements are the squares of the distances between the corresponding points. The inverse eigenvalue problem (IEP) asks to construct (or prove the existence of) a matrix with particular properties and a given spectrum. It is well known that the IEP for EDMs of size 3 has a solution. In this paper, all solutions of the problem are given and their relation to geometry is studied. A possible extension to larger EDMs is tackled.
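For reference, the standard definitions involved: the entries of an EDM are squared point distances, and Schoenberg's classical criterion characterizes exactly which matrices arise this way.

```latex
\[
  D_{ij} = \lVert x_i - x_j \rVert^2
  \quad\text{for some } x_1,\dots,x_n \in \mathbb{R}^k,
\]
\[
  D \in \mathrm{EDM}_n
  \;\Longleftrightarrow\;
  D = D^{\mathsf T},\ \operatorname{diag}(D) = 0,\ \text{and}\
  x^{\mathsf T} D x \le 0 \ \text{ whenever } e^{\mathsf T} x = 0,
\]
```

where e denotes the all-ones vector. The size-3 IEP then asks for such a D with a prescribed spectrum.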
Maritime IoT Solutions in Fog and Cloud
We present the complexity of marine-based IoT solutions, i.e. devices with sensors deployed in boats and vessels managed by the Fog and Cloud. The paper describes issues and challenges that arise in this specific environment and presents approaches taken to tackle them. The approach is demonstrated with the implementation of a solution for an open-source fog to cloud platform, mF2C, which results in a working prototype showing promising advantages of exploiting the platform over independently developing a proprietary solution.
Methods for Constructing Distance Matrices and the Inverse Eigenvalue Problem
In this paper, a symmetric nonnegative matrix with zero diagonal and given spectrum, where exactly one of the eigenvalues is positive, is constructed. This solves the symmetric nonnegative inverse eigenvalue problem (SNIEP) for such a spectrum. The construction is based on the idea from the paper by Hayden, Reams and Wells, "Methods for constructing distance matrices and the inverse eigenvalue problem", and some results of that paper are enhanced. The construction is applied to the solution of the inverse eigenvalue problem for Euclidean distance matrices, under some assumptions on the eigenvalues.
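The spectral constraint driving such constructions is elementary and worth recalling: a matrix with zero diagonal has zero trace, so the eigenvalues sum to zero and the single positive eigenvalue is pinned down by the others.

```latex
\[
  \operatorname{tr}(A) = \sum_{i=1}^{n} \lambda_i = 0
  \qquad\Longrightarrow\qquad
  \lambda_1 = -\sum_{i=2}^{n} \lambda_i,
  \quad \lambda_1 > 0 \ge \lambda_2 \ge \cdots \ge \lambda_n .
\]
```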
mF2C: The Evolution of Cloud Computing Towards an Open and Coordinated Ecosystem of Fogs and Clouds
Fog computing brings cloud computing capabilities closer to the end-devices and users, while enabling location-dependent resource allocation and low-latency services, and significantly extending the IoT services portfolio as well as market and business opportunities in the cloud and IoT sectors. With the number of devices growing exponentially globally, new cloud and fog models are expected to emerge, paving the way for shared, collaborative, extensible mobile, volatile and dynamic compute, storage and network infrastructure. Put together, cloud and fog computing create a new stack of resources, which we refer to as Fog-to-Cloud (F2C), creating the need for a new, open and coordinated management ecosystem. The EU Horizon 2020 programme has recently funded a new research initiative (mF2C) bringing together relevant industry and academic players in the cloud arena, aimed at designing an open, secure, decentralized, multi-stakeholder management framework for F2C computing, including novel programming models, privacy and security, data storage techniques, service creation, brokerage solutions, SLA policies, and resource orchestration methods. This paper introduces the main mF2C concepts, illustrates the need for a coordinated management ecosystem, proposes a preliminary design of its foundational building blocks and presents results that show the benefits mF2C may have in three key real-world scenarios.
Model-driven continuous deployment for quality DevOps
DevOps entails a series of software engineering strategies and tools that promise to deliver quality and speed at the same time with little or no additional expense. In our work we strive to enable a DevOps way of working, combining Model-Driven Engineering tenets with the challenge of delivering a model-driven continuous deployment tool that allows quick (re-)deployment of cloud applications for the purpose of continuous improvement. This paper illustrates the DICER tool and elaborates on how it can deliver on the DevOps promise and enable quality-awareness.
Multidimensional Representations of Natural and Cultural Heritage in the DEDI Project
DEDI stands for the Digital Encyclopaedia of Natural and Cultural Heritage in Slovenia, the result of two prototype research and development projects, DEDI and DEDI II, carried out between 2008 and 2010. The two projects were co-financed by the Ministry of Higher Education, Science and Technology and the European Regional Development Fund in the frame of the research and development projects e-content and e-services. DEDI is the first attempt at a multimedia-rich digital representation of Slovenian natural and cultural heritage on a common web site, offering verifiable, qualitative and complex content to a wide range of the general public. Digital content (text, video records, audio records, photographs) is enriched by 2-dimensional and/or 3-dimensional visualisation of geographical data, or even by 4-dimensional models that combine 3D models with the time dimension. Thus it is possible to simulate the past, current and future condition of natural and cultural heritage objects, and their changes, growth, deterioration or oscillation.
Novel Efficient Techniques for Real-Time Cloud Security Assessment
Cloud computing offers multiple benefits to users by relieving them of the tasks of setting up complex infrastructure and costly services. However, these benefits come at a price: Cloud Service Customers (CSCs) need to trust the Cloud Service Providers (CSPs) with their data and are additionally exposed to integrity- and confidentiality-related incidents on the CSPs' side. It is therefore important for CSCs to know what security assurances the CSPs are able to guarantee, by being able to quantitatively or qualitatively compare CSPs' offers with respect to their own needs. On the other hand, it is also important for CSPs to assess their own offers by comparing them to the competition and to the CSCs' needs, so as to improve their offers and gain better trust. There is thus a basic need for techniques that address the Cloud security assessment problem. Although a few assessment methodologies have recently been proposed, their value comes only if they can be efficiently executed to support actual decisions at run time. For an assessment methodology to be practical, it should be efficient enough to allow CSCs to adjust their preferences while observing on the fly the current evaluation of CSPs' offers based on the preferences being chosen. Furthermore, for an assessment methodology to be useful in real-world applications, it should be efficient enough to support many requests in parallel, taking into account the growing number of CSPs and the variety of requirements that CSCs might have. In this paper, we develop a novel Cloud security assessment technique called the Moving Intervals Process (MIP) that possesses all these qualities. Unlike existing complex approaches (e.g., the Quantitative Hierarchical Process – QHP) that are computationally too expensive to be deployed for the needed on-line real-time assessment, MIP offers both accuracy and high computational efficiency. Additionally, we show how to make the existing QHP competitively efficient.
On Characterizing Proteomics Maps by Using Weighted Voronoi Maps
In contrast to the standard construction of Voronoi regions, in which the boundaries between different regions are at equal distance from the given points, we consider the construction of modified Voronoi regions obtained by giving greater weights to spots reported to have higher abundance. Specifically, we are interested in applying this approach to 2-D proteomics maps and their numerical characterization. As will be seen, the boundaries of the weighted Voronoi regions are sensitive to the relative abundances of the protein spots; the abundances, the z component of the (x, y, z) triplet, are thus automatically incorporated in the numerical analysis of the adjacency matrix, rather than used to augment the adjacency matrix as non-zero diagonal elements. The outlined approach is general and may be of interest for the numerical analysis of other maps that are defined by (x, y, z) triplets as input information.
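One common way to formalize such abundance-weighted regions is the multiplicatively weighted Voronoi diagram sketched below; the paper's exact weighting scheme may differ, so this is for orientation only. A spot p_i with abundance w_i claims the points x satisfying

```latex
\[
  V_i \;=\; \Bigl\{\, x \in \mathbb{R}^2 \;:\;
  \frac{\lVert x - p_i \rVert}{w_i} \;\le\; \frac{\lVert x - p_j \rVert}{w_j}
  \ \ \text{for all } j \neq i \,\Bigr\},
\]
```

so that larger abundances push the boundaries away from p_i and enlarge its region.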
On Euclidean Distance Matrices of Graphs
In this paper, a relation between graph distance matrices and Euclidean distance matrices (EDM) is considered. It is proven that distance matrices of paths and cycles are EDMs. The proofs are constructive and the generating points of studied EDMs are given in a closed form. A generalization to weighted graphs (networks) is tackled.
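As a small worked instance of the path result, consider P₃: its graph distance matrix is realized, as squared distances, by three points in the plane.

```latex
\[
  D(P_3) \;=\; \begin{pmatrix} 0 & 1 & 2 \\ 1 & 0 & 1 \\ 2 & 1 & 0 \end{pmatrix},
  \qquad
  x_1 = (0,0),\quad x_2 = (1,0),\quad x_3 = (1,1),
\]
\[
  \lVert x_1 - x_2 \rVert^2 = 1,\qquad
  \lVert x_2 - x_3 \rVert^2 = 1,\qquad
  \lVert x_1 - x_3 \rVert^2 = 2 .
\]
```

The paper gives such generating points in closed form for paths and cycles of arbitrary size.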
On the similarities and differences between the Cloud, Fog and the Edge
The field of edge and fog computing is growing, but there are still many inconsistent and loosely-defined terms in current literature. With many articles comparing theoretical architectures and evaluating implementations, there is a need to understand the underlying meaning of information condensed into fog, edge, and similar terms. Through our review of current literature, we discuss these differences and extract key characteristics for basic concepts that appear throughout. The similarities to existing IaaS, PaaS and SaaS models are presented, contrasted against similar models modified for the specifics of edge devices and workloads.
We also evaluate the aspects that existing evaluation and comparison works investigate, including the compute, networking, storage, security, and ease-of-use capabilities of the target implementations. Following that, we give a broad overview of currently available commercial and open-source platforms implementing the edge or fog paradigms, identifying key players, successful niche actors and general trends in the feature-level and technical development of these platforms.
Operational Calculus for Differentiable Programming
In this work we present a theoretical model for differentiable programming. We construct an algebraic language that encapsulates formal semantics of differentiable programs by way of Operational Calculus. The algebraic nature of Operational Calculus can alter the properties of the programs that are expressed within the language and transform them into their solutions.
In our model, programs are elements of programming spaces and are viewed as maps from the virtual memory space to itself. The virtual memory space is an algebra of programs, an algebraic data structure one can calculate with. We define the operator of differentiation (∂) on programming spaces and, using its powers, implement the general shift operator and the operator of program composition. We provide the formula for the expansion of a differentiable program into an infinite tensor series in terms of the powers of ∂. We express the operator of program composition in terms of the generalized shift operator and ∂, which implements a differentiable composition in the language. Such operators serve as abstractions over the tensor series algebra and as the main actors in our language.
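For context, the general shift operator referenced here generalizes a classical operational-calculus identity: the exponential of the derivative acts as translation, which is what makes an expansion in powers of ∂ possible (stated below for an analytic function f).

```latex
\[
  e^{h\partial} \;=\; \sum_{n=0}^{\infty} \frac{h^{n}}{n!}\,\partial^{\,n},
  \qquad
  \bigl(e^{h\partial} f\bigr)(x)
  \;=\; \sum_{n=0}^{\infty} \frac{h^{n}}{n!}\, f^{(n)}(x)
  \;=\; f(x+h).
\]
```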
We demonstrate our model's usefulness in differentiable programming by using it to analyse iterators, deriving fractional iterations and their iterating velocities, and explicitly solving the special case of ReduceSum.
Point-Based Rendering Optimization with Textured Meshes for Fast LiDAR Visualization
In this paper a new method for high-quality rendering of large LiDAR-based terrain data is presented. The visualization system upgrades previous point-based rendering methods by detecting continuous surfaces and replacing them with decimated triangle meshes. High-quality visualization is retained by using render-to-texture methods to generate color textures and bump maps from the original LiDAR data and applying them to the newly generated triangle meshes. This hybrid approach is able to decrease the rendering times of surfaces to less than 50% with little to no difference in rendering quality. The described optimizations can be executed at run time without interfering with user interaction.
SLA-enabled Enterprise IT
The SLA@SOI project has researched and engineered technologies to embed SLA-aware infrastructures into the service economy. It has published models, defined architectures, developed an open-source framework and driven open standards such as the Open Cloud Computing Interface. In this demo the application of SLA@SOI in an enterprise IT use case will be demonstrated. The presentation will cover the SLA-aware negotiation, scheduling, provisioning, and monitoring of virtual machines.
SSBC 2018: Sclera Segmentation Benchmarking Competition
This paper summarises the results of the Sclera Segmentation Benchmarking Competition (SSBC 2018), organised in the context of the 11th IAPR International Conference on Biometrics (ICB 2018). The aim of this competition was to record the developments in sclera segmentation in a cross-sensor environment (the sclera trait captured using multiple acquisition sensors), and to draw the attention of researchers to this subject of research. For the purpose of benchmarking, we developed two datasets of sclera images captured using different sensors. The first is the Multi-Angle Sclera Dataset (MASD version 1), collected using a DSLR camera and used in previous editions of the sclera segmentation competitions; the second was collected using an 8-megapixel mobile phone rear camera. As a baseline, manual segmentation masks of the sclera images from both datasets were developed. Precision- and recall-based statistical measures were employed to evaluate the effectiveness of the submitted segmentation techniques and to rank them. Six algorithms were submitted for the segmentation task. This paper analyses the results produced by these algorithms/systems and defines a way forward for this subject of research. Both datasets, along with some of the accompanying ground truth/baseline masks, will be freely available for research purposes upon request to the authors by email.
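For orientation, mask-level precision and recall of a submitted segmentation against the manual ground truth can be computed as in the sketch below; the competition's exact aggregation protocol may differ.

```python
# Pixel-wise precision/recall of a predicted mask against ground truth.
import numpy as np

def precision_recall(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    """pred, truth: boolean arrays where True marks sclera pixels."""
    tp = np.logical_and(pred, truth).sum()
    precision = tp / max(pred.sum(), 1)   # guard against empty predictions
    recall = tp / max(truth.sum(), 1)
    return float(precision), float(recall)

truth = np.zeros((4, 4), dtype=bool); truth[1:3, 1:3] = True  # 4 true pixels
pred = np.zeros_like(truth); pred[1:3, 1:4] = True            # 6 predicted
print(precision_recall(pred, truth))  # (0.666..., 1.0)
```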
SSERBC 2017: Sclera segmentation and eye recognition benchmarking competition
This paper summarises the results of the Sclera Segmentation and Eye Recognition Benchmarking Competition (SSERBC 2017), organised in the context of the International Joint Conference on Biometrics (IJCB 2017). The aim of this competition was to record the recent developments in sclera segmentation and eye recognition in the visible spectrum (using iris, sclera and peri-ocular regions, and their fusion), and to draw the attention of researchers to this subject. To this end, we used the Multi-Angle Sclera Dataset (MASD version 1), comprising 2624 images taken from both eyes of 82 identities, i.e. images of 164 (82×2) eyes. A manual segmentation mask of these images was created to baseline both tasks. Precision- and recall-based statistical measures were employed to evaluate the effectiveness of the segmentation and to rank the submissions in the segmentation task; a recognition accuracy measure was employed for the recognition task. Manually segmented sclera, iris and peri-ocular regions were used in the recognition task. Sixteen teams registered for the competition; six of them submitted algorithms or systems for the segmentation task, and two submitted recognition algorithms or systems. The results produced by these algorithms or systems reflect current developments in the literature on sclera segmentation and eye recognition, employing cutting-edge techniques. The MASD version 1 dataset, with some of the ground truth, will be freely available for research purposes. The success of the competition also demonstrates the recent interest of researchers from academia as well as industry in this subject.
The English-Slovene Glossary of Virtualization Terminology
With the growing adoption of virtual machines over physical hosts as a form of resource consolidation, an English-Slovene glossary of virtualization terminology, covering the management of virtual machines, cloud orchestration and data storage, seemed like the next logical step. The Glossary of Virtualization-related Terms has been translated into Slovene and reviewed by experts in the fields of cloud computing and virtualization technologies, as well as by linguists. Close to 6000 terms were localized for the Slovene market using the advanced version of Poedit, an editor for translating apps and websites. Poedit automatically displays translation equivalents, either from its own built-in translation memory or from a base of previously translated words and phrases created and offered as open source by other users. Based on these, it makes suggestions and, over time, learns enough to fill in frequently used strings. The translated text was then imported into its original page location – the graphical user interface (visible on buttons of the dashboard) of the customized ManageIQ Enterprise Virtualization Manager (EVM) software used by administrators of public and private clouds. The main criteria were therefore brevity and precision in transferring meaning across languages. This is where we encountered most problems – neologisms and existing words that acquire new meanings as a result of the rapid development of virtualization technology. To avoid merely adding a Slovene suffix while the core of the word remains unchanged (e.g. tenant, tenant-ov), and to encourage further additions, comments or suggested changes, the glossary has been made available on Wikipedia, the online encyclopedia.
The Unconstrained Ear Recognition Challenge
In this paper we present the results of the Unconstrained Ear Recognition Challenge (UERC), a group benchmarking effort centered around the problem of person recognition from ear images captured in uncontrolled conditions. The goal of the challenge was to assess the performance of existing ear recognition techniques on a challenging large-scale dataset and identify open problems that need to be addressed in the future. Five groups from three continents participated in the challenge and contributed six ear recognition techniques for the evaluation, while multiple baselines were made available for the challenge by the UERC organizers. A comprehensive analysis was conducted with all participating approaches addressing essential research questions pertaining to the sensitivity of the technology to head rotation, flipping, gallery size, large-scale recognition and others. The top performer of the UERC was found to ensure robust performance on a smaller part of the dataset (with 180 subjects) regardless of image characteristics, but still exhibited a significant performance drop when the entire dataset comprising 3,704 subjects was used for testing.
Towards a Proof-Based SLA Management Framework - The SPECS Approach
We present a framework that allows monitoring of cloud-based applications and environments to verify the fulfilment of Service Level Agreements (SLAs), and to analyse and remediate detectable security breaches that compromise the validity of SLAs related to storage services. In particular, we describe a system that facilitates the identification of the root cause of each violation of the integrity, write-serializability and read-freshness properties. Such a system enables the execution of remediation actions specifically planned for detectable security incidents. The system is activated automatically on top of storage services, according to an SLA that can be negotiated with customers.
Towards a Unified Taxonomy and Architecture of Cloud Frameworks
Infrastructure as a Service (IaaS) is one of the most important layers of Cloud Computing. However, there is an evident deficiency of mechanisms for analysis, comparison and evaluation of IaaS cloud implementations, since no unified taxonomy or reference architecture is available. In this article, we propose a unified taxonomy and an IaaS architectural framework. The taxonomy is structured around seven layers: core service layer, support layer, value-added services, control layer, management layer, security layer and resource abstraction. We survey various IaaS systems and map them onto our taxonomy to evaluate the classification. We then introduce an IaaS architectural framework that relies on the unified taxonomy. We provide a detailed description of each layer and define dependencies between the layers and components. Finally, we evaluate the proposed IaaS architectural framework on several real-world projects, while performing a comprehensive analysis of the most important commercial and open-source IaaS products. The evaluation results show notable distinction of feature support and capabilities between commercial and open-source IaaS platforms, significant deficiency of important architectural components in terms of fulfilling true promise of infrastructure clouds, and real-world usability of the proposed taxonomy and architectural framework.
Training Convolutional Neural Networks with Limited Training Data for Ear Recognition in the Wild
Identity recognition from ear images is an active field of research within the biometric community. The ability to capture ear images from a distance and in a covert manner makes ear recognition technology an appealing choice for surveillance and security applications, as well as related application domains. In contrast to other biometric modalities, where large datasets captured in uncontrolled settings are readily available, datasets of ear images are still limited in size and mostly of laboratory-like quality. As a consequence, ear recognition technology has not yet benefited from advances in deep learning and convolutional neural networks (CNNs) and still lags behind other modalities that have experienced significant performance gains owing to deep recognition technology. In this paper we address this problem and aim at building a CNN-based ear recognition model. We explore different strategies towards model training with limited amounts of training data and show that by selecting an appropriate model architecture, using aggressive data augmentation and selective learning on existing (pre-trained) models, we are able to learn an effective CNN-based model using little more than 1300 training images. The result of our work is the first CNN-based approach to ear recognition that is also made publicly available to the research community. With our model we are able to improve on the rank-one recognition rate of the previous state of the art by more than 25% on a challenging dataset of ear images captured from the web (a.k.a. in the wild).
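The two training strategies named above can be sketched as follows, assuming PyTorch/torchvision; this is a generic illustration, not the released model, and the backbone choice, class count and augmentation parameters are assumptions.

```python
# Sketch of aggressive augmentation plus "selective learning": freeze most
# of a pre-trained backbone and fine-tune only the new head and last block.
import torch.nn as nn
from torchvision import models, transforms

augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.6, 1.0)),
    transforms.RandomHorizontalFlip(),  # ears appear both left- and right-flipped
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3),
    transforms.ToTensor(),
])

model = models.resnet18(weights="IMAGENET1K_V1")  # ImageNet pre-trained backbone
for param in model.parameters():                  # freeze everything first...
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 100)   # ...new head (class count assumed)
for param in model.layer4.parameters():           # ...and fine-tune the last block
    param.requires_grad = True
```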
Transportation Ecosystem Framework in Fog to Cloud Environment
Traffic congestion and accidents make cities the principal source of pollutant emissions. The TIMON project aims at providing real-time (RT) information and cloud-based services through an open web-based platform and a mobile application to the main actors: drivers, vulnerable road users and businesses. TIMON establishes a cooperative ecosystem connecting people, vehicles, infrastructure and business, and contributes to intelligent transport, IoT and Cloud computing. The first part of this paper provides an overview of TIMON and how it contributes to increasing safety and reducing congestion and emissions. The TIMON ecosystem represents an ideal use case for distributed technologies, as it collects data from IoT sensors, open and closed data sources and user engagement data, processes it, and provides useful information not only for road users, but also for scientists and technicians who need real systems to study the data, the infrastructure and IT safety management. In the second part, the Cloud deployment of the TIMON system is described in detail and a new, more distributed design is proposed to exploit the potential of the emerging Fog and Edge computing technologies.
Use of the TRIPOD Overlay Network for Resource Discovery
This paper presents a fully decentralized, efficient and highly scalable resource-discovery approach that is applicable to large heterogeneous and highly dynamic distributed systems. The approach is based on a hybrid overlay network, named TRIPOD, which enables an efficient search for resources in the aforementioned highly distributed, dynamic and largely heterogeneous systems. The key advantages of our solution are its efficient proximity searching, its ability to search over highly dynamic resource properties, its in-built fault tolerance and robustness and, finally, its very low and manageable network overhead.
VPP and its role in the eBADGE project
The project's main objective is to propose an optimal pan-European Intelligent Balancing Mechanism that can integrate VPP systems by means of an integrated communication infrastructure, assisting in the management of the electricity transmission and distribution grids in an optimised, controlled and secure manner. The project will develop an optimal architecture for the implementation of a transnational balancing/reserve market in Europe and will evaluate the order of magnitude of the cost reduction that can be achieved compared with separate national management. The project was piloted on the borders of Italy, Austria and Slovenia.
XLAB – an SME with Expertise in Distributed Systems and Computationally Intensive Applications
The company and its selected product portfolio are described, along with a brief history and future plans. XLAB has been focusing on distributed systems since the company was founded in 2001. The company emphasizes the exchange between academia and industry and successfully applies knowledge from both worlds in its products. One example of an SOA-based application – PHOV – is presented more thoroughly, while the others are mentioned only briefly. Our future plans focus primarily on specialized Cloud-based development and offerings in the local environments of Slovenia and the Balkans.