Elastic Processes

Summary

In Cloud Computing, resource elasticity is the ability to scale computing, storage, and network capacities. Scaling can be done either horizontally, when additional virtual machines (VMs) are leased or released as necessary, or vertically, when the size of a particular VM is changed. Apart from resource elasticity, elasticity may also be considered from a cost and a quality perspective: Cost elasticity describes the responsiveness of resource provisioning to changes in cost, while quality elasticity measures how responsive quality is to a change in resource usage. Importantly, these three perspectives (resources, cost, and quality) do not necessarily have a linear relationship.
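
As a minimal illustration of the two scaling directions (all names and thresholds below are assumptions for this sketch, not part of any particular cloud API), a horizontal action changes the number of VMs, while a vertical action resizes a single VM:

    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional

    class Direction(Enum):
        HORIZONTAL = "lease or release a VM"   # change the number of VMs
        VERTICAL = "resize a VM"               # change the size of one VM

    @dataclass
    class ScalingAction:
        direction: Direction
        delta: int                      # +/- VMs for horizontal, +/- CPU cores for vertical
        target_vm: Optional[str] = None

    def plan(utilisation: float, scale_out_at: float = 0.8, scale_in_at: float = 0.3) -> Optional[ScalingAction]:
        """Toy threshold policy; thresholds and the preference for horizontal scaling are assumptions."""
        if utilisation > scale_out_at:
            return ScalingAction(Direction.HORIZONTAL, delta=+1)   # lease one additional VM
        if utilisation < scale_in_at:
            return ScalingAction(Direction.HORIZONTAL, delta=-1)   # release one VM
        return None   # within bounds; a vertical resize would instead target one specific VM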

During the enactment of manufacturing processes in a Cloud Manufacturing platform, the usage of cloud-based computational resources allows for the runtime adjustment of infrastructural components based on actual demand. Such flexible business processes, enacted on the basis of smart leasing and releasing of computational resources, are called elastic processes. To realize elastic manufacturing processes, a methodology and toolset to manage the complete business process lifecycle is needed. It is equally important to control cloud-based computational resources appropriately: to lease resources when needed, to deploy process activities (in terms of single manufacturing services) onto those resources, to invoke service instances according to a schedule, and to release resources after activities have finished. Such cloud control mechanisms have recently become a topic of major interest within the research community.
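
The control steps named above form a simple enactment cycle. The following sketch is illustrative only; the controller facade and all method names are assumptions rather than the API of any concrete BPMS or cloud provider:

    class CloudController:
        """Hypothetical facade over an IaaS/container API; all method names are assumptions."""
        def lease(self) -> str: ...                       # acquire a VM or container, return its id
        def deploy(self, resource_id: str, service: str) -> None: ...   # place a service on the resource
        def invoke(self, resource_id: str, service: str, payload: dict) -> dict: ...
        def release(self, resource_id: str) -> None: ...  # hand the resource back

    def enact(schedule, ctrl: CloudController):
        """Enact one process instance: lease, deploy, invoke per schedule, release.

        schedule: list of (service, payload) pairs, already ordered by the process scheduler."""
        leased = {}
        for service, payload in schedule:
            if service not in leased:
                leased[service] = ctrl.lease()           # lease a resource when it is first needed
            resource_id = leased[service]
            ctrl.deploy(resource_id, service)            # deploy the activity's manufacturing service
            ctrl.invoke(resource_id, service, payload)   # invoke the service instance
        for resource_id in set(leased.values()):
            ctrl.release(resource_id)                    # release resources after all activities have finished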

Elastic process implementation

Elastic processes are composed from services. However, while in classic service composition services are available on the Internet of Services, for elastic processes, process owners or brokers may deploy services on cloud-based computational resources, e.g., VMs or containers. To realize elastic processes, a Business Process Management System (BPMS) needs to be extended with modules that allow it to act as a cloud controller, i.e., to lease and release resources, deploy services on these resources, and invoke the service instances following a process schedule. The process schedule should be optimised based on functional and non-functional constraints.

Optimisation

In an elastic process landscape, process scheduling and resource allocation are based on the expected functionalities and non-functional demands of the process instances. Usually, constraints on non-functional attributes like Quality of Service (QoS) aspects, for example an expected deadline for process enactment, are defined in Service Level Agreements (SLAs). Whenever SLA violations occur, penalty costs may be charged. There are several approaches to SLA enactment for single cloud applications which optimise application executions with regard to cost minimisation, and a number of different approaches have been proposed to solve the optimisation problem for elastic processes. In real-world manufacturing processes, a BPMS for elastic processes, also known as an elastic Business Process Management System (eBPMS), needs to be able to solve these optimisation problems in very short time and under potentially heavy load.
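
As an illustration only (a deliberately simplified formulation, not the exact model used in the cited approaches), such a cost-minimisation objective can be written as

    \min \; \sum_{v \in V} c_v \, u_v \; + \; \sum_{p \in P} \pi_p \cdot \max(0,\, t_p - d_p)

where V is the set of leased VMs, c_v the leasing cost per billing period of VM v, u_v the number of billed periods, P the set of process instances, t_p the completion time of instance p, d_p its SLA deadline, and \pi_p the penalty rate charged per time unit of deadline violation.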

In order to tackle the problem of elastic process optimisation, one common approach is to enact SLAs in a resource-efficient way with little or no human interaction. For this goal, the method of autonomically governing cloud computing infrastructures is investigated within CREMA. Another optimisation approach for simple sequential elastic processes applies a predictive strategy based on load forecasts. More complex process patterns like XOR-blocks, AND-blocks, or repeat loops remain a challenge, even though these patterns are very common in real-world business processes. Despite being important cost factors, penalty cost and the Billing Time Unit, which expresses the cost per leasing period for cloud-based computational resources, are only considered in the most recent optimisation work, which formulates the problem of finding an optimal solution for complex elastic processes. Since the optimisation of elastic processes is NP-hard, one particular issue is to find solutions to the optimisation problem in polynomial time. Process landscapes can become very large, comprising thousands of process models and instances. Moreover, these process landscapes are dynamic and volatile, i.e., ever-changing, requiring replanning during process runtime. Hence, it is necessary to implement heuristic approaches which are able to schedule processes and allocate resources dynamically and in real time.
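
To make the heuristic idea more concrete, the following greedy sketch (purely illustrative; the step model, the Billing Time Unit length, and all cost figures are assumptions, not taken from the cited approaches) assigns each step to an already leased VM if that VM can still meet the step's deadline, leases a new VM otherwise, and bills leased VMs in whole Billing Time Units plus penalty cost for violations:

    import math
    from dataclasses import dataclass

    BTU = 60.0           # assumed Billing Time Unit in minutes
    COST_PER_BTU = 1.0   # assumed leasing cost per BTU
    PENALTY_RATE = 5.0   # assumed penalty cost per minute of deadline violation

    @dataclass
    class VM:
        free_at: float = 0.0   # earliest time the VM can accept the next step
        busy: float = 0.0      # accumulated busy time, billed in whole BTUs

    def schedule(steps, now=0.0):
        """Greedy earliest-deadline-first allocation of independent steps to leased VMs.

        steps: list of (duration, deadline) tuples in minutes; returns (total cost, number of VMs)."""
        vms, penalty = [], 0.0
        for duration, deadline in sorted(steps, key=lambda s: s[1]):   # earliest deadline first
            candidates = [v for v in vms if max(now, v.free_at) + duration <= deadline]
            if candidates:
                vm = min(candidates, key=lambda v: v.free_at)   # reuse a leased VM that still meets the deadline
            else:
                vm = VM(free_at=now)                            # otherwise lease a new VM
                vms.append(vm)
            start = max(now, vm.free_at)
            vm.free_at = start + duration
            vm.busy += duration
            penalty += PENALTY_RATE * max(0.0, vm.free_at - deadline)
        leasing = COST_PER_BTU * sum(math.ceil(v.busy / BTU) for v in vms)
        return leasing + penalty, len(vms)

    # example: two 30-minute steps, both due by minute 45 -> a second VM is leased so both deadlines hold
    total_cost, vm_count = schedule([(30, 45), (30, 45)])

A production eBPMS would additionally have to consider the process structure (XOR- and AND-blocks, loops), data transfer, and replanning at runtime, as discussed above.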

Relation to CREMA

Inter-organisational manufacturing processes are the primary way of interaction between companies in the manufacturing settings CREMA investigates. Hence, supporting real-world manufacturing processes through their virtualised, software-based counterparts is of primary interest in CREMA. In particular, CREMA conducts groundbreaking research with regard to process optimisation (both in the real world and regarding the usage of cloud-based computational resources) and provides the means for the "servitisation" of manufacturing processes, which can then be used within elastic manufacturing processes.

Articles

  1. C. Chang et al., "Mobile Cloud Business Process Management System for the Internet of Things: A Survey," ACM Computing Surveys, vol. 49, no. 4, 2017. Link
    The Internet of Things (IoT) represents a comprehensive environment that consists of a large number of smart devices interconnecting heterogeneous physical objects to the Internet. Many domains such as logistics, manufacturing, agriculture, urban computing, home automation, ambient assisted living, and various ubiquitous computing applications have utilized IoT technologies. Meanwhile, Business Process Management Systems (BPMSs) have become a successful and efficient solution for coordinated management and optimized utilization of resources/entities. However, past BPMSs have not considered many issues they will face in managing large-scale connected heterogeneous IoT entities. Without fully understanding the behavior, capability, and state of the IoT entities, the BPMS can fail to manage the IoT integrated information systems. In this article, we analyze existing BPMSs for IoT and identify the limitations and their drawbacks based on a Mobile Cloud Computing perspective. Later, we discuss a number of open challenges in BPMS for IoT.
  2. P. Hoenisch et al., "Optimization of Complex Elastic Processes," IEEE Transactions on Services Computing, pp. 700-713, 2016. doi:10.1109/TSC.2015.2428246 Link
    Business Process Management is a matter of great importance in different industries and application areas. In many cases, it involves the execution of resource-intensive tasks in terms of computing power such as CPU and RAM. Due to the emergence of Cloud computing, theoretically unlimited resources can be used for the enactment of business processes. These Cloud resources render several challenges for Business Process Management Systems to ensure a predefined Quality of Service level during Cloud-based process enactment. Therefore, new solutions for process scheduling and resource allocation are required to tackle these challenges. Within this paper, we present a novel approach to schedule business processes and optimize the used Cloud-based computational resources in a cost-efficient way, thus realizing so-called elastic processes. For that, we specify the Service Instance Placement Problem, i.e., an optimization model which defines the setting of how service instances are scheduled among resources. Through extensive evaluations we show the benefits of our contributions and compare the novel approach against a baseline which follows an ad hoc approach.
  3. S. Singh et al., "QoS-Aware Autonomic Resource Management in Cloud Computing: A Systematic Review," ACM Computing Surveys, vol. 48, no. 3, 2015. Link
    As computing infrastructure expands, resource management in a large, heterogeneous, and distributed environment becomes a challenging task. In a cloud environment, with uncertainty and dispersion of resources, one encounters problems of allocation of resources, which is caused by things such as heterogeneity, dynamism, and failures. Unfortunately, existing resource management techniques, frameworks, and mechanisms are insufficient to handle these environments, applications, and resource behaviors. To provide efficient performance of workloads and applications, the aforementioned characteristics should be addressed effectively. This research depicts a broad methodical literature analysis of autonomic resource management in the area of the cloud in general and QoS (Quality of Service)-aware autonomic resource management specifically. The current status of autonomic resource management in cloud computing is distributed into various categories. Methodical analysis of autonomic resource management in cloud computing and its techniques are described as developed by various industry and academic groups. Further, taxonomy of autonomic resource management in the cloud has been presented. This research work will help researchers find the important characteristics of autonomic resource management and will also help to select the most suitable technique for autonomic resource management in a specific application along with significant future research directions.
  4. S. Schulte et al., "Elastic Business Process Management: State of the Art and Open Challenges for BPM in the Cloud," Future Generation Computer Systems, vol. 46, pp. 36–50, 2015. Link
    With the advent of cloud computing, organizations are nowadays able to react rapidly to changing demands for computational resources. Not only individual applications can be hosted on virtual cloud infrastructures, but also complete business processes. This allows the realization of so-called elastic processes, i.e., processes which are carried out using elastic cloud resources. Despite the manifold benefits of elastic processes, there is still a lack of solutions supporting them. In this paper, we identify the state of the art of elastic Business Process Management with a focus on infrastructural challenges. We conceptualize an architecture for an elastic Business Process Management System and discuss existing work on scheduling, resource allocation, monitoring, decentralized coordination, and state management for elastic processes. Furthermore, we present two representative elastic Business Process Management Systems which are intended to counter these challenges. Based on our findings, we identify open issues and outline possible research directions for the realization of elastic processes and elastic Business Process Management.
  5. S. Euting et al., "Scalable Business Process Execution in the Cloud," IEEE International Conference on Cloud Engineering (IC2E 2014), Boston, USA, pp. 175–184, 2014. Link
    Business processes orchestrate service requests in a structured fashion. Process knowledge, however, has rarely been used to predict and decide about cloud infrastructure resource usage. In this paper, we present an approach for BPM-aware cloud computing that builds on process knowledge to improve the timeliness and quality of resource scaling decisions. We introduce an IaaS resource controller based on fuzzy theory that monitors process execution and that is used to predict and control resource requirements for subsequent process tasks. In a laboratory experiment, we evaluate the controller design against a commercially available state-of-the-art auto scaler. Based on the results, we discuss improvements and limitations, and suggest directions for further research.
  6. Z. Wu et al., "A market-oriented hierarchical scheduling strategy in cloud workflow systems," The Journal of Supercomputing, vol. 63, no. 1, pp. 256-293, 2013. Link
    A cloud workflow system is a type of platform service which facilitates the automation of distributed applications based on the novel cloud infrastructure. One of the most important aspects which differentiate a cloud workflow system from its other counterparts is the market-oriented business model. This is a significant innovation which brings many challenges to conventional workflow scheduling strategies. To investigate such an issue, this paper proposes a market-oriented hierarchical scheduling strategy in cloud workflow systems. Specifically, the service-level scheduling deals with the Task-to-Service assignment where tasks of individual workflow instances are mapped to cloud services in the global cloud markets based on their functional and non-functional QoS requirements; the task-level scheduling deals with the optimisation of the Task-to-VM (virtual machine) assignment in local cloud data centres where the overall running cost of cloud workflow systems will be minimised given the satisfaction of QoS constraints for individual tasks. Based on our hierarchical scheduling strategy, a package based random scheduling algorithm is presented as the candidate service-level scheduling algorithm and three representative metaheuristic based scheduling algorithms including genetic algorithm (GA), ant colony optimisation (ACO), and particle swarm optimisation (PSO) are adapted, implemented and analysed as the candidate task-level scheduling algorithms. The hierarchical scheduling strategy is being implemented in our SwinDeW-C cloud workflow system and demonstrating satisfactory performance. Meanwhile, the experimental results show that the overall performance of ACO based scheduling algorithm is better than others on three basic measurements: the optimisation rate on makespan, the optimisation rate on cost and the CPU time.
  7. T. Rahmani et al., "Ontology-based integration of heterogeneous material data resources in Product Lifecycle Management," IEEE International Conference on Systems, Man, and Cybernetics (SMC 2013), 2013, pp. 4589–4594. Link
  8. D. Schuller et al., "Optimizing Complex Service-based Workflows for Stochastic QoS Parameters," International Journal of Web Services Research, vol. 10, no. 4, pp. 1–38, 2013. Link
    The challenge of optimally selecting services from a set of functionally appropriate ones under Quality of Service (QoS) constraints – the Service Selection Problem – has been extensively addressed in the literature based on deterministic parameters. In practice, however, Quality of Service QoS parameters rather follow a stochastic distribution. In the work at hand, we present an integrated approach which addresses the Service Selection Problem for complex structured as well as unstructured workflows in conjunction with stochastic Quality of Service parameters. Accounting for penalty cost which accrue due to Quality of Service violations, we perform a worst-case analysis as opposed to an average-case analysis aiming at avoiding additional penalties. Although considering conservative computations, QoS violations due to stochastic QoS behavior still may occur resulting in potentially severe penalties. Our proposed approach reduces this impact of stochastic QoS behavior on total cost significantly.
  9. P. Hoenisch et al., "Workflow Scheduling and Resource Allocation for Cloud-Based Execution of Elastic Processes," IEEE 6th International Conference on Service-Oriented Computing and Applications, Koloa, HI, USA, 2013, pp. 1–8. Link
    Today's extensive business process landscapes make it necessary to handle the execution of a large number of workflows. Especially if workflow steps require the invocation of resource-intensive applications or a large number of applications needs to be carried out concurrently, process owners may have to allocate extensive computational resources, leading to high fixed costs. Instead, process owners could make use of Cloud-based computational resources, dynamically leasing and releasing resources on demand, which could lead to lower costs. In the work at hand, we propose a resource-efficient workflow scheduling algorithm for business processes and Cloud-based computational resources. Through the integration into the Vienna Platform for Elastic Processes and an evaluation, we show the practical applicability and the benefits of our contributions. Specifically, we find that our approach reduces the resource demand if compared with an ad hoc approach.
  10. P. Hoenisch et al., "Self-Adaptive Resource Allocation for Elastic Process Execution," IEEE Sixth International Conference on Cloud Computing, Santa Clara, USA, 2013, pp. 220–227. Link
    Especially in large companies, business process landscapes may be made up from thousands of different process definitions and instances. As a result, a Business Process Management System (BPMS) needs to be able to handle the concurrent execution of a very large number of workflow steps. Many of these workflow steps may be resource-intensive, leading to ever-changing requirements regarding the needed computing resources to execute them. Using Cloud technologies, it is possible to allocate workflow steps to resources obtained on demand from Cloud platform providers. However, current BPMS do not feature the means to make use of Cloud resources in order to execute workflows. This work presents an approach to automatically lease and release Cloud resources for workflow executions based on knowledge about the current and future process landscape. This approach to self-adaptive resource allocation for elastic process execution is implemented as part of ViePEP, a research BPMS able to handle workflow executions in the Cloud.
  11. S. Schulte et al., "Cost‐Driven Optimization of Cloud Resource Allocation for Elastic Processes," International Journal of Cloud Computing, vol. 1, no. 2, pp. 1–15, 2013. Link
    Today's extensive business process landscapes make it necessary to handle the execution of a large number of business processes and individual process steps. Especially if process steps require the invocation of resource‐intensive applications or a large number of applications need to be executed concurrently, process owners may have to allocate extensive computational resources, leading to high fixed cost. In the work at hand, we propose an alternative to the provision of fixed resources, based on automatic leasing and releasing of Cloud‐based computational resources. For this, we present an integrated approach which addresses the cost‐driven optimization of Cloud‐based computational resources for business processes in order to realize so‐called Elastic Processes. Through an evaluation, we show the practical applicability and benefits of our contributions. Specifically, we find that our approach substantially reduces the cost compared to an ad hoc approach.
  12. M. Maurer et al., "Adaptive resource configuration for Cloud infrastructure management," Future Generation Computer Systems, vol. 29, no. 2, pp. 472–487, 2013. Link
    To guarantee the vision of Cloud Computing QoS goals between the Cloud provider and the customer have to be dynamically met. This so-called Service Level Agreement (SLA) enactment should involve little human-based interaction in order to guarantee the scalability and efficient resource utilization of the system. To achieve this we start from Autonomic Computing, examine the autonomic control loop and adapt it to govern Cloud Computing infrastructures. We first hierarchically structure all possible adaptation actions into so-called escalation levels. We then focus on one of these levels by analyzing monitored data from virtual machines and making decisions on their resource configuration with the help of knowledge management (KM). The monitored data stems both from synthetically generated workload categorized in different workload volatility classes and from a real-world scenario: scientific workflow applications in bioinformatics. As KM techniques, we investigate two methods, Case-Based Reasoning and a rule-based approach. We design and implement both of them and evaluate them with the help of a simulation engine. Simulation reveals the feasibility of the CBR approach and major improvements by the rule-based approach considering SLA violations, resource utilization, the number of necessary reconfigurations and time performance for both, synthetically generated and real-world data.
  13. Z. Cai et al., "Critical Path-Based Iterative Heuristic for Workflow Scheduling in Utility and Cloud Computing," 11th Intern. Conf. on Service Oriented Computing (ICSOC 2013), Berlin, Germany, pp. 207-221, 2013. Link
    This paper considers the workflow scheduling problem in utility and cloud computing. It deals with the allocation of tasks to suitable resources so as to minimize total rental cost of all resources while maintaining the precedence constraints on one hand and meeting workflow deadlines on the other. A Mixed Integer programming (MILP) model is developed to solve small-size problem instances. In view of its NP-hard nature, a Critical Path-based Iterative (CPI) heuristic is developed to find approximate solutions to large-size problem instances where the multiple complete critical paths are iteratively constructed by Dynamic Programming according to the service assignments for scheduled activities and the longest (cheapest) services for the unscheduled ones. Each critical path optimization problem is relaxed to a Multi-stage Decision Process (MDP) problem and optimized by the proposed dynamic programming based Pareto method. The results of the scheduled critical path are utilized to construct the next new critical path. The iterative process stops as soon as the total duration of the newly found critical path is no more than the deadline of all tasks in the workflow. Extensive experimental results show that the proposed CPI heuristic outperforms the existing state-of-the-art algorithms on most problem instances. For example, compared with an existing PCP (partial critical path based) algorithm, the proposed CPI heuristic achieves a 20.7% decrease in the average normalized resource renting cost for instances with 1,000 activities.
  14. S. Schulte et al., "Realizing Elastic Processes with ViePEP," Service-Oriented Computing (ICSOC 2012) Workshops, 2013, pp. 439–442. Link
    Online business processes are faced with varying workloads that require agile deployment of computing resources. Elastic processes leverage the on-demand provisioning ability of Cloud Computing to allocate and de-allocate resources as required to deal with shifting demand. To realize elastic processes, it is necessary to track the current and future system landscape, monitor the process execution, reason about how to utilize resources in an optimal way, and carry out the necessary actions (e.g., start/stop servers, move services). Traditional Business Process Management Systems (BPMS) do not consider such needs of elastic process. Within this demo, we present ViePEP, a research BPMS able to execute and monitor resource-, cost- and QoS-elastic, service-based workflows and optimize the overall system landscape based on a reasoning of the non-functional requirements of current and forthcoming elastic processes.
  15. S. Schulte et al., "Introducing the Vienna Platform for Elastic Processes," Service-Oriented Computing (ICSOC 2012) Workshops, 2013, pp. 179–190. Link
    Resource-intensive tasks are playing an increasing role in business processes. The emergence of Cloud computing has enabled the deployment of such tasks onto resources sourced on-demand from Cloud providers. This has enabled so-called elastic processes that are able to dynamically adjust their resource usage to meet varying workloads. Traditional Business Process Management Systems (BPMSs) do not consider the needs of elastic processes such as monitoring facilities, tracking the current and future system landscape, reasoning about optimally utilizing resources given Quality of Service constraints, and executing necessary actions (e.g., start/stop servers, move services). This paper introduces ViePEP, a research BPMS capable of handling the aforementioned requirements of elastic processes.
  16. T. Grubic et al., "Integrating process and ontology to support supply chain modelling," International Journal of Computer Integrated Manufacturing, vol. 24, no. 9, pp. 847–863, 2011. Link
    Many researchers have recognised a lack of common framework to support supply chain modelling and analysis and proposed their solutions accordingly. Majority of the approaches proposed are more concerned with building an object model of a supply chain than identifying processes that realistically describe a supply chain. Although object models provide means or building blocks necessary to model and analyse different elements of a supply chain, absence of supply chain processes promotes a ‘black box’ view on the supply chain. This study proposes an ontology model specifically developed to support supply chain process modelling and analysis. It is based on a premise that prior identification of processes the ontology is supposed to support facilitates the ontology development and validation. This study introduces development, validation and application of supply chain ontology to support supply chain process modelling and analysis.
  17. M.J. Pratt, "Introduction to ISO 10303–the STEP standard for product data exchange," Journal of Computing and Information Science in Engineering, vol. 1, no. 1, pp. 102–103, 2001. Link
  18. S. Dustdar et al., "Principles of Elastic Processes," IEEE Internet Computing, vol. 15, no. 5, pp. 66–71, 2011. Link
    Cloud computing’s success has made on-demand computing with a pay-as-you-go pricing model popular. However, cloud computing’s focus on resources and costs limits progress in realizing more flexible, adaptive processes. The authors introduce elastic processes, which are based on explicitly modeling resources, cost, and quality, and show how they improve on the state of the art.
  19. R. Buyya et al., "Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility," Future Generation Computer Systems, vol. 25, no. 6, pp. 599–616, 2009. Link
    With the significant advances in Information and Communications Technology (ICT) over the last half century, there is an increasingly perceived vision that computing will one day be the 5th utility (after water, electricity, gas, and telephony). This computing utility, like all other four existing utilities, will provide the basic level of computing service that is considered essential to meet the everyday needs of the general community. To deliver this vision, a number of computing paradigms have been proposed, of which the latest one is known as Cloud computing. Hence, in this paper, we define Cloud computing and provide the architecture for creating Clouds with market-oriented resource allocation by leveraging technologies such as Virtual Machines (VMs). We also provide insights on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain Service Level Agreement (SLA)-oriented resource allocation. In addition, we reveal our early thoughts on interconnecting Clouds for dynamically creating global Cloud exchanges and markets. Then, we present some representative Cloud platforms, especially those developed in industries, along with our current work towards realizing market-oriented resource allocation of Clouds as realized in Aneka enterprise Cloud technology. Furthermore, we highlight the difference between High Performance Computing (HPC) workload and Internet-based services workload. We also describe a meta-negotiation infrastructure to establish global Cloud exchanges and markets, and illustrate a case study of harnessing 'Storage Clouds' for high performance content delivery. Finally, we conclude with the need for convergence of competing IT paradigms to deliver our 21st century vision.
  20. H. Lin et al., "A manufacturing system engineering ontology model on the semantic web for inter-enterprise collaboration," Computers in Industry, vol. 58, pp. 428–437, 2007. Link
    This paper investigates ontology-based approaches for representing information semantics and in particular the World Wide Web. A general manufacturing system engineering (MSE) knowledge representation scheme, called an MSE ontology model, to facilitate communication and information exchange in inter-enterprise, multi-disciplinary engineering design teams has been developed and encoded in the standard semantic web language. The proposed approach focuses on how to support information autonomy that allows the individual team members to keep their own preferred languages or information models rather than requiring them all to adopt standardized terminology. The MSE ontology model provides efficient access by common mediated meta-models across all engineering design teams through semantic matching. This paper also shows how the primitives of Web Ontology Language (OWL) can be used for expressing simple mappings between the mediated MSE ontology model and individual ontologies.

Software

  1. The Vienna Platform for Elastic Processes Link
    Resource-intensive tasks are playing an increasing role in business processes. The emergence of Cloud computing has enabled the deployment of such tasks onto resources sourced on-demand from Cloud providers. This has enabled so-called elastic processes that are able to dynamically adjust their resource usage to meet varying workloads. Traditional Business Process Management Systems (BPMSs) do not consider the needs of elastic processes such as monitoring facilities, tracking the current and future system landscape, reasoning about optimally utilizing resources given Quality of Service constraints, and executing necessary actions (e.g., start/stop servers, move services). Hence, we developed ViePEP, the Vienna Platform for Elastic Processes. ViePEP is a research BPMS capable of handling the aforementioned requirements of elastic processes. It provides a Workflow Manager as well as the means to control the Cloud resources needed for the execution and invocation of single workflow steps, which are realized as REST-based Web services.

Projects

  1. ASAP (2014–2017): A Scalable Analytics Platform. FP7 Link
    “A Scalable Analytics Platform ASAP” is an FP7 research project that develops a dynamic open-source execution framework for scalable data analytics. During the execution runtime, the problem of resource elasticity is addressed in the project. The project proposes a modelling framework that evaluates the cost, quality and performance of available computational resources in order to decide on the most advantageous store, indexing and execution pattern. In more detail: This project proposes a unified, open-source execution framework for scalable data analytics. Data analytics tools have become essential for harnessing the power of our era's data deluge. Current technologies are restrictive, as their efficacy is usually bound to a single data and compute model, often depending on proprietary systems. The main idea behind ASAP is that no single execution model is suitable for all types of tasks and no single data model (and store) is suitable for all types of data. The project makes the following innovative contributions: (a) A general-purpose task-parallel programming model. The runtime will incorporate and advance state-of-the-art task-parallel programming models features, namely: (i) irregular general-purpose computations, (ii) resource elasticity, (iii) synchronization, data-transfer, locality and scheduling abstraction, (iv) ability to handle large sets of irregular distributed data, and (v) fault-tolerance. (b) A modeling framework that constantly evaluates the cost, quality and performance of data and computational resources in order to decide on the most advantageous store, indexing and execution pattern available. (c) A unique adaptation methodology that will enable the analytics expert to amend the task she has submitted at an initial or later stage. (d) A state-of-the-art visualization engine that will enable the analytics expert to obtain accurate, intuitive results of the analytics tasks she has initiated in real-time. Two exemplary applications that showcase the ASAP technology in the areas of Web content and large-scale business analytics will be developed. The consortium, led by the Foundation for Research & Technology, is well-positioned to achieve its objectives by bringing together a team of leading researchers in data-management technologies. These are combined with active industrial and leading user organizations that offer expertise in the production-level domain of data analytics.
    Apart from elastic process optimisation, CREMA is intended to be more extensive, providing a framework enabling the leasing and releasing of manufacturing assets and services in an on-demand, utility-like fashion, rapid elasticity through scaling these assets up and down if necessary, and pay-per-use through metered service. To realize these goals, contributions both during process design time and run time are necessary.
  2. CloudFlow (2013–2016): Cloud Computing for Engineering Workflows. FP7 Link
    “Cloud Computing for Engineering Workflows CloudFlow” of the FP7 program has the aim to provide on-demand access to scalable computational services. Traditionally, the European manufacturing industry is characterized by innovative technology, quality processes and robust products which have leveraged Europe's industrialization. However, globalization has exposed Europe's industry to new emerging and industrialized manufacturing markets and the current economic challenges have decelerated the internal boost and investment, respectively. Hence, new ICT infrastructures across Europe need to be established to re-enforce global competitiveness. The motivating idea behind CloudFlow is to open up the power of Cloud Computing for engineering WorkFlows (CloudFlow). The aim of CloudFlow is to enable engineers to access services on the Cloud spanning domains such as CAD, CAM, CAE (CFD), Systems and PLM, and combining them to integrated workflows leveraging HPC resources. Workflows are of key importance in today's product/production development processes where products show ever-increasing complexity integrating geometry, mechanics, electronics and software aspects. Such complex products require multi-domain simulation, simulation-in-the-loop and synchronized workflows based on interoperability of data, services and workflows. CloudFlow is an SME-driven IP incorporating seven SMEs: Missler (CAD/CAM), JOTNE (PLM), Numeca (CAE/CFD), ITI (Systems), Arctur (HPC), Stellba Hydro (hydraulic machines/hydro turbines) and CARSA (business models and security). Four renowned research institutions complement the consortium: DFKI, SINTEF, University of Nottingham and Fraunhofer. CloudFlow will build on existing standards and components to facilitate an as-vendor-independent-as-possible Cloud engineering workflows platform. Open Cloud Computing Interface (OCCI), STEP (for CAD and CAE data) and WSDL (for service description and orchestration) are amongst the core standards that will be leveraged. The key aspects (from a technical and a business perspective) are: Data, Services, Workflows, Users and Business models including Security aspects. CloudFlow will conduct two Open Calls for external experiments investigating the use of the CloudFlow infrastructure in new and innovative ways, outreaching into the engineering and manufacturing community and engaging external partners. Each of these two Open Calls will look for seven additional experiments to gather experience with engineering Cloud uses and gaining insights from these experiments. CloudFlow is striving for the following impacts: a) increasing industrial competitiveness by contributing to improve performance (front-loading, early error detection, time-to-market, ...) and innovation (co-use of models, early virtual testing) and b) improving innovation capabilities by enabling more engineers to gain insights and to create innovation by accessing 'new' tools and easing the use of Cloud Infrastructures. All in all, CloudFlow wants to contribute to a wider adoption of Cloud infrastructures and making them a practical option for manufacturing companies.
    Unlike CloudFlow, the resource scalability aspect is taken into consideration in the project “Scalable and Adaptive Internet Solutions SAIL”.

    An elasticity management platform is proposed in the project “Managing and Monitoring Elastic Cloud Applications CELAR”: CELAR provides resource allocation based on predefined elasticity strategies by means of a domain-specific language for elasticity requirements specification.

    “Scalability Management for Cloud Computing CloudScale” has the objective of aiding service providers in analysing, predicting and resolving scalability issues, i.e., to support scalable service engineering. In comparison to the mentioned projects, CREMA has an emphasis on optimisation of the elastic manufacturing processes and deals with manufacturing resource virtualisation and capacities servitization, providing full life-cycle support for manufacturing processes. In contrast, CloudFlow, SAIL, CELAR, and CloudScale aim at scalability of single, independent applications.
  3. CELAR (2012–2015): Managing and Monitoring Elastic Cloud Applications. FP7 Link
    With the rapidly growing number of applications running over Cloud infrastructures and the amount of storage, compute and networking resources they require, an over-provisioning or manual, coarse-grained resource allocation approach is a highly unsatisfying solution with respect to application performance and incurred costs. The vision of the CELAR project is to provide automatic, multi-grained resource allocation for cloud applications. This enables the commitment of just the right amount of resources based on application demand, performance and requirements, results in optimal use of infrastructure resources and significant reductions in administrative costs. The goal of the project is to develop methods and tools for applying and controlling multi-grained, elastic resource provisioning for Cloud applications in an automated manner. This resource allocation will be performed through intelligent decision-making based on: (a) Cloud and application performance metrics collected and cost-evaluated through a scalable monitoring system and exposed to the user. (b) Qualitative and quantitative characterization of the application's performance through modelling of its elastic properties. CELAR covers the three layers required by an application to operate over the Cloud: The infrastructure layer (deployment over two different IaaS platforms), the monitoring/optimization middleware (automatic elasticity provisioning over cloud platforms and multi-layer monitoring) and the programming development environment (through a distributed tool to enable developers, administrators and users to define the characteristics of their applications, submit jobs and monitor performance). The outcome is a modular, completely open-source system that offers elastic programmability for the user and automatic elasticity at the platform level. This outcome can be bundled in a single software package for one-click installation of any application alongside its automated resource provisioning over a Cloud IaaS. Two exemplary applications that showcase and validate the aforementioned technology will be developed: The first will showcase the use of CELAR technology for massive data management and large-scale collaboration required in the on-line gaming realm, while the second will focus on the area of scientific computing, requiring compute- and storage-intensive genome computations. The CELAR consortium – under the lead of ATHENA Research and Innovation Center – is well-positioned to achieve its objectives by bringing together a team of leading researchers in the large-scale technologies such as Cloud/Grid Computing, service-oriented architectures, virtualization, analytics, Web 2.0 and the world of the Semantic Web. These are combined with active industrial and leading user organizations that offer expertise in the cloud application domain and production-level service provisioning.
    An elasticity management platform is proposed in the project “Managing and Monitoring Elastic Cloud Applications CELAR”: CELAR provides resource allocation based on predefined elasticity strategies by means of a domain-specific language for elasticity requirements specification. In comparison to the mentioned project, CREMA has an emphasis on optimisation of the elastic manufacturing processes and deals with manufacturing resource virtualisation and capacities servitization, providing full life-cycle support for manufacturing processes. In contrast, CloudFlow, SAIL, CELAR, and CloudScale aim at scalability of single, independent applications.
  4. CloudScale (2012–2015): Scalability Management for Cloud Computing. FP7 Link
    Current cloud platforms provide limited support for customers in designing scalable and cost efficient applications. In particular, they do not support analysing how an application will scale with a growing number of users and how this will affect operation costs. CloudScale will provide an engineering approach for building scalable cloud applications and services. CloudScale will support Software as a Service (SaaS) and Platform as a Service (PaaS) providers (a) to design their software for scalability and (b) to swiftly identify and gradually solve scalability problems in existing applications. CloudScale will enable the modelling of design alternatives and the analysis of their effect on scalability and cost. Best practices for scalability will further guide the design process. Additionally, CloudScale will provide tools and methods that detect scalability problems by analysing code. Based on the detected problems, CloudScale will offer guidance on the resolution of scalability problems. It answers the ICT Work Programme's call for achieving massive scalability for software-based services. The planned validation of project results involves two complementary use cases in the SaaS and the PaaS domain. CloudScale will leverage European application expertise into the domain of competitive cloud application offerings, both at the SaaS and PaaS level. The engineering approach for scalable applications and services will enable small and medium enterprises as well as large players to fully benefit from the cloud paradigm by building scalable and cost-efficient applications and services based on state-of-the-art cloud technology. Furthermore, the engineering approach reduces risks as well as costs for companies newly entering the cloud market. A tight, focused consortium with strong industrial partners, solid expertise in the domain and a proven track record from working together in earlier projects will invest a total of 386 PMs over 36 months.
    “Scalability Management for Cloud Computing CloudScale” has the objective of aiding service providers in analysing, predicting and resolving scalability issues, i.e., to support scalable service engineering. In comparison to the mentioned project, CREMA has an emphasis on optimisation of the elastic manufacturing processes and deals with manufacturing resource virtualisation and capacities servitization, providing full life-cycle support for manufacturing processes. In contrast, CloudFlow, SAIL, CELAR, and CloudScale aim at scalability of single, independent applications.
  5. SAIL (2010–2013): Scalable and Adaptive Internet Solutions. FP7 Link
    SAIL's objective is the research and development of novel networking technologies using proof-of-concept prototypes to lead the way from current networks to the Network of the Future. SAIL leverages state of the art architectures and technologies, extends them as needed, and integrates them using experimentally-driven research, producing interoperable prototypes to demonstrate utility for a set of concrete use-cases. SAIL reduces costs for setting up, running, and combining networks, applications and services, increasing the efficiency of deployed resources (e.g., personnel, equipment and energy). SAIL improves application support via an information-centric paradigm, replacing the old host-centric one, and develops concrete mechanisms and protocols to realise the benefits of a Network of Information (NetInf). SAIL enables the co-existence of legacy and new networks via virtualisation of resources and self-management, fully integrating networking with cloud computing to produce Cloud Networking (CloNe). SAIL embraces heterogeneous media from fibre backbones to wireless access networks, developing new signalling and control interfaces, able to control multiple technologies across multiple aggregation stages, implementing Open Connectivity Services (OConS). SAIL also specifically addresses cross-cutting themes and non-technical issues, such as socio-economics, inclusion, broad dissemination, standardisation and network migration, driving new markets, business roles and models, and increasing opportunities for both competition and cooperation. SAIL gathers a strong industry-led consortium of leading operators, vendors, SME, universities and research centres, with a valuable experience acquired in previous FP7 projects, notably 4WARD. The impact will be a consensus among major European operators and vendors on a well-defined path to the Network of the Future together with the technologies required to follow that path.
    The resource scalability aspect is taken into consideration in the project “Scalable and Adaptive Internet Solutions SAIL”. In comparison to the mentioned project, CREMA has an emphasis on optimisation of the elastic manufacturing processes and deals with manufacturing resource virtualisation and capacities servitization, providing full life-cycle support for manufacturing processes. In contrast, CloudFlow, SAIL, CELAR, and CloudScale aim at scalability of single, independent applications.
  6. JUNIPER (2007–2013): Java Platform for High-Performance and Real-Time Large Scale Data Link
    This FP7 collaborative project “Java Platform for High-Performance and Real-Time Large Scale Data JUNIPER” has the objective to create a platform that can be configured to build and support a range of high-performance Big Data application domains, enabling real-time constraints to be met. Regarding elasticity, the project provides an API to assist with architecture discovery that characterises the host architecture in terms of accepted patterns (Symmetric multiprocessing, Non-uniform memory access) and assists the development of reactionary software to exploit the hardware. The efficient and real-time exploitation of large streaming data sources and stored data poses many questions regarding the underlying platform, including: Performance - how can the potential performance of the platform be exploited effectively by arbitrary applications; Guarantees - how can the platform support guarantees regarding processing streaming data sources and accessing stored data; and Scalability - how can scalable platforms and applications be built. The fundamental challenge addressed by the project is to enable application development using an industrial strength programming language that enables the necessary performance and performance guarantees required for real-time exploitation of large streaming data sources and stored data. The project's vision is to create a Java Platform that can support a range of high-performance Intelligent Information Management application domains that seek real-time processing of streaming data, or real-time access to stored data. This will be achieved by developing Java and UML modelling technologies to provide: Architectural Patterns - using predefined libraries and annotation technology to extend Java with new directives for exploiting streaming I/O and parallelism on high performance platforms; Virtual Machine Extensions - using class libraries to extend the JVM for scalable platforms; Java Acceleration - performance optimisation is achieved using Java JIT to Hardware (FPGA), especially to enable real-time processing of fast streaming data; Performance Guarantees - will be provided for common application real-time requirements; and Modelling - of persistence and real-time within UML / MARTE to enable effective development, code generation and capture of real-time system properties. The project will use financial and web streaming case studies from industrial partners to provide industrial data and data volumes, and to evaluate the developed technologies.
    Having similar intentions with regard to the Big Data and elasticity aspects, CREMA furthermore addresses business process optimisation during design time and runtime.