Stream Processing

Summary

Data stream processing, or complex event processing (CEP), is one of the vital building blocks for dealing with real-time information, such as the sensor data provided by manufacturing machines. Stream processing operations cover different activities, including simple operations like filtering data, i.e., discarding records that contain errors, and aggregating data, i.e., condensing sensor data over a specific time window to extract key performance indicators. Furthermore, these activities may also transform the data so that it can be consumed by external systems, e.g., a proprietary monitoring infrastructure, or used to propagate error notifications.
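
To make these operations concrete, the following minimal, framework-agnostic Python sketch applies all three steps to an assumed in-memory list of (timestamp, value) sensor readings; the input data, the error marker -999.0, and names such as readings and window_size are illustrative assumptions only and do not refer to any specific framework.

    from collections import defaultdict

    # Assumed example input: (timestamp in seconds, temperature) tuples,
    # where -999.0 marks a faulty sensor reading.
    readings = [
        (0, 21.5), (12, 22.1), (31, -999.0),
        (65, 23.0), (70, 22.8), (125, 24.2),
    ]

    # 1) Filtering: discard readings that contain errors.
    valid = [(ts, v) for ts, v in readings if v != -999.0]

    # 2) Aggregation: average the readings per 60-second time window
    #    to extract a simple key performance indicator.
    window_size = 60
    windows = defaultdict(list)
    for ts, v in valid:
        windows[ts // window_size].append(v)
    kpis = {w: sum(vs) / len(vs) for w, vs in windows.items()}

    # 3) Transformation: reshape the result for an external monitoring system.
    messages = [{"window": w, "avg_temperature": round(kpi, 2)} for w, kpi in kpis.items()]
    print(messages)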

Data stream management systems. The first data stream processing systems, e.g., Aurora or Borealis, originated from the database management domain. They extend the traditional database model to support the continuous nature of data streams and enable users to query them. Besides these purely scientific prototypes, there are also commercial solutions to orchestrate different operations on streaming information, like System S by IBM. System S allows processing different kinds of continuous data streams, e.g., financial, telecommunication, or health data, in a reliable and efficient manner. While System S is designed as a full-featured toolkit for professional users, including a graphical user interface, there are also several stream processing frameworks that originate from the need to process enormous amounts of streaming data for web-based applications. One of these frameworks is MillWheel, which was integrated into the Google Cloud Platform. Besides Google, other major cloud service providers also offer stream processing solutions, like Amazon IoT or Stream Analytics by Microsoft.

Besides these proprietary stream processing solutions, there are also open stream processing frameworks. One of the most popular open frameworks is Apache Storm. Storm provides developers with a programmatic toolkit to design complex sequences of stream processing operations, where each operation can contain any kind of logic, ranging from simple message forwarding to complex analysis operations. To cope with new requirements, like a distributed deployment, Heron has been developed as a seamless successor to Apache Storm. Other stream processing frameworks, like Apache S4, Apache Spark, Apache Samza, or Apache Apex, pursue a more component-based approach: they provide software components, i.e., software libraries, which can be used to build any kind of stream processing application, whereas Apache Storm focuses on processing streaming information in a process-oriented manner, similar to System S.
Although most of these stream processing frameworks support deployment on clusters, there is only very limited support for resource elasticity based on the input data stream, as proposed by Ishii and Suzumura. Some of the proprietary stream processing solutions, like Amazon IoT or Google Cloud Dataflow, provide elastic scaling mechanisms to react to varying amounts of streaming data, but there are still numerous open research challenges on how to efficiently scale the computational resources to process all information in real time while minimizing the costs for leasing these resources.

The scientific community has already picked up some of these challenges and proposed basic scaling mechanisms based on thresholds. Such threshold-based policies allow for coarse-grained scaling, but there is still much room for improvement, e.g., by considering predictive scaling based on Kalman filters. Further, there are already some projects which leverage the benefits of a distributed cloud for state-of-the-art stream processing frameworks; e.g., Cardellini et al. propose an extension for Apache Storm to realize a distributed deployment.
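
The following sketch illustrates how such a coarse-grained, threshold-based scaling policy could look; the function name, the queue-length metric, and the threshold values are illustrative assumptions and are not taken from any of the cited systems.

    def scaling_decision(queued_messages, instances, scale_up_at=1000, scale_down_at=100):
        """Return the new number of processing instances for the next interval."""
        load_per_instance = queued_messages / max(instances, 1)
        if load_per_instance > scale_up_at:
            return instances + 1      # lease an additional computational resource
        if load_per_instance < scale_down_at and instances > 1:
            return instances - 1      # release a resource to reduce leasing costs
        return instances              # keep the current deployment

    # Example: 3500 queued messages on 2 instances exceeds the upper threshold,
    # so the policy scales out to 3 instances.
    print(scaling_decision(3500, 2))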

Besides the distribution on clusters, there are also several projects, e.g., CSA or VISP, which pursue a truly distributed approach, where the stream processing framework is deployed across multiple geographic locations. Such distributed frameworks provide a promising way to reduce the network load by preprocessing data close to its origin, e.g., filtering streaming data near the data provider, as sketched below.
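
As a simple illustration of such preprocessing near the data provider, the sketch below filters and compacts a batch of sensor readings at an assumed edge node before forwarding it, so that only a single small message has to traverse the network to the central stream processing engine; all names and the threshold value are hypothetical.

    def preprocess_at_edge(batch, threshold=50.0):
        """Forward only readings above the threshold, compacted into one message."""
        relevant = [v for v in batch if v > threshold]
        if not relevant:
            return None  # nothing worth forwarding to the central engine
        return {"count": len(relevant), "max": max(relevant)}

    # Four raw readings are reduced to one compact message: {'count': 2, 'max': 91.2}
    print(preprocess_at_edge([12.0, 75.3, 48.9, 91.2]))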

Relation to CREMA

Data stream processing provides the foundation to process a continuous stream of information, such as monitoring data from manufacturing machines. Therefore, CREMA investigates cost-efficient resource provisioning algorithms for stream processing systems to minimize the costs of deploying monitoring solutions, which are required for predictive maintenance scenarios. Furthermore, research efforts within CREMA provide distributed stream processing engines that are capable of efficiently processing data originating from different geographic locations. Last, CREMA employs advanced RDF stream processing (RSP; see Semantic Stream Processing) for intelligent condition monitoring and optimization of manufacturing processes in the cloud.



Articles

  1. C. Hochreiner et al., "VISP: An Ecosystem for Elastic Data Stream Processing for the Internet of Things," 20th IEEE Intern. Conference on Enterprise Distributed Object Computing (EDOC), Wien, Austria, 2016. Link
    The Internet of Things is getting more and more traction, nevertheless, state-of-the-art approaches only focus on specific aspects, like the integration of heterogeneous devices or the processing of sensor data emitted by these devices. However, such domain-specific approaches slow the adoption rate of the Internet of Things, because users need to select and integrate different approaches in order to build a solution that fits all their requirements. To resolve this shortcoming, we have designed and implemented the VISP ecosystem, which provides a holistic approach for elastic data stream processing in Internet of Things scenarios by supporting the complete lifecycle of designing, deploying, and executing such scenarios. VISP further tackles the challenges of data privacy as well as software reuse, including monetization aspects in today's service landscapes. This paper analyzes challenges for creating solutions for the Internet of Things, presents the VISP ecosystem, and discusses its applicability for use case specific data stream processing topologies.
  2. C. Hochreiner et al., “Elastic Stream Processing for Distributed Environments,” in IEEE Internet Computing, vol. 6, 2015. Link
    The Internet of Things introduces the need for more flexibility in stream processing. To address these challenges, the authors propose elastic stream processing for distributed environments. This novel concept allows for scalable and more flexible solutions compared to traditional approaches.
  3. V. Cardellini et al., “Distributed QoS-aware scheduling in Storm,” in ACM Intern. Conference on Distributed Event-Based Systems (DEBS), Oslo, Norway, 2015. Link
    Storm is a distributed stream processing system that has recently gained increasing interest. We extend Storm to make it suitable to operate in a geographically distributed and highly variable environment such as that envisioned by the convergence of Fog computing, Cloud computing, and Internet of Things.
  4. R. Barazzutti et al., "Elastic Scaling of a High-Throughput Content-Based Publish/Subscribe Engine," in Proceedings of the 34th IEEE Intern. Conference on Distributed Computing Systems (ICDCS), Madrid, Spain, 2014. Link
    Publish/subscribe (pub/sub) infrastructures running as a service on cloud environments offer simplicity and flexibility for composing distributed applications. Provisioning them appropriately is however challenging. The amount of stored subscriptions and incoming publications varies over time, and the computational cost depends on the nature of the applications and in particular on the filtering operation they require (e.g., content-based vs. topic-based, encrypted vs. non-encrypted filtering). The ability to elastically adapt the amount of resources required to sustain given throughput and delay requirements is key to achieving cost-effectiveness for a pub/sub service running in a cloud environment. In this paper, we present the design and evaluation of an elastic content-based pub/sub system: E-STREAMHUB. Specific contributions of this paper include: (1) a mechanism for dynamic scaling, both out and in, of stateful and stateless pub/sub operators, (2) a local and global elasticity policy enforcer maintaining high system utilization and stable end-to-end latencies, and (3) an evaluation using real-world tick workload from the Frankfurt Stock Exchange and encrypted content-based filtering.
  5. T. Akidau et al., "MillWheel: Fault-tolerant Stream Processing at Internet Scale," in Very Large Databases (VLDB) Endowment, 2013. Link
    MillWheel is a framework for building low-latency data-processing applications that is widely used at Google. Users specify a directed computation graph and application code for individual nodes, and the system manages persistent state and the continuous flow of records, all within the envelope of the framework's fault-tolerance guarantees. This paper describes MillWheel's programming model as well as its implementation. The case study of a continuous anomaly detector in use at Google serves to motivate how many of MillWheel's features are used. MillWheel's programming model provides a notion of logical time, making it simple to write time-based aggregations. MillWheel was designed from the outset with fault tolerance and scalability in mind. In practice, we find that MillWheel's unique combination of scalability, fault tolerance, and a versatile programming model lends itself to a wide variety of problems at Google.
  6. D. Miorandi et al., “Internet of things: Vision, applications and research challenges,” in Journal of Ad Hoc Networks, 10(7), 2012. Link
    The term “Internet-of-Things” is used as an umbrella keyword for covering various aspects related to the extension of the Internet and the Web into the physical realm, by means of the widespread deployment of spatially distributed devices with embedded identification, sensing and/or actuation capabilities. Internet-of-Things envisions a future in which digital and physical entities can be linked, by means of appropriate information and communication technologies, to enable a whole new class of applications and services. In this article, we present a survey of technologies, applications and research challenges for Internet-of-Things.
  7. A. Ishii and T. Suzumura, "Elastic Stream Computing with Clouds," in Proceedings of 4th IEEE Intern. Conference on Cloud Computing (CLOUD), Washington, USA, 2011. Link
    Stream computing, also known as data stream processing, has emerged as a new processing paradigm that processes incoming data streams from tremendous numbers of sensors in a real-time fashion. Data stream applications must have low latency even when the incoming data rate fluctuates wildly. This is almost impossible with a local stream computing environment because its computational resources are finite. To address this kind of problem, we have devised a method and an architecture that transfers data stream processing to a Cloud environment as required in response to the changes of the data rate in the input data stream. Since a trade-off exists between application's latency and the economic costs when using the Cloud environment, we treat it as an optimization problem that minimizes the economic cost of using the Cloud. We implemented a prototype system using Amazon EC2 and an IBM System S stream computing system to evaluate the effectiveness of our approach. Our experimental results show that our approach reduces the costs by 80% while keeping the application's response latency low.
  8. B. Satzger et al., "Esc: Towards an Elastic Stream Computing Platform for the Cloud," in Proceedings of 4th IEEE Intern. Conference on Cloud Computing (CLOUD), Washington, USA, 2011. Link
    Today, most tools for processing big data are batch-oriented. However, many scenarios require continuous, online processing of data streams and events. We present ESC, a new stream computing engine. It is designed for computations with real-time demands, such as online data mining. It offers a simple programming model in which programs are specified by directed acyclic graphs (DAGs). The DAG defines the data flow of a program, vertices represent operations applied to the data. The data which are streaming through the graph are expressed as key/value pairs. ESC allows programmers to focus on the problem at hand and deals with distribution and fault tolerance. Furthermore, it is able to adapt to changing computational demands. In the cloud, ESC can dynamically attach and release machines to adjust the computational capacities to the current needs. This is crucial for stream computing since the amount of data fed into the system is not under the platform's control. We substantiate the concepts we propose in this paper with an evaluation based on a high-frequency trading scenario.
  9. M. Eckert and F. Bry, "Complex event processing (CEP)," in Informatik-Spektrum, Springer, 2009. Link
    Event-driven information systems require systematic and automatic processing of events: Complex Event Processing (CEP).
  10. L. Amini et al., "SPC: A Distributed, Scalable Platform for Data Mining," in Proceedings of the 4th Intern. Workshop on Data Mining Standards, Services and Platforms (DMSSP), Philadelphia, USA, 2006. Link
    The Stream Processing Core (SPC) is distributed stream processing middleware designed to support applications that extract information from a large number of digital data streams. In this paper, we describe the SPC programming model which, to the best of our knowledge, is the first to support stream-mining applications using a subscription-like model for specifying stream connections as well as to provide support for non-relational operators. This enables stream-mining applications to tap into, analyze and track an ever-changing array of data streams which may contain information relevant to the streaming-queries placed on it. We describe the design, implementation, and experimental evaluation of the SPC distributed middleware, which deploys applications on to the running system in an incremental fashion, making stream connections as required. Using micro-benchmarks and a representative large-scale synthetic stream-mining application, we evaluate the performance of the control and data paths of the SPC middleware.
  11. D. Abadi et al., "The Design of the Borealis Stream Processing Engine," in Proceedings of Conference on Innovative Data Systems Research, Asilomar, USA, 2005. Link
    Borealis is a second-generation distributed stream processing engine that is being developed at Brandeis University, Brown University, and MIT. Borealis inherits core stream processing functionality from Aurora [14] and distribution functionality from Medusa [51]. Borealis modifies and extends both systems in non-trivial and critical ways to provide advanced capabilities that are commonly required by newly-emerging stream processing applications. In this paper, we outline the basic design and functionality of Borealis. Through sample real-world applications, we motivate the need for dynamically revising query results and modifying query specifications. We then describe how Borealis addresses these challenges through an innovative set of features, including revision records, time travel, and control lines. Finally, we present a highly flexible and scalable QoS-based optimization model that operates across server and sensor networks and a new fault-tolerance model with flexible consistency-availability trade-offs.
  12. A. Jain et al., "Adaptive Stream Resource Management Using Kalman Filters," in Proceedings of the ACM SIGMOD Intern. Conference on Management of Data, Paris, France, 2004. Link
    To answer user queries efficiently, a stream management system must handle continuous, high-volume, possibly noisy, and time-varying data streams. One major research area in stream management seeks to allocate resources (such as network bandwidth and memory) to query plans, either to minimize resource usage under a precision requirement, or to maximize precision of results under resource constraints. To date, many solutions have been proposed; however, most solutions are ad hoc with hard-coded heuristics to generate query plans. In contrast, we perceive stream resource management as fundamentally a filtering problem, in which the objective is to filter out as much data as possible to conserve resources, provided that the precision standards can be met. We select the Kalman Filter as a general and adaptive filtering solution for conserving resources. The Kalman Filter has the ability to adapt to various stream characteristics, sensor noise, and time variance. Furthermore, we realize a significant performance boost by switching from traditional methods of caching static data (which can soon become stale) to our method of caching dynamic procedures that can predict data reliably at the server without the clients' involvement. In this work we focus on minimization of communication overhead for both synthetic and real-world streams. Through examples and empirical studies, we demonstrate the flexibility and effectiveness of using the Kalman Filter as a solution for managing trade-offs between precision of results and resources in satisfying stream queries.
  13. M. Cherniack et al., "Scalable Distributed Stream Processing," Proceedings of the Conference on Innovative Data Systems Research, Asilomar, USA, 2003. Link
    Stream processing fits a large class of new applications for which conventional DBMSs fall short. Because many stream-oriented systems are inherently geographically distributed and because distribution offers scalable load management and higher availability, future stream processing systems will operate in a distributed fashion. They will run across the Internet on computers typically owned by multiple cooperating administrative domains. This paper describes the architectural challenges facing the design of large-scale distributed stream processing systems, and discusses novel approaches for addressing load management, high availability, and federated operation issues. We describe two stream processing systems, Aurora* and Medusa, which are being designed to explore complementary solutions to these challenges.
  14. U.J. Kapasi et al., "Programmable Stream Processors," in Computer, Vol. 36, No. 8, 2003.
    Stream processing promises to bridge the gap between inflexible special-purpose solutions and current programmable architectures that cannot meet the computational demands of media-processing applications.
  15. Z. Shen et al., "CSA: Streaming Engine for Internet of Things," in Data Engineering. Link
    The next generation Internet will contain a multitude of geographically distributed, connected devices continuously generating data streams, and will require new data processing architectures that can handle the challenges of heterogeneity, distribution, latency and bandwidth. Stream query processing is a natural technology for use in IoT applications, and embedding such processing in the network enables processing to be placed closer to the sources of data in widely distributed environments. We propose such a distributed architecture for Internet of Things (IoT) applications based on Cisco's Connected Streaming Analytics platform (CSA). In this paper we describe this architecture and explain in detail how the capabilities built into the platform address real-world IoT analytics challenges.

Software

  1. Amazon IoT Link
    AWS IoT is a managed cloud platform that lets connected devices easily and securely interact with cloud applications and other devices. AWS IoT can support billions of devices and trillions of messages, and can process and route those messages to AWS endpoints and to other devices reliably and securely. With AWS IoT, your applications can keep track of and communicate with all your devices, all the time, even when they aren’t connected. AWS IoT makes it easy to use AWS services like AWS Lambda, Amazon Kinesis, Amazon S3, Amazon Machine Learning, and Amazon DynamoDB to build IoT applications that gather, process, analyze and act on data generated by connected devices, without having to manage any infrastructure.
  2. Microsoft Stream Analytics Link
    Real-time stream processing in the cloud
  3. Apache Storm Link
    Apache Storm is a free and open source distributed realtime computation system. Storm makes it easy to reliably process unbounded streams of data, doing for realtime processing what Hadoop did for batch processing. Storm is simple, can be used with any programming language, and is a lot of fun to use!

    Storm has many use cases: realtime analytics, online machine learning, continuous computation, distributed RPC, ETL, and more. Storm is fast: a benchmark clocked it at over a million tuples processed per second per node. It is scalable, fault-tolerant, guarantees your data will be processed, and is easy to set up and operate.

    Storm integrates with the queueing and database technologies you already use. A Storm topology consumes streams of data and processes those streams in arbitrarily complex ways, repartitioning the streams between each stage of the computation however needed. Read more in the tutorial.
  4. S4 distributed stream computing platform Link
    S4 is a general-purpose, distributed, scalable, fault-tolerant, pluggable platform that allows programmers to easily develop applications for processing continuous unbounded streams of data.
  5. Apache Spark Streaming Link
    Spark Streaming makes it easy to build scalable fault-tolerant streaming applications.
  6. Apache Samza Link
    Apache Samza is a distributed stream processing framework. It uses Apache Kafka for messaging, and Apache Hadoop YARN to provide fault tolerance, processor isolation, security, and resource management.
  7. Google Cloud Platform Link
  8. Apache Apex Link
    The Apex Project is an enterprise grade native YARN big data-in-motion platform that unifies stream processing as well as batch processing. Apex processes big data in-motion in a highly scalable, highly performant, fault tolerant, stateful, secure, distributed, and an easily operable way. It provides a simple API that enables users to write or re-use generic Java code, thereby lowering the expertise needed to write big data applications.
  9. Google Cloud Dataflow Link
    Dataflow is a unified programming model and a managed service for developing and executing a wide range of data processing patterns including ETL, batch computation, and continuous computation. Cloud Dataflow frees you from operational tasks like resource management and performance optimization.
  10. Heron Link
    A realtime, distributed, fault-tolerant stream processing engine from Twitter
  11. VISP - An Ecosystem for Elastic Data Stream Processing for the Internet of Things Link
    VISP represents a set of different prototypes, which are designed to create an ecosystem for elastic stream processing applications consisting of individual building blocks to ease the creation of new applications. These prototypes cover all relevant lifecycle phases, ranging from the inventorying of the building blocks and the design of the application, through the deployment, up to usage monitoring and the provisioning of computational resources to ensure a high level of quality of service.

Projects

  1. PrEstoCloud Project (2016–2019): Proactive Cloud Resources Management at the Edge for Efficient Real-Time Big Data Processing. H2020 Programme of the European Commission Link
    The PrEstoCloud project will make substantial research contributions in the cloud computing and real-time data intensive applications domains, in order to provide a dynamic, distributed, self-adaptive and proactively configurable architecture for processing Big Data streams. In particular, PrEstoCloud aims to combine real-time Big Data, mobile processing and cloud computing research in a unique way that entails proactiveness of cloud resources use and extension of the fog computing paradigm to the extreme edge of the network. The envisioned PrEstoCloud solution is driven by the microservices paradigm and has been structured across five different conceptual layers: i) Meta-management; ii) Control; iii) Cloud infrastructure; iv) Cloud/Edge communication; and v) Devices.

    This innovative solution will address the challenge of cloud-based self-adaptive real-time Big Data processing, including mobile stream processing, and will be demonstrated and assessed in several challenging, complementary and commercially promising pilots. There will be three PrEstoCloud pilots from the logistics, mobile journalism and security surveillance application domains. The objective is to validate the PrEstoCloud solution, prove that it is domain agnostic and demonstrate its added value for attracting early adopters, thus initialising the exploitation process early on.

  2. LeanBigData (2014–2017): Ultra-Scalable and Ultra-Efficient Integrated and Visual Big Data Analytics, FP7 ICT Programme of the European Commission Link
    LeanBigData aims at addressing three open challenges in big data analytics: 1) The cost, in terms of resources, of scaling big data analytics for streaming and static data sources; 2) The lack of integration of existing big data management technologies and their high response time; 3) The insufficient end-user support leading to extremely lengthy big data analysis cycles. LeanBigData will address these challenges by:

    • Architecting and developing three resource-efficient Big Data management systems typically involved in Big Data processing: a novel transactional NoSQL key-value data store, a distributed complex event processing (CEP) system, and a distributed SQL query engine. We will achieve at least one order of magnitude in efficiency by removing overheads at all levels of the big-data analytics stack and we will take into account technology trends in multicore technologies and non-volatile memories.

    • Providing an integrated big data platform with these three main technologies used for big data, NoSQL, SQL, and Streaming/CEP, that will improve response time for unified analytics over multiple sources and large amounts of data, avoiding the inefficiencies and delays introduced by existing extract-transfer-load approaches. To achieve this we will use fine-grain intra-query and intra-operator parallelism that will lead to sub-second response times.

    • Supporting an end-to-end big data analytics solution removing the four main sources of delays in data analysis cycles by using: 1) automated discovery of anomalies and root cause analysis; 2) incremental visualization of long analytical queries; 3) drag-and-drop declarative composition of visualizations; and 4) efficient manipulation of visualizations through hand gestures over 3D/holographic views.

    Finally, LeanBigData will demonstrate these results in a cluster with 1,000 cores in four real industrial use cases with real data, paving the way for deployment in the context of realistic business processes.
  3. NewsReader (2013–2015): Building structured event indexes of large volumes of financial and economic data for decision making, FP7 ICT Programme of the European Commission Link
    The volume of news data is enormous and expanding, covering billions of archived documents and millions of documents as daily streams, while at the same time getting more and more interconnected with knowledge provided elsewhere. Professional decision-makers that need to respond quickly to new developments and knowledge or that need to explain these developments on the basis of the past are faced with the problem that current solutions for consulting these archives and streams no longer work, simply because there are too many possibly relevant and partially overlapping documents and they still need to distinguish the correct from the wrong, the new from the old, the actual from the out-of-date by reading the content and maintaining a record in memory. Consequently, it becomes almost impossible to make well-informed decisions and professionals risk being held liable for decisions based on incomplete, inaccurate and out-of-date information.
    NewsReader will process news in 4 different languages when it comes in. It will extract what happened to whom, when and where, removing duplication, complementing information, registering inconsistencies and keeping track of the original sources. Any new information is integrated with the past, distinguishing the new from the old and unfolding story lines in a similar way as people tend to remember the past and access knowledge and information. The difference being that NewsReader can provide access to all original sources and will not forget any details. We will develop a decision-support tool that allows professional decision-makers to explore these story lines using visual interfaces and interactions to exploit their explanatory power and their systematic structural implications. Likewise, NewsReader can make predictions from the past on future events or explain new events and developments through the past. The tool will be tested by professional decision makers in the financial and economic area.
  4. CityPulse (2013–2016): Real-Time IoT Stream Processing and Large-scale Data Analytics for Smart City Applications, FP7 SMARTCITIES Programme of the European Commission Link
    An increasing number of cities have started to introduce new ICT enabled services. However, the uptake of smart city applications is hindered by various issues, such as the difficulty of integrating heterogeneous data sources and the challenge of extracting up-to-date information in real-time from large-scale dynamic data. Today the challenges are often addressed by application specific solutions, resulting in silo architectures. Bridging technology and domain boundaries requires a scalable, adaptive and robust framework that provides:

    • Virtualisation hiding the heterogeneity of the numerous data and information sources
    • Large-scale data analytics for resource efficient event detection in multiple data streams
    • Semantic description frameworks and semantic analytics tools to provide machine-interpretable descriptions of data and knowledge
    • Easy creation of real-time smart city applications by re-usable intelligent components

    CityPulse will develop, build and test a framework for semantic discovery and processing of large-scale real-time IoT and relevant social data streams for reliable knowledge extraction in a real city environment.
  5. SMART VORTEX (2010–2014): Scalable Semantic Product Data Stream Management for Collaboration and Decision Making in Engineering, FP7 ICT Programme of the European Commission Link
    The goal of SMART VORTEX is to provide a technological infrastructure consisting of a comprehensive suite of interoperable tools, services, and methods for intelligent management and analysis of massive data streams to achieve better collaboration and decision making in large-scale collaborative projects concerning industrial innovation engineering.

    SMART VORTEX captures the tractable product data streams in the product lifecycle of design and engineering. In each phase of this lifecycle, different streams of product data are generated. Amongst others, these product data streams contain streams from sensors (data rates of Gigabytes per second), simulation, experimental, and testing data (millions of complex data sets), design data (complex and exchanged between different domains), multi-media collaboration data (heterogeneous, and high information density), and higher level inferred events generated by analyses. These data streams are produced and consumed in all phases of the product lifecycle. The large volume of data in these streams makes the detection of pertinent information a hard problem for both technological infrastructures and humans. SMART VORTEX uses a Data Stream Management System for managing, searching, annotating, analysing and performing feature extraction on these data streams.

    Within the lifecycle of design and engineering projects a large number of people need to collaborate in order to achieve the individual project goals, such as bringing the next generation flat panel TV to the market before the competition does, identifying opportunities for improvements of existing products, or the maintenance of products in use. These projects are basically large distributed collaborative processes, where people from different domains of expertise and different organizations have to work together. SMART VORTEX supports these people, systems, and products with collaborative tools and decision support systems managing the constantly produced massive product data streams. SMART VORTEX ensures the efficiency and success of the collaboration by delivering the pertinent information at the right moment.
  6. AMIDST (2005–2008): Analytical methods in the development of science and technology of polymers, FP6 Mobility Programme of the European Commission Link
    The purpose of this EST site at the Max Planck Institute for Polymer Research (MPI-P) in Mainz is to provide early stage researchers with a comprehensive training in advanced methods of analysis and characterization of polymer materials. This includes research in how to design and optimize polymer-based materials and processes for advanced technologies in the context of industrial production and market development. Therefore training in Analytical Methods in the Development of Science and Technology of Polymers (AMIDST) is necessary. The project aims to provide the fellows with thorough expertise in most modern research instrumentation, to teach the development and handling of software tools for polymer materials science, to develop understanding of the necessary theoretical background to solve the complex problems of polymer materials research both at the academic level and as a tool for industrial practice. The research goal is concentrated on the characterization of functional polymers in the context of advanced technologies. This includes all aspects of surface and interface science of polymer materials. The excellent equipment and the interdisciplinary cooperation of the 6 departments of MPI-P allow integrating these aspects in an efficient manner. The EST also emphasizes training in soft skills like learning how to approach research tasks by teamwork, presentation and writing skills, language training, intellectual property protection and networking with a scientific community.