Fantastic Final Review for DICE


We are very proud to announce that after four intense years of careful, hard work, the DICE consortium has received a very warm and encouraging reception at the final review at the European Commission in Brussels.  The reviewers were particularly impressed by the quality of the work done in the project and complimented the standard of the written deliverables submitted.  Once again, DICE sets a standard for other projects to follow.  The work done on dissemination was seen as effective and creative, with great use of the animated video produced by flexiOPS.  Overall, the project has exceeded the efforts promised in the Description of Work, and we can safely refer to it all as an excellent body of research.

DICE is an open source DevOps solution for Big Data applications.

Release of the Final DICE Framework


After a 36-month R&D collaboration, the DICE consortium is pleased to announce the final release of the open source DICE framework and its two commercial versions, DICE Velocity and DICE BatchPro.

DICE delivers innovative development methods and tools to strengthen the competitiveness of small and medium European ISVs in the market of business-critical data-intensive applications. The barriers that DICE breaks are the shortage of methods to express data-aware quality requirements in model-driven development and the difficulty of consistently considering these requirements throughout DevOps tool-chains during quality analysis, testing, and deployment of the application. Existing methodologies and tools provide these capabilities for traditional enterprise software systems and cloud-based applications, but for increasingly popular technologies such as Hadoop/MapReduce, Spark, Storm, or Cassandra, it was difficult before DICE to adopt a holistic quality-driven software engineering approach. DICE delivers this capability, providing a quality-driven development environment for data-intensive applications.

In particular, DICE offers a DevOps methodology and platform covering multiple aspects of the lifecycle of a Big Data application. A collection of 14 tools has been created and released as open source. The tools can guide developers in defining new Big Data applications or in extending existing ones. A knowledge repository has been created to help end users explore the different features of the tools, as well as navigate through supporting tutorials and videos.

The open source release of the DICE framework is available free of charge and offers development and operations teams:

  • An Eclipse-based IDE implementing the DICE DevOps methodology and guiding the user step-by-step through the use of cheatsheets
  • A new UML profile to design data-intensive applications taking into account quality-of-service requirements and featuring privacy-by-design methods
  • Quality analysis tools to simulate, verify, and optimize the application design and identify possible anti-patterns
  • OASIS TOSCA-compliant deployment and orchestration on cloud VMs and containers
  • Monitoring and anomaly detection tools based on the Elasticsearch-Logstash-Kibana stack
  • Runtime methods for configuration optimization, testing and fault injection
  • Native support for open-source Apache platforms such as Storm, Spark, Hadoop, and Cassandra.

The DICE framework is also available in two commercial versions focused on real-time applications (DICE Velocity) and batch processing system development and delivery (DICE BatchPro).

The DICE tools have been presented to, and are actively downloaded by, a diverse group of stakeholders. Videos illustrating the cross-cutting benefits of the solution for different needs and use case scenarios are available on the DICE YouTube channel, together with tutorials on the DICE blog and regular announcements on the DICE Twitter feed.

ENTICE Has Received a Very Positive Final Review


The team met at Tirol House in Brussels for the final rehearsal meeting, to fine-tune presentations and prepare for the final review.  The project has been three years in the making and has overcome a series of technical challenges to finally deliver to the community a suite of tools that can enhance the cloud experience for all users.


The feedback was very positive, and the reviewers were impressed with how well the team presented their work on the day.  There was also a positive conversation about how the public data sets published by the ENTICE project will be of great interest to the research and development community.

All in all, it was a fruitful and pleasant day, and it gave the consortium great confidence as we look ahead to our further exploitation and collaboration plans.

ENTICE Final Project Meeting in Innsbruck


The ENTICE team met for the final time ahead of the final review in Brussels on Wednesday 14 March, where all of the ENTICE tools will be demonstrated live before the project officers.  The meeting was very productive, hosted by research partner UIBK in the ski resort city of Innsbruck, Austria, amidst a mountainous landscape of icy peaks and snow-powdered forests.  The clear mountain air certainly influenced the discussions and conclusions, as the remaining loose ends were securely tied together with a clear vision of the road ahead.

flexiOPS Use Case for ENTICE


Cloud computing has dramatically transformed the way data is stored and accessed.  Many services and applications now use cloud computing technology to give their users the means to store, process and access data. Although cloud computing is undoubtedly a huge success, there are some challenges and concerns. In particular, image deployment can take a considerable amount of time, from choosing an image to getting it up and running.

The deployment of the ENTICE tools to our FCO platform is still ongoing.  Despite this, some preliminary results have been gathered to validate the effectiveness of the tools running within our testbed.

The following results were gathered using the original Use Case images and compared with the same images after being optimised by the ENTICE VMI Optimiser.  These images were then downloaded into FCO and used to create new VMs in order to measure storage/time differences in various areas.  

[Figure: storage and deployment/boot-time comparison of the original and ENTICE-optimised images on FCO]

As can be seen, with this ENTICE tool we have made a substantial difference both to the size of the VMI images and to image deployment and boot times on FCO.  The ENTICE team is confident that our product will be ready to hit the ground running as a sustainable and attractive solution for the cloud market.

BEACON Has Been Approved!


The team met for a final preparation meeting at the offices of CETIC in Charleroi, Belgium, to fine-tune presentations and get ready to sit before the European Commission in Brussels.  We are very pleased to announce that the Commission has passed our project, two years in the making, and the BEACON product is now set to take its part in the world of cloud federation as a living deliverable.


BEACON Presented at FICloud 2017


On 22/08/2017 Philippe Massonet presented the paper "Security in Lightweight Network Function Virtualisation for Federated Cloud and IoT" at FICloud 2017 (The IEEE 5th International Conference on Future Internet of Things and Cloud) in Prague, Czech Republic. The paper describes how the BEACON security architecture for federated cloud networking could be extended to federate with sensor networks. The paper proposes securing sensor networks at the edge using Network Function Virtualisation and Service Function Chaining. A key conclusion drawn from discussions at the conference is that there is a need for lightweight NFV/SFC. NFV/SFC solutions for clouds assume the availability of cloud resources to scale and adapt to processing demand. Sensor and actuator networks with limited processing and storage resources need lightweight NFV/SFC solutions.

BEACON Security Use Case Animation

See the BEACON project come to life with this real-life use case animation, which sets out the problems faced by hybrid cloud users who need to migrate across clouds, and shows how the BEACON solution fits into an SME's toolkit for protecting their clients' VMs.  The Security Use Case Scenario was handled by flexiOPS, and we are confident in what we have produced.  We very much look forward to the final review in October, and look out for BEACON as the product goes to market.


BEACON Represented at SummerSOC Poster Session, Crete

Craig Sheridan, Managing Director of flexiOPS, and Massimo Villari, Associate Professor at the University of Messina, represented the BEACON project at a poster session at this year's Symposium on Service Oriented Computing on Crete. The well-established summer school proved a great opportunity for generating interest in, and feedback on, not only the concepts but also the results of BEACON.

SummerSOC 2017, Crete

Craig Sheridan, Managing Director of industrial partner flexiOPS, presented 'Deployment-time Multi-cloud Application Security' to the summer school on Crete this June.  The 11th Symposium on Service Oriented Computing heard the case for a concrete security baseline for VM applications, with keen interest shown in the Q&A session.


Quorum is a software solution that supports organisations, entity management and companies' secretarial operations, as well as assisting their corporate compliance.  It is used by major auditing, legal, trust and specialist providers offering corporate secretarial and other professional services in more than 25 countries worldwide.  Quorum principally covers entity management and company administration, contact and client management, KYC compliance, banking administration, and time and billing.  The main benefits of using Quorum include: optimising client and entity management operations; increasing client billing through better tracking and monitoring of chargeable work; improving compliance in the quality of your work, security and traceability while reducing the opportunity for human error; and managing information and documents accurately, reliably and efficiently.  Information becomes instantly accessible at the touch of a button, allowing you to achieve high levels of productivity from your staff.

There are two very important advantages for companies that use PaaSage.  The first is increased flexibility: companies using PaaSage are not bound to a single cloud provider and can seamlessly switch providers simply by changing the cloud model.  The second is rapid elasticity: with only a local cloud infrastructure in place, it is difficult, time-consuming and costly to plan ahead for occasions when additional resources will be required.  IBSCY's cloud strategy can be enhanced by PaaSage, as it allows customers to deploy and move an application across multiple cloud providers and configurations.  PaaSage helps IBSCY stay competitive and increase its flexibility, so as to address diverse clients and cloud requirements, as well as scale on demand when more resources are needed.  To find out more, visit the PaaSage website and get started today.

ENTICE and Elecnor Deimos: Earth Observation

Let’s find out how ENTICE technology is helping to improve the Earth observation industry.  Earth observation is all about collecting spatial and temporal data about the world.  This data can be useful for users in a diverse range of industries, including environmental monitoring, observing natural disasters and civil security systems.  The last decade started with $200,000,000 worth of commercial sales in Earth observation; 2010 saw the figure rise to $1.1 billion, and the forecast is to begin 2019 with $4 billion worth of sales.  It is a market that is growing at a steady rate.  To take advantage of this, the European Commission, in partnership with the European Space Agency and the European Environment Agency, created the Copernicus system to provide Europe with an operational and autonomous capability to observe the Earth.  Despite the importance of Earth observation across multiple industries, access to information obtained from satellites follows traditional and expensive paths to cover demand.  Of course, this presents several drawbacks.  The cost of acquiring up-to-date images of the Earth is prohibitively expensive for new entrants to the market; existing customers cannot access images directly; current methods require a great deal of processing and ad hoc delivery; and the service lacks the flexibility to cope with sudden changes in demand.  Here at ENTICE we believe that cloud computing could be the solution, because cloud computing is scalable, flexible and globally accessible.

Wellness Use Case

Let’s find out how Wellness Telecom are utilising ENTICE virtual machine image reduction technology to improve their services and win new customers.  Unified communication is an integrated and tailored service that allows you to have all business communication in the same application.  The custom images needed for the service are stored and managed by Wellness, and users pay for the resources used in their storage.  While there are solutions for allocating extra resources to meet unforeseen demand, there is often a drop in quality of service, given the difficulty of meeting a spike in demand.  The challenge, and business opportunity, for us here at Wellness is to find a solution where users only pay for the resources they need, without a reduction in quality.  Working with ENTICE, we have a solution that lets the service use new resources only when needed, taking advantage of ENTICE’s faster deployment speeds and adapting to demand.  And as Wellness manages all the tailored images needed for the service, users leverage the size reduction provided by ENTICE to pay lower prices, while we use fewer resources all round.  For more information about how ENTICE is helping businesses enhance their services, and to learn about the innovations behind ENTICE, visit the project website.

We offer a catalogue of services providing third-party enterprise solutions.  These are aimed at companies that don’t have the knowledge to install and deploy them themselves.  The customer is billed based on the resources used for their service and the storage utilised for virtual machine images.  Currently, the images are not optimised, leaving the customer paying for extra resources.  Our objective here at Wellness is that the customer only pays for the resources that are really needed.  By taking advantage of the size reduction of virtual machine images offered by ENTICE, we make our services more attractive, lower costs, improve competitiveness and reduce resource use.  ENTICE helps us pass resource savings along to our customers, winning us new business and making our service users happy.

Budapest - Plenary Meeting

The team met up for their plenary meeting this January in Budapest to discuss and present the progress so far, ahead of the next Commission review later in the year.

BEACON Meeting in Madrid

The team met recently in Madrid at the OpenNebula offices to discuss the final phase of the project.  The team are happy to say that everything is on track, and they look forward to the OpenStack Summit in Boston in May and also the BEACON workshop, which is part of the SmartCOMP conference in Hong Kong.

The call for papers for this workshop is still open, the deadline being April 9th.  
See more here:

Rich Client Platform for the DIA-integrated Development

DICE focuses on quality assurance for data-intensive applications (DIAs) developed through the Model-Driven Engineering (MDE) paradigm. The project aims to deliver methods and tools that help satisfy quality requirements in data-intensive applications through iterative enhancement of their architecture design. One component of the tool chain developed within the project is the DICE IDE, an Integrated Development Environment (IDE) that accelerates the development of data-intensive applications.

The Eclipse-based DICE IDE integrates most of the tools of the DICE framework and is the base of the DICE methodology. As highlighted in the deliverable D1.1 State of the Art Analysis, no MDE IDE yet exists on the software market through which a designer can create models to describe and analyse data-intensive or Big Data applications and their underpinning technology stack. This is the motivation for the DICE IDE.

The DICE IDE is based on Eclipse, which is the de-facto standard for the creation of software engineering models based on the MDE approach. DICE customizes the Eclipse IDE with suitable plug-ins that integrate the execution of the different DICE tools, in order to minimize learning curves and simplify adoption. In this blog post we explain how the DICE tools introduced earlier have been integrated into the IDE. So, how is the DICE IDE built?


How is the DICE IDE built?

The DICE IDE is an application based on Eclipse. While the Eclipse platform is designed to serve as an open platform for tool integration, it is architected so that its components can be used to build arbitrary client applications. The minimal set of plug-ins needed to build a rich client application is collectively known as the Rich Client Platform (RCP). Applications other than IDEs can be built using a subset of the platform. These rich applications are still based on a dynamic plug-in model, and the UI is built using the same toolkits and extension points. The layout and function of the workbench are under the fine-grained control of the plug-in developer.

An Eclipse application consists of several Eclipse components; as a developer, you can extend the Eclipse IDE via plug-ins (components). Eclipse applications incorporate runtime features based on OSGi. In this runtime environment, you can add, update or remove features of your application using OSGi bundles (components).

The minimum piece of software that can be integrated in Eclipse is called a plug-in. The Eclipse platform allows the developer to extend Eclipse applications like the Eclipse IDE with additional functionalities via plug-ins.

Eclipse applications use a runtime based on a specification called OSGi. A software component in OSGi is called a bundle. An OSGi bundle is also always an Eclipse plug-in. Both terms can be used interchangeably.

The Eclipse IDE is basically an Eclipse RCP application to support development activities. Even core functionalities of the Eclipse IDE are provided via a plug-in. For example, both the Java and C development tools are contributed as a set of plug-ins. Therefore, the Java or C development capabilities are available only if these plug-ins are present.

The Eclipse IDE functionality is heavily based on the concept of extensions and extension points. For example, the Java Development Tools provide an extension point to register new code templates for the Java editor.

Via additional plug-ins you can contribute to an existing functionality, for example new menu entries, new toolbar entries or provide completely new functionality. But you can also create completely new programming environments.

The minimal plug-ins required to create and run a minimal Eclipse RCP application (with UI) are “org.eclipse.core.runtime” and “org.eclipse.ui”. Based on these components, an Eclipse RCP application must define the following elements:

  • Main program – an RCP main application class implementing the interface “IApplication”. This class can be viewed as the equivalent of the main method of a standard Java application. Eclipse expects the application class to be registered via the extension point “org.eclipse.core.runtime.applications”.
  • A Perspective – defines the layout of your application. It must be declared via the extension point “org.eclipse.ui.perspectives”.
  • Workbench Advisor – an invisible technical component that controls the appearance of the application (menus, toolbars, perspectives, etc.)
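As an illustrative sketch of how the first two elements are wired together (the class names and ids here are hypothetical, not taken from the DICE code base), a minimal plugin.xml might register the two extensions like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<plugin>
   <!-- Register the IApplication implementation as the RCP entry point -->
   <extension id="application" point="org.eclipse.core.runtime.applications">
      <application>
         <run class="com.example.rcp.Application"/>
      </application>
   </extension>
   <!-- Register the perspective that defines the workbench window layout -->
   <extension point="org.eclipse.ui.perspectives">
      <perspective
            id="com.example.rcp.perspective"
            name="Example Perspective"
            class="com.example.rcp.Perspective"/>
   </extension>
</plugin>
```

The workbench advisor, by contrast, is not declared in plugin.xml; it is passed programmatically when the application class starts the workbench.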

DICE Tools integration approaches

The Eclipse-based DICE IDE integrates most of the tools of the DICE framework. Because the tools differ in nature, not all of them can be fully integrated within the IDE, so a solution is needed for those cases. Some tools have their actual execution environment outside the IDE (they are not Eclipse plug-ins), for instance on an external web site or server.

The DICE IDE offers two ways for a tool to be integrated:

  • Fully integrated
  • Externally integrated

Both integration approaches share a common component within the IDE. This component contributes a menu to the IDE, through which the user can interact with all the integrated tools (Figure 1).


Figure 1. The menu for a DICE tool in the DICE IDE.

External integration:

This approach is the easiest. It is used when the real execution environment of the tool is placed outside the IDE, for instance within an external server or web service.

The only requirement for this approach is the information needed to connect to the external application, typically a URL:

  • Protocol: HTTP or HTTPS
  • Server: the address of the server
  • Port: the port on which the server is available
  • Parameters: optional parameters to be passed when the web service is visited (user id, token…)
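As a sketch of how such settings can be composed into the URL handed to the browser (this is illustrative code, not the actual DICE plug-in; the host and parameter names are hypothetical):

```java
import java.util.Map;
import java.util.StringJoiner;

/** Illustrative sketch: compose the endpoint URL of an externally
 *  integrated tool from its connection settings. */
public class ExternalToolUrl {
    public static String compose(String protocol, String server, int port,
                                 Map<String, String> params) {
        StringBuilder url = new StringBuilder(protocol + "://" + server + ":" + port + "/");
        if (!params.isEmpty()) {
            // Append the query string: ?key1=value1&key2=value2...
            StringJoiner query = new StringJoiner("&", "?", "");
            params.forEach((k, v) -> query.add(k + "=" + v));
            url.append(query);
        }
        return url.toString();
    }

    public static void main(String[] args) {
        // e.g. a monitoring dashboard reachable over HTTP with an access token
        System.out.println(compose("http", "monitoring.example.org", 5601,
                Map.of("token", "abc123")));
        // prints "http://monitoring.example.org:5601/?token=abc123"
    }
}
```

In the IDE, a URL built this way would then be opened in Eclipse's internal web browser, as described below.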

A dedicated plug-in implements an abstract mechanism offered to all tools that prefer this kind of integration. It opens the given page in the internal web browser of Eclipse, allowing the user to access the tool from within the IDE. An example of such an integration is given in Figure 2 with the DICE Monitoring tool.


Figure 2. Example of Monitoring Tool, an external tool integration.

The IDE also provides an Eclipse preferences page that allows the user to modify these properties (Figure 3). In this way, the external web service integration can be reconfigured dynamically by the user if needed.


Figure 3. Example of Monitoring Tool external web service configuration.

Full integration:

This approach requires more effort from the tool owner, as it involves developing fully functional support in the IDE that allows the user to interact with the tool and perform all the needed operations.

Some Eclipse development skills are required. Many Eclipse tutorials are available on the Internet for learning how to develop Eclipse plug-ins and contribute new functionality to the IDE, such as wizards, dialogs, launchers and views.

The more complex the tool, the more difficult it will be to integrate within the IDE.

Figure 4 shows an example of a fully integrated tool, in this case the Simulation tool.


Figure 4. An example of the Simulation Tool, a fully integrated tool.


This post described the basic features of the DICE IDE, in particular the two integration patterns provided by the integrated environment, and examples of integrated DICE tools. Because of their different natures, not all tools can be fully integrated within the IDE; however, all tools, independently of the integration approach used, are accessible through the DICE menu.

The IDE was released in January 2017 on GitHub as part of the DICE Knowledge Repository.  A complete tutorial and a YouTube channel allow any interested designer, administrator, quality engineer or system architect to get started quickly with the IDE.

Christophe Joubert, Ismael Torres (PRO)


Securing federated cloud networks using Service Function Chaining

Sébastien Dupont - CETIC

Software-defined networking (SDN), network function virtualization (NFV) and service function chaining (SFC) technologies enable more advanced and flexible cloud federation mechanisms. In this blog post, we will show how to use those technologies in federated clouds to improve security.

Protecting network overlays using Service Function Chaining

Cloud network security can be significantly improved by composing network functions such as firewalls, intrusion detection, deep packet inspection, etc. The image below illustrates how data flows through different paths depending on network security policies.


What about protecting federated networks?

SFC and NFV provide a way to secure each individual network inside a cloud federation. The following figure shows two federated networks, belonging to different clouds, that are protected using SFC/NFV. Each cloud administrator manages its own network security policy, and an additional global federated network security policy is applied on top. For each cloud, intra-cloud inbound and outbound traffic goes through a series of virtual network functions.



Protecting an OpenStack federation with SFC/NFV

The OpenStack Heat project provides a template-based orchestration mechanism, formalised in YAML (YAML Ain’t Markup Language), that can be extended to support SFC network security policies. The TOSCA project proposes a service manifest specification for NFV, which can be translated into Heat.
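As an illustrative sketch of what such a template could contain (assuming the PortPair/PortPairGroup/FlowClassifier/PortChain resource types contributed by the networking-sfc project; all names and parameters here are hypothetical), a Heat template steering web traffic through a firewall function might read:

```yaml
heat_template_version: 2017-02-24

# Illustrative sketch only: steer HTTP traffic through a firewall VNF.
parameters:
  fw_in_port:
    type: string   # Neutron port on the firewall VNF's ingress side
  fw_out_port:
    type: string   # Neutron port on the firewall VNF's egress side

resources:
  firewall_port_pair:
    type: OS::Neutron::PortPair
    properties:
      ingress: { get_param: fw_in_port }
      egress: { get_param: fw_out_port }

  firewall_group:
    type: OS::Neutron::PortPairGroup
    properties:
      port_pairs: [ { get_resource: firewall_port_pair } ]

  web_traffic:
    type: OS::Neutron::FlowClassifier
    properties:
      protocol: tcp
      destination_port_range_min: 80
      destination_port_range_max: 80

  security_chain:
    type: OS::Neutron::PortChain
    properties:
      port_pair_groups: [ { get_resource: firewall_group } ]
      flow_classifiers: [ { get_resource: web_traffic } ]
```

A federation-level policy would then add further port pair groups (encryption, DPI, etc.) to the chain.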



We are currently investigating two OpenStack components to protect an OpenStack cloud federation: Tacker for NFV management and networking-sfc for NFV orchestration.

Case studies

SFC/NFV Encryption

In this scenario we consider three clouds; the connection with one of them is untrusted. To secure the communications, we can add encryption and decryption at the network level using dedicated SFC/NFV functions.
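Conceptually, a service function chain is just an ordered list of packet-processing functions applied at the federation boundary: the egress chain of the trusted cloud encrypts, and the ingress chain on the other side decrypts. A toy model (names hypothetical, Base64 standing in for real encryption, no relation to the actual BEACON code):

```java
import java.util.Base64;
import java.util.List;
import java.util.function.UnaryOperator;

/** Toy model of a service function chain: packets traverse an ordered
 *  list of network functions. */
public class ServiceFunctionChain {
    private final List<UnaryOperator<String>> functions;

    public ServiceFunctionChain(List<UnaryOperator<String>> functions) {
        this.functions = functions;
    }

    /** Apply each function in chain order to the packet payload. */
    public String process(String packet) {
        for (UnaryOperator<String> fn : functions) packet = fn.apply(packet);
        return packet;
    }

    public static void main(String[] args) {
        UnaryOperator<String> encrypt =
            p -> Base64.getEncoder().encodeToString(p.getBytes());
        UnaryOperator<String> decrypt =
            p -> new String(Base64.getDecoder().decode(p));

        ServiceFunctionChain egress  = new ServiceFunctionChain(List.of(encrypt));
        ServiceFunctionChain ingress = new ServiceFunctionChain(List.of(decrypt));

        String onWire = egress.process("hello");      // leaves the trusted cloud
        System.out.println(ingress.process(onWire));  // prints "hello"
    }
}
```

In the real architecture, each function in the list is a VNF that the chain's classifier routes traffic through, rather than an in-process transformation.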


Here is an extract of the service manifest that describes the global security policy:


SFC/NFV Encryption and Deep Packet Inspection

Some network functions should be performed asynchronously to avoid slowing down the traffic. In this scenario, the encryption and firewalling operations are performed synchronously, because the security system needs to respond directly when traffic goes through those functions, whereas deep packet inspection (DPI) can be applied after the traffic has already passed through.



Philippe Massonet, Anna Levin, Massimo Villari, Sébastien Dupont and Arnaud Michot: Enforcement of Global Security Policies in Federated Cloud Networks with Virtual Network Functions. NCA 2016.

Philippe Massonet, Sébastien Dupont, Arnaud Michot, Anna Levin, Massimo Villari: An architecture for securing federated cloud networks with Service Function Chaining. ISCC 2016: 38-43

Philippe Massonet, Anna Levin, Antonio Celesti, Massimo Villari: Security Requirements in a Federated Cloud Networking Architecture. ESOCC Workshops 2015: 79-88

Formal Verification of Data-Intensive Applications with Temporal Logic

Besides functional aspects, designers of data-intensive applications have to consider various quality aspects specific to applications that process huge volumes of data with high throughput and run in clusters of (many) physical machines. A broad set of non-functional aspects in the areas of performance and safety should be included at an early stage of the design process to guarantee high-quality software development.

Evaluating the correctness of such applications, especially when functional and non-functional aspects are both involved, is definitely not trivial. In the case of data-intensive applications, the inherent distributed architecture, the software stratification and the computational paradigm implementing the logic of the applications pose new questions about the criteria that should be used to evaluate their correctness.


Data-intensive applications are commonly realized through independent computational nodes that are managed by a supervisor providing resource allocation and node synchronization functionalities. Message exchange is guaranteed by an underlying network infrastructure, over which the (data-intensive) framework might implement suitable mechanisms to guarantee correct message transfer among the nodes. The logic of the application is the tip of the iceberg of a very complex software architecture which the developer cannot completely govern. Between the application code and the deployed running executables there are many interconnected layers, offering abstractions and running control automatisms, that are not visible to the developers (such as, for instance, the containerization mechanisms, the cluster manager, etc.).

Besides the architectural aspects of the framework, the functionality of data-intensive applications requires, in some cases, a careful analysis of the notion of correctness adopted to evaluate the outcomes. This is the case, for instance, for streaming applications. The functionality of streaming applications is defined through the combination and concatenation of operations on streams, i.e., infinite sequences of messages originated from external data sources or by the computational nodes constituting the application. The operations can transform a stream into a new stream or can aggregate a result by reducing a stream into data. Sometimes, the operations are defined over portions of streams, called windows, which partition the streams on the basis of specific grouping criteria for the messages in the stream. The complexity and the variety of parameters defining the operations make the definition of the streaming transformation semantics non-obvious, and the assessment of their correctness far from trivial.
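As a toy illustration of why window semantics need care (this is illustrative code, not part of any DICE tool), consider a count-based tumbling window of size three that reduces a stream of integers into per-window sums. Even in this tiny example a semantic choice must be made: what happens to an incomplete trailing window?

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative sketch: a count-based tumbling window that reduces an
 *  (in practice unbounded) stream of integers into per-window sums. */
public class TumblingWindowSum {
    public static List<Integer> windowSums(List<Integer> stream, int windowSize) {
        List<Integer> sums = new ArrayList<>();
        int sum = 0, count = 0;
        for (int msg : stream) {
            sum += msg;
            if (++count == windowSize) {   // window is full: emit and reset
                sums.add(sum);
                sum = 0;
                count = 0;
            }
        }
        return sums;   // semantic choice: incomplete trailing windows are dropped
    }

    public static void main(String[] args) {
        System.out.println(windowSums(List.of(1, 2, 3, 4, 5, 6, 7), 3)); // [6, 15]
    }
}
```

Here the trailing message 7 never produces output; another equally plausible semantics would flush it as a partial window. Formal verification forces such choices to be made explicit.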

In DICE, the evaluation of correctness concerns “safety” aspects of data-intensive applications. Verification of safety properties is done automatically by means of a model checking analysis that the designer performs at design time. The formal abstraction modeling the application behavior is first extracted from the application UML diagrams and then verified to check for the existence of incorrect executions, i.e., executions that do not conform with the specific criteria identifying the required behavior. Time and the ordering relation among the events of the application are the main aspects characterizing the formalism used for verification, which is based on specific extensions of Linear Temporal Logic (LTL). As already pointed out, since the technological framework affects the definition of correctness to be adopted for evaluating the final application, the formal modeling devised for DICE verification combines an abstraction of the functional aspects with a simplified representation of the computational paradigm adopted to implement the application.
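For instance, a timely-processing requirement of the flavour targeted by such timed extensions of LTL might be written as follows (an illustrative formula, not one of the project's actual specifications):

```latex
\mathbf{G}\bigl( \mathit{receive}_n(m) \;\rightarrow\; \mathbf{F}_{\leq T}\, \mathit{process}_n(m) \bigr)
```

read as: globally (G), whenever node $n$ receives a message $m$, then within $T$ time units ($\mathbf{F}_{\leq T}$) node $n$ processes $m$. A counterexample to such a property exhibits a node that cannot keep up with its incoming workload.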

DICE verification is carried out by D-verT and focuses on Apache Storm and (soon) Spark, two baseline technologies for streaming and batch applications. The computational mechanism they implement is captured by means of logical formulae that, when instantiated for a specific DTSM application model, represent the executions of the Storm (or Spark) application. The analyses that the user can perform from the DICE IDE are bottleneck analysis of Storm applications and worst-case time analysis of Spark applications (the latter is a work in progress).

In the first case, the developer can verify the existence of a node of a Storm application that cannot process the incoming workload in a timely manner. Such a node is likely to be a bottleneck for the application, which might cause memory saturation and degrade the overall performance. In the second case, the developer can perform a worst-case analysis of the total time span required by a Spark application to complete a job. The overall job time, which must meet a given deadline at runtime, is evaluated on the basis of a per-task time estimate, for the physical resources available in the cluster, that must be known before running the verification.
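As a back-of-the-envelope illustration of the second kind of analysis (the formula and numbers are purely illustrative, not D-verT's actual model), if each stage of a job runs its tasks in waves over the available cores, a worst-case job time can be estimated and compared against the deadline:

```java
/** Illustrative sketch: worst-case completion time of a job made of
 *  sequential stages, each running its tasks in waves over c cores. */
public class WorstCaseJobTime {
    /** Time for one stage: ceil(tasks/cores) sequential waves of taskTime. */
    static long stageTime(int tasks, int cores, long taskTimeMs) {
        long waves = (tasks + cores - 1) / cores;   // ceiling division
        return waves * taskTimeMs;
    }

    /** Worst case for the whole job: stages run one after another. */
    static long jobTime(int[] tasksPerStage, int cores, long taskTimeMs) {
        long total = 0;
        for (int tasks : tasksPerStage) total += stageTime(tasks, cores, taskTimeMs);
        return total;
    }

    public static void main(String[] args) {
        // two stages of 10 and 4 tasks, 4 cores, 500 ms estimated per task
        long t = jobTime(new int[] {10, 4}, 4, 500);
        System.out.println(t + " ms, meets 2500 ms deadline: " + (t <= 2500));
        // prints "2000 ms, meets 2500 ms deadline: true"
    }
}
```

The actual verification encodes a much richer model of the execution as logical formulae, but the comparison against a deadline is of this shape.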

Details about verification techniques can be found in Deliverable D3.5 – Verification Tool Initial Version and in the DICE GitHub repository.

Related material:

  1. Francesco Marconi, Marcello M. Bersani, Madalina Erascu, Matteo Rossi:
    Towards the Formal Verification of Data-Intensive Applications Through Metric Temporal Logic. ICFEM 2016
  2. Francesco Marconi, Marcello Maria Bersani and Matteo Rossi: Formal Verification of Storm Topologies through D-verT. SAC 2017

Marcello M. Bersani and Verification team (PMI)

ENTICE & TEDX - Radu Prodan: The Dark, Disruptive Side of the Cloud

In our latest blog we look back at a recent TEDx talk by the ENTICE Scientific Coordinator, Radu Prodan, in which he provides insight into cloud technology, its historical development, how clouds are interconnected today, and what possibilities there are for the future.

Radu Prodan is a trained engineer and Doctor of Technical Sciences, and Technical Coordinator of the ENTICE project. This talk discusses the mysterious Clouds as today’s de-facto interconnection, storage, and computing paradigm, gathering billions of devices spread around the globe. 

This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at