BEACON Has Been Approved!


The team met for a final preparation meeting at the offices of CETIC in Charleroi, Belgium, to fine-tune presentations and get ready to sit before the European Commission in Brussels. We are very pleased to announce that the Commission has approved our project, which has taken two years to create, and the BEACON product is now set to take its place in the world of cloud federation as a living deliverable.


BEACON Presented at FICloud 2017


On 22/08/2017 Philippe Massonet presented the paper "Security in Lightweight Network Function Virtualisation for Federated Cloud and IoT" at FICloud 2017 (The IEEE 5th International Conference on Future Internet of Things and Cloud) in Prague, Czech Republic. The paper describes how the BEACON security architecture for federated cloud networking could be extended to federate with sensor networks. The paper proposes securing sensor networks at the edge using Network Function Virtualisation and Service Function Chaining. A key conclusion drawn from discussions at the conference is that there is a need for lightweight NFV/SFC. NFV/SFC solutions for clouds assume the availability of cloud resources to scale and adapt to processing demand. Sensor and actuator networks with limited processing and storage resources need lightweight NFV/SFC solutions.

BEACON Security Use Case Animation

See the BEACON project come to life with this real-life use case animation, which sets out the problems faced by hybrid cloud users who need to migrate across clouds and shows how the BEACON solution fits into an SME's toolkit for protecting their clients' VMs. The Security Use Case Scenario was handled by flexiOPS and we are confident in what we have produced. We very much look forward to the final review in October, and look out for BEACON as the product takes to market.

 

BEACON Represented at SummerSOC Poster Session, Crete

Craig Sheridan, Managing Director of flexiOPS, and Massimo Villari, Associate Professor at the University of Messina, represented the BEACON project at a poster session at this year's Symposium on Service Oriented Computing on Crete. The well-established summer school proved a great opportunity for generating interest and feedback on both the concepts and the results of BEACON.

SummerSOC 2017, Crete

Craig Sheridan, Managing Director of industrial partner flexiOPS, presented 'Deployment-time Multi-cloud Application Security' to the summer school on Crete this June. The 11th Symposium on Service Oriented Computing heard the case for a concrete security baseline for VM applications, with keen interest shown in the Q&A session.

PAASAGE, QUORUM AND IBSCY: WINNING CUSTOMERS TO THE CLOUD

Quorum is a software solution that supports organisations with entity management and company secretarial operations, as well as assisting their corporate compliance. It is used by major auditing, legal, trust and specialist providers offering corporate secretarial and other professional services in more than 25 countries worldwide. Quorum principally covers entity management and company administration, contact and client management, KYC compliance, banking administration, and time and billing. The main benefits of using Quorum are optimising client and entity management operations; increasing client billing by better tracking and monitoring chargeable work; improving compliance, quality of work, security and traceability while reducing the opportunity for human error; and managing information and documents accurately, reliably and efficiently. Information becomes instantly accessible at the touch of a button, allowing you to achieve high levels of productivity from your staff.

There are two very important advantages for companies that use PaaSage. The first is increased flexibility: companies using PaaSage are not bound to a single cloud provider and can seamlessly switch provider simply by changing the cloud model. The second is rapid elasticity: with only a local cloud infrastructure in place, it is difficult, time consuming and costly to plan in advance for occasions when additional resources will be required. IBSCY's cloud strategy can be enhanced by PaaSage, as it allows customers to deploy and move an application across multiple cloud providers and configurations. PaaSage helps IBSCY stay competitive and increase its flexibility, so it can address diverse clients and cloud requirements as well as scale on demand when more resources are needed. To find out more, go to paasage.eu and get started today.

ENTICE and Elecnor Deimos: Earth Observation

Let’s find out how ENTICE technology is helping to improve the Earth observation industry. Earth observation is all about collecting spatial and temporal data about the world. This data can be useful for all sorts of users in a diverse range of industries, including environmental monitoring, observing natural disasters and civil security systems. The last decade started with $200,000,000 worth of commercial sales in Earth observation; 2010 saw the figure rise to $1.1 billion, and the forecast is to begin 2019 with $4 billion worth of sales. It is a market that is growing at a steady rate. To take advantage of this, the European Commission, in partnership with the European Space Agency and the European Environment Agency, created the Copernicus system to provide Europe with an operational and autonomous capability to observe the Earth.

Despite the importance of Earth observation across multiple industries, access to information obtained from satellites follows traditional and expensive paths to cover demand. This presents several drawbacks. The cost of acquiring up-to-date images of the Earth is prohibitively expensive for new entrants to the market, existing customers cannot access images directly, current methods require a great deal of processing and ad hoc delivery, and the service lacks the flexibility to cope with sudden changes in demand. Here at ENTICE we believe that cloud computing could be the solution, because cloud computing is scalable, flexible and globally accessible.

Wellness Use Case

Let’s find out how Wellness Telecom are utilising ENTICE virtual machine image reduction technology to improve their services and win new customers. Unified communication is an integrated and tailored service that allows you to have all business communication in the same application. The custom images needed for the service are stored and managed by Wellness, and users pay for the resources used in their storage. While there are solutions to allocate extra resources to meet unforeseen demand, there is often a drop in quality of service given the difficulty of meeting a spike in demand. The challenge and business opportunity for us here at Wellness is to find a solution where users only pay for the resources they need, without a reduction in quality. Working with ENTICE, we have a solution that lets the service use new resources only when needed, taking advantage of ENTICE’s faster deployment speeds and adapting to demand. And as Wellness manages all the tailored images needed for the service, users leverage the image size reduction provided by ENTICE to pay lower prices, whilst we use less resource all round. For more information about how ENTICE is helping businesses enhance their service, and to learn about the innovations behind ENTICE, go to entice-project.eu.

We offer a catalogue of services which provides third-party enterprise solutions, aimed at companies that don’t have the knowledge to install and deploy them themselves. The customer is billed based on the resources used for their service and the storage utilised for virtual machine images. Currently the images are not optimised, leaving the customer paying for extra resources. Our objective here at Wellness is that the customer only pays for the resources that are really needed. By taking advantage of the virtual machine image size reduction offered by ENTICE, we make our services more attractive, lower costs, improve competitiveness and reduce resource use. ENTICE helps us pass resource savings along to our customers, winning us new business and making our service users happy.

Budapest - Plenary Meeting

The team met up for their Plenary Meeting this January in Budapest to discuss and present the progress made so far, ahead of the next Commission review later in the year.

BEACON Meeting in Madrid

The team met recently in Madrid at the OpenNebula offices to discuss the final phase of the project. The team are happy to say that everything is on track, and they look forward to the OpenStack Summit in Boston in May and to the BEACON workshop which is part of the SmartCOMP conference in Hong Kong.


The call for papers for this workshop is still open, the deadline being April 9th.  
See more here: http://fenci2017.unime.it/

Rich Client Platform for the DIA-integrated Development

DICE focuses on quality assurance for data-intensive applications (DIA) developed through the Model-Driven Engineering (MDE) paradigm. The project aims to deliver methods and tools that help satisfy quality requirements in data-intensive applications by iteratively enhancing their architecture design. One component of the tool chain developed within the project is the DICE IDE, an Integrated Development Environment (IDE) that accelerates the development of data-intensive applications.

The Eclipse-based DICE IDE integrates most of the tools of the DICE framework and is the basis of the DICE methodology. As highlighted in the deliverable D1.1 State of the Art Analysis, no MDE IDE yet exists on the software market through which a designer can create models to describe and analyse data-intensive or Big Data applications and their underpinning technology stack. This is the motivation for defining the DICE IDE.

The DICE IDE is based on Eclipse, which is the de-facto standard for the creation of software engineering models based on the MDE approach. DICE customizes the Eclipse IDE with suitable plug-ins that integrate the execution of the different DICE tools, in order to minimize learning curves and simplify adoption. In this blog post we explain how the DICE tools introduced to the reader earlier have been integrated into the IDE. So, how is the DICE IDE built?

 

How is the DICE IDE built?

The DICE IDE is an application based on Eclipse. While the Eclipse platform is designed to serve as an open platform for tool integration, it is architected so that its components can be used to build almost any client application. The minimal set of plug-ins needed to build a rich client application is collectively known as the Rich Client Platform (RCP). Applications other than IDEs can be built using a subset of the platform. These rich applications are still based on a dynamic plug-in model, and the UI is built using the same toolkits and extension points. The layout and function of the workbench are under the fine-grained control of the plug-in developer.

An Eclipse application consists of several Eclipse components; as a developer you can extend the Eclipse IDE via plug-ins (components). Eclipse applications incorporate runtime features based on OSGi. In this runtime environment, you can create, update and remove features of your application using OSGi bundles (components).

The smallest unit of software that can be integrated into Eclipse is called a plug-in. The Eclipse platform allows the developer to extend Eclipse applications, such as the Eclipse IDE, with additional functionality via plug-ins.

Eclipse applications use a runtime based on a specification called OSGi. A software component in OSGi is called a bundle. An OSGi bundle is also always an Eclipse plug-in. Both terms can be used interchangeably.

The Eclipse IDE is basically an Eclipse RCP application to support development activities. Even core functionalities of the Eclipse IDE are provided via a plug-in. For example, both the Java and C development tools are contributed as a set of plug-ins. Therefore, the Java or C development capabilities are available only if these plug-ins are present.

The Eclipse IDE functionality is heavily based on the concept of extensions and extension points. For example, the Java Development Tools provide an extension point to register new code templates for the Java editor.

Via additional plug-ins you can contribute to existing functionality, for example by adding new menu or toolbar entries, or provide completely new functionality. You can also create completely new programming environments.

The minimum set of plug-ins required to create and run an Eclipse RCP application with a UI consists of "org.eclipse.core.runtime" and "org.eclipse.ui". Based on these components, an Eclipse RCP application must define the following elements (a minimal sketch follows the list):

  • Main program – an RCP main application class implementing the interface "IApplication". This class can be viewed as the equivalent of the main method of a standard Java application. Eclipse expects the application class to be registered via the extension point "org.eclipse.core.runtime.applications".
  • A Perspective – it defines the layout of your application and must be declared via the extension point "org.eclipse.ui.perspectives".
  • Workbench Advisor – an invisible technical component which controls the appearance of the application (menus, toolbars, perspectives, etc.).
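
To make this concrete, here is a minimal sketch of such an application class, assuming a standard Eclipse target platform with the two plug-ins above as dependencies; the package, class and perspective ids are illustrative and are not taken from the DICE code base. The class would be registered in plugin.xml under the "org.eclipse.core.runtime.applications" extension point, and the perspective under "org.eclipse.ui.perspectives".

// Minimal Eclipse RCP application sketch (illustrative names, not the DICE IDE code).
package org.example.rcp;

import org.eclipse.equinox.app.IApplication;
import org.eclipse.equinox.app.IApplicationContext;
import org.eclipse.swt.widgets.Display;
import org.eclipse.ui.IWorkbench;
import org.eclipse.ui.PlatformUI;
import org.eclipse.ui.application.WorkbenchAdvisor;

public class Application implements IApplication {

    @Override
    public Object start(IApplicationContext context) throws Exception {
        Display display = PlatformUI.createDisplay();
        try {
            // The workbench advisor controls the appearance of the application
            // (menus, toolbars, initial perspective, ...).
            int rc = PlatformUI.createAndRunWorkbench(display, new WorkbenchAdvisor() {
                @Override
                public String getInitialWindowPerspectiveId() {
                    // Perspective id declared via the "org.eclipse.ui.perspectives" extension point.
                    return "org.example.rcp.perspective";
                }
            });
            return rc == PlatformUI.RETURN_RESTART ? IApplication.EXIT_RESTART : IApplication.EXIT_OK;
        } finally {
            display.dispose();
        }
    }

    @Override
    public void stop() {
        // Ask a running workbench to close gracefully.
        if (!PlatformUI.isWorkbenchRunning()) {
            return;
        }
        IWorkbench workbench = PlatformUI.getWorkbench();
        workbench.getDisplay().asyncExec(workbench::close);
    }
}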

DICE Tools integration approaches

The Eclipse-based DICE IDE integrates most of the tools of the DICE framework. Due to the different nature of the tools, not all of them can be fully integrated within the IDE, so an alternative is needed. Some tools have their real execution environment outside the IDE (they are not Eclipse plug-ins), for instance on an external web site or an external server.

The DICE IDE offers two ways for a tool to be integrated:

  • Fully integrated
  • Externally integrated

Both approaches share a common integration component within the IDE. This component contributes a menu to the IDE, through which the user can interact with all the integrated tools (Figure 1).

 

Figure 1. The menu for a DICE tool in the DICE IDE.

External integration:

This approach is the easiest. It is used when the real execution environment of the tool is placed outside the IDE, for instance within an external server or web service.

The only requirement for this approach is the information needed to connect to the external application, typically a URL:

  • Protocol: HTTP or HTTPS
  • Server: the address of the server
  • Port: the port where the server remains available
  • Parameters: possible parameters to be passed when the web service is visited (user id, token…)

A plug-in implements an abstract mechanism offered to all of the tools that prefer this kind of integration. It opens the internal web browser of Eclipse on the given page, allowing the user to access the tool from within the IDE. An example of such an integration is given in Figure 2 with the DICE Monitoring tool.
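
As a rough illustration of this mechanism, the sketch below builds a URL from such properties and opens it in the Eclipse internal browser through the standard workbench browser support API; the class name, browser id and labels are illustrative and not the actual DICE plug-in code.

// Illustrative sketch: open an externally integrated tool in the Eclipse internal browser.
package org.example.dice.integration;

import java.net.URL;

import org.eclipse.ui.PlatformUI;
import org.eclipse.ui.browser.IWebBrowser;
import org.eclipse.ui.browser.IWorkbenchBrowserSupport;

public final class ExternalToolOpener {

    /** Builds the tool URL from the configured properties and opens it inside the IDE. */
    public static void open(String protocol, String server, int port, String parameters)
            throws Exception {
        URL url = new URL(protocol + "://" + server + ":" + port + "/?" + parameters);

        IWorkbenchBrowserSupport support = PlatformUI.getWorkbench().getBrowserSupport();
        IWebBrowser browser = support.createBrowser(
                IWorkbenchBrowserSupport.AS_EDITOR,  // show inside the workbench, not the OS browser
                "org.example.dice.monitoring",       // browser instance id (illustrative)
                "Monitoring Tool",                   // tab title
                "DICE Monitoring web UI");           // tooltip
        browser.openURL(url);
    }
}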

 

Figure 2. Example of Monitoring Tool, an external tool integration.

The IDE also provides an abstract Eclipse Preferences page that allows the user to modify these properties (Figure 3). In this way, the configuration of an externally integrated tool can be modified dynamically by the user if needed.

 

Figure 3. Example of Monitoring Tool external web service configuration.

Full integration:

This approach requires more effort from the tool owner, as it involves developing fully functional support within the IDE that allows the user to interact with the tool and perform all the needed operations.

Some Eclipse development skills are required. There are many Eclipse tutorials available on the Internet that explain how to develop Eclipse plug-ins and contribute new functionality to the IDE, such as wizards, dialogs, launchers and views.

Depending on how complex the tool is, it will be more or less difficult to integrate it within the IDE.

Figure 4 shows an example of a fully integrated tool, in this case the Simulation tool.

 

Figure 4. An example of the Simulation Tool, a fully integrated tool.

Conclusions

This post described the basic features of the DICE IDE, in particular the two integration patterns provided by the integrated environment, and examples of integrated DICE Tools. Due to the different nature of the tools, not all of them can be fully integrated within the IDE. All tools, independently of the integration approach used, are accessible through the menu.

The IDE was released in January 2017 on GitHub as part of the DICE Knowledge Repository. A complete tutorial and a YouTube channel allow any interested designer, administrator, quality engineer or system architect to get started quickly with the IDE.

Christophe Joubert, Ismael Torres (PRO)


Securing federated cloud networks using Service Function Chaining

Sébastien Dupont - CETIC

Software-defined networking (SDN), network function virtualization (NFV) and service function chaining (SFC) technologies enable more advanced and flexible cloud federation mechanisms. In this blog post, we show how to use these technologies in federated clouds to improve security.

Protecting network overlays using Service Function Chaining

Cloud network security can be significantly improved by composing network functions such as firewalls, intrusion detection, deep packet inspection, etc. The image below illustrates how data flows through different paths depending on network security policies.

 

What about protecting federated networks?

SFC and NFV provide a way to secure each individual network inside a cloud federation. The following figure shows two federated networks belonging to different clouds that are protected using SFC/NFV. Each cloud administrator manages its own network security policy, and an additional global federated network security policy is applied on top. For each cloud, the intra-cloud inbound and outbound traffic goes through a series of virtualised network functions.

 

 

Protecting an OpenStack federation with SFC/NFV

The OpenStack Heat project provides a template-based orchestration mechanism, formalised in YAML (YAML Ain’t Markup Language), that can be extended to support SFC network security policies. The TOSCA project proposes a service manifest specification for NFV, which can be translated into Heat.

 

 

We are currently investigating two OpenStack components to protect an OpenStack cloud federation: Tacker for the NFV management and networking-sfc for the NFV orchestration.

Case studies

SFC/NFV Encryption

In this scenario we consider three clouds, where the connection to one of them is untrusted. To secure the communications, we can add encryption and decryption at the network level using dedicated SFC/NFV.

 

Here is an extract of the service manifest that describes the global security policy:

 

SFC/NFV Encryption and Deep Packet Inspection

Some network functions should be applied asynchronously to avoid slowing down the traffic. In this scenario, the encryption and firewalling operations are done synchronously, because the security system needs to respond directly when traffic goes through those network functions, whereas deep packet inspection (DPI) can be applied after the traffic has already gone through.

 

References

Philippe Massonet, Anna Levin, Massimo Villari, Sébastien Dupont and Arnaud Michot: Enforcement of Global Security Policies in Federated Cloud Networks with Virtual Network Functions. NCA 2016.

Philippe Massonet, Sébastien Dupont, Arnaud Michot, Anna Levin, Massimo Villari: An architecture for securing federated cloud networks with Service Function Chaining. ISCC 2016: 38-43

Philippe Massonet, Anna Levin, Antonio Celesti, Massimo Villari: Security Requirements in a Federated Cloud Networking Architecture. ESOCC Workshops 2015: 79-88

Formal Verification of Data-Intensive Applications with Temporal Logic

Besides functional aspects, designers of Data-Intensive Applications have to consider various quality aspects that are specific to applications processing huge volumes of data with high throughput and running on clusters of (many) physical machines. A broad set of non-functional aspects in the areas of performance and safety should be included at an early stage of the design process to guarantee high-quality software development.

Evaluating the correctness of such applications, when functional and non-functional aspects are both involved, is definitely not trivial. In the case of Data-Intensive Applications, the inherent distributed architecture, the software stratification and the computational paradigm implementing the logic of the applications pose new questions about the criteria that should be considered when evaluating their correctness.

 

Data-intensive applications are commonly realized through independent computational nodes that are managed by a supervisor providing resource allocation and node synchronization functionality. Message exchange is guaranteed by an underlying network infrastructure over which the (data-intensive) framework might implement suitable mechanisms to guarantee correct message transfer among the nodes. The logic of the application is the tip of the iceberg of a very complex software architecture which the developer cannot completely govern. Between the application code and the deployed running executables there are many interconnected layers, offering abstractions and automatic control mechanisms, that are not visible to the developers (such as, for instance, the containerization mechanisms, the cluster manager, etc.).

Besides the architectural aspects of the framework, the functionality of data-intensive applications requires, in some cases, a careful analysis of the notion of correctness adopted to evaluate the outcomes. This is the case, for instance, of streaming applications. The functionality of streaming applications is defined through the combination and concatenation of operations on streams, i.e., infinite sequences of messages originated from external data sources or by the computational nodes constituting the application. The operations can transform a stream into a new stream or can aggregate a result by reducing a stream into data. Sometimes, the operations are defined over portions of streams, called windows, that partition the streams on the basis of specific grouping criteria of the messages in the stream. The complexity and the variety of parameters defining the operations make the definition of the streaming transformation semantics not obvious and the assessment of their correctness far from being trivial.

In DICE, the evaluation of correctness concerns “safety” aspects of data-intensive applications. Verification of safety properties is done automatically by means of a model checking analysis that the designer performs at design time. The formal abstraction which models the application behavior is first extracted from the application UML diagrams and then verified to check for the existence of incorrect executions, i.e., executions that do not conform to specific criteria identifying the required behavior. Time and the ordering relation among the events of the application are the main aspects characterizing the formalism used for verification, which is based on specific extensions of Linear Temporal Logic (LTL). As already pointed out, since the technological framework affects the definition of correctness to be adopted for evaluating the final application, the formal modeling devised for DICE verification combines an abstraction of functional aspects with a simplified representation of the computational paradigm adopted to implement the application.

DICE verification is carried out by D-verT and focuses on Apache Storm and (soon) Spark, two baseline technologies for streaming and batch applications. The computational mechanism they implement is captured by means of logical formulae that, when instantiated for a specific DTSM application model, represent the executions of the Storm (or Spark) application. The analyses that the user can perform from the DICE IDE are bottleneck analysis of Storm applications and worst-case time analysis of Spark applications (the latter is a work in progress).

In the first case, the developer can check for the existence of a node of the Storm application that cannot process the incoming workload in a timely manner. Such a node is likely to be a bottleneck for the application and might cause memory saturation and degrade the overall performance. In the second case, the developer can perform a worst-case analysis of the total time span required by a Spark application to complete a job. The overall job time, which must meet a given deadline at runtime, is evaluated on the basis of task time estimates, for the physical resources available in the cluster, which must be known before running the verification.
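
As a purely illustrative example of the kind of property involved (this is not the exact encoding used by D-verT), a bounded-response requirement on a Storm node can be written in a metric extension of LTL as

\[ \mathbf{G}\,\bigl(\mathit{emit}_j \rightarrow \mathbf{F}_{\leq \alpha}\, \mathit{process}_j \bigr) \]

read as: globally, whenever a tuple is emitted towards node j, it is processed within α time units. A node for which no finite bound α can be established is a candidate bottleneck in the sense described above.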

Details about verification techniques can be found in Deliverable D3.5 – Verification tool Initial Version and on the DICE Github repository.

Related material:

  1. Francesco Marconi, Marcello M. Bersani, Madalina Erascu, Matteo Rossi:
    Towards the Formal Verification of Data-Intensive Applications Through Metric Temporal Logic. ICFEM 2016
  2. Francesco Marconi, Marcello Maria Bersani and Matteo Rossi: Formal Verification of Storm Topologies through D-verT. SAC 2017

Marcello M. Bersani and Verification team (PMI)

ENTICE & TEDX - Radu Prodan: The Dark, Disruptive Side of the Cloud

In our latest blog we look back at a recent TEDx talk from the ENTICE Scientific Coordinator, Radu Prodan, in which he provides insight into cloud technology, its historical development, how clouds are interconnected today and what possibilities there are for the future.

Radu Prodan is a trained engineer and Doctor of Technical Sciences, and Technical Coordinator of the ENTICE project. This talk discusses the mysterious Clouds as today’s de-facto interconnection, storage, and computing paradigm, gathering billions of devices spread around the globe. 

This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at http://ted.com/tedx

ENTICE: 5th Cloud Assisted Conference

In collaboration with Slovenia’s Chamber of Commerce, the University of Ljubljana and the ENTICE project are co-organising the 5th Cloud Assisted Conference on November 9th, 2016. The programme and the presentations are available online.

At the CLASS 2016 event, several projects of Slovenia’s €55 million Smart Specialization funding programme are presented, along with the results of Horizon 2020 projects related to smart cities, homes, communities, eHealth and Industry 4.0.

If you want to know more about ENTICE then why not take a look at our excellent commercial use cases?

Performance and Reliability in DIA Development

Worried about the performance and reliability of your data-intensive application?

Capgemini research shows that only 13% of organizations have achieved full-scale production for their Data-Intensive applications (DIA). In particular, the research refers to applications using Big Data implementations such as Hadoop MapReduce, Apache Storm or Apache Spark. Apart from the correct deployment and optimization of a DIA, software engineers face the problem of achieving performance and reliability requirements. A framework that helps guarantee these requirements in the very early phases of development could therefore be of great help, especially because in later phases the ecosystem of a cluster is not completely controllable. Predictions of throughput, service times or scalability under varying numbers of users, workloads, network traffic or failures are therefore needed. Within the DICE project, the Simulation tool has been developed to help achieve that.

 

If you are looking for a quality-driven framework for DIA development, the Simulation tool [1] of the DICE project can be your choice. This tool makes it easy to simulate the behavior of the system prior to deployment, giving you a testbed for the performance assessment of the DIA. The Simulation tool features:

  • Prediction of performance metrics: throughput, utilization or service time;
  • Detection of performance bottlenecks;
  • Detection of reliability issues.

Once the software developers get the simulation results, they can consequently configure, adapt, or optimize their DIA to the specific execution context. The Simulation tool offers a modeling environment integrated within the Papyrus Eclipse tool. It guides the software developer through the design and analysis phases. The Simulation tool covers all the steps of a simulation workflow, as follows:

  1. The modeling with high-level description languages, in particular UML, using a novel profile for describing the parameters and characteristics of the system,
  2. the transformation to performance models, specifically to Stochastic Petri Nets, that are suitable for prediction, and
  3. last but not least, the analysis of the model and the retrieval of the performance results.

The following image offers an overview of the simulation workflow with the internal tools, modules, and configurations. The transformation from UML to a Stochastic Petri Net is done by a model-to-model (M2M) transformation using the QVTo language. The Stochastic Petri Net is then analyzed by the GreatSPN tool, which produces the performance results.

 

The Simulation Tool has been integrated within the DICE IDE, but it can also be used as a stand-alone application. Currently, the Simulation tool supports platform-independent models as well as the Storm technology. We plan to extend the technology support to Apache Spark, Tez and Hadoop in the following releases. For more details about the Simulation tool, please visit our Github page.

José Merseguer, José I. Requeno and Diego Pérez (ZAR)

References:

[1] A. Gómez, C. Joubert and J. Merseguer. A Tool for Assessing Performance Requirements of Data-Intensive Applications. XXIV Jornadas de Concurrencia y Sistemas Distribuidos (JCSD 2016).

BEACON's Federated SDN

This blog post by OpenNebula Systems outlines the features of the Federated SDN in BEACON and how it is structured.

BEACON is all about securely federating networks across cloud infrastructures. The Federated SDN is the software component that builds a Federated Network by aggregating two or more Federated Network Segments. It features an API for Federated Network definitions, and uses adapters to talk to the federation agents' APIs in different cloud infrastructures as well as to the Cloud Management Platforms (CMPs). It is in charge of cross-site networking and managing federated networks, and as such addresses the "Management of L2 overlays" software requirement of the project.

 

This component features a REST interface that exposes the functionality of the core component, which manages pools for the different data objects representing the networking infrastructure being federated. A database is used to persist the data model, and a well-defined API allows interaction between the Federated SDN core and the underlying clouds through different adapters for OpenStack- and OpenNebula-based infrastructures. A high-level view of this component's architecture is depicted in the following figure.

 

The Federated SDN features four first-class data citizens: the federated network and federated segment objects, the tenant representation and the abstractions of the different cloud sites. To interact with the different clouds that need to be federated at the network level, the Federated SDN features cloud adapters; initially two adapters, for OpenNebula and OpenStack, have been developed. Each adapter is composed of a set of scripts that receive parameters on standard input and return results on standard output, as sketched below.
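
As a rough sketch of that adapter contract, the skeleton below reads key=value parameters from standard input and writes a result to standard output; the parameter names, the key=value format and the output keys are assumptions for illustration, not the project's actual interface.

// Illustrative adapter skeleton: parameters in on stdin, results out on stdout.
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class CreateSegmentAdapter {

    public static void main(String[] args) throws Exception {
        // Read key=value parameters passed by the Federated SDN core (format assumed here).
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String endpoint = null, tenant = null, networkId = null;
        String line;
        while ((line = in.readLine()) != null && !line.isEmpty()) {
            String[] kv = line.split("=", 2);
            if (kv.length != 2) {
                continue; // ignore malformed lines
            }
            switch (kv[0]) {
                case "endpoint":   endpoint = kv[1];  break;
                case "tenant":     tenant = kv[1];    break;
                case "network_id": networkId = kv[1]; break;
                default:           break; // ignore unknown parameters
            }
        }

        // A real adapter would now call the OpenNebula or OpenStack API at 'endpoint'
        // on behalf of 'tenant' to create the federated network segment; this sketch
        // only fabricates an identifier to show the output side of the contract.
        String segmentId = "segment-" + networkId;

        System.out.println("status=success");
        System.out.println("segment_id=" + segmentId);
    }
}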

Using Apache Storm for Trend Detection in the Social Media

As is widely known, especially in the media industry, messages posted on social media contain valuable information related to events and trends in the real world. Industries and brands that analyze social media gain valuable insights and information which they use in a number of operations.

For example, in the news industry, trend detection is useful for:

  • identifying emerging news based on the popularity of a certain topic and
  • defining areas of great public interest that should be closely monitored as even a small development affects many people and leads to emerging news.

 

As another example, in the financial sector, trends may have both short-term and long-term consequences, affecting everything from daily stock prices to a country’s macroeconomic indicators. For instance, a trend demanding military action in the Middle East as a result of a terrorist attack may affect oil prices and subsequently decrease car sales.

To this end, and taking into account the large scale of that type of content, it is essential to develop methods for efficient trend detection in real-time.

For example, in recent years the pace of decision-making in breaking-news journalism has significantly increased. This is due to the multiplication of digital sources and incoming data streams, digital production processes, automation, real-time publishing and largely mobile news audiences.

The Storm topology in the following figure is a first sketch for the implementation of a known trend detection method in a distributed manner. The method is a feature-pivot method that analyzes the temporal distributions of words and discovers trends by grouping trending keywords together.

 

There are different possible inputs to the topology: Candidate spouts include the Twitter streaming API and queues that inject messages into the topology (Redis, Apache Kafka). The first processing bolt is responsible for the extraction of entities and keywords from the incoming messages.

Trivial keywords (e.g. stop-words) are discarded while the rest are forwarded to the next bolt. The Timeline Generation bolt aggregates tuples of keyword–timestamp pairs and creates a set of statistics for each keyword. In other words, this bolt calculates a background model of expected frequencies based on historical data. Tuples associated with the same keyword are aggregated in the same worker of the Timeline Generation bolt, in a similar fashion as in map-reduce.

The resulting baseline model is forwarded to the next bolt each time there is an update. Then, the Bursty Keywords Detection bolt compares current frequencies to the baseline model and detects keywords, for which their difference is extraordinary.

Finally, the detected bursty keywords are clustered together in the final bolt of the topology based on keywords co-occurrences. The extracted trends are stored in a database.
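
As a rough wiring sketch of this pipeline, the topology below connects the spout and bolts described above using the Storm 1.x Java API; the spout and bolt class names are hypothetical stand-ins for the components discussed in this post, and the parallelism hints are arbitrary.

// Hypothetical wiring of the trend-detection topology described above (Storm 1.x API).
// TwitterStreamSpout and the *Bolt classes are stand-ins for the components in the text.
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.tuple.Fields;

public class TrendDetectorTopology {

    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();

        // Input: messages from the Twitter streaming API or a queue (Redis, Apache Kafka).
        builder.setSpout("messages", new TwitterStreamSpout(), 1);

        // Extract entities and keywords; stop-words are discarded inside the bolt.
        builder.setBolt("keywords", new KeywordExtractionBolt(), 4)
               .shuffleGrouping("messages");

        // Group by keyword so tuples for the same keyword reach the same worker
        // (map-reduce style) and update that keyword's timeline and background model.
        builder.setBolt("timelines", new TimelineGenerationBolt(), 4)
               .fieldsGrouping("keywords", new Fields("keyword"));

        // Compare current frequencies against the baseline model to detect bursts.
        builder.setBolt("bursts", new BurstyKeywordDetectionBolt(), 2)
               .shuffleGrouping("timelines");

        // Cluster bursty keywords by co-occurrence and store the resulting trends.
        builder.setBolt("trends", new TrendClusteringBolt(), 1)
               .shuffleGrouping("bursts");

        // Run locally for demonstration purposes.
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("trend-detector", new Config(), builder.createTopology());
    }
}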

We are currently conducting experiments on this Trend Detector topology and trying out changes that may improve the quality of results.

The flexiOPS Use Case

“I see only murk. Murk outside; murk inside. I hope, for everyone’s sake, the scanners do better.”— from A Scanner Darkly, by Philip K Dick

In this post, flexiOPS developer Andrew Phee details the implementation of the flexiOPS Use Case for the BEACON project.

The BEACON Use Case involves using an open source security scanner to highlight security limitations of Virtual Machine (VM) deployments. The scanner is configured to support scanning of VMs from multiple cloud platforms. The scanner that was chosen for use was OpenVAS, a powerful open source vulnerability scanning framework.

The overall result of the work carried out is that in the case where a new VM is created on the platform, it is scanned by OpenVAS for security vulnerabilities. Next, the generated security report is emailed to the VM owner, and a firewall is created and applied to the VM for additional security measures.

The FCO platform provides a mechanism known as a trigger: a program that “allows an action in Flexiant Cloud Orchestrator to initiate a second action”[1]. In this case, the code is executed whenever a new VM is created on a specific customer account. The trigger code's resulting action first launches the new VM into a running state, then uses the client socket executable located on the FCO management box to send the VM details (IP, UUID, etc.) to the vulnerability scanner listener program located on a separate server.

Assuming the vulnerability scanner is listening properly, it receives the VM details and uses them to build commands to be sent to the OpenVAS deployment. OpenVAS then performs actions based on the received commands. The main task performed by OpenVAS is to carry out a security vulnerability scan on the VM which was newly created at the beginning of the process. This scan generates a report, which provides insight into how vulnerable to security attackers the VM is.
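
A minimal sketch of such a listener is shown below, assuming a simple comma-separated payload of IP, UUID and owner email; the listening port, the message format and the point at which the OpenVAS commands would be issued are illustrative and not the actual flexiOPS implementation.

// Illustrative listener skeleton for receiving VM details sent by the FCO trigger.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

public class VulnScanListener {

    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(9999)) {   // listening port is illustrative
            while (true) {
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()))) {

                    // Assumed payload from the trigger, e.g. "10.0.0.5,<vm-uuid>,owner@example.com"
                    String payload = in.readLine();
                    if (payload == null) {
                        continue;
                    }
                    String[] vm = payload.split(",");
                    String ip = vm[0], uuid = vm[1], ownerEmail = vm[2];

                    // Here the real listener builds the OpenVAS scan commands for the VM,
                    // emails the resulting report to the owner and applies the generic
                    // firewall on the FCO platform (details omitted from this sketch).
                    System.out.printf("Received VM %s (%s), report will go to %s%n",
                            uuid, ip, ownerEmail);
                }
            }
        }
    }
}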

This report is sent to the customer email associated with the account used to create the VM. This could potentially be extremely useful for a VM owner, as they can use the report to understand exactly where the security failings are on their VM and make improvements accordingly.

Finally, the vulnerability scanner listener creates a generic firewall on the FCO platform, and applies it to the VM. While not specifically configured to address the security problems highlighted by the OpenVAS scan report, it nonetheless provides an additional security layer for the VM.

This process helps provide immediate security improvements in the form of creating and applying a firewall to the VM. Possible future improvements are also feasible, as the VM owner has the OpenVAS report which highlights areas in which the security of the VM can be improved.

References

[1]. http://docs.flexiant.com/display/DOCS/Triggers