
Thursday, June 14, 2018

Securing Containers and Microservices with HPE ProLiant Servers



Cloud-native software built on container technologies and microservices architectures is rapidly modernizing applications and infrastructure, and containers are the preferred means of deploying microservices. Cloud-native applications and infrastructure require a radically different approach to security. Cloud-native applications commonly employ a service-oriented architecture based on microservices; these microservices run in containers, and each container has to be individually secured.

This calls for new ways to secure applications. One needs to start with a comprehensively secure infrastructure, a container management platform, and tools to secure cloud-native software in order to address this new security paradigm.

This article proposes one such solution: running VMware Instantiated Containers on HPE ProLiant Gen10 DL 325 & DL 385 servers with AMD EPYC processors can address these security challenges.

HPE ProLiant Gen10 DL 325 & DL 385 servers with AMD EPYC processors provide a solid security foundation. HPE's silicon root of trust, a FIPS 140-2 Level 1 certified platform, and AMD's Secure Memory Encryption together form the foundation layer of a secure IT infrastructure.

About AMD EPYC Processor

The AMD EPYC processor is AMD's x86-architecture server processor, designed to meet the needs of today's software-defined data centers. The AMD EPYC SoC bridges the gaps with innovations designed from the ground up to efficiently support existing and future data center requirements.

The AMD EPYC SoC brings a new balance to the data center. The highest core count in an x86-architecture server processor, the largest memory capacity, the most memory bandwidth, and the greatest I/O density are brought together in the right ratios to deliver the best performance.

AMD Secure Memory Encryption

The AMD EPYC processor incorporates a hardware AES encryption engine for inline encryption and decryption of DRAM. The AMD EPYC SoC uses a 32-bit microcontroller (ARM Cortex-A5) that provides cryptographic functionality for secure key generation and key management.
Encrypting main memory keeps data private from malicious intruders who have access to the hardware, so Secure Memory Encryption protects against physical memory attacks. A single key is used to encrypt system memory, and it can be used on systems running VMs or containers. The hypervisor chooses which pages to encrypt via page tables, giving users control over which applications use memory encryption.

Secure Memory Encryption allows running a secure OS/kernel so that encryption is transparent to applications, with minimal performance impact. Other hardware such as storage, network, and graphics cards can access encrypted pages seamlessly through Direct Memory Access (DMA).
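As a hedged illustration, the minimal Python sketch below checks whether a Linux host advertises AMD SME support and whether memory encryption was requested on the kernel command line. The flag and parameter names shown (`sme`, `mem_encrypt=on`) are the ones used by mainline Linux, but they may vary with kernel version and vendor configuration.

```python
"""Quick host check for AMD Secure Memory Encryption (SME) on Linux.

Assumptions (hedged): mainline Linux exposes the CPU feature as the
"sme" flag in /proc/cpuinfo, and SME is activated with the
mem_encrypt=on kernel parameter. Adjust for your kernel and distro.
"""

def cpu_supports_sme(cpuinfo_path="/proc/cpuinfo"):
    # Look for the "sme" feature flag advertised by AMD EPYC CPUs.
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                return "sme" in line.split()
    return False

def sme_requested(cmdline_path="/proc/cmdline"):
    # SME is typically enabled at boot with mem_encrypt=on.
    with open(cmdline_path) as f:
        return "mem_encrypt=on" in f.read().split()

if __name__ == "__main__":
    print("CPU advertises SME :", cpu_supports_sme())
    print("mem_encrypt=on set :", sme_requested())
```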

VMware virtualization solutions

VMware virtualization solutions, including NSX-T, NSX-V, and vSAN, along with VMware Instantiated Containers, provide network virtualization that includes security in the form of micro-segmentation and a virtual firewall for each container to provide runtime security.

Other VMware components include vRealize Suite for continuous monitoring and container visibility. This enhanced visibility helps in automated detection, prevention & response to security threats.

Securing container builds and deployment

Security starts at the build and deploy phase. Only tested and approved builds are held in the container registry, and all container images used for production deployment come from that registry. Each container image has to be digitally verified prior to deployment: signing images with private keys provides cryptographic assurance that each image used to launch containers was created by a trusted party.
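To make the idea concrete, here is a minimal, hedged Python sketch of the verification step, using the widely available `cryptography` package with a hypothetical Ed25519 key pair and image digest. Real registries typically wrap this in tooling such as Docker Content Trust or cosign rather than hand-rolled code.

```python
"""Verify that an image digest was signed by a trusted publisher.

Hypothetical illustration: the key file names and the idea of signing
the raw image digest string are assumptions; production setups normally
rely on registry-integrated signing tools instead.
"""
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.serialization import load_pem_public_key

def image_signature_is_valid(image_digest: str, signature: bytes,
                             public_key_pem: bytes) -> bool:
    # Load the publisher's Ed25519 public key, distributed out of band.
    public_key = load_pem_public_key(public_key_pem)
    try:
        # verify() raises InvalidSignature if the digest was not signed
        # by the matching private key.
        public_key.verify(signature, image_digest.encode())
        return True
    except InvalidSignature:
        return False

# Example call (hypothetical file names and digest):
# ok = image_signature_is_valid(
#     "sha256:9f86d081884c7d659a2feaa0c55ad015...",
#     open("image.sig", "rb").read(),
#     open("publisher_ed25519.pub", "rb").read(),
# )
```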

Harden and restrict access to the host OS. Since containers running on a host share the same OS, it is important to ensure that they start with an appropriately restricted set of capabilities. This can be achieved using platform security features such as secure boot and Secure Memory Encryption.

Secure the data generated by containers. Data encryption starts at the memory level, even before data is written to disk. Secure Memory Encryption on HPE DL 325 & 385 servers integrates seamlessly with vSAN, so that all data is encrypted in line with standards such as FIPS 140-2. In addition, kernel security features and modules such as seccomp, AppArmor, and SELinux can be used.
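The following sketch shows one way such restrictions can be applied at launch time using the Docker SDK for Python. The image name and seccomp profile path are hypothetical, and the options shown (dropping capabilities, a seccomp profile, a read-only root filesystem) are illustrative rather than a complete hardening recipe.

```python
"""Launch a container with a restricted set of capabilities.

Illustrative only: "myorg/payments:1.4.2" and "seccomp-default.json"
are hypothetical; tune the options to your workload.
"""
import docker

client = docker.from_env()

container = client.containers.run(
    "myorg/payments:1.4.2",              # signed, approved image from the registry
    detach=True,
    read_only=True,                      # immutable root filesystem
    cap_drop=["ALL"],                    # drop every Linux capability...
    cap_add=["NET_BIND_SERVICE"],        # ...then add back only what is needed
    security_opt=[
        "no-new-privileges",             # block privilege escalation via setuid
        "seccomp=seccomp-default.json",  # restrict the syscall surface
    ],
)
print("started", container.short_id)
```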

Specify application-level segmentation policies.  Network traffic between microservices can be segmented to limit how they connect to each other. However, this needs to be configured based on application-level attributes such as labels and selectors, abstracting away the complexity of dealing with traditional network details such as IP addresses. The challenge with segmentation is having to define policies upfront that restrict communications without impacting the ability of containers to communicate within and across environments as part of their normal activity.
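If the microservices happen to run on Kubernetes (an assumption; the article does not mandate an orchestrator), label-based segmentation can be expressed as a NetworkPolicy. The hedged sketch below uses the official Kubernetes Python client with hypothetical labels (`app=payments`, `app=orders`) to allow ingress to the payments service only from the orders service.

```python
"""Label-based segmentation sketch using the Kubernetes Python client.

Assumptions: a reachable cluster configured in ~/.kube/config, a
"shop" namespace, and hypothetical app labels "payments" / "orders".
"""
from kubernetes import client, config

config.load_kube_config()

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="payments-allow-orders"),
    spec=client.V1NetworkPolicySpec(
        # Select the pods this policy protects by label, not by IP address.
        pod_selector=client.V1LabelSelector(match_labels={"app": "payments"}),
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                _from=[
                    client.V1NetworkPolicyPeer(
                        pod_selector=client.V1LabelSelector(
                            match_labels={"app": "orders"}
                        )
                    )
                ]
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(
    namespace="shop", body=policy
)
```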

Securing containers at runtime

Runtime phase security encompasses all the functions—visibility, detection, response, and prevention—required to discover and stop attacks and policy violations that occur once containers are running. Security teams need to triage, investigate, and identify the root causes of security incidents in order to fully remediate them. Here are the key aspects of successful runtime phase security:

Instrument the entire environment for continuous visibility.  Being able to detect attacks and policy violations starts with being able to capture all activity from running containers in real time to provide an actionable "source of truth." Various instrumentation frameworks exist to capture different types of container-relevant data. Selecting one that can handle the volume and speed of containers is critical.

Correlate distributed threat indicators.  Containers are designed to be distributed across compute infrastructure based on resource availability. Given that an application may consist of hundreds or thousands of containers, indicators of compromise may be spread out across large numbers of hosts, making it harder to pinpoint those that are related as part of an active threat. Large-scale, fast correlation is needed to determine which indicators form the basis for particular attacks.

Analyze container and microservices behavior. Microservices and containers enable applications to be broken down into minimal components that perform specific functions and are designed to be immutable. This makes it easier to understand normal patterns of expected behavior than in traditional application environments. Deviations from these behavioral baselines may reflect malicious activity and can be used to detect threats with greater accuracy.

Augment threat detection with machine learning. The volume and speed of data generated in container environments overwhelms conventional detection techniques. Automation and machine learning can enable far more effective behavioral modeling, pattern recognition, and classification to detect threats with increased fidelity and fewer false positives. Beware solutions that use machine learning simply to generate static whitelists used to alert on anomalies, which can result in substantial alert noise and fatigue.
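As a hedged example of this kind of behavioral modeling, the sketch below trains an Isolation Forest (via scikit-learn) on per-container runtime features, hypothetical counters such as syscall rate and outbound connection count, and flags deviations from the learned baseline. A production system would use far richer telemetry and continuous retraining.

```python
"""Toy behavioral-anomaly detector for container telemetry.

Hypothetical feature set: [syscalls/sec, outbound connections/min, CPU %].
Real deployments would stream far richer signals.
"""
import numpy as np
from sklearn.ensemble import IsolationForest

# Baseline window: observations from containers behaving normally.
baseline = np.array([
    [120, 3, 12], [115, 4, 11], [130, 2, 14], [118, 3, 13],
    [125, 5, 12], [122, 3, 10], [119, 4, 12], [128, 2, 15],
])

model = IsolationForest(contamination=0.05, random_state=0).fit(baseline)

# New observations: the second row simulates a container suddenly
# opening many outbound connections (possible data exfiltration).
new_obs = np.array([[121, 3, 12], [124, 250, 13]])
labels = model.predict(new_obs)          # +1 = normal, -1 = anomaly

for row, label in zip(new_obs, labels):
    status = "ANOMALY" if label == -1 else "ok"
    print(status, row.tolist())
```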

Intercept and block unauthorized container engine commands. Commands issued to the container engine, e.g., Docker, are used to create, launch, and kill containers as well as run commands inside of running containers. These commands can reflect attempts to compromise containers, meaning it is essential to disallow any unauthorized ones.
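A full interception layer usually sits in an authorization plugin or a proxy in front of the engine API. As a lighter-weight, hedged illustration, the sketch below watches the Docker event stream with the Docker SDK for Python and raises an alert whenever someone execs into a running container; the event names follow the Docker API, but exact `Action` strings can differ between engine versions.

```python
"""Alert on interactive exec sessions observed in the Docker event stream.

Detection-only sketch: true blocking requires an authorization plugin
or a proxy in front of the engine API. Action strings may vary by
Docker version, hence the prefix match.
"""
import docker

client = docker.from_env()

for event in client.events(decode=True):
    if event.get("Type") != "container":
        continue
    action = event.get("Action", "")
    if action.startswith("exec_create"):
        attrs = event.get("Actor", {}).get("Attributes", {})
        print("ALERT: exec into container",
              attrs.get("name", "<unknown>"), "->", action)
```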

Automate actions for response and forensics. The ephemeral life spans of containers mean that they often leave very little information available for incident response and forensics. Further, cloud-native architectures typically treat infrastructure as immutable, automatically replacing impacted systems with new ones, meaning containers may be gone by the time of investigation. Automation can ensure information is captured, analyzed, and escalated quickly enough to mitigate the impact of attacks and violations.
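Continuing the hedged Docker SDK example above, a small responder can snapshot evidence the moment a container dies, before the scheduler replaces it. The file paths and the choice of artifacts (recent logs plus a filesystem export) are illustrative.

```python
"""Capture logs and a filesystem export when a container exits (sketch)."""
import docker

client = docker.from_env()

def capture_forensics(container_id: str) -> None:
    container = client.containers.get(container_id)
    # Preserve recent stdout/stderr before the container is removed.
    with open(f"/var/forensics/{container_id}.log", "wb") as f:
        f.write(container.logs(tail=1000))
    # Export the container filesystem as a tar stream for later analysis.
    with open(f"/var/forensics/{container_id}.tar", "wb") as f:
        for chunk in container.export():
            f.write(chunk)

for event in client.events(decode=True):
    if event.get("Type") == "container" and event.get("Action") == "die":
        capture_forensics(event["id"])
```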

Closing Thoughts

Faced with these new challenges, security professionals will need to build on a new, secure IT infrastructure that supports the required levels of security for their cloud-native technologies. A secure IT infrastructure must address the entire lifecycle of cloud-native applications: build/deploy and runtime. Each of these phases has a different set of security considerations, and all of them must be addressed to form a comprehensive security program.

Thursday, January 18, 2018

Software Defined Security

 
In the virtual world an organization might have thousands of virtual machines... The organization cannot manage them manually. That's where Software Defined Security comes in handy: it applies security policy uniformly and automatically across all environments.
 
With runtime virtualization, different containers/VMs can reside together on the same cloud infrastructure and still have different security protections; each application can have a different security profile. A software-based security solution helps automate secure deployments in clouds and allows protection to be customized per application.
 

In a hybrid cloud environment, applications can span multiple clouds and yet have uniform security settings and responses. Automating responses to security events minimizes damage, and automated monitoring increases vigilance.
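One common way to realize this is policy as code: the policy is data, and the same automated check runs against every workload regardless of which cloud hosts it. The sketch below is a deliberately simplified, hypothetical illustration; the workload inventory, field names, and rules are all invented.

```python
"""Apply one security policy uniformly across workloads in several clouds.

Everything here (the inventory, field names, and rules) is hypothetical;
real systems would pull inventory from each cloud's API and enforce,
not just report.
"""
POLICY = {
    "encryption_at_rest": True,
    "max_open_ports": 2,
    "required_tags": {"owner", "data-classification"},
}

WORKLOADS = [
    {"name": "web-frontend", "cloud": "aws",   "encryption_at_rest": True,
     "open_ports": [443], "tags": {"owner", "data-classification"}},
    {"name": "batch-legacy", "cloud": "azure", "encryption_at_rest": False,
     "open_ports": [22, 80, 8080], "tags": {"owner"}},
]

def violations(workload):
    found = []
    if POLICY["encryption_at_rest"] and not workload["encryption_at_rest"]:
        found.append("encryption at rest disabled")
    if len(workload["open_ports"]) > POLICY["max_open_ports"]:
        found.append("too many open ports")
    missing = POLICY["required_tags"] - workload["tags"]
    if missing:
        found.append("missing tags: " + ", ".join(sorted(missing)))
    return found

for w in WORKLOADS:
    issues = violations(w)
    status = "COMPLIANT" if not issues else "; ".join(issues)
    print(f"[{w['cloud']}] {w['name']}: {status}")
```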

Wednesday, November 29, 2017

Software Architecture for Cloud Native Apps


Microservices are a type of software architecture where large applications are made up of small, self-contained units working together through APIs that are not dependent on a specific language. Each service has a limited scope, concentrates on a particular task and is highly independent. This setup allows IT managers and developers to build systems in a modular way. 

Microservices are small, focused components built to do a single thing very well.

Componentization: 
Microservices are independent units that are easily replaced or upgraded. The units communicate through mechanisms such as remote procedure calls or web service requests.

Business capabilities: 
Legacy application development often splits teams into areas like the "server-side team" and the "database team." Microservices development is built around business capability, with responsibility for a complete stack of functions such as UX and project management.

Products rather than projects:
Instead of focusing on a software project that is delivered following completion, microservices treat applications as products of which they take ownership. They establish an ongoing dialogue with a goal of continually matching the app to the business function.

Dumb pipes, smart endpoints: 
Microservice applications contain their own logic. Resources that are often used are cached easily.

Decentralized governance:
Tools are built and shared to handle similar problems on other teams.
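To ground these characteristics, here is a minimal, hypothetical example of one such component: a single-purpose catalog service exposing a small JSON API with Flask. The endpoint, port, and data are illustrative; any language with an HTTP stack could implement the same contract.

```python
"""A deliberately tiny, single-purpose microservice (hypothetical example).

It owns one business capability (product catalog lookups), exposes it
over a language-agnostic JSON/HTTP API, and can be replaced or scaled
independently of every other service.
"""
from flask import Flask, jsonify, abort

app = Flask(__name__)

# In a real service this would live in the service's own datastore.
CATALOG = {
    "sku-1001": {"name": "Road Bike", "price": 899.00},
    "sku-1002": {"name": "Helmet", "price": 49.50},
}

@app.route("/products/<sku>")
def get_product(sku):
    product = CATALOG.get(sku)
    if product is None:
        abort(404)
    return jsonify(sku=sku, **product)

if __name__ == "__main__":
    app.run(port=5001)
```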

Problems microservices solve


Larger organizations run into problems when monolithic architectures cannot be scaled, upgraded or maintained easily as they grow over time.

Microservices architecture is an answer to that problem. It is a software architecture where complex tasks are broken down into small processes that operate independently and communicate through language-agnostic APIs.

Monolithic applications are made up of a user interface on the client, an application on the server, and a database. The application processes HTTP requests, gets information from the database, and sends it to the browser. Microservices handle HTTP requests and responses with APIs and messaging, responding with JSON/XML or HTML sent to the presentation components.

Microservices proponents rebel against enforced standards of architecture groups in large organizations but enthusiastically engage with open formats like HTTP, ATOM and others. As applications get bigger, intricate dependencies and connections grow. Whether you are talking about monolithic architecture or smaller units, microservices let you split things up into components. This allows horizontal scaling, which makes it much easier to manage and maintain separate components.

The relationship of microservices to DevOps

Incorporating new technology is just part of the challenge. Perhaps a greater obstacle is developing a new culture that encourages risk-taking and taking responsibility for an entire project "from cradle to crypt."

Developers used to legacy systems may experience culture shock when they are given more autonomy than ever before. Communicating clear expectations for accountability and performance of each team member is vital. DevOps is critical in determining where and when microservices should be utilized. It is an important decision because trying to combine microservices with bloated, monolithic legacy systems may not always work. Changes cannot be made fast enough. With microservices, services are continually being developed and refined on-the-fly.

DevOps must ensure updated components are put into production, working closely with internal stakeholders and suppliers to incorporate updates. Microservices are an easier solution than SOA, much like JSON was considered to be simpler than XML and people viewed REST as simpler than SOAP.

With Microservices, we are moving toward systems that are easier to build, deploy and understand. 

Thursday, November 16, 2017

Why Use Containers for Microservices?



Microservices deliver three benefits: speed to market, scalability, and flexibility.

Speed to Market
Microservices are small, modular pieces of software. They are built independently. As such, development teams can deliver code to market faster. Engineers iterate on features, and incrementally deliver functionality to production via an automated continuous delivery pipeline.

Scalability
At web scale, it's common to have hundreds or thousands of microservices running in production. Each service can be scaled independently, offering tremendous flexibility. For example, say you are running IT for an insurance firm. You may scale up enrollment microservices during a month-long open enrollment period. Similarly, you may scale member-inquiry microservices at a different time, e.g., during the first week of the coverage year, when you anticipate higher call volumes from subscribed members. This type of scalability is very appealing, as it directly helps a business boost revenue and support a growing customer base.
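Assuming these services run as Kubernetes Deployments (an assumption; any orchestrator with an API could play the same role), scaling one service independently is a one-call operation. The deployment names, namespace, and replica counts below are hypothetical.

```python
"""Scale one microservice independently of the others (hypothetical names).

Assumes Kubernetes Deployments and a kubeconfig on the local machine.
"""
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

def scale(deployment, replicas, namespace="insurance"):
    # Patch only the replica count; everything else about the service
    # (image, config, other services) is left untouched.
    apps.patch_namespaced_deployment_scale(
        name=deployment,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

# Open enrollment: grow enrollment capacity, leave member inquiry alone.
scale("enrollment-svc", 20)
# First week of the coverage year: the opposite.
scale("member-inquiry-svc", 15)
```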

Flexibility
With microservices, developers can make simple changes easily. They no longer have to wrestle with millions of lines of code. Microservices are smaller in scale. And because microservices interact via APIs, developers can choose the right tool (programming language, data store, and so on) for improving a service.

Consider a developer updating a security authorization microservice. The developer can choose to host the authorization data in a document store, which offers more flexibility for adding and removing authorizations than a relational database. If another developer wants to implement an enrollment service, they can choose a relational database as its backing store. New open-source options appear daily, and with microservices, developers are free to use new tech as they see fit.
Each service is small, independent, and follows a contract. This means development teams can choose to rewrite any given service, without affecting the other services, or requiring a full-fledged deployment of all services.

This is incredibly valuable in an era of fast-moving business requirements.

Sunday, November 12, 2017

Serverless Computing for Microservices


Microservices are a new architectural approach to developing software. Microservices are best defined as:

"Service Oriented Architecture composed of loosely coupled components that have clearly defined boundaries"

This can be interpreted as a set of software functions that work together based on predefined rules. For example, take a restaurant website. A typical restaurant website does not have high traffic throughout the day; traffic increases around lunch and dinner time, so hosting the website on a dedicated VM is a waste of resources. The website can also be broken down into a few distinct functions: the main webpage is the landing zone, and from there each section, such as Photos, Menu, or Location, can be another independent function. The user triggers these functions by clicking on the hyperlinks and is served the requested data.

This implies that the functions making up the website are loosely coupled or not coupled at all, and each function can be modified or updated independently. The business owner can therefore update any one section independently, without the need to bring down the entire website.

From a cost perspective too, building the website with Function-as-a-Service allows the business to pay only for actual usage, and each segment of the site can scale independently.
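As a hedged sketch of one such function, here is what the restaurant's Menu section might look like as a Python handler deployed to a FaaS platform such as AWS Lambda behind an HTTP trigger. The platform choice, handler shape, and menu data are assumptions for illustration.

```python
"""Hypothetical FaaS handler for the restaurant's "Menu" section.

Assumes an AWS-Lambda-style handler invoked via an HTTP trigger; other
FaaS platforms use a similar request/response shape.
"""
import json

MENU = [
    {"item": "Margherita Pizza", "price": 9.50},
    {"item": "Pasta Primavera", "price": 11.00},
    {"item": "Tiramisu", "price": 5.25},
]

def handler(event, context):
    # Runs only when a visitor clicks "Menu"; no VM sits idle overnight.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(MENU),
    }
```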

Monday, November 06, 2017

Run Big Data Apps on Containers with Mesosphere


Apache Mesos was created in 2009 at UC Berkeley. It was designed to run large-scale web apps like Twitter and Uber, can scale up to tens of thousands of nodes, and supports Docker containers.

Mesos is a distributed OS kernel:

  • Two level resource scheduling
  • Launch tasks across the cluster
  • Communication between tasks (like IPC)
  • APIs for building “native” applications (aka frameworks): program against the datacenter
  • APIs in C++, Python, JVM-languages, Go and counting
  • Pluggable CPU, memory, IO isolation
  • Multi-tenant workloads
  • Failure detection, easy failover, and HA

Mesos is a multi-framework platform solution with weighted fair sharing, roles, and more. It runs Docker containers alongside other popular frameworks such as Spark, Rails, and Hadoop, allowing users to run regular services and batch apps in the same cluster. Mesos has advanced scheduling based on resources, constraints, and a global view of resources, and it is designed for HA and self-healing.
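In a Mesosphere/Marathon setup, launching a Docker-based service on the cluster is typically a single REST call. The hedged sketch below posts a minimal app definition with Python's `requests`; the Marathon URL, app fields, and image are hypothetical, and field names can differ between Marathon versions.

```python
"""Launch a Docker container on a Mesos cluster via Marathon's REST API.

Hypothetical values throughout (URL, app id, image); consult your
Marathon version's API docs for the exact schema.
"""
import requests

MARATHON = "http://marathon.example.com:8080"

app = {
    "id": "/analytics/ingest",
    "cpus": 0.5,
    "mem": 512,
    "instances": 3,                       # Marathon spreads these across agents
    "container": {
        "type": "DOCKER",
        "docker": {"image": "myorg/ingest:2.1", "network": "BRIDGE"},
    },
}

resp = requests.post(f"{MARATHON}/v2/apps", json=app, timeout=10)
resp.raise_for_status()
print("deployment accepted:", resp.json().get("id"))
```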

Mesos is now proven at scale, battle-tested in production running some of the biggest web apps.

Thursday, April 13, 2017

Fintech is built on Microservices


Today, we have the web at our fingertips and are deeply connected to the Internet for most parts of our daily life. Whole business segments are moving from brick and mortar to the online space. That's not really big news - it is just called progress, and we observe it with more or less interest day in, day out. With digitalization also comes a shift in the nature of the services offered to us. Consumers will change, the service landscape will change, and banks will need to adapt - which means they will drastically change, too.

Customer centricity and brand value experiences are key for Banks

Banks are institutions that touch a multitude of business services, and they are trusted by their customers more than almost any other business. If banks make use of this trustworthiness and develop strong, intelligent branding, they have the chance not only to endure the great transformation of our service landscape, but to step out of it as the big winner.

Until then, banks will have to be open to tremendous structural alteration and put some serious effort into developing a better customer experience, because the changing service landscape brings increasingly sophisticated customers who are exposed to distractions from countless competing products and providers. If a bank can create an exceptional customer journey, it will rise again. If it cannot, it may just sink like a stone. Time to make a move, banks!

FinTech is now built on microservices

In the past two years, I have seen how fintech has changed finance and banking. In the next 20 years, banks will be a hub for financial services in one way or another, with these services developed in cooperation with, or solely by, third-party companies.

In retail finance, we can observe another trend: fragmentation. For ages, banks have been managing all the financial needs of everyone. Want to store your money securely? Bank. Need a loan for your house or your business? Bank. Need to transfer money? Bank. Especially these two areas of the banking business – lending and payment – have been subject to disruption by smaller competitors for some years now and banks have lost tremendous revenue to innovative and tech-savvy fintech companies. What used to be in the hands of very few institutions has been split up between numerous competitors.

There were only a handful of options for getting a loan ten years ago, most of them extremely time-consuming and involving loads of information about yourself. In this day and age you can choose between many online lenders using sophisticated algorithms to calculate which loan you qualify for, and you will receive the money only hours later. Payment has changed on several levels: not only are there several major service providers that integrate seamlessly into your preferred apps and services, but there are even alternative currencies such as Bitcoin and Ethereum based on blockchain technology.

Other areas will follow, and the finance market will arguably become increasingly fragmented, with more but smaller service providers offering more diverse and individual products. Finance serves as a great example of how such a change comes about, as the fintech scene is booming and most consumers have already used basic fintech products, whether they are aware of it or not.

Still, this is only the tip of the iceberg in fintech and analogous to this, many business segments are already evolving. Or are about to.

Bank will become a hub for multiple types of services


Banking and finance are changing, and so will insurance, healthcare, automotive, education, and even agriculture and legal services. Many areas are ripe for disruption. Through collaboration and partnerships, banks, insurers, healthcare providers, and others could build strong bonds with customers and build a strong brand.

Introduction to Microservices

Microservices are a type of software architecture where large applications are made up of small, self-contained units working together through APIs that are not dependent on a specific language. Each service has a limited scope, concentrates on a particular task and is highly independent. This setup allows IT managers and developers to build systems in a modular way. In his book, "Building Microservices," Sam Newman said microservices are small, focused components built to do a single thing very well.

Martin Fowler's "Microservices - a Definition of This New Architectural Term" is one of the seminal publications on microservices. He describes some of the key characteristics of microservices as:

Componentization: Microservices are independent units that are easily replaced or upgraded. The units communicate through mechanisms such as remote procedure calls or web service requests.

Business capabilities: Legacy application development often splits teams into areas like the "server-side team" and the "database team." Microservices development is built around business capability, with responsibility for a complete stack of functions such as UX and project management.

Products rather than projects: Instead of focusing on a software project that is delivered following completion, microservices treat applications as products of which they take ownership. They establish an ongoing dialogue with a goal of continually matching the app to the business function.

Dumb pipes, smart endpoints:
Microservice applications contain their own logic. Resources that are often used are cached easily.

Decentralized governance: Tools are built and shared to handle similar problems on other teams.



History of microservices

The phrase "Micro-Web-Services" was first used at a cloud computing conference by Dr. Peter Rodgers in 2005, while the term "microservices" debuted at a conference of software architects in the spring of 2011. More recently, they have gained popularity because they can handle many of the changes in modern computing, such as:

  • Mobile devices
  • Web apps
  • Containerization of operating systems
  • Cheap RAM
  • Server utilization
  • Multi-core servers
  • 10 Gigabit Ethernet


The concept of microservices is not new. Google, Facebook, and Amazon have employed this approach at some level for more than ten years. A simple Google search, for example, calls on more than 70 microservices before you get the results page. Also, other architectures have been developed that address some of the same issues microservices handle. One is called Service Oriented Architecture (SOA), which provides services to components over a network, with every service able to exchange data with any other service in the system. One of its drawbacks is the inability to handle asynchronous communication.

How microservices differ from service-oriented architecture

Service-oriented architecture (SOA) is a software design where components deliver services through a network protocol. This approach gained steam between 2005 and 2007 but has since lost momentum to microservices. As microservices began to move to the forefront a few years ago, a few engineers called it "fine-grained SOA." Still others said microservices do what SOA should have done in the first place.

SOA is a different way of thinking than microservices. SOA supports the Web Services Description Language (WSDL), which defines service endpoints rigidly and is strongly typed, while microservices have dumb connections and smart endpoints. SOA is stateless; microservices are stateful and use object-oriented programming (OOP) structures that keep data and logic together.
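The contrast is easy to see in code. Below is a hedged, hypothetical comparison: the same "get customer" lookup made against a WSDL-described SOAP endpoint (using the third-party `zeep` client) and against a REST/JSON microservice (using `requests`). The URLs, operation name, and fields are invented for illustration.

```python
"""SOAP/WSDL vs. REST/JSON: the same lookup, two styles (hypothetical endpoints)."""
import requests
from zeep import Client  # third-party SOAP client

# --- SOA style: the WSDL rigidly defines operations and typed messages.
soap = Client("https://legacy.example.com/CustomerService?wsdl")
customer = soap.service.GetCustomer(CustomerId="12345")

# --- Microservice style: a dumb pipe (plain HTTP) and a smart endpoint
#     that simply returns JSON.
resp = requests.get("https://api.example.com/customers/12345", timeout=5)
resp.raise_for_status()
customer_json = resp.json()

print(customer)
print(customer_json)
```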

Some of the difficulties with SOA include:

  • SOA is heavyweight, complex and has multiple processes that can reduce speed.
  • While SOA initially helped prevent vendor lock-in, it eventually wasn't able to move with the trend toward democratization of IT.

Just as CORBA fell out of favor when early Internet innovations provided a better option to implement applications for the Web, SOA lost popularity when microservices offered a better way to incorporate web services.

Problems microservices solve

Larger organizations run into problems when monolithic architectures cannot be scaled, upgraded or maintained easily as they grow over time. Microservices architecture is an answer to that problem. It is a software architecture where complex tasks are broken down into small processes that operate independently and communicate through language-agnostic APIs.

Monolithic applications are made up of a user interface on the client, an application on the server, and a database. The application processes HTTP requests, gets information from the database, and sends it to the browser. Microservices handle HTTP requests and responses with APIs and messaging, responding with JSON/XML or HTML sent to the presentation components. Microservices proponents rebel against enforced standards of architecture groups in large organizations but enthusiastically engage with open formats like HTTP, ATOM and others.

As applications get bigger, intricate dependencies and connections grow. Whether you are talking about monolithic architecture or smaller units, microservices let you split things up into components. This allows horizontal scaling, which makes it much easier to manage and maintain separate components.

The relationship of microservices to DevOps
Incorporating new technology is just part of the challenge. Perhaps a greater obstacle is developing a new culture that encourages risk-taking and taking responsibility for an entire project "from cradle to crypt." Developers used to legacy systems may experience culture shock when they are given more autonomy than ever before. Communicating clear expectations for accountability and performance of each team member is vital. DevOps is critical in determining where and when microservices should be utilized. It is an important decision because trying to combine microservices with bloated, monolithic legacy systems may not always work. Changes cannot be made fast enough. With microservices, services are continually being developed and refined on-the-fly. DevOps must ensure updated components are put into production, working closely with internal stakeholders and suppliers to incorporate updates.

The move toward simpler applications

As DreamWorks' Doug Sherman said on a panel at the Appsphere 15 Conference, the film-production company tried an SOA approach several years ago but ultimately found it counterproductive. Sherman's view is that IT is moving toward simpler applications. At times, SOA seemed more complicated than it should be.

Microservices were seen as an easier solution than SOA, much like JSON was considered to be simpler than XML and people viewed REST as simpler than SOAP. We are moving toward systems that are easier to build, deploy and understand. While SOA was initially designed with that in mind, it ended up being more complex than needed.

SOA is geared for enterprise systems because you need a service registry, a service repository and other components that are expensive to purchase and maintain. They are also closed off from each other.

Microservices handle problems that SOA attempted to solve more than a decade ago, yet they are much more open.

How microservices differ among different platforms
Microservices are a conceptual approach, and as such they are handled differently in each language. This is a strength of the architecture because developers can use the language they are most familiar with. Older languages can use microservices by using a structure unique to that platform. Here are some of the characteristics of microservices on different platforms:

Java

  • Avoids using Web Archive or Enterprise Archive files
  • Components are not auto-deployed; instead, Docker containers or Amazon Machine Images are auto-deployed
  • Uses fat JARs that can be run as a process

PHP
REST-style PHP microservices have been deployed for several years now because they are:

  • Highly scalable at enterprise level
  • Easy to test rapidly


Python

  • Easy to create a Python service that acts as a front-end web service for microservices in other languages such as ASP or PHP (see the sketch after this list)
  • Lots of good frameworks to choose from, including Flask and Django
  • Important to get the API right for fast prototyping 
  • Can use PyPy, Cython, C++ or Golang if more speed or efficiency is required.
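As referenced in the list above, here is a hedged sketch of such a Python front-end: a thin Flask service that fans out to two backend microservices (written in any language) and aggregates their JSON responses. The backend URLs and payload fields are hypothetical.

```python
"""Python front-end that aggregates responses from backend microservices.

The backend services could be PHP, ASP.NET, Java, etc.; only their
JSON/HTTP contract matters. URLs and fields below are hypothetical.
"""
import requests
from flask import Flask, jsonify

app = Flask(__name__)

ORDERS_SVC = "http://orders.internal:8000"     # e.g. a PHP service
PROFILE_SVC = "http://profile.internal:9000"   # e.g. an ASP.NET service

@app.route("/customers/<cust_id>/summary")
def customer_summary(cust_id):
    profile = requests.get(f"{PROFILE_SVC}/profile/{cust_id}", timeout=3).json()
    orders = requests.get(f"{ORDERS_SVC}/orders?customer={cust_id}", timeout=3).json()
    # The front-end only composes; business logic stays in the backends.
    return jsonify(profile=profile, recent_orders=orders[:5])

if __name__ == "__main__":
    app.run(port=8080)
```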


Node.js
Node.js is a natural fit for microservices because it was made for modern web applications. Its benefits include:

  • Takes advantage of JavaScript and Google's high-performance, open-source V8 engine
  • Machine code is optimized dynamically during runtime
  • HTTP server processes are lightweight
  • Nonblocking, event-driven I/O
  • High-quality package management
  • Easy for developers to create packages
  • Highly scalable with asynchronous I/O end-to-end


.NET

In the early 2000s, .NET was one of the first platforms to create applications as services using Simple Object Access Protocol (SOAP), a goal similar to that of modern microservices. Today, one of the strengths of .NET is its heavy presence in enterprise installations.

Responding to a changing market

The shift to microservices is clear. The confluence of mobile computing, inexpensive hardware, cloud computing and low-cost storage is driving the rush to this exciting new approach. In fact, organizations do not have any choice. Matt Miller's article in The Wall Street Journal sounded the alarm; "Innovate or Die: The Rise of Microservices" explains that software has become the major differentiator among businesses in every industry across the board. The monolithic programs common to many companies cannot change fast enough to adapt to the new realities and demands of a competitive marketplace.

Service-oriented architecture attempted to address some of these challenges but eventually failed to achieve liftoff. Microservices arrived on the scene just as these influences were coming to a head; they are agile, resilient and efficient, qualities many legacy systems lack. Companies like Netflix, PayPal, Airbnb and Goldman Sachs have heeded the alarm and are moving forward with microservices at a rapid pace.