Wednesday, November 29, 2017

Software Architecture for Cloud Native Apps


Microservices are a type of software architecture where large applications are made up of small, self-contained units working together through APIs that are not dependent on a specific language. Each service has a limited scope, concentrates on a particular task and is highly independent. This setup allows IT managers and developers to build systems in a modular way. 

Microservices are small, focused components built to do a single thing very well.

Componentization: 
Microservices are independent units that can easily be replaced or upgraded. The units communicate through mechanisms such as remote procedure calls or web service requests.

Business capabilities: 
Legacy application development often splits teams into areas like the "server-side team" and the "database team." Microservices development is built around business capability, with each team responsible for a complete stack of functions, such as UX and project management.

Products rather than projects:
Instead of focusing on a software project that is handed off upon completion, microservices teams treat applications as products that they own. They establish an ongoing dialogue with the business, with the goal of continually matching the app to the business function.

Dumb pipes, smart endpoints: 
Microservice applications contain their own logic, and frequently used resources are easily cached.

Decentralized governance: 
Tools are built and shared to handle similar problems on other teams.

Problems microservices solve


Larger organizations run into problems when monolithic architectures cannot be scaled, upgraded or maintained easily as they grow over time.

Microservices architecture is an answer to that problem. It is a software architecture where complex tasks are broken down into small processes that operate independently and communicate through language-agnostic APIs.

Monolithic applications are made up of a user interface on the client, an application on the server, and a database. The application processes HTTP requests, gets information from the database, and sends it to the browser. Microservices, by contrast, handle HTTP requests and responses through APIs and messaging, responding with JSON/XML or HTML that is sent to the presentation components.
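To make that concrete, here is a minimal sketch of a single-purpose microservice: one HTTP endpoint answering with JSON, using only the Python standard library. The route, port, and payload are illustrative assumptions, not taken from any particular product.

    # Minimal sketch of a single-purpose microservice that answers
    # HTTP requests with JSON. Route, port, and payload are
    # illustrative assumptions.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class CatalogHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # This service owns exactly one narrow capability: serving
            # the catalog. Other concerns live in other services.
            if self.path == "/catalog":
                body = json.dumps({"items": ["basic", "premium"]}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_response(404)
                self.end_headers()

    if __name__ == "__main__":
        # Each microservice runs, deploys, and scales independently.
        HTTPServer(("0.0.0.0", 8080), CatalogHandler).serve_forever()

A client, or another service, would simply issue GET http://host:8080/catalog and parse the JSON response.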

Microservices proponents rebel against the enforced standards of architecture groups in large organizations but enthusiastically engage with open formats like HTTP, ATOM and others.

As applications get bigger, intricate dependencies and connections grow. Whether you are talking about monolithic architecture or smaller units, microservices let you split things up into components. This allows horizontal scaling, which makes it much easier to manage and maintain separate components.

The relationship of microservices to DevOps

Incorporating new technology is just part of the challenge. Perhaps a greater obstacle is developing a new culture that encourages risk-taking and taking responsibility for an entire project "from cradle to crypt."

Developers used to legacy systems may experience culture shock when they are given more autonomy than ever before. Communicating clear expectations for accountability and performance of each team member is vital. DevOps is critical in determining where and when microservices should be utilized. It is an important decision because trying to combine microservices with bloated, monolithic legacy systems may not always work. Changes cannot be made fast enough. With microservices, services are continually being developed and refined on-the-fly.

DevOps must ensure updated components are put into production, working closely with internal stakeholders and suppliers to incorporate updates. Microservices are an easier solution than SOA, much like JSON was considered to be simpler than XML and people viewed REST as simpler than SOAP.

With Microservices, we are moving toward systems that are easier to build, deploy and understand. 

Managing Hybrid IT with HPE OneSphere


HPE OneSphere simplifies multi-cloud management for enterprises. With HPE OneSphere:
 1. One can deliver everything “as-a-service”
  a. Present all resources as ready-to-deploy services
  b. Build VM farms that span private and public clouds
  c. Dynamically scale resources
  d. Lower Opex
 2. Control IT spend and utilization of public cloud services
  a. Manage subscription-based consumption
  b. Optimize app placement using insights/reports
  c. Get visibility into cross-cloud resource utilization & costs
 3. Respond faster by enabling fast app deployment
  a. Provide quota-based project workspaces
  b. Provide self-service access to curated tools, resources & templates
  c. Streamline the DevOps process

Tuesday, November 28, 2017

The Digital Workplace


Today's digital workforce demands secure, high-speed Wi-Fi connectivity. Pervasive wireless access to business-critical applications is now expected wherever users work. Wireless LANs (WLANs) need massive scalability, uncompromising security, and rock-solid reliability to accommodate the soaring demand. 

Embracing a mobile-first digital workplace

Designing and building the high-performance wireless network a digital workplace requires, and the applications that run on it, is where services from Hewlett Packard Enterprise (HPE) excel.

With Aruba wireless technology, HPE can deliver a mobile-first workplace that connects to Microsoft Skype for Business and Office 365, making the transition to a digital workplace a seamless process.  

The digital workplace enables people to bring-your-own (BYO) everything, with pervasive wireless connectivity, security, and reliability. This frees IT to focus on automation and centralized management. The mobile-first workplace will be simpler to manage and maintain. 

Benefits include:

• Higher productivity with fast, secure, and always-on 802.11ac Wi-Fi connectivity
• Lower operating expenditures (OPEX) through reduced reliance on cellular networks
• Better user experiences  
• Reduce infrastructure cost in an all-wireless workplace by 34%
• Increase business productivity
• Reduce hours spent on-boarding and performing adds, moves, and changes

HPE Elastic Platform for Big Data Analytics


A big data analytics platform has to be elastic, i.e., it must scale out with additional servers as needed.

In my previous post, I described the software architecture for big data analytics. This article covers the hardware infrastructure needed to deploy it.

The HPE Apollo 4510 offers a scalable, dense storage system for big data, object storage, or data analytics. The HPE Apollo 4510 Gen10 System delivers revolutionary storage density in a 4U form factor that fits in a standard HPE 1075 mm rack, with one of the highest storage capacities of any 4U server with standard server depth. When you are running big data solutions such as object storage, data analytics, content delivery, or other data-intensive workloads, the HPE Apollo 4510 Gen10 System allows you to save valuable data center space. Its unique, density-optimized 4U form factor holds up to 60 large form factor (LFF) drives plus an additional 2 small form factor (SFF) or M.2 drives. For configurability, the drives can be NVMe, SAS, or SATA disk drives or solid-state drives.

The HPE ProLiant DL560 Gen10 Server is a high-density, 4P server offering high performance, scalability, and reliability in a 2U chassis. Supporting the Intel® Xeon® Scalable processors with up to a 68% performance gain, the HPE ProLiant DL560 Gen10 Server delivers greater processing power, up to 3 TB of faster memory, I/O of up to eight PCIe 3.0 slots, plus the intelligence and simplicity of automated management with HPE OneView and HPE iLO 5. It is the ideal server for big data analytics workloads (YARN apps, Spark SQL, streaming, MLlib, graph, NoSQL, Kafka, Sqoop, Flume, etc.), as well as database, business processing, and data-intensive applications where data center space and the right performance are of paramount importance.

The main benefits of this platform are:


  1. Flexibility to scale: Scale compute and storage independently
  2. Cluster consolidation: Multiple big data environments can directly access a shared pool of data
  3. Maximum elasticity: Rapidly provision compute without affecting storage
  4. Breakthrough economics: Significantly better density, cost, and power through workload-optimized components

Big Data Warehouse Reference Architecture


Data needs to get into Hadoop in some way and needs to be securely accessed through different tools. There is a massive choice of tools for each component, depending on the Hadoop distribution selected; each distribution has its own versions of the tools, but they nonetheless provide the same functionality.

Just like core Hadoop, the tools that ingest and access data in Hadoop can scale independently: for example, a Spark cluster, a Flume cluster, or a Kafka cluster.

Each specific tool has its own infrastructure requirements.

For example, Spark requires more memory and processor power but is less dependent on hard disk drives.
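As a hedged sketch of that sizing, assuming PySpark is installed, a memory-heavy Spark job might be configured to favor RAM over disk; the app name, master, and values below are illustrative:

    # Hedged sketch: sizing a memory-heavy Spark job. Assumes PySpark
    # is installed; app name, master, and values are illustrative.
    from pyspark import SparkConf
    from pyspark.sql import SparkSession

    conf = (
        SparkConf()
        .setAppName("analytics-job")          # illustrative name
        .setMaster("local[4]")                # swap in your cluster manager
        .set("spark.executor.memory", "16g")  # favor RAM over disk
        .set("spark.executor.cores", "4")
        .set("spark.memory.fraction", "0.8")  # keep more data in memory
    )

    spark = SparkSession.builder.config(conf=conf).getOrCreate()
    print(spark.range(1_000_000).count())  # trivial smoke test
    spark.stop()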

HBase doesn't require as many cores per processor but requires more servers and faster non-volatile memory, such as SSD and NVMe-based flash.

Wednesday, November 22, 2017

Why Use the Lustre File System?


The Lustre file system is designed for large-scale, high-performance data storage. Lustre was designed for High Performance Computing requirements, and it scales linearly to meet the most stringent and highly demanding requirements of media applications. 

Because Lustre file systems have high-performance capabilities and open-source licensing, Lustre is often used in supercomputers. Since June 2005, it has consistently been used by at least half of the top ten, and more than 60 of the top 100, fastest supercomputers in the world, including the world's No. 2 and No. 3 ranked TOP500 supercomputers in 2014, Titan and Sequoia. 

Lustre file systems are scalable and can be part of multiple computer clusters with tens of thousands of client nodes, hundreds of petabytes (PB) of storage on thousands of servers, and more than a terabyte per second (TB/s) of aggregate I/O throughput. This makes Lustre file systems a popular choice for businesses with large data centers, including those in industries such as media services, finance, research, life sciences, and oil & gas.

Why is Object Storage the Ideal Data Storage for Media Apps?




We are seeing a tremendous explosion of media content on the Internet. Today, it's not just YouTube for video distribution; there are millions of mobile apps that distribute media (audio and video content) over the Internet.

Users today expect on-demand audio/video, with anywhere, anytime access from any device. This increases the number of transcoded copies needed to accommodate devices with various screen sizes. 

Companies are now using video and audio as major means of distributing information on their websites. This media content is cataloged online and is always available to users.

Even content creation is adding new challenges to data storage. The advent of new audio and video technologies is making raw content capture much larger: 3D, 4K/8K, high dynamic range, high frame rates (120 fps, 240 fps), virtual and augmented reality, and so on.

The content creation workflow has changed from file-based workflows to cloud-based workflows for production; for post-production processing such as digital effects, rendering, or transcoding; and for distribution and archiving. This has created a need for real-time collaboration in distributed environments, with teams scattered all over the globe across many locations and time zones.

All these changes in how media is created and consumed have resulted in such massive dataset sizes that traditional storage architectures simply can't keep up any longer in terms of scalability.

Traditional storage array technologies such as RAID are no longer capable of serving the new data demands. For instance, routine RAID rebuilds take far too long after a failure, heightening the risk of data loss from additional failures during that dangerously long window. Furthermore, even if current storage architectures could technically keep up, they are cost-prohibitive, especially considering the impending data growth tsunami about to hit. To top it off, they just can't offer the agility, efficiency, and flexibility that new business models have come to expect in terms of instant and unfettered access, rock-solid availability, capacity elasticity, deployment time, and so on.

Facing such daunting challenges, the good news is that a solution does exist and is here today: Object Storage.

Object Storage is based on sophisticated storage software algorithms running on a distributed, interconnected cluster of high-performance yet standard commodity hardware nodes, delivering an architected solution suitable for the stringent performance, scalability, and cost-savings requirements of massive data footprints. The technology has been around for some time but is now coming of age.

The Media and Entertainment industry is well aware of the benefits Object Storage provides, which is why many players are moving toward object storage and away from traditional file system storage. These benefits include:


  • Virtually unlimited scalability
    Scale out by adding new server nodes
  • Low cost, by leveraging commodity hardware
  • Flat and global namespace, with no locking or volume semantics
  • Powerful embedded metadata capabilities (native as well as user-defined)
  • Simple and low-overhead RESTful API for ubiquitous, straightforward access over HTTP from any client anywhere (see the sketch after this list)
  • Self-healing capabilities with sophisticated and efficient data protection through erasure coding (local or geo-dispersed)
  • Multi-tenant management and data access capabilities (ideal for service providers)
  • Reduced complexity (of initial deployment/staging as well as ongoing data management)
  • No forklift upgrades, and no need for labor-intensive data migration projects
  • Software-defined storage flexibility and management
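As promised above, here is a hedged sketch of that RESTful access model: storing and fetching a media asset through an S3-compatible HTTP API, which many object stores expose. The endpoint URL, bucket, keys, and credentials are illustrative assumptions.

    # Hedged sketch: media asset in and out of an S3-compatible object
    # store. Endpoint, bucket, keys, and credentials are illustrative.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://objectstore.example.com",  # hypothetical
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    # Upload a transcoded rendition with user-defined metadata attached.
    with open("trailer-4k.mp4", "rb") as f:
        s3.put_object(
            Bucket="media",
            Key="films/trailer-4k.mp4",
            Body=f,
            Metadata={"resolution": "4k", "codec": "h265"},
        )

    # Any client anywhere can fetch it back over plain HTTP(S).
    obj = s3.get_object(Bucket="media", Key="films/trailer-4k.mp4")
    data = obj["Body"].read()

Note how the user-defined metadata rides along with the object itself, one of the embedded metadata capabilities listed above.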


HPE, a leading seller of servers and hyperconverged systems, offers several low-cost, high-performance Object Storage solutions on its servers using software-defined storage (SDS):

1. Object Store with Scality Ring
2. Lustre File System

Scality Ring Object Store is a paid SDS offering from Scality Inc. that is ideal for enterprise customers.

The Lustre file system is an open-source, parallel file system that supports many requirements of leadership-class HPC simulation environments. Born from a research project at Carnegie Mellon University, the Lustre file system has grown into a file system supporting some of the Earth's most powerful supercomputers. The Lustre file system provides a POSIX-compliant file system interface and can scale to thousands of clients, petabytes of storage, and hundreds of gigabytes per second of I/O bandwidth. The key components of the Lustre file system are the Metadata Servers (MDS), the Metadata Targets (MDT), the Object Storage Servers (OSS), the Object Storage Targets (OST), and the Lustre clients.

In short, Lustre is ideal for the large-scale storage needs of service providers and large enterprises.

Thursday, November 16, 2017

Why Use Containers for Microservices?



Microservices deliver three benefits: speed to market, scalability, and flexibility.

Speed to Market
Microservices are small, modular pieces of software. They are built independently. As such, development teams can deliver code to market faster. Engineers iterate on features, and incrementally deliver functionality to production via an automated continuous delivery pipeline.

Scalability
At web-scale, it's common to have hundreds or thousands of microservices running in production. Each service can be scaled independently, offering tremendous flexibility. For example, let's say you are running IT for an insurance firm. You may scale enrollment microservices during a month-long open enrollment period. Similarly, you may scale member inquiry microservices at a different time, e.g., during the first week of the coverage year, as you anticipate higher call volumes from subscribed members. This type of scalability is very appealing, as it directly helps a business boost revenue and support a growing customer base.

Flexibility
With microservices, developers can make simple changes easily. They no longer have to wrestle with millions of lines of code. Microservices are smaller in scale. And because microservices interact via APIs, developers can choose the right tool (programming language, data store, and so on) for improving a service.

Consider a developer updating a security authorization microservice. The dev can choose to host the authorization data in a document store. This option offers more flexibility in adding and removing authorizations than a relational database. If another developer wants to implement an enrollment service, they can choose a relational database as its backing store. New open-source options appear daily. With microservices, developers are free to use new tech as they see fit.
Each service is small, independent, and follows a contract. This means development teams can choose to rewrite any given service, without affecting the other services, or requiring a full-fledged deployment of all services.

This is incredibly valuable in an era of fast-moving business requirements.

Monday, November 13, 2017

What is Flocker


Flocker is an open-source Container Data Volume Manager for your Dockerized applications. By providing tools for data migrations, Flocker gives ops teams the tools they need to run containerized stateful services like databases in production. Flocker manages Docker containers and data volumes together.
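As a hedged sketch of that pairing, assuming the Docker Engine has the Flocker volume plugin installed and the Docker SDK for Python is available, a stateful container can be started against a Flocker-managed volume. The image and volume names are illustrative.

    # Hedged sketch: a stateful container on a Flocker-managed volume,
    # via the Docker SDK for Python. Assumes the Flocker volume plugin
    # is installed; image and volume names are illustrative.
    import docker

    client = docker.from_env()

    # If this container is rescheduled to another host, Flocker can
    # move the "pgdata" volume along with it.
    container = client.containers.run(
        "postgres:9.6",
        detach=True,
        volume_driver="flocker",
        volumes={"pgdata": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
    )
    print(container.id)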

Container Ecosystem


Serverless Computing is ideal for IoT with Edge & Cloud Computing


Sunday, November 12, 2017

Serverless Computing for Microservices


Microservices are a new architectural style for developing software. Microservices are best defined as:

"Service Oriented Architecture composed of loosely coupled components that have clearly defined boundaries"

This can be interpreted as a set of software functions that work together based on predefined rules. For example, take a restaurant website. A typical restaurant website does not have high traffic throughout the day; traffic increases during lunch and dinner time. So keeping this website on a dedicated VM is a waste of resources. Also, the website can be broken down into a few distinct functions. The main webpage would be the landing zone, and from there each section like Photos, Menu, Location, etc., could be another independent function. The user triggers these functions by clicking on the hyperlinks, and users are served the requested data.
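A hedged sketch of one such function, written here as an AWS Lambda-style handler behind an HTTP trigger; the menu data and response shape are illustrative assumptions:

    # Hedged sketch: the Menu section as an independent function,
    # written as an AWS Lambda-style handler behind an HTTP trigger.
    # The menu data and response shape are illustrative assumptions.
    import json

    MENU = {"starters": ["soup", "salad"], "mains": ["pasta", "curry"]}

    def handler(event, context):
        # Runs only when a visitor clicks the Menu link; the platform
        # bills per invocation and scales the function to zero after.
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps(MENU),
        }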

This implies that no coupling, or only loose coupling, exists between the functions that make up the entire website, and each function can be modified or updated independently. This means the business owner can update any section independently, without the need to bring down the entire website.

From a cost perspective as well, building a website with Function-as-a-Service allows the business to pay only for actual usage, and each segment of the site can scale independently.

Thursday, November 09, 2017

Containers vs. Serverless Functions


Today, containers and functions are "very hot" among the developer community. If you are using Fission, Serverless functions run inside a container, and this creates some confusion among young developers and students. As an expert in the latest computing architectures and solutions, I get too many questions about what exactly the difference between the two is.

In this post, I have listed the key differences between the two. The differences are not very clear-cut in some aspects, such as scaling and management, but these are the major differences; I hope this gives you a better understanding of how containers and Serverless functions work.

Wednesday, November 08, 2017

VMware VIC on HPE ProLiant Gen10 DL360 and DL560 Servers


VMware's vSphere Integrated Containers (VIC) enables IT teams to seamlessly run traditional and container workloads side-by-side on existing vSphere infrastructure.

vSphere Integrated Containers comprises three major components:
  1. vSphere Integrated Containers Engine: A container runtime for vSphere that allows you to provision containers as virtual machines, offering the same security and functionality as virtual machines in VMware ESXi™ hosts or vCenter Server® instances.
  2. vSphere Integrated Containers Registry:
    An enterprise-class container registry server that stores and distributes container images. vSphere Integrated Containers Registry extends the Docker Distribution open source project by adding the functionalities that an enterprise requires, such as security, identity and management.
  3. vSphere Integrated Containers Management Portal:
    A container management portal that provides a UI for DevOps teams to provision and manage containers, including the ability to obtain statistics and information about container instances. Cloud administrators can manage container hosts and apply governance to their usage, including capacity quotas and approval workflows.
Cloud administrators can create projects, and assign users and resources such as registries and virtual container hosts to those projects.

These components currently support the Docker image format. vSphere Integrated Containers is entirely open source and free to use. Support for vSphere Integrated Containers is included in the vSphere Enterprise Plus license.
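Because the engine exposes a Docker-compatible API endpoint, developers can keep using ordinary Docker clients against a virtual container host (VCH). A hedged sketch with the Docker SDK for Python follows; the VCH address is a hypothetical placeholder.

    # Hedged sketch: pointing a standard Docker client at a vSphere
    # Integrated Containers virtual container host (VCH). The VCH
    # address is a hypothetical placeholder; TLS setup will follow
    # your environment.
    import docker

    # Each "container" the VCH provisions is backed by a VM on
    # vSphere, but the client-side workflow is plain Docker.
    client = docker.DockerClient(base_url="tcp://vch.example.com:2376",
                                 tls=True)

    for c in client.containers.list():
        print(c.name, c.status)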


Monday, November 06, 2017

Run Big Data Apps on Containers with Mesosphere


Apache Mesos was created in 2009 at UC Berkeley. It was designed to run large-scale web apps like Twitter, Uber, etc. It can scale up to tens of thousands of nodes and supports Docker containers.

Mesos is a distributed OS kernel:

  • Two-level resource scheduling
  • Launch tasks across the cluster
  • Communication between tasks (like IPC)
  • APIs for building “native” applications (aka frameworks): program against the datacenter (see the sketch after this list)
  • APIs in C++, Python, JVM languages, Go, and counting
  • Pluggable CPU, memory, and I/O isolation
  • Multi-tenant workloads
  • Failure detection, easy failover, and HA
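As a hedged sketch of what "program against the datacenter" looks like, here is a minimal framework registering with a Mesos master over the v1 scheduler HTTP API. The master address is a hypothetical placeholder, and a real framework would decode the RecordIO-framed event stream rather than print raw lines.

    # Hedged sketch: registering a minimal framework with a Mesos
    # master over the v1 scheduler HTTP API. The master address is a
    # hypothetical placeholder.
    import requests

    MASTER = "http://mesos-master.example.com:5050"

    subscribe = {
        "type": "SUBSCRIBE",
        "subscribe": {
            "framework_info": {"user": "root", "name": "demo-framework"},
        },
    }

    # Mesos keeps this connection open and streams SUBSCRIBED, OFFERS,
    # and other events back to the framework.
    resp = requests.post(MASTER + "/api/v1/scheduler",
                         json=subscribe, stream=True)
    for chunk in resp.iter_lines():
        if chunk:
            print(chunk)  # raw RecordIO-framed JSON events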

Mesos is a multi-framework platform solution: weighted fair sharing, roles, and more. It runs Docker containers alongside other popular frameworks, e.g., Spark, Rails, and Hadoop, and it allows users to run regular services and batch apps in the same cluster. Mesos has advanced scheduling (resources, constraints, a global view of resources) and is designed for HA and self-healing.

Mesos is now proven at scale, battle-tested in production running the biggest of the web apps. 

Thursday, November 02, 2017

Corda is not a Blockchain



On November 30th, 2016, the R3 consortium publicly released the code for its Corda decentralized ledger platform, along with a bevy of developer tools, repositories, and community features, including both a Slack and a Forum. A little under a month out, it is safe to say that the Corda platform is well underway under the guidance of the well-known Mike Hearn, who also wrote the technical whitepaper on Corda.

Notably, in this white paper and in the code, the development team has taken a new approach to decentralized ledgers: Corda is not a blockchain. While many aspects of Corda resemble a blockchain, it is not one. Transaction races are deconflicted using pluggable notaries. A single Corda network may contain multiple notaries that provide their guarantees using a variety of different algorithms; thus Corda is not tied to any particular consensus algorithm.

This is a fascinating addition to the distributed/decentralized ledger race, in that one of the most well-known consortia in the blockchain space has moved away from using blocks of transactions linked together. It is an intriguing peer-to-peer architecture: transactions use the UTXO input/output model, which is very similar to the transaction system used in more traditional blockchains such as Bitcoin, but storage and verification do not get written into blocks.

Likewise, Corda does not contain a general gossip protocol that broadcasts all transactions to the network. The contract code's validation function only needs the validation chain of each individual transaction it is working with, and transactions that occur on the ledger are not broadcast to a public depository or written into blocks. Likewise, the consensus protocol of each deployment of Corda can change, allowing the platform to conform to the needs and specifications of each client. These simplifications allow Corda to sidestep the scalability issues dogging blockchains like Bitcoin while allowing for a system that conforms to the needs of an enterprise, rather than forcing a multi-gajillion dollar company to fundamentally change the way it handles payments.

The Corda architecture is a highly client-sensitive private ledger that allows for nodes tailored to the kinds of transactions their operators need. The ledger allows mistakes to be fixed and states to be edited, and it is stored on an H2 database engine accessed through the SQL relational database language. However, any changes to states must also conform to, and be validated by, the code. This is a realist approach to an enterprise distributed ledger, as it takes into account the need for familiar integration, headroom for the inevitable human mistake, and a single truth between parties. As mentioned before, the state system also contains a direct reference to an actual legal document that governs this truth.

Wednesday, November 01, 2017

Network Architecture for Ceph Storage



As data continues to grow exponentially, storing today's data volumes efficiently is a challenge. Traditional storage solutions neither scale out nor make it feasible, from a Capex and Opex perspective, to deploy petabyte or exabyte data stores. OpenStack Ceph Storage is a novel approach to managing present-day data volumes, providing users with reasonable access times at a manageable cost.
 
Ceph is a massively scalable, open-source, software-defined storage solution that uniquely provides object, block, and file system services with a single, unified Ceph Storage Cluster.
Ceph is highly reliable, easy to manage, and open source. The power of Ceph can transform your company's IT infrastructure and your ability to manage vast amounts of data. Ceph delivers extraordinary scalability: thousands of clients accessing petabytes or even exabytes of data. A Ceph Node leverages commodity hardware and intelligent daemons, and a Ceph Storage Cluster accommodates large numbers of nodes, which communicate with each other to replicate and redistribute data dynamically. Ceph Monitors can also be clustered to oversee the Ceph nodes in the Ceph Storage Cluster, thereby ensuring high availability.
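For a sense of how applications talk to the cluster, here is a hedged sketch using the librados Python binding that ships with Ceph; the configuration path and pool name are illustrative assumptions.

    # Hedged sketch: writing and reading an object in a Ceph Storage
    # Cluster via the librados Python binding. Conf path and pool
    # name are illustrative assumptions.
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()

    # CRUSH places and replicates the object across nodes automatically.
    ioctx = cluster.open_ioctx("mypool")  # hypothetical pool name
    ioctx.write_full("hello-object", b"stored by librados")
    print(ioctx.read("hello-object"))

    ioctx.close()
    cluster.shutdown()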
 

This architecture is a cost-effective solution based on the HPE Synergy platform that can scale out to multi-petabyte scale.