
Thursday, August 23, 2018

Common Options for Disaster Recovery


Disaster recovery (DR) planning is typically built around three standard types of DR sites.

In this article, let's look at the differences between hot, warm, and cold sites in disaster recovery.

Hot site 

In a hot site approach, the organization duplicates its entire environment as the basis of its DR strategy — an approach which, as you'd expect, costs a lot in terms of investment and upkeep. Even with data duplication, keeping hot site servers and other components in sync is time-consuming. A typical hot site consists of servers, storage systems, and network infrastructure that together form a logical duplicate of the main processing site. Servers and other components are maintained and kept at the same release and patch level as their primary counterparts. Data at the primary site is usually replicated over a WAN link to the hot site. Failover may be automatic or manual, depending on business requirements and available resources. Organizations can run their sites in "active-active" or "active-passive" mode. In active-active mode, applications at the primary and recovery sites are live all the time, and data is replicated bi-directionally so that all databases stay in sync. In active-passive mode, one site acts as primary, and data is replicated to the passive standby sites.

Warm site 

With a warm site approach, the organization essentially takes the middle road between the expensive hot site and the empty cold site. Perhaps there are servers in the warm site, but they might not be current. It takes a lot longer (typically a few days or more) to recover an application to a warm site than a hot site, but it’s also a lot less expensive.

Cold site 

Effectively a non-plan, the cold site approach proposes that, after a disaster occurs, the organization sends backup media to an empty facility, in the hope that the new computers it purchases arrive in time and can support its applications and data. This is a desperate effort, guaranteed to take days if not weeks. That said, cold sites are not necessarily a bad choice for this reason alone: based on an organization's recoverability needs, some applications may appropriately be recovered to cold sites. Another reason organizations opt for cold sites is that they are effectively betting that a disaster will not occur, and thus that the investment is unnecessary.


Friday, August 17, 2018

4 Types of Data Analytics


Data analytics can be classified into four types based on complexity and value. In general, the most valuable analytics are also the most complex.

1. Descriptive analytics

Descriptive analytics answers the question:  What is happening now?

For example, in IT management, it tells you how many applications are running at a given instant and how well those applications are performing. Tools such as Cisco AppDynamics and SolarWinds NPM collect huge volumes of data, then analyze and present it in an easy-to-read, easy-to-understand format.

Descriptive analytics compiles raw data from multiple data sources to give valuable insight into what is happening now and what happened in the past. However, it does not tell you what is going wrong, or explain why; it simply helps trained managers and engineers understand the current situation.
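As a minimal illustration of that compile-and-summarize step, the following Python sketch aggregates raw monitoring samples into a current-state view. The CSV file and its columns are hypothetical, chosen only for this example; they do not correspond to any particular tool's data format.

    # Descriptive analytics sketch: summarize raw monitoring samples into an
    # easy-to-read view of "what is happening now". The file name and columns
    # (app, timestamp, response_ms, status) are hypothetical.
    import pandas as pd

    samples = pd.read_csv("app_metrics.csv", parse_dates=["timestamp"])

    summary = (
        samples.groupby("app")
        .agg(
            requests=("status", "count"),
            error_rate=("status", lambda s: (s != "OK").mean()),
            p95_response_ms=("response_ms", lambda r: r.quantile(0.95)),
        )
        .sort_values("error_rate", ascending=False)
    )

    print(summary)  # one row per application: request count, error rate, p95 latency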

2. Diagnostic analytics

Diagnostic analytics uses real-time and historical data to automatically deduce what has gone wrong and why. Typically, diagnostic analytics is used for root cause analysis, to understand why things have gone wrong.

Large amounts of data are used to find dependencies and relationships and to identify patterns, giving deep insight into a particular problem. For example, the Dell EMC Service Assurance Suite can provide fully automated root cause analysis of IT infrastructure. This helps IT organizations rapidly troubleshoot issues and minimize downtime.
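To make the idea concrete, here is a toy Python sketch of dependency-based root cause analysis. It is only an illustration of the reasoning, not how Service Assurance Suite or any other product works internally; the component names and topology are made up.

    # Diagnostic analytics sketch: given a set of failing components and a
    # (hypothetical) dependency map, flag as probable root causes the failing
    # components whose own dependencies are all healthy.
    failing = {"web-frontend", "checkout-api", "db-cluster"}

    # component -> components it depends on (illustrative topology)
    depends_on = {
        "web-frontend": {"checkout-api"},
        "checkout-api": {"db-cluster"},
        "db-cluster": set(),
    }

    def root_causes(failing, depends_on):
        return {
            component for component in failing
            if not (depends_on.get(component, set()) & failing)
        }

    print(root_causes(failing, depends_on))  # {'db-cluster'}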

3. Predictive analytics

Predictive analytics tells what is likely to happen next.

It uses historical data to identify patterns of events and predict what will happen next. Descriptive and diagnostic analytics are used to detect tendencies, clusters, and exceptions, and predictive analytics is built on top of them to project future trends.

Advanced algorithms such as forecasting models are used to make these predictions. It is essential to understand that a forecast is just an estimate whose accuracy depends heavily on data quality and the stability of the situation, so it requires careful treatment and continuous optimization.

For example, HPE InfoSight can predict what will happen to IT systems based on current and historical data. This helps IT organizations manage their infrastructure and prevent future disruptions.
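As a bare-bones illustration of the forecasting idea, the sketch below applies simple exponential smoothing to a made-up series of daily storage-utilization readings. Real platforms such as InfoSight use far richer models; this only shows what "predict the next value from history" looks like in code.

    # Predictive analytics sketch: one-step-ahead forecast with simple
    # exponential smoothing. The utilization series (percent) is invented.
    def exp_smooth_forecast(series, alpha=0.4):
        level = series[0]
        for value in series[1:]:
            level = alpha * value + (1 - alpha) * level
        return level  # forecast for the next period

    utilization = [61, 63, 62, 65, 68, 70, 73, 74]
    print(f"forecast for tomorrow: {exp_smooth_forecast(utilization):.1f}%")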



4. Prescriptive analytics

Prescriptive analytics is used to literally prescribe what action to take when a problem occurs.

It uses vast data sets and intelligence to analyze the outcome of each possible action and then select the best option. This state-of-the-art type of data analytics requires not only historical data but also external input from human experts (so-called expert systems) in its algorithms to choose the best possible decision.

Prescriptive analytics uses sophisticated tools and technologies, such as machine learning, business rules, and algorithms, which makes it complex to implement and manage.

For example, IBM Runbook Automation tools help IT operations teams simplify and automate repetitive tasks. Runbooks are typically created by technical writers working for top-tier managed service providers. They include procedures for every anticipated scenario and generally use step-by-step decision trees to determine the effective response to a particular scenario.
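The decision-tree flavor of a runbook can be sketched as a tiny rules engine. The conditions, thresholds, and recommended actions below are entirely hypothetical and are not taken from IBM Runbook Automation; the point is only to show how business rules map a detected condition to a prescribed action.

    # Prescriptive analytics sketch: evaluate business rules in order and
    # return the first recommended action that applies.
    RULES = [
        (lambda m: m["disk_used_pct"] > 90, "expand the volume or purge old logs"),
        (lambda m: m["error_rate"] > 0.05,  "roll back the last deployment"),
        (lambda m: m["latency_ms"] > 500,   "scale out the app tier by one instance"),
    ]

    def prescribe(metrics):
        for condition, action in RULES:
            if condition(metrics):
                return action
        return "no action required"

    print(prescribe({"disk_used_pct": 93, "error_rate": 0.01, "latency_ms": 120}))
    # -> expand the volume or purge old logs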

Wednesday, May 23, 2018

Build Highly Resilient Web Services


Digitization has led to new business models that rely on web services. Digital banks, payment gateways, and other fintech services are now available only on the web. These web services need to be highly resilient, with uptime greater than 99.9999%.

Building such highly resilient web services essentially boils down to seven key components:

Highly Resilient IT Infrastructure: 
All underlying IT infrastructure (compute, network, and storage) runs in HA mode. High availability implies both node-level and site-level resilience. This ensures that a node failure, or even a site failure, does not bring down the web services.

Data Resilience:
All application data is backed up with regular snapshots and also replicated in real time to multiple sites, so that data is never lost and RPO and RTO are maintained at zero.
This ensures that the disaster recovery site is always kept in an active state.

Application Resilience:
Web applications have to be designed for high resilience. SOA-based web apps and containerized apps are preferred over large monolithic applications.

Multiple instances of the application should run behind a load balancer, so that the workload gets evenly distributed (see the sketch below). Load balancing can also be done across multiple sites, or even multiple cloud deployments, to ensure web apps are always up and running.
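The sketch below shows the idea of distributing requests across healthy instances in round-robin fashion. It is a toy written in Python with invented backend URLs; a production setup would use a dedicated load balancer such as HAProxy, NGINX, or a cloud load balancer rather than code like this.

    # Load-balancing sketch: round-robin across instances that pass a health check.
    # Backend URLs and the /health endpoint are hypothetical.
    import itertools

    import requests  # third-party: pip install requests

    BACKENDS = ["http://app-1:8080", "http://app-2:8080", "http://app-3:8080"]

    def healthy_backends():
        alive = []
        for url in BACKENDS:
            try:
                if requests.get(url + "/health", timeout=1).status_code == 200:
                    alive.append(url)
            except requests.RequestException:
                pass  # instance is down or unreachable; skip it this round
        return alive

    if __name__ == "__main__":
        alive = healthy_backends()
        if not alive:
            raise SystemExit("no healthy backends available")
        pool = itertools.cycle(alive)  # round-robin over healthy instances
        for _ in range(5):
            print("routing request to", next(pool))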

Application performance monitoring plays an important role in ensuring apps are available and performing to the required SLA. Active application performance management is needed to ensure customers have a good web experience.

Security Plan: 
Security planning implies building security features into the underlying infrastructure, applications, and data. A security plan is mandatory and must be detailed enough to pass security audits and all regulatory compliance requirements.
Software-defined security is developed based on this security plan, which helps avoid many security issues later found in operations.
The security plan includes security policies such as encryption standards, access control, and DMZ design.

Security Operations: 
Once the application is in production, the entire IT infrastructure stack must be monitored for security. Several classes of security tools help here: autonomous watchdogs, web policing, web intelligence, continuous authentication, traffic monitoring, endpoint security, and user training against phishing.
IT security is always an ongoing operation, and one must stay fully vigilant for security attacks, threats, and weaknesses.

IT Operations Management:
All web services need constant monitoring for availability and performance. Every IT system used to provide a service must be monitored, and corrective and proactive actions must be taken to keep the web applications running.

DevOps & Automation:
DevOps and automation are the lifeline of web apps. DevOps is used for all system updates to provide seamless, non-disruptive upgrades to web apps. DevOps also allows new features to be tested in a controlled way - for example, exposing new versions or capabilities to a select group of customers and then using that data to harden the apps (see the canary-release sketch below).
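A common way to expose a new version to only a select group of customers is a canary release driven by a feature flag. The sketch below hashes the user ID into a bucket so each user consistently sees the same version; the 5% rollout figure and version labels are assumptions for illustration.

    # Canary-release sketch: deterministically route ~5% of users to the new
    # version by hashing the user ID. Percentages and labels are hypothetical.
    import hashlib

    CANARY_PERCENT = 5

    def assigned_version(user_id: str) -> str:
        bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
        return "v2-canary" if bucket < CANARY_PERCENT else "v1-stable"

    for uid in ["alice", "bob", "carol", "dave"]:
        print(uid, "->", assigned_version(uid))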

Closing Thoughts

Highly resilient apps are not created by accident. It takes a lot of work and effort to keep web applications up and running at all times. In this article, I have mentioned only the seven main steps needed to build highly resilient web applications; there are more, depending on the nature of the application and the business use cases, but these seven are common to all types of applications.

Tuesday, May 08, 2018

Build Modern Data Center for Digital Banking



Building a digital bank needs a modern data center. The dynamic nature of fintech and digital banking calls for a new data center that is highly dynamic, scalable, agile, and highly available, and that offers all compute, network, storage, and security services as programmable objects with unified management.

A modern data center enables banks to respond quickly to the dynamic needs of the business.
Rapid IT responsiveness is architected into the design of a modern infrastructure that abstracts traditional infrastructure silos into a cohesive virtualized, software-defined environment that supports both legacy and cloud-native applications and seamlessly extends across private and public clouds.

A modern data center can deliver infrastructure as code to application developers for even faster provisioning of both test and production deployments via rapid DevOps.

Modern IT infrastructure is built to deliver automation - to rapidly configure, provision, deploy, test, update, and decommission infrastructure and applications (legacy, cloud-native, and microservices alike).

Modern IT infrastructure is built with security as a solid foundation, to help protect data, applications, and infrastructure in ways that meet all compliance requirements while also offering the flexibility to respond rapidly to new security threats.


Tuesday, November 28, 2017

HPE Elastic Platform for Big Data Analytics


A big data analytics platform has to be elastic - i.e., it must scale out with additional servers as needed.

In my previous post, I had given the software architecture for Big Data analytics. This article is all about the hardware infrastructure needed to deploy it.

The HPE Apollo 4510 offers a scalable, dense storage system for big data, object storage, and data analytics. The HPE Apollo 4510 Gen10 System delivers revolutionary storage density in a 4U form factor, fits in a standard HPE 1075 mm rack, and provides one of the highest storage capacities of any 4U server with standard server depth. When you are running big data solutions such as object storage, data analytics, content delivery, or other data-intensive workloads, the HPE Apollo 4510 Gen10 System allows you to save valuable data center space. Its unique, density-optimized 4U form factor holds up to 60 large form factor (LFF) drives plus an additional 2 small form factor (SFF) or M.2 drives. For configurability, the drives can be NVMe, SAS, or SATA disk drives or solid state drives.

The HPE ProLiant DL560 Gen10 Server is a high-density 4P server offering high performance, scalability, and reliability in a 2U chassis. Supporting the Intel® Xeon® Scalable processors with up to a 68% performance gain, the HPE ProLiant DL560 Gen10 Server offers greater processing power, up to 3 TB of faster memory, I/O of up to eight PCIe 3.0 slots, plus the intelligence and simplicity of automated management with HPE OneView and HPE iLO 5. It is an ideal server for big data analytics workloads - YARN apps, Spark SQL, streaming, MLlib, graph processing, NoSQL, Kafka, Sqoop, Flume, and the like - as well as database, business processing, and other data-intensive applications where data center space and the right performance are of paramount importance.

The main benefits of this platform are:


  1. Flexibility to scale: Scale compute and storage independently
  2. Cluster consolidation: Multiple big data environments can directly access a shared pool of data
  3. Maximum elasticity: Rapidly provision compute without affecting storage
  4. Breakthrough economics: Significantly better density, cost, and power through workload-optimized components

Monday, November 06, 2017

Run Big Data Apps on Containers with Mesosphere


Apache Mesos was created in 2009 at UC Berkeley. It was designed to run large-scale web apps like Twitter and Uber, scales to tens of thousands of nodes, and supports Docker containers.

Mesos is a distributed OS kernel:

  • Two-level resource scheduling
  • Launch tasks across the cluster
  • Communication between tasks (like IPC)
  • APIs for building "native" applications (aka frameworks): program against the datacenter
  • APIs in C++, Python, JVM languages, Go, and counting
  • Pluggable CPU, memory, and I/O isolation
  • Multi-tenant workloads
  • Failure detection, easy failover, and HA

Mesos is a multi-framework platform with weighted fair sharing, roles, and more. It runs Docker containers alongside other popular frameworks (e.g., Spark, Rails, Hadoop) and lets users run regular services and batch apps in the same cluster. Mesos offers advanced scheduling - resources, constraints, a global view of resources - and is designed for HA and self-healing.
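As a rough illustration of running Docker containers on Mesos, the sketch below submits an app definition to Marathon (a popular Mesos framework) through its REST API. The Marathon URL, app id, image name, and resource sizes are assumptions for this example and would need to match a real cluster.

    # Sketch: launch a Docker container on a Mesos cluster via Marathon's REST
    # API. All values below (URL, id, image, cpus, mem, instances) are
    # illustrative assumptions.
    import json

    import requests  # third-party: pip install requests

    MARATHON = "http://marathon.example.com:8080"  # assumed endpoint

    app = {
        "id": "/analytics/worker",
        "cpus": 0.5,
        "mem": 512,
        "instances": 3,
        "container": {
            "type": "DOCKER",
            "docker": {"image": "mycompany/analytics-worker:1.0"},
        },
    }

    resp = requests.post(MARATHON + "/v2/apps", data=json.dumps(app),
                         headers={"Content-Type": "application/json"})
    print(resp.status_code, resp.text)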

Mesos is now proven at scale, battle-tested in production running some of the biggest web apps.

Tuesday, October 31, 2017

Use Cases of Ceph Block & Ceph Object Storage


Ceph, the software-defined storage system commonly deployed with OpenStack, was designed to run on general-purpose server hardware. Ceph supports elastic provisioning, which makes building and maintaining petabyte-to-exabyte scale data clusters economically feasible.

Many mass storage systems are great at storage, but they run out of throughput or IOPS well before they run out of capacity—making them unsuitable for some cloud computing applications. Ceph scales performance and capacity independently, which enables Ceph to support deployments optimized for a particular use case.

Here, I have listed the most common use cases for Ceph Block & Object Storage.

Friday, June 02, 2017

Managing Big data with Intelligent Edge



The Internet of Things (IoT) is nothing short of a revolution. Suddenly, vast numbers of intelligent sensors and devices are generating vast amounts of data that contain potentially game-changing information.

In traditional data analytics, all the data is shipped to a central data warehouse for processing in order to extract strategic insights - like other big data projects, tossing large amounts of data of varying types into a data lake to be used later.

Today, most companies collect data at the edge of their network: PoS terminals, CCTV, RFID scanners, and so on, plus IoT data churned out in bulk by sensors in factories, warehouses, and other facilities. The volume of data generated at the edge is huge, and transmitting it all to a central data center and processing it there turns out to be very expensive.

The big challenge for IT leaders is to gather insights from this data rapidly, while keeping costs under control and meeting all security and compliance mandates.

The best way to deal with this huge volume of data is to process it right at the edge, near the point where the data is generated.
 

Advantages of analyzing data at the edge  


To understand this, let's consider a factory. Sensors on a drilling machine that makes engine parts generate hundreds of readings each second. Over time, there are set patterns in this data. Data showing unusual vibrations, for example, could be an early sign of a manufacturing defect about to happen.

One option is to send the data across a network to a central data warehouse, where it will be analyzed. This is costly and time consuming: by the time the analysis is completed and plant engineers are alerted, several defective engines may already have been manufactured.

In contrast, if this analysis were done right at the site, plant managers could take corrective action before the defect occurs, as in the sketch below. Thus, processing the data locally at the edge lowers costs while increasing productivity.
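For the drilling-machine example, edge processing can be as simple as the sketch below: analyze each window of vibration samples locally and forward only a compact alert upstream. The thresholds, baseline values, and sample data are invented for illustration.

    # Edge-analytics sketch: keep raw vibration samples local and send only a
    # small summary/alert to the central warehouse when readings drift.
    from statistics import mean

    SIGMA_LIMIT = 3.0                         # alert when the window drifts this far
    baseline_mean, baseline_std = 0.42, 0.05  # learned during normal operation

    def analyze_window(samples, machine="drill-07"):
        window_mean = mean(samples)
        score = abs(window_mean - baseline_mean) / baseline_std
        if score > SIGMA_LIMIT:
            # Only this small record ever leaves the edge site.
            return {"machine": machine, "window_mean": round(window_mean, 3),
                    "sigma": round(score, 1), "action": "inspect tool head"}
        return None  # normal: raw data stays (and can be discarded) locally

    window = [0.41, 0.44, 0.68, 0.72, 0.70] * 10  # simulated drift
    alert = analyze_window(window)
    if alert:
        print("send to central warehouse:", alert)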

Keeping data local also improves security and compliance. Any IoT sensor could potentially be hacked and compromised, and if data from a compromised sensor makes its way to the central data warehouse, the entire data warehouse could be at risk. Keeping that data from traveling across the network prevents malware from wreaking havoc on the main data warehouse. If all sensor data is analyzed locally, only the key results need to be stored in the central warehouse, which reduces the cost of data management and avoids storing useless data.

In the case of banks, the data at the edge could be Personally Identifiable Information (PII), which is bound by several privacy and data compliance laws, particularly in Europe.

In short, analyzing data at the edge - near the point where it is generated - is beneficial in many ways:

  • Analysis can be acted on instantly as needed.
  • Security & compliance is enhanced.
  • Costs of data analysis are lowered.


Apart from the obvious advantages mentioned above, there are several others:

1. Manageability:

It is easy to manage IoT sensors when they are connected to an edge analysis system. The local server that runs the data analysis can also keep track of all the sensors, monitor sensor health, and alert administrators if any sensor fails. This helps in handling the wide variety of IoT devices used at the edge.

2. Data governance: 

It is important to know what data is collected, where it is stored, and where it is sent. Sensors also generate lots of useless data that can be compressed or discarded. Having an intelligent analytics system at the edge allows easy data management via data governance policies.

3. Change management: 

IoT sensors and devices also need strong change management (firmware, software, configurations, etc.). Having an intelligent analytics system at the edge enables all change management functions to be offloaded to the edge servers. This frees up central IT systems to do more valuable work.

Closing Thoughts


IoT presents a huge upside in terms of rapid data collection. Having an intelligent analytics system at the edge gives companies a huge advantage: the ability to process this data in real time and take meaningful action.

Particularly in the case of smart manufacturing, smart cities, security-sensitive installations, offices, branch offices, and the like, there is huge value in investing in an intelligent analytics system at the edge.

Conventional business models are being disrupted, and change is spreading across nearly all industries; organizations must move quickly or risk being left behind by their faster-moving peers. IT leaders should go into the new world of IoT with their eyes open to both the inherent challenges they face and the new horizons that are opening up.

It's no wonder that a large number of companies are already looking at data at the edge.

Hewlett Packard Enterprise makes specialized servers called Edgeline Systems - designed to analyze data at the edge.  

Tuesday, May 30, 2017

Getting your Big Data Strategy right

Expert advice on what you need from big data analytics, and how to get there.

Business and technology leaders in most organizations understand the power of big data analytics—but few are able to harness that power in the way they want. The challenges are complex, and so are the technologies. Identifying and investing in key principles will help you navigate that complexity to find the right way to tap the growing pools of information available to your organization.

There are six main factors required to get a big data analytics platform right. Let's take a look at each one of them and see how companies can get their big data right.

1. Blazing speed

Expectations around data are higher than ever. Business users and customers demand results almost instantly, but meeting those expectations can be challenging, especially with legacy systems. Speed is not the only factor in implementing a big data analytics strategy, but it is at the top of the list. Customers typically need to run queries on data sets that are 10 terabytes or larger and want results in a few minutes.

Typical business warehouse solutions would take 48 hours or more; in today's high-speed business world, results after 48 hours are almost useless.

Time to insight is a top priority for any new analytics platform. Companies need to invest in high-performance computing (HPC) to get results in a few minutes. With newer in-memory analytics systems such as Spark or SAP HANA, the wait times can shrink to less than a second.

New solutions are fully optimized, enabling them to provide insights in time to fuel bottom-line results.

2. Scalable capacity

Today, it's a given that any big data analytics solution must accommodate huge quantities of data, but it also needs to grow organically with data volumes. The analytics solution must be able to scale as the data size increases; customers can no longer afford a rip-and-replace approach when database sizes grow.

Business needs a system that can handle all the data growth in a way that is transparent to the data consumer or analyst, with very little downtime, if any at all. Capacity and compute expansion must all happen in the background.

3. Intelligent integration of legacy tools

Big data analytics must work with legacy tools so that business management has seamless continuity. But it is also important to know which tools must be replaced, and when.

Businesses have made investments in these older tools - business warehouses, databases, ETL tools, and so on - and top management is comfortable with them. But as data sizes grow, newer data analysis tools will be needed, and these new tools will have to work alongside the legacy tools.

4. Must play well with Hadoop

Hadoop has almost become synonymous with big data analytics. But Hadoop alone is not enough: while Hadoop is well known and runs on generic low-cost servers, it is also slow.

Hadoop, an open source big data framework, is a batch processing system: when a job is launched to analyze data, it goes into a queue and finishes when it finishes - users simply have to wait for the results.

Today, big data analysis needs to be fast - we are talking about high-concurrency, in-memory analytics. Companies will still use Hadoop, but they will find newer ways to run it without incurring the performance penalties. Newer implementations of Hadoop (version 2.7.x) and Spark allow both systems to run in parallel.
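As one illustration of that coexistence, the PySpark sketch below runs a Spark SQL-style aggregation directly over data already sitting in HDFS. The HDFS path and column names are hypothetical; the job would typically be packaged and launched with spark-submit on a cluster where PySpark is installed.

    # Sketch: Spark querying data stored in HDFS, so Spark and Hadoop share the
    # same cluster and storage. Path and schema below are assumptions.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("sales-rollup").getOrCreate()

    sales = spark.read.parquet("hdfs:///data/warehouse/sales")  # assumed path

    rollup = (
        sales.groupBy("region")
        .agg(F.sum("amount").alias("total_amount"),
             F.countDistinct("customer_id").alias("customers"))
        .orderBy(F.desc("total_amount"))
    )

    rollup.show(20)  # print the top regions
    spark.stop()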

5. Invest in data scientists

Organizations must build teams of data analytics experts: not just hire data scientists, but also invest in tools that allow them to conduct more robust analyses on larger sets of data.

The key to moving forward with the best possible data analysis solution is to let data scientists work with actual data sets, not a sampled subset. The data analytics development environment must have the scale and capacity needed to work on real data volumes; otherwise the answers can be wrong, and the development process becomes longer and more iterative.

6. Advanced analytics capabilities

Data analytics tools and capabilities are rapidly evolving. Newer analytical tools use artificial intelligence (AI) as businesses move toward predictive analytics.

Big data has moved beyond reporting. Big data analytics is being used to answer very complex questions based on the data in your database, and analytics are becoming more predictive, geospatial, and sentiment focused.

The shift toward predictive and other advanced analysis has started. With data science increasingly treated as a corporate asset, organizations have a definite interest in becoming more predictive and more data-science savvy.

Closing Thoughts  

Globally, data is growing at a very rapid rate: 40-50 percent per year. In this environment, every business is going to struggle with an overwhelming volume of data. New technologies exist that can help manage data at that speed and scale.

But having the right big data strategy is vital for success. As new tools and technologies emerge, it becomes critical to have the right strategy to incorporate them into the existing ecosystem in a seamless, non-disruptive way.

In this blog, I have highlighted the six main aspects that help organizations get their big data strategy right.

Thursday, May 04, 2017

HPE Reference Architecture for VMware vRealize Suite on HPE ProLiant DL380 with HPE Service Manager

Executive summary 

To remain competitive, organizations are looking for unprecedented agility and efficiency. Budgets for IT are now funded by line of business, which means IT needs to be agile to provision workloads faster. For some, public cloud is the answer; for others who demand higher levels of security, regulatory compliance, specific SLAs, advanced automation, and more efficient ways to track their IT resource consumptions, private cloud is increasingly the de-facto answer. However, architecting and optimizing a private cloud can be complex and requires cross-domain expertise that might not be easily available. Hewlett Packard Enterprise Reference Architectures can help.

This Reference Architecture document provides a step by step guide to building a private cloud with automation built upon the VMware® vRealize Suite running on HPE ProLiant rack-mount servers using VMware vSAN for cost optimized storage backend. In addition, it includes integration with HPE Service Manager and HPE Universal Configuration Management Database (UCMDB) to speed up incident management of cloud services.

VMware vRealize Suite is a leading cloud management platform for creating and managing hybrid clouds. It consists of a set of products to help speed up deployment of a private cloud, quickly set up the private cloud environment to enable cloud service deployment, facilitate Day 2 operations with the ability to create Anything as a Service (XaaS) services, as well as monitor and automate management of provisioned cloud services.

VMware vSAN is a scalable distributed storage solution that is simple to deploy and manage. Because it is built into vSphere, vSAN can be enabled quickly with a few simple steps and managed using vCenter. Storage capacity can be easily added to existing hosts in the vSAN cluster without disruption to ongoing operations.

The HPE ProLiant DL380 Gen9 server is designed to adapt to the needs of any environment, from large enterprise to remote office/branch office, offering enhanced reliability, serviceability, and continuous availability.

HPE Service Manager and HPE UCMDB enable IT to collaborate and quickly identify and resolve service outages.

The combination of VMware vRealize Suite with HPE ProLiant rack-mount servers, VMware vSAN, HPE Service Manager and HPE UCMDB creates a private cloud solution with flexibility, efficiency and agility that responds to business needs.

Target audience:

This document is intended for IT architects, system integrators, and partners that are planning to deploy an enterprise-grade private cloud using VMware vRealize Suite and HPE Service Manager software on HPE infrastructure.

Document purpose:

The purpose of this document is to demonstrate the value of combining VMware vRealize Suite for private cloud deployment and HPE Service Manager with HPE UCMDB for incident management, using Hewlett Packard Enterprise servers and storage to create a highly manageable and highly available solution that meets the needs of the business, IT personnel, and the user community.

This Reference Architecture describes testing performed in January 2017.

See: http://h20195.www2.hpe.com/V2/GetDocument.aspx?docname=a00003395enw 

Tuesday, April 18, 2017

20 Basic ITIL Metrics


ITIL breaks major IT functions down into nice, bite-sized processes — ripe to be measured with metrics. Here are 20 of our favorite metrics for ITIL processes:


Incident and Problem Management

1. Percentage of Incidents Resolved by First Level Support 
Support costs can be dramatically reduced when first-line support resolves basic issues such as user training, password problems, and menu navigation. The target for this metric is often set above 80%.

2. Mean Time to Repair (MTTR) 
The average time to fix an incident. Often the most closely watched ITIL-related metric; it is not unusual for MTTR reporting to go to CxO-level executives (see the sketch below).

3. Percentage of Incidents Categorized as Problems 
The percentage of incidents that are deemed to be the result of problems.

4. Problems Outstanding 
The total number of problems that are unresolved.
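As a quick illustration of how metrics 1 and 2 above can be computed from raw incident records, here is a small Python sketch. The record fields and sample values are hypothetical.

    # Sketch: compute first-level resolution percentage and MTTR from a list of
    # incident records. Field names (opened, resolved, resolved_by_level) are
    # hypothetical.
    from datetime import datetime, timedelta

    incidents = [
        {"opened": datetime(2017, 4, 1, 9, 0),  "resolved": datetime(2017, 4, 1, 9, 40), "resolved_by_level": 1},
        {"opened": datetime(2017, 4, 1, 10, 0), "resolved": datetime(2017, 4, 1, 14, 0), "resolved_by_level": 2},
        {"opened": datetime(2017, 4, 2, 8, 30), "resolved": datetime(2017, 4, 2, 9, 0),  "resolved_by_level": 1},
    ]

    first_level_pct = 100 * sum(i["resolved_by_level"] == 1 for i in incidents) / len(incidents)
    mttr = sum((i["resolved"] - i["opened"] for i in incidents), timedelta()) / len(incidents)

    print(f"First-level resolution: {first_level_pct:.0f}%")  # target often above 80%
    print(f"MTTR: {mttr}")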


Service Desk

5. Missed Calls
The number of times someone called the help desk, was put on hold, and eventually hung up. It may also include the number of times someone called when the help desk was closed. Missed calls impact customer service and core metrics such as MTTR.

6. Customer Satisfaction 
Usually captured in a survey. Try not to go overboard: asking for feedback in an inappropriate way can irritate customers.

7. Staff Turnover 
Service Desk jobs can be stressful — retaining experienced staff is critical to optimizing core ITIL metrics.


Change Management

8. Number of Successful Changes (change throughput) 
Change throughput is a good measure of change management productivity.

9. Percentage of Failed Changes 
A change management quality metric — can impact customer satisfaction and availability management.

10. Change Backlog 
Total number of changes waiting in the queue.

11. Mean RFC (Request for Change) Turnaround Time (MRTT) 
The average time it takes to implement a change after it is requested.


Release Management

12. Percentage of Failed Releases 
The percentage of releases that fail — a key Release Management quality metric.

13. Total Release Downtime (TRD) 
Total downtime due to release activity.

Availability Management

14. Total Downtime
Total downtimes broken down by service.

15. Total SLA Violations 
Number of times that the availability terms laid out in SLAs were violated.

IT Financial Management

16. Percentage of Projects Within Budget 
The percentage of projects that did not go over/under their prescribed budget.

17. Total Actual vs Budgeted Costs 
Total actual project costs as a percentage of budgeted project costs, calculated for an entire project portfolio. A number over 100% indicates overspending.


Service Level Management

18. Total SLA violations 
The number of SLA violations in a given period.

19. Mean Time to Resolve SLA Violations 
The average time it takes to restore SLA compliance when a violation occurs.


Configuration Management


20. CI Data Quality
Percentage of CIs with data issues. Can be determined by sampling methods.

Thursday, April 13, 2017

Introduction to Microservices

Microservices are a type of software architecture where large applications are made up of small, self-contained units working together through APIs that are not dependent on a specific language. Each service has a limited scope, concentrates on a particular task and is highly independent. This setup allows IT managers and developers to build systems in a modular way. In his book, "Building Microservices," Sam Newman said microservices are small, focused components built to do a single thing very well.

Martin Fowler's "Microservices - a Definition of This New Architectural Term" is one of the seminal publications on microservices. He describes some of the key characteristics of microservices as:

Componentization: Microservices are independent units that are easily replaced or upgraded. The units communicate through mechanisms such as remote procedure calls or web service requests.

Business capabilities: Legacy application development often splits teams into areas like the "server-side team" and the "database team." Microservices development is built around business capability, with responsibility for a complete stack of functions such as UX and project management.

Products rather than projects: Instead of focusing on a software project that is delivered following completion, microservices treat applications as products of which they take ownership. They establish an ongoing dialogue with a goal of continually matching the app to the business function.

Dumb pipes, smart endpoints:
Microservice applications contain their own logic. Resources that are often used are cached easily.

Decentralized governance: Tools are built and shared to handle similar problems on other teams.



History of microservices

The phrase "Micro-Web-Services" was first used at a cloud computing conference by Dr. Peter Rodgers in 2005, while the term "microservices" debuted at a conference of software architects in the spring of 2011. More recently, they have gained popularity because they can handle many of the changes in modern computing, such as:

  • Mobile devices
  • Web apps
  • Containerization of operating systems
  • Cheap RAM
  • Server utilization
  • Multi-core servers
  • 10 Gigabit Ethernet


The concept of microservices is not new. Google, Facebook, and Amazon have employed this approach at some level for more than ten years. A simple Google search, for example, calls on more than 70 microservices before you get the results page. Also, other architectures have been developed that address some of the same issues microservices handle. One is called Service Oriented Architecture (SOA), which provides services to components over a network, with every service able to exchange data with any other service in the system. One of its drawbacks is the inability to handle asynchronous communication.

How microservices differ from service-oriented architecture

Service-oriented architecture (SOA) is a software design where components deliver services through a network protocol. This approach gained steam between 2005 and 2007 but has since lost momentum to microservices. As microservices began to move to the forefront a few years ago, a few engineers called it "fine-grained SOA." Still others said microservices do what SOA should have done in the first place.

SOA is a different way of thinking than microservices. SOA supports Web Services Definition Language (WSDL), which defines service end points rigidly and is strongly typed while microservices have dumb connections and smart end points. SOA is stateless; microservices are stateful and use object-oriented programming (OOP) structures that keep data and logic together.

Some of the difficulties with SOA include:

  • SOA is heavyweight and complex, with multiple processes that can reduce speed.
  • While SOA initially helped prevent vendor lock-in, it eventually wasn't able to move with the trend toward the democratization of IT.

Just as CORBA fell out of favor when early Internet innovations provided a better option to implement applications for the Web, SOA lost popularity when microservices offered a better way to incorporate web services.

Problems microservices solve

Larger organizations run into problems when monolithic architectures cannot be scaled, upgraded or maintained easily as they grow over time. Microservices architecture is an answer to that problem. It is a software architecture where complex tasks are broken down into small processes that operate independently and communicate through language-agnostic APIs.

Monolithic applications are made up of a user interface on the client, an application on the server, and a database. The application processes HTTP requests, gets information from the database, and sends it to the browser. Microservices, by contrast, handle HTTP requests and responses through APIs and messaging, responding with JSON/XML or HTML sent to the presentation components. Microservices proponents rebel against the enforced standards of architecture groups in large organizations but enthusiastically engage with open formats like HTTP, ATOM, and others.

As applications get bigger, intricate dependencies and connections grow. Whether you are talking about monolithic architecture or smaller units, microservices let you split things up into components. This allows horizontal scaling, which makes it much easier to manage and maintain separate components.
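To make the contrast concrete, here is a minimal sketch of a single-purpose microservice exposing a language-agnostic JSON API over HTTP, written with Flask (one of the Python frameworks mentioned later in this article). The endpoint, port, and payload fields are hypothetical.

    # Minimal microservice sketch: one small service with a narrow scope,
    # exposing a JSON API over HTTP. Endpoint path and data are hypothetical.
    from flask import Flask, jsonify  # third-party: pip install flask

    app = Flask(__name__)

    # In a real deployment this state would live in the service's own datastore.
    PRICES = {"basic": 9.99, "pro": 29.99}

    @app.route("/prices/<plan>", methods=["GET"])
    def get_price(plan):
        if plan not in PRICES:
            return jsonify({"error": "unknown plan"}), 404
        return jsonify({"plan": plan, "price": PRICES[plan]})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)

Each such service owns its data, can be deployed and scaled independently, and talks to the rest of the system only through its API.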

The relationship of microservices to DevOps
Incorporating new technology is just part of the challenge. Perhaps a greater obstacle is developing a new culture that encourages risk-taking and taking responsibility for an entire project "from cradle to crypt." Developers used to legacy systems may experience culture shock when they are given more autonomy than ever before. Communicating clear expectations for accountability and performance of each team member is vital. DevOps is critical in determining where and when microservices should be utilized. It is an important decision because trying to combine microservices with bloated, monolithic legacy systems may not always work. Changes cannot be made fast enough. With microservices, services are continually being developed and refined on-the-fly. DevOps must ensure updated components are put into production, working closely with internal stakeholders and suppliers to incorporate updates.

The move toward simpler applications.

As DreamWorks' Doug Sherman said on a panel at the Appsphere 15 Conference, the film-production company tried an SOA approach several years ago but ultimately found it counterproductive. Sherman's view is that IT is moving toward simpler applications. At times, SOA seemed more complicated than it should be.

Microservices were seen as an easier solution than SOA, much like JSON was considered to be simpler than XML and people viewed REST as simpler than SOAP. We are moving toward systems that are easier to build, deploy and understand. While SOA was initially designed with that in mind, it ended up being more complex than needed.

SOA is geared for enterprise systems because you need a service registry, a service repository and other components that are expensive to purchase and maintain. They are also closed off from each other.

Microservices handle problems that SOA attempted to solve more than a decade ago, yet they are much more open.

How microservices differ among different platforms
Microservices is a conceptual approach, and as such it is handled differently in each language. This is a strength of the architecture because developers can use the language they are most familiar with. Older languages can use microservices by using a structure unique to that platform. Here are some of the characteristics of microservices on different platforms:

Java

  • Avoids using Web Archive or Enterprise Archive files
  • Components are not auto-deployed; instead, Docker containers or Amazon Machine Images are auto-deployed
  • Uses fat JARs that can be run as a process

PHP
REST-style PHP microservices have been deployed for several years now because they are:

  • Highly scalable at enterprise level
  • Easy to test rapidly


Python

  • Easy to create a Python service that acts as a front-end web service for microservices in other languages such as ASP or PHP 
  • Lots of good frameworks to choose from, including Flask and Django
  • Important to get the API right for fast prototyping 
  • Can use PyPy, Cython, C++, or Golang if more speed or efficiency is required.


Node.js
Node.js is a natural fit for microservices because it was made for modern web applications. Its benefits include:

  • Takes advantage of JavaScript and Google's high-performance, open-source V8 engine
  • Machine code is optimized dynamically during runtime
  • HTTP server processes are lightweight
  • Nonblocking, event-driven I/O
  • High-quality package management
  • Easy for developers to create packages
  • Highly scalable with asynchronous I/O end-to-end


.NET

In the early 2000s, .NET was one of the first platforms to create applications as services using Simple Object Access Protocol (SOAP), a goal similar to that of modern microservices. Today, one of the strengths of .NET is its heavy presence in enterprise installations. Here are two examples of using .NET microservices:





Responding to a changing market

The shift to microservices is clear. The confluence of mobile computing, inexpensive hardware, cloud computing and low-cost storage is driving the rush to this exciting new approach. In fact, organizations do not have any choice. Matt Miller's article in The Wall Street Journal sounded the alarm; "Innovate or Die: The Rise of Microservices" explains that software has become the major differentiator among businesses in every industry across the board. The monolithic programs common to many companies cannot change fast enough to adapt to the new realities and demands of a competitive marketplace.

Service-oriented architecture attempted to address some of these challenges but eventually failed to achieve liftoff. Microservices arrived on the scene just as these influences were coming to a head; they are agile, resilient and efficient, qualities many legacy systems lack. Companies like Netflix, Paypal, Airbnb and Goldman Sachs have heeded the alarm and are moving forward with microservices at a rapid pace.


Tuesday, August 16, 2016

Understanding the Business Benefits of Colocation

Digital transformation and the move towards private cloud is really shaking up the design and implementation of data centers. As companies start their journey to the cloud, they realize that having sets of dedicated servers for each application will not help them and they need to change their data centers.

Historically, companies started out with a small server room to host the few servers that ran their business applications. The server room was located in their office space and was a small setup. As the business became more compute-centric, the small server room became unviable. This led to data centers, which were often still located in the company's offices.

Office buildings were not designed to host data centers and had to be modified to bring more air conditioning, networking, and power into the data center. The limitations of existing buildings constrained efficient cooling and power management.

But now, as companies plan their move to private cloud, they are seeing huge benefits in having a dedicated data center: a purpose-built facility for hosting large numbers of computers, switches, storage, and power systems. These purpose-built data centers have better power supply solutions and better, more efficient liquid cooling, and, more importantly, they offer a wide range of network connectivity and network services.

As a result, these dedicated data centers can save money on IT operations and also provide greater reliability and resilience.

But not all companies need a data center large enough to benefit from such economies of scale; very few enterprises really need large, dedicated, purpose-built data centers. Hence a newer solution: data center colocation.

For CIOs, colocation provides the perfect win-win scenario, delivering cost savings and state-of-the-art infrastructure. When comparing the capabilities of a standard server room to a colocated data center solution, the savings on power bills alone are often enough to justify the project.

These dedicated data centers are built at large scale in industrial zones with dedicated power lines and backup power systems, so the power cost is much lower than before. Moreover, these dedicated data centers can employ newer and more efficient cooling systems that reduce the overall power consumption of the data center.

Business Benefits of Colocation

Apart from reductions in operational expenditure, there are several other benefits from colocation. Having a dedicated team where people are available 24/7/365 to monitor and manage the IT infrastructure is a huge benefit.


  1. Cost Savings on Power & Taxes

    Dedicated data centers are built in locations that offer cheap power. Companies can also negotiate tax breaks for building in remote or industrial areas. In addition to a lower price of power, the data centers are designed to include diverse power feeds and efficient distribution paths. These data centers have dual generator systems that can be refueled while in operation, as well as on-site fuel reserves and multiple UPS systems in place.

    In addition to power costs, dedicated data centers have engineers and technicians who monitor power and battery levels 24/7 so that the center achieves 100% uptime.

    Additionally, data centers have the time, resources and impetus to continually invest in and research green technologies. This means that businesses can reduce their carbon footprint at their office locations and benefit from continual efficiency saving research. Companies that move their servers from in-house server rooms typically save 90 percent on their own carbon emissions.

  2. Network Connected Globally, Securely and Quickly

    Today, high-speed network connectivity is the key to business, and it is a lot more difficult to get big, fat pipes of network connectivity into central office locations than into a centralized data center. A dedicated data center will have many network service providers offering connectivity, often at a lower price than at an office location.

    Dedicated data centers also provide resilient connectivity at a fairly low price – delivering 100 Mbps of bandwidth might be hard at an office location and trying to create a redundant solution is often financially unviable. Data centers are connected to multiple transit providers and also have large bandwidth pipes meaning that businesses often benefit from a better service for less cost.

    Colocation enables organizations to benefit from faster networking and cheaper network connections.

  3. Monitoring IT Infrastructure

    A dedicated data center makes it easier to monitor the health of IT infrastructure. The economies of scale that come with colocation help build a robust IT infrastructure monitoring solution that can monitor the entire IT infrastructure and ensure SLAs are being met.

  4. Better Security

    A dedicated data center and colocation will have better physical security than a data center in an office location. The physical isolation of the data center enables the service provider to provide better security measures, including biometric scanners, closed circuit cameras, on-site security, coded access, alarm systems, ISO 27001 accredited processes, onsite security teams, and more. With colocation, all of these service costs are shared, bringing down the costs while improving the level of security.

  5. Scalability

    Platform 3 paradigms such as digital transformation, IoT, and big data are driving up the scale of IT infrastructure. As the demand for computing shoots up over time, the data center must be able to cope with it. With colocation, scale-up requirements can be negotiated ahead of time, and with just one call to the colocation provider the scale of the IT infrastructure can be increased as needed.

    Data centers and colocation providers have the ability to have businesses up and running within hours, as well as provide the flexibility to grow alongside your organization. Colocation space, power, bandwidth and connection speeds can all be increased when required.

    The complexity of rack space management, power management, and so on is outsourced to the colocation service provider.

  6. Environment Friendly & Green IT

    A large-scale data center has more incentive to run greener IT operations, as doing so results in lower energy costs. Often these data centers are located in industrial areas where better cooling technologies can be safely deployed, which makes it possible to improve overall operational efficiency. A colocated data center typically adheres to global green standards such as ASHRAE 90.1-2013, ASHRAE 62.1, and LEED certifications.

    Bigger data centers also enable better IT e-waste management and recycling. Old computers, UPS batteries, and other equipment can be safely and securely disposed of.

  7. Additional Services from Colocation Partners

    Colocation service providers may also host other cloud services such as:

    a. Elastic scalability to ramp up IT resources when there is seasonal demand and scale down when demand falls.

    b. Data backups and data archiving to tape, stored in a secure location.

    c. Disaster recovery across multiple data centers and data redundancy to protect against data loss in case of natural disasters.

    d. Network security and monitoring against malicious network attacks - usually in the form of a "Critical Incident Center." Critical incident centers are like network operations centers, but they monitor data security and active network security.

Closing Thoughts

In conclusion, a dedicated data center offers tremendous cost advantages. But if the company's IT scale does not warrant a dedicated data center, then the best option is to move to a colocated data center. Colocation providers are able to meet business requirements at a lower cost than if the service was kept in-house.

A colocation solution provides companies with a variety of opportunities: exceptional SLAs and data secured off-site give organizations added levels of risk management, along with the chance to invest in better equipment and state-of-the-art servers.