Tuesday, June 27, 2017

Key Metrics to Measure Cloud Services


If you are a business user planning to host your IT workloads on a public cloud and want to know how to measure the performance of the cloud service, here are seven important metrics you should consider.

1. System Availability

A cloud service is expected to be available 24x7x365, but downtime can occur for various reasons. System availability is defined as the percentage of time that a service or system is up and usable. For example, 99.9% availability corresponds to roughly 8.8 hours of downtime per year. A downtime of even a few hours can potentially cause millions of dollars in losses.

Two nines (99% availability) allows about 3.65 days of downtime per year, which is typical for non-redundant hardware if you include the time to reload the operating system and restore backups (if you have them) after a failure. Three nines (99.9%) is about 8.8 hours of downtime per year, four nines (99.99%) is about 52 minutes, and the holy grail of five nines (99.999%) is about 5 minutes.
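
As a quick sanity check, the downtime behind each "nines" figure can be computed directly from the availability percentage. Here is a minimal Python sketch - the output is simple arithmetic, not a provider-specific number:

    # Back-of-the-envelope check of the downtime behind each "nines" figure.
    HOURS_PER_YEAR = 365 * 24  # 8,760 hours

    for availability in (0.99, 0.999, 0.9999, 0.99999):
        downtime_hours = HOURS_PER_YEAR * (1 - availability)
        print(f"{availability:.3%} availability -> "
              f"{downtime_hours:.2f} hours (~{downtime_hours * 60:.0f} minutes) of downtime per year")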

2. Reliability - Mean Time Between Failures (MTBF) and Mean Time To Repair (MTTR)

Reliability is a function of two components: Mean Time Between Failures (MTBF) and Mean Time To Repair (MTTR), i.e., the time taken to fix a problem. In the world of cloud services, MTTR is usually defined as the average time required to bring a failed service back into production.
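
A standard reliability-engineering relationship ties these two numbers back to availability: Availability = MTBF / (MTBF + MTTR). A minimal sketch with illustrative numbers:

    # Steady-state availability from MTBF and MTTR (illustrative numbers).
    mtbf_hours = 1000.0   # mean time between failures
    mttr_hours = 2.0      # mean time to repair, i.e., restore the service

    availability = mtbf_hours / (mtbf_hours + mttr_hours)
    print(f"Estimated availability: {availability:.4%}")   # -> 99.8004%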

Hardware failures in IT equipment can degrade performance for end users and result in losses to the business. For example, a failed hard drive in a storage system can slow down read speeds, which in turn delays customer response times.

Today, most cloud systems are built with high levels of hardware redundancy - but this increases the cost of the cloud service.

3. Response Time

Response time is defined as the time it takes for a workload to place a request on the cloud system and for the cloud system to complete that request. Response time depends heavily on network latency.

Today, if the user and the data center are located in the same region, the average overall response time is 50.35 milliseconds. When the user base and the data centers are located in different regions, the response time increases significantly, to an average of 401.72 milliseconds.

Response time gives a clear picture of the overall performance of the cloud. It is therefore very important to know the response times to understand the impact on application performance and availability - which in turn impacts customer experience.
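
A simple way to get a feel for these numbers is to time a round trip to the service yourself. Here is a minimal sketch using Python's standard library - the URL is a placeholder for whatever endpoint you actually care about, and a single sample is only indicative; real measurements should be repeated and averaged:

    import time
    import urllib.request

    URL = "https://example.com/"   # placeholder for the endpoint you care about

    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as response:
        response.read()
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"Round-trip response time: {elapsed_ms:.1f} ms")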

4. Throughput or Bandwidth

The performance of cloud services is also measured by throughput, i.e., the number of tasks completed by the cloud service over a specific period. For transaction processing systems, it is normally measured in transactions per second. For systems processing bulk data, such as audio or video servers, it is measured as a data rate (e.g., megabytes per second).

Web server throughput is often expressed as the number of supported users - though clearly this depends on the level of user activity, which is difficult to measure consistently. Alternatively, cloud service providers publish their throughput in terms of bandwidth - e.g., 300 MB/sec, 1 GB/sec, etc. These bandwidth numbers most often exceed the rate of data transfer required by the software application.

In the case of mobile apps or IoT, there can be a very large number of apps or devices streaming data to or from the cloud system. Therefore it is important to ensure that there is sufficient bandwidth to support the current user base.
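
A rough way to check this is to multiply the number of concurrent devices by the data rate each one generates and compare the total against the bandwidth the provider advertises. A minimal sketch - all of the figures below are illustrative assumptions:

    # Rough check: is the advertised bandwidth enough for the device fleet?
    devices = 50_000            # concurrent devices streaming data (assumption)
    kbps_per_device = 16        # average data rate per device (assumption)
    advertised_mbps = 1_000     # bandwidth published by the provider (1 Gb/sec)

    required_mbps = devices * kbps_per_device / 1_000
    print(f"Required: {required_mbps:.0f} Mb/sec, advertised: {advertised_mbps} Mb/sec")
    print("Headroom available" if required_mbps <= advertised_mbps else "Not enough bandwidth")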

5. Security

For cloud services, security is often defined as the set of control-based technologies and policies designed to adhere to regulatory compliance rules and to protect the information, data, applications, and infrastructure associated with cloud computing. These processes will also likely include a business continuity and data backup plan in the case of a cloud security breach.

Oftentimes, cloud security is categorized into multiple areas: security standards, access control, data protection (data unavailability and data loss prevention), and network security (protection against denial-of-service attacks - DoS or DDoS).

6. Capacity

Capacity is the size of the workload compared to the infrastructure available for that workload in the cloud. For example, capacity requirements can be calculated by tracking average utilization over time for workloads with varying demand, and working from the mean to find the capacity needed to handle 95% of all workloads. If the workload grows beyond a point, one needs to add more capacity - which increases costs.
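
A simple way to do this sizing is to collect utilization samples over time and size for a high percentile rather than the absolute peak. A minimal sketch, assuming the utilization samples have already been collected (the numbers here are illustrative):

    # Size capacity to cover 95% of observed demand (illustrative samples, in % CPU).
    utilization_samples = [35, 42, 38, 55, 61, 48, 72, 44, 39, 58, 66, 50]

    mean_util = sum(utilization_samples) / len(utilization_samples)
    # Simple nearest-rank approximation of the 95th percentile.
    p95_util = sorted(utilization_samples)[int(0.95 * (len(utilization_samples) - 1))]

    print(f"Mean utilization: {mean_util:.1f}%")
    print(f"95th-percentile utilization (size for at least this): {p95_util}%")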

7. Scalability

Scalability refers to the ability to service a theoretical maximum number of users - the degree to which the service or system can support a defined growth scenario.

In cloud systems, scalability is often quoted as the ability to scale up to tens of thousands, hundreds of thousands, millions, or even more simultaneous users. That means that at full capacity (usually marked at 80% utilization), the system can handle that many users without failing any individual user and without crashing as a whole because of resource exhaustion. The better an application's scalability, the more users the cloud system can handle simultaneously.

Closing Thoughts


Cloud service providers often publish their performance metrics - but one needs to dive in deeper and understand how these metrics can impact the applications being run on that cloud. 

Wednesday, June 14, 2017

How to Design a Successful Data Lake

Today, business leaders are continuously envisioning new and innovative ways to use data for operational reporting and advanced data analytics. The Data Lake, a next-generation data storage and management solution, was developed to meet the ever-increasing demands of business and data analytics.

In this article I will explore some of the challenges with the traditional enterprise data warehouse and other existing data management and analytics solutions. I will then describe the necessary features of a Data Lake architecture, the capabilities required to deliver a Data and Analytics as a Service (DAaaS) model, the characteristics of a successful Data Lake implementation, and critical considerations for designing a Data Lake.

Current challenges with Enterprise Data Warehouse 

Business leaders are continuously demanding new and innovative ways to use data analysis to gain competitive advantages.

With the development of new data storage and data analytics tools, traditional enterprise data warehouse solutions have become inadequate and now prevent users from making the most of their analytics capabilities.

Traditional data warehouse tools have the following shortcomings:

Timeliness 
Introducing new data types and content into an existing data warehouse is usually a time-consuming and cumbersome process.

When users want quick access to data, processing delays are frustrating and cause users to stop using data warehouse tools and instead develop alternative ad-hoc systems, which cost more, waste valuable resources, and bypass proper security controls.

Quality
If users do not know the origin or source of the data stored in the data warehouse, they view that data with suspicion and may not trust it. Current data warehousing solutions often store only processed data, in which source information is lost.

Historical data often has parts that are missing or inaccurate, and the source of the data is usually not captured. All this leads to analysis that produces wrong or conflicting results.

Flexibility 
Today's on-demand world needs data to be accessed on-demand and results available in near real time. If users are not able to access this data in time, they lose the ability to analyze the data and derive critical insights when needed.

Traditional data warehouses "pull" data from different sources based on pre-defined business needs. This implies that users have to wait until the data is brought into the data warehouse, which seriously impacts the on-demand capability of business data analysis.

Searchability
In the world of Google, users expect a rapid and easy way to search all their enterprise data. Many traditional data warehousing solutions do not provide easy search tools, so users cannot find the data they need, which limits their ability to make the best use of the data warehouse for rapid, on-demand analysis.

Today's Need


Modern data analytics - be it Big Data, BI, or BW - requires a platform that can:


  1. Support multiple types (structured/unstructured) of data to be stored in its raw form - along with source details.
     
  2. Allow rapid ingestion of data - to support real time or near real time analysis
     
  3. Handle & manage very large data sets - both in terms of data streams and data sizes.
     
  4. Allow multiple users to search, access and use this data simultaneously from a well known secure place.
     


Looking at all the demands of modern business, the solution that fits all of the above criteria is the Data Lake.

What is a Data Lake? 


A Data Lake is a data storage solution built on scalable data stores that hold vast amounts of data in various formats. Data from multiple sources - databases, web server logs, point-of-sale devices, IoT sensors, ERP/business systems, social media, third-party information sources, etc. - is collected and curated into the data lake via an ingestion process. Data can flow into the Data Lake through either batch processing or real-time processing of streaming data.

The Data Lake holds both raw and processed data, along with all the metadata and lineage of that data, which is made available in a common searchable data catalog. Data is no longer constrained by initial schema decisions and can be used more freely across the enterprise.
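
To make the metadata and lineage idea concrete, here is a minimal sketch of the kind of record an ingestion process might write into the searchable catalog for each dataset. The field names and paths are illustrative assumptions, not the schema of any particular catalog product:

    from datetime import datetime, timezone

    # One searchable catalog record per dataset ingested into the lake.
    catalog_entry = {
        "dataset_name": "pos_transactions_2017_06",
        "source_system": "point-of-sale devices",          # lineage: where the data came from
        "ingestion_time": datetime.now(timezone.utc).isoformat(),
        "data_format": "csv",
        "raw_location": "/lake/raw/pos/2017/06/",           # untouched raw copy kept as-is
        "processed_location": "/lake/curated/pos/2017/06/",
        "tags": ["sales", "raw"],
    }
    print(catalog_entry)

Because the raw copy and its source are always recorded, analysts can trace any result back to the original data.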

The Data Lake is an architected data solution, to which all the common compliance and security policies are also applied.

Businesses can now use this data on demand to offer a Data and Analytics as a Service (DAaaS) model to various consumers (business users, data scientists, business analysts).

Note: Data Lakes are often built around strong, scalable, globally distributed storage systems. Please refer to my other articles on storage for the Data Lake:

Data Lake: Storage for Hadoop & Big Data Analytics

Understanding Data in Big Data

Uses of Data Lake

The Data Lake is the place where raw data is ingested, curated, and transformed via ETL tools. Existing data warehouse tools can use this data for analysis, along with newer big data and AI tools.

Once a data lake is created, users can use a wide range of analytics tools of their choice to develop reports, derive insights, and act on them. The data lake holds both raw data and transformed data, along with all the metadata associated with the data.

The DAaaS model enables users to self-serve their data and analytics needs. Users browse the data lake's catalog to find and select the available data and fill a metaphorical "shopping cart" with data to work with.

Broadly speaking, there are six main uses of a data lake:


  1. Discover: Automatically and incrementally "fingerprint" data at scale by analyzing source data (a minimal profiling sketch follows this list).

  2. Organize: Use machine learning to automatically tag and match data fingerprints to glossary terms, and match the remaining terms through crowdsourcing.

  3. Curate: Have human reviewers accept or reject tags, and automate data access control via tag-based security.

  4. Search: Search for data through the Waterline GUI or through integration with 3rd-party applications.

  5. Rate: Use objective profiling information along with subjective crowdsourced input to rate data quality.

  6. Collaborate: Crowdsource annotations and ratings to collaborate and share "tribal knowledge" about your data.
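
As an illustration of what "fingerprinting" a data source might involve, here is a minimal column-profiling sketch. It is only a sketch under simple assumptions - the CSV file name is a placeholder, and the statistics chosen are illustrative, not the method of any particular catalog product:

    import csv
    from collections import Counter

    def fingerprint_column(values):
        """Minimal column-level profile: the kind of 'fingerprint' that can
        later be matched (by ML or by people) to business glossary terms."""
        non_empty = [v for v in values if v != ""]
        return {
            "count": len(values),
            "distinct": len(set(non_empty)),
            "null_ratio": round(1 - len(non_empty) / max(len(values), 1), 3),
            "top_values": [v for v, _ in Counter(non_empty).most_common(3)],
        }

    # Illustrative usage against a CSV file landed in the lake (path is a placeholder).
    with open("pos_transactions.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    profiles = {column: fingerprint_column([row[column] for row in rows]) for column in rows[0]}
    print(profiles)

Profiles like these can then be stored alongside the dataset's catalog entry and matched to glossary terms.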

Characteristics of a Successful Data Lake Implementation


A Data Lake enables users to analyze the full variety and volume of data stored in the lake. This necessitates features and functionality to secure and curate the data, and then to run analytics, visualization, and reporting on it. The characteristics of a successful Data Lake include:


  1. Use of multiple tools and products. Extracting maximum value out of the Data Lake requires customized management and integration that are currently unavailable from any single open-source platform or commercial product vendor. The cross-engine integration necessary for a successful Data Lake requires multiple technology stacks that natively support structured, semi-structured, and unstructured data types.
     
  2. Domain specification. The Data Lake must be tailored to the specific industry. A Data Lake customized for biomedical research would be significantly different from one tailored to financial services. The Data Lake requires a business-aware data-locating capability that enables business users to find, explore, understand, and trust the data. This search capability needs to provide an intuitive means for navigation, including key word, faceted, and graphical search. Under the covers, such a capability requires sophisticated business processes, within which business terminology can be mapped to the physical data. The tools used should enable independence from IT so that business users can obtain the data they need when they need it and can analyze it as necessary, without IT intervention.
     
  3. Automated metadata management. The Data Lake concept relies on capturing a robust set of attributes for every piece of content within the lake. Attributes like data lineage, data quality, and usage history are vital to usability. Maintaining this metadata requires a highly-automated metadata extraction, capture, and tracking facility. Without a high-degree of automated and mandatory metadata management, a Data Lake will rapidly become a Data Swamp.
     
  4. Configurable ingestion workflows. In a thriving Data Lake, new sources of external information will be continually discovered by business users. These new sources need to be rapidly onboarded to avoid frustration and to realize immediate opportunities. A configuration-driven ingestion workflow mechanism can provide a high level of reuse, enabling easy, secure, and trackable content ingestion from new sources (a minimal configuration sketch follows this list).
     
  5. Integrate with the existing environment. The Data Lake needs to meld into and support the existing enterprise data management paradigms, tools, and methods. It needs a supervisor that integrates and manages, when required, existing data management tools, such as data profiling, data mastering and cleansing, and data masking technologies.
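
To illustrate what a configuration-driven ingestion workflow might look like, here is a minimal sketch in which each new source is described by a small declarative entry rather than by custom code. The source names, fields, and paths are illustrative assumptions:

    # Each new source is onboarded by adding a configuration entry, not by writing new code.
    SOURCES = [
        {"name": "web_logs", "type": "file", "path": "/landing/web/*.log", "schedule": "hourly"},
        {"name": "crm_feed", "type": "sftp", "path": "/feeds/crm/",        "schedule": "daily"},
    ]

    def ingest(source):
        """Placeholder for the shared, reusable ingestion steps:
        land the raw data, capture lineage metadata, register it in the catalog."""
        print(f"Ingesting {source['name']} ({source['type']}) from {source['path']} "
              f"on a {source['schedule']} schedule")

    for source in SOURCES:
        ingest(source)

Onboarding a new source then amounts to adding one more entry to the configuration, while the shared ingestion steps stay the same.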


Keeping all of these elements in mind is critical for the design of a successful Data Lake.


Designing the Data Lake


Designing a successful Data Lake is an intensive endeavor, requiring a comprehensive understanding of the technical requirements and the business acumen to fully customize and integrate the architecture for the organization's specific needs. Data Scientists and Engineers provide the expertise necessary to evolve the Data Lake to a successful Data and Analytics as a Service solution, including:

DAaaS Strategy and Service Definition. Help data users define the catalog of services to be provided by the DAaaS platform, including data onboarding, data cleansing, data transformation, data catalogs, analytic tool libraries, and others.

DAaaS Architecture. Help data users create the right DAaaS architecture, including architecting the environment, selecting components, defining engineering processes, and designing user interfaces.

DAaaS PoC. Rapidly design and execute Proofs of Concept (PoCs) to demonstrate the viability of the DAaaS approach. Key capabilities of the DAaaS platform are built and demonstrated using leading-edge technologies and other selected tools.

DAaaS Operating Model Design and Rollout. Customize DAaaS operating models to meet the individual business users' processes, organizational structure, rules, and governance. This includes establishing DAaaS chargeback models, consumption tracking, and reporting mechanisms.

DAaaS Platform Capability Build-Out. Provide an iterative build-out of all data analytics platform capabilities, including design, development and integration, testing, data loading, metadata and catalog population, and rollout.

Closing Thoughts  


A Data Lake can be an effective data management solution for advanced analytics experts and business users alike. It allows users to analyze a large variety and volume of data when and how they want, and the DAaaS model provides users with on-demand, self-serve data for all their analysis needs.

However, to be successful, a Data Lake needs to leverage a multitude of products while being tailored to the industry and providing users with extensive, scalable customization. In short, it takes a blend of technical expertise and business acumen to help organizations design and implement their perfect Data Lake.

Tuesday, June 13, 2017

Key Product Management Principle - People are the core Asset


2017 is turning out to be a tumultuous year for the IT industry worldwide. Large, established IT companies such as Cisco, HPE, Dell-EMC, and IBM are seriously cutting costs. Unfortunately, companies tend to look at people as "expenses", and layoffs have become common.

A product manager often answers three main questions:

1. Where is the Product Today?
2. Where do we want to take the product & by what time?
3. How can the team get the product there?

Therefore, product managers have a different view when it comes to employees. From a product development perspective, people are "assets" - especially engineering teams and customer-facing teams. The success of new product development depends on people.

Product managers treat people as true assets because people determine the success of new products - which create future revenue for the company. Without people, a new product will never reach its intended goal.

In IT, engineers - their intellect, skills, knowledge, character, and integrity - are the true value in any organization. Because of the nature of IT product development, it is vital that product managers treat their engineering colleagues as true assets. A product manager must spend time with the team. This means talking with them, listening to their concerns and fears about the current phase of the project, and occasionally taking them out for lunch. (Lunch is a truly amazing way to motivate people.)

Product managers have to make team members feel valued. That is when engineers care more about the product on which they are working. Face time with the team also helps product managers understand individuals and personally assist them. Time spent with the team pays financial dividends, as high-quality products make it to market on time and with enough vitality to excite the sales force.

Closing Thoughts

When product managers focus on the people with whom they work, the products succeed as a result.

Monday, June 12, 2017

Taking Analytics to the edge


In my previous article, I had written about HPE's EdgeLine servers for IoT analytics.

In 2017, we are seeing a steady wave of growth in data analytics happening at the edge, and HPE is at the forefront of this wave - leveraging its strengths in hardware, software, services, and partnerships to build powerful analytics capabilities.

With HPE EdgeLine, customers are able to move analytics from the data center to the edge, gaining rapid insights from remote sensors to solve critical challenges in industries such as energy, manufacturing, telecom, and financial services.

Why do IoT projects fail?


Recently, Cisco reported that ~75% of IoT projects fail. This is largely because IoT data has been managed in centralized, cloud-based systems. In traditional settings, data is moved from a connected 'thing' to a central system over a combination of cellular, Wi-Fi, and enterprise IT networks, to be managed, secured, and analyzed.

But IoT devices generate huge volumes of data, and that data is generated at multiple sites - even in remote areas with intermittent connectivity. As a result, analysis could not be done in a meaningful way: collecting the data took time, and by the time the analysis was completed and the results computed, they were already irrelevant.

Centralized cloud systems for IoT data analysis simply do not scale, nor can they perform at the speeds needed.

HPE Solution - EdgeLine servers for Analytics on the Edge


With HPE EdgeLine servers, we now have a solution that optimizes data for immediate analysis and decision-making at the edge of the network and beyond.

For the first time, customers have a holistic experience of the connected condition of things (machines, networks, apps, devices, people, etc.) through the combined power of HPE EdgeLine servers and Aruba wireless networks.

Analytics at the edge is just picking up momentum, and this is only the beginning of good things to come.

In June 2017 at HPE Discover, customers were delighted to get an in-depth view of this solution.

HPE's continued investments in data management and analytics will deliver a steady stream of innovation. Customers can safely invest in HPE technologies and win.

HPE, along with Intel, is future-proofing investments in data and analytics for hyper-distributed environments. HPE has taken a new approach to analytics that provides the flexibility to process and analyze data everywhere - right at the edge where data is generated, for immediate action, and in the cloud at a central data center, for future analysis.

Customers are using IoT data to gain insight through analytics, both at the center and the edge of the network to accelerate digital transformation. With HPE Edgeline, one can take an entirely new approach to analytics that provides the flexibility of processing and analyzing data everywhere—at the edge and in the cloud, so it can be leveraged in time and context as the business needs to use it.

This technology was developed in direct response to requests from customers that were struggling with complexity in their distributed IoT environments. Customers, analysts and partners have embraced intelligent IoT edge and are using it in conjunction with powerful cloud-based analytics.

Analytics at the edge is a game-changing approach that solves major problems for businesses looking to transform their operations in the age of IoT. The HPE Vertica Analytics Platform now runs at the IoT edge on the Edgeline EL4000. This combination gives enterprises that generate massive amounts of data at remote sites a practical solution for analyzing it and generating insights.

Customers like CERN and FlowServe are using edge analytics to expand their monitoring of equipment conditions such as engine temperature, engine speed, and run hours to reduce maintenance costs. Telecom service companies are pushing analytics to the edge to deliver 4G LTE connectivity throughout the country, regardless of the location of the business.


Closing Thoughts 


The benefits of centralized deep compute make sense for traditional data. But the volume and velocity of IoT data have challenged this status quo. IoT data is Big Data, and the more you move Big Data, the more risk, cost, and effort you have to assume in order to provide end-to-end care for that data.

Edge computing is rebalancing this equation, making it possible for organizations to get the best of all worlds: deep compute, rapid insights, lower risk, greater economy, and more trust and security.


Wednesday, June 07, 2017

Looking into the future - Right through Automation & Artificial Intelligence



It's no secret that innovation and creativity are the ultimate source of competitive advantage. But the success of any innovative idea depends on a number of other factors: energetic leadership, market growth, a significant pool of investment, and a real "can do" spirit in the team.

As I write this article, thousands of articles are being published on the web stating how robots and artificial intelligence will replace humans in the workforce.

  1. McDonald's is testing a new restaurant run completely by Robots 
  2. Robot lands a Plane. 
  3. Driverless Trucks will eliminate millions of jobs  
  4. Smart Machines will cause mass unemployment  
  5. Google's AI beats world Go champion in first of five matches  


Today's news is filled with hype and fear of mass unemployment due to automation. The hype cycle is almost at its zenith, and against this backdrop I was asked to talk about what kinds of jobs and employment opportunities there will be in the future.

How will automation help mankind?


Automation is actually an innate feature of mankind. If we look into our past, we as a species have always come up with creative innovations to automate mundane tasks, and every time we did this, civilization progressed by leaps and bounds.

The first-ever automation was the creation of canal systems, which automated the transportation of water. This led to the rise of early civilizations in the Indus River valley, Egypt, Babylon, and China. Since then, civilization has been making steady progress in automating simple, repetitive tasks.

Simple machines replaced human labor, trains and cars replaced horse-drawn carts, computers replaced clerks, and the list goes on and on.

From all our learnings, we know that if a task can be automated, it will be automated. There is no way a civilization will be able to stop automation. People take no pleasure in doing repetitive labor, and society will promote automation. Period!

And yes, today people are being replaced by algorithms, machines, and artificial intelligence.

What shall humans do?


As automation, artificial intelligence, machine learning and robotics grow in capability, humans doing simple, repetitive jobs will be pushed out of their jobs. So what will humans do?

The answer to this question can be found in history. When canals were invented, farmers found themselves with more time on their hands to increase the land area under cultivation. This led to more food production, which freed up people to create some of the early classics of literature. The Ramayana, the Mahabharata, and the Upanishads were written during this time. People built massive temples, pyramids, palaces, forts, statues, and more.

The Industrial Revolution also led to an explosion in the arts - paintings, novels, poems, sculpture, palaces, classical music, and more.

In the 20th century and early 21st century, automation led to more creativity in the form of space travel, adventure, filmmaking, music, and new machines.

All this points towards only one direction. When humans are freed up from mundane tasks, they will use their free time to harness their creative potential.

Humans are innately creative; machines, computers, and robots are not. This creativity cannot be reduced to an algorithm and automated. This implies that the new generation of humans trained as engineers, doctors, scientists, artists, and so on will dream up new things to do, build, and explore. Perhaps we may even discover how to travel faster than light.


How to prepare for the new future?


The current education system is designed to create the workforce of yesterday - people who do mundane, repetitive tasks - and this education system will have to change first. We as a civilization will have to train the younger generation to be creative and to develop expansive, divergent thinking. We need to nurture the creative side of humans, which will free up the younger generation to be creative and innovative.

Modern workplaces also have to change in a big way. Traditional hierarchical, top-down management systems that pigeonhole people into narrow jobs need to be revamped. Silos need to be broken up, and creative ideas must be fast-tracked as quickly as possible. Businesses must be willing to take risks on new ideas. For example, Ford Motors recently fired its CEO - even after record-breaking sales - because Ford as a business now needs a new business model, far from the one of simply building cars.


Cost of Failure - A major war & conflict


History also shows us the dark side of automation. Whenever automation ushered in a new era, many people had too much free time, and when this was not utilized in a positive way, humans resorted to war and violence.

In fact, both world wars were in a way caused by the Industrial Revolution. Industrialized European countries had surplus labor and young populations with not much to do, and those countries rushed headlong into catastrophic wars.

Today, we are seeing a massive surge in terrorism from people in the Middle East and Pakistan, because their governments have failed to utilize their workforces in productive ways. Similarly, the USA and China have increased their military spending in recent times, as they are not able to channel their enormous economic resources towards creative work.


Closing Thoughts 


We as a civilization are at the cusp of a new revolution, ushered in by automation and AI. Will it result in a golden era of creativity and innovation, or will it result in a catastrophic war?

I cannot predict the future, but I know for sure that if we invest in building a creative and innovative society, we can usher in a golden era; otherwise, we are doomed to a devastating war.