Friday, February 27, 2015

Product Management: Manage expectations & Create value

What do successful companies know about creating new products?

When you study successful products, one thing is common to all of them: successful products create value for customers.

Value can be delivered in many ways. Henry Ford did not invent the car, nor did he invent the assembly line. But he combined the two to drive down the cost of a new car and passed those savings on to customers - and that revolutionized an industry.

In every industry, there are several ways to create value for customers. For example, in the case of phones: Graham Bell invented the phone, but Motorola added value by making it mobile. Nokia changed customer value again by making cell phones fashionable. Blackberry converted the phone into a secure office communication device. Apple made the cell phone personal and interactive. Note that all through this evolution, the cost of a cell phone did not really go down - but customers found value in different ways. Providing value to customers is not just about adding new features or reducing prices.

To customers, value is the difference between what they perceive they are getting and what they pay for the product. In the world of technology - be it phones, computers, or software - customer value does not come from lowering prices or cutting costs; the real customer value comes from adding features that make customers feel they are getting more with each version of the product.

Too often, I see companies focus so much on costs that they neglect the most important goal: determining why someone would want to buy their product. For example, Nokia fought a long and losing battle by releasing cheaper touch screen phones - but totally neglected the user experience. By the time Nokia released a really good touch screen phone, it had already ignored the apps.

Today, building a new product that is merely cheaper simply doesn't cut it. Microsoft Hyper-V is free with Windows Server, but customers still prefer VMware!

To develop successful products, you have to concentrate on growing value for customers. Leaders in successful product companies know their customers' economic expectations and have the skills to deliver on them.

Understanding Hyperconverged Infrastructure

Computer technology undergoes a massive shift every so often as new models emerge to meet changing business needs. The explosive growth of mobile apps and big data has spurred uncontrolled demand on IT and put more strain on existing resources. Existing data centers were built around purpose-built infrastructure - which just cannot scale up to the new needs.

The discrete servers, network switches/routers, and storage arrays (SAN, NAS) that dominated the datacenter are being replaced by converged infrastructure such as VBLOCK or FlexPod.

At the basic level, converged infrastructure simply brings together existing individual storage, compute, and network switching products into a pre-tested, prevalidated package sold as a single solution.

This converged infrastructure was still built out of discrete servers (Cisco UCS), discrete switches (Cisco Nexus), and discrete storage arrays (EMC VMAX/VNX). VCE, the vendor of the converged infrastructure, would integrate all the discrete components and have the setup pre-configured in the factory before shipping it off to customers. It simplified the purchase and upgrade cycle.

Converged Infrastructure systems did offer a few benefits:

  1. Single point of contact for their infrastructure, from purchase to end of life.
  2. These systems are always tested and almost always arrive at the customer site fully racked & cabled, so they're ready to go.

While converged infrastructure saved customers time and money by standardizing IT infrastructure and speeding up deployment, it still did not solve some of the niggling issues with IT infrastructure.

Virtualization of compute with VMware ESX solved the server utilization problem. But for network and storage, utilization, planning, configuration, and change management were still big headaches. Different tools were needed to manage the underlying components: servers with UCS Manager, the network with Nexus Manager, storage with Unisphere, and vCenter for VM management. A single unified management tool was sorely missing.

Converged infrastructure also failed to address the ongoing operational challenges introduced with the advent of virtualization. Network LANs and storage LUNs were still created the old way, WAN optimizers still had to be acquired and configured, and third-party backup and replication products had to be purchased and maintained separately.

There was another big disadvantage. Once the existing converged infrastructure was fully utilized - on compute, network, or storage - customers had to buy another BIG chunk of infrastructure. For example, if a customer wanted ten additional servers, he would get storage and network bundled with them - which led to poor utilization of the other resources.

As a result, there were islands of storage and network with poor utilization. Customers could not use existing legacy storage with converged infrastructure. Converged infrastructure also did not address the performance issues of legacy applications. And system management was not really unified: customers still needed to run individual element managers underneath a unified global management tool.

As time went by, IT vendors learned from the limitations of converged infrastructure and developed a solution: hyperconverged infrastructure.

Hyperconverged infrastructure is the culmination and conglomeration of a number of innovations, all of which provide value to IT infrastructure.

What is hyperconvergence? 

A hyperconverged infrastructure unit is a server with a large amount of data storage capacity and built-in IP networking - mainly an Ethernet switch with a Layer-2/3 overlay SDN - to connect to other hyperconverged boxes.

These boxes are preconfigured and can be stacked up to create bigger capacities, so that compute and storage can be pooled & shared across multiple boxes. Hyperconvergence is a scalable building-block approach that allows IT to expand by adding units, just like in a LEGO set.

Hyperconvergence is a way to enable cloud-like functionality and scale without compromising availability, performance, and reliability. This is achieved by total virtualization of compute, storage (SDS), and network (SDN). The entire hyperconverged infrastructure can then be treated as one big pool of virtual resources managed completely by software: all provisioning, configuration, performance management, security, etc., is done through common software.
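As a toy illustration of the pooling idea, here is a minimal Python sketch. The class and node names are hypothetical - no vendor's actual API looks like this - but it shows why capacity grows linearly as identical building blocks are added to one software-managed pool:

```python
from dataclasses import dataclass

@dataclass
class Node:
    """One hyperconverged building block: compute plus local storage."""
    name: str
    cpu_cores: int
    storage_tb: int

class ResourcePool:
    """Aggregates nodes so all capacity appears as one pool of virtual resources."""
    def __init__(self):
        self.nodes = []

    def add_node(self, node):
        # Scaling out is just adding another identical block to the pool.
        self.nodes.append(node)

    @property
    def total_cpu(self):
        return sum(n.cpu_cores for n in self.nodes)

    @property
    def total_storage(self):
        return sum(n.storage_tb for n in self.nodes)

pool = ResourcePool()
pool.add_node(Node("hc-01", cpu_cores=32, storage_tb=48))
pool.add_node(Node("hc-02", cpu_cores=32, storage_tb=48))
print(pool.total_cpu, pool.total_storage)  # capacity grows linearly: 64 96
```

Provisioning software would then carve VMs out of these pooled totals rather than out of any one box.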

Virtualization of the entire datacenter will fundamentally and permanently change how IT services are delivered from the data center. It enables IT to take a "virtualized first" approach to new application and service deployment - i.e., a completely virtual environment is used for running all new applications.

Using the entire infrastructure as a resource pool, organizations can gain efficiency, flexibility and scalability. Hyperconverged infrastructure provides significant benefits:

  • Data efficiency: Hyperconverged infrastructure reduces storage, bandwidth, and IOPS requirements through one-time data deduplication, compression, and optimization.

  • Elasticity: Hyperconvergence makes it easy to scale resources out or in as business demands require, and lets IT scale the data center environment easily and linearly.
  • VM-centricity: A focus on the virtual machine (VM) or workload as the cornerstone of enterprise IT, with all supporting constructs revolving around individual VMs. Virtualization fundamentally and permanently changed IT and the data center. Today, most services are running inside virtual environments, and IT often takes a "virtualized first" approach to new application and service deployment. That is, administrators consider the virtual environment for running new applications rather than just building a new physical environment.

  • Data protection: Ensuring that data can be restored in the event of loss or corruption is a key IT requirement, made far easier by hyperconverged infrastructure.

  • VM mobility: Hyperconvergence enables greater application/workload mobility. Homogeneous resource pools also make it easier to move applications from one virtual resource to another.

  • High availability: Hyperconvergence enables higher levels of availability than are possible in legacy systems. Homogeneous resource pools make it easier to afford spare components for increased redundancy. At the same time, simplified administration leaves less room for human error and thereby increases overall uptime.

  • Cost efficiency: Virtualized resources can be dynamically provisioned to match workloads, which avoids overprovisioning. Hyperconverged infrastructure brings IT a sustainable, step-based economic model that eliminates waste: lower CAPEX as a result of lower upfront prices for infrastructure, lower OPEX through reductions in operational expenses and personnel, and faster time-to-value for new business needs.
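The data-efficiency point above rests on deduplication: identical blocks of data are stored only once and referenced by a content hash. This is a minimal sketch of the idea, not how any particular product implements it:

```python
import hashlib

def dedup_store(blocks):
    """Store each unique block once, keyed by its content hash.

    Returns the physical store plus a logical index that still
    references every block the application wrote.
    """
    store, index = {}, []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:      # only new content consumes capacity
            store[digest] = block
        index.append(digest)         # the logical view sees all blocks
    return store, index

blocks = [b"alpha", b"beta", b"alpha", b"alpha"]
store, index = dedup_store(blocks)
print(len(index), len(store))  # 4 logical blocks, only 2 stored physically
```

The same principle reduces bandwidth and IOPS too: a block whose hash is already known never has to be written or shipped again.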

A side benefit: The hyperconverged infrastructure provides a single vendor approach to procurement, implementation, and operation. There's no more vendor blame game, and there's just one number to call when a data center problem arises.

Closing Thoughts

Hyperconverged infrastructure (also known as hyperconvergence) is a data center architecture that embraces cloud principles and economics. Based on software, hyperconverged infrastructure consolidates server compute, storage, network switch, hypervisor, data protection, data efficiency, global management, and other enterprise functionality on commodity x86 building blocks to simplify IT, increase efficiency, enable seamless scalability, improve agility, and reduce costs.

Thursday, February 26, 2015

Product Management - Design For Reliability

The role of quality and reliability in a product's success cannot be disputed. Product failures in the field inevitably lead to losses in the form of repair costs, product recalls, lost sales, warranty claims, customer dissatisfaction, and in extreme cases, loss of life. Thus, quality and reliability play a critical role in product development.

Quality and reliability have become standard in products such as airplanes, medical devices, cars, robotics, and industrial automation. Yet when it comes to software products, reliability and quality seem to be sadly lacking.

Oftentimes during the product development cycle, reliability and quality testing are compromised in favor of faster time to market. The general attitude is: "If customers find a bug, we will fix it in a patch release."

In addition, the practice of agile product development brings a rapid release cadence: a new release every quarter or month - and in the case of extreme programming, daily updates!

The idea of quickly fixing all known defects, security failures, etc., has led to products that have poor reliability.

Today, customers typically wait two to three quarters after a product's release before putting it into production. Large enterprise customers have to test new software products before moving them into production. But with shrinking product life cycles, companies are being forced to build products that have specific design features for reliability.

As a result, new enterprise products are now being designed for reliability. From a product design perspective, reliability is about an application's ability to operate failure-free.

This includes ensuring that accurate data comes into the system and that data transformation is error-free, error-free state management, and non-corrupting recovery when failure conditions are detected.

Creating a high-reliability application starts early in the development life cycle - right at the product specification - and is built into architecture, design, coding, testing, deployment, and operational maintenance.

Reliability cannot be bolted onto an application at the deployment stage. It has to be carried all the way through - from early design specification, through building and testing, to deployment and ongoing operational maintenance. You can't add reliability to an application just before deployment.

Common steps for building reliability into a product are:

  1. Product reliability requirements are defined in the product specification.
  2. Product architecture includes reliability, e.g., a distributed vApp architecture.
  3. Application management information is built into the application.
  4. Use redundancy for reliability.
  5. Use quality development tools.
  6. Use built-in application health checks.
  7. Use consistent error handling.
  8. Build error recovery mechanisms into the product.
  9. Incorporate Design-for-Debug functionality for easy debugging.
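Steps on health checks, consistent error handling, and error recovery often come together in one pattern: a probe wrapped in a retry loop that surfaces every failure as one well-defined error type. The probe name, retry policy, and exception class below are illustrative assumptions, not from any product:

```python
import time

class HealthCheckError(Exception):
    """Single, consistent error type surfaced by all health probes."""

def check_service(probe, retries=3, delay=0.0):
    """Run a built-in health probe; retry transient faults, then fail consistently."""
    last = None
    for attempt in range(1, retries + 1):
        try:
            return probe()               # recovery: a later attempt may succeed
        except Exception as exc:         # consistent handling for every probe
            last = exc
            time.sleep(delay)
    raise HealthCheckError(f"probe failed after {retries} attempts: {last}")

# A probe that fails twice, then recovers - the retry loop masks the transient fault.
state = {"calls": 0}
def flaky_probe():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("transient")
    return "healthy"

print(check_service(flaky_probe))  # healthy
```

The key design point is that callers see exactly one failure mode (`HealthCheckError`), which keeps error handling consistent across the whole product.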

Many of these reliability design ideas also overlap with high availability, where system resilience is built into the software. In high-availability systems, two or more instances of the software run separately but synchronously. New software systems are designed for geo-distributed deployment, where customers can continue to use the product even if a data center goes down.

There is a very close relationship between reliability and availability. While reliability is about how long an application runs between failures, availability is about an application's capacity to immediately begin handling all service requests, and especially — if a failure occurs — to recover quickly and thereby minimize the time when the application is not available. Obviously, when an application's components and services are highly reliable, they cause fewer failures from which to recover and thereby help increase availability.
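That relationship is commonly quantified as steady-state availability = MTBF / (MTBF + MTTR), where MTBF is mean time between failures (reliability) and MTTR is mean time to repair (recovery). A quick sketch:

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability: the fraction of time the service is up."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Raising MTBF (fewer failures) or cutting MTTR (faster recovery)
# both push availability up - two levers on the same goal.
print(round(availability(1000, 1), 3))   # roughly "three nines"
print(availability(1000, 1) < availability(2000, 1))  # True
```

This is why highly reliable components "help increase availability": every failure avoided removes an MTTR-sized window of downtime.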

Improving Software Reliability

Software and system reliability can be improved by giving attention to the following factors:

  1. Focus strongly and systematically on requirements development, validation, and traceability, with particular emphasis on software usage and software management aspects. Full requirements development also requires specifying what the system must do and what it must not do (e.g., heat-seeking missiles should not boomerang and return to the installation that fired them).
  2. Formally capture a "lessons learned" database and use it to avoid past issues with reliability and thus mitigate potential failures during the design process. Think defensively. Examine how the code handles off-normal program inputs. Design to mitigate these conditions.
  3. Beta software releases are most helpful in clarifying the software's requirements. The user can see what the software will do and what it will not do. This  will help to clarify the user's needs and the developer's understanding of the user's requirements. Beta releases help the user and the developer gather experience and promote better operational and functional definition of the product. Beta releases also help clarify the user environmental and system exception conditions that the code must handle.
  4. Build diagnostic capability into the product.  When software systems fail, the software must collect all required information needed to debug the case automatically.
  5. Carry out a potential failure modes and effects analysis to harden the system against abnormal conditions.
  6. Software failures at customer sites should always be analyzed down to their underlying root cause, both for repair and to prevent recurrence. To be most proactive, the system software should be reviewed to see whether other instances exist where the same type of failure could occur.
  7. Every common failure must be treated as critical, resolved to its root cause, and remedied.
  8. Capture and document the most significant failures - understand what caused the failure and develop designs to prevent such failures in future.
  9. Fault injection testing must be part of system testing.
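The fault-injection point above can be illustrated with a tiny sketch: substitute a failing dependency and verify the system degrades gracefully instead of crashing. The function names and the fallback value are hypothetical:

```python
def real_io():
    """Stand-in for a real I/O dependency (disk, sensor, network)."""
    return 42.0

def read_sensor(io=None):
    """Return a reading, or a documented safe default when I/O fails."""
    try:
        return (io or real_io)()
    except IOError:
        return 0.0   # safe fallback instead of a crash

# Fault injection: swap in an I/O dependency that always fails,
# then assert the off-normal path is handled as designed.
def broken_io():
    raise IOError("injected fault")

assert read_sensor() == 42.0           # normal path
assert read_sensor(broken_io) == 0.0   # injected-fault path handled
print("fault injection test passed")
```

Real fault-injection suites do the same thing at larger scale: kill processes, drop packets, or corrupt inputs, and assert that recovery behavior matches the design.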

Benefits of Design for Reliability

The concept of design for reliability (DFR) in software has been becoming standard in recent years and will continue to develop and evolve in the years to come. Design for reliability shifts the focus from a "test-analyze-fix" philosophy to designing reliability into products and processes using the best available technologies.

DFR also changes test engineering from product testing for defect detection to testing for system stability and system resilience.

As DFR standards evolve, product companies are setting up reliability engineering as an enterprise-wide activity - teams that give guidance on how to design for reliability, provide risk assessments, provide templates for reliability analysis, and develop quantitative models to estimate the probability of failure for products.

DFR impacts the entire product lifecycle: reducing life-cycle risks and minimizing the combined cost of design, manufacturing, quality, warranty, and service. Advances in system diagnostics/prognostics and system health management are helping the development of new models and algorithms that can predict the future reliability of a product by assessing the extent of its degradation from expected operating conditions.

DFR principles and methods aim to proactively prevent faults, failures, and product malfunctions, resulting in cheaper, faster, and better products. Product reliability is best used as a tool to gain customer loyalty and trust. For example, a lot of customers still use Sun/Oracle computers, IBM Z series systems, and Unix OS for their reliability.

Tuesday, February 24, 2015

EMC's DataLake Foundation

On February 23rd, 2015, EMC announced the "Data Lake Foundation" - a suite of EMC products and solutions for building a rock-solid data lake, the foundation that supports all big data analytics.

The rise of big data and the demand for real-time information is putting more pressure than ever on enterprise storage.

Big data analytics needs and creates massive volumes of data. This unprecedented data growth can quickly overwhelm existing storage systems. Over the last year, EMC has been building storage systems to address the specific needs of big data.

In 2015, EMC announced its Data Lake Foundation strategy, which is based on products like EMC Isilon and EMC ECS (Elastic Cloud Storage). These storage systems are designed to work with HDFS (Hadoop Distributed File System) and are easily integrated with the Pivotal, Cloudera, and Hortonworks data analytics stacks - thus making it simple to store and analyze massive volumes of data.

EMC has certified the DataLake Foundation to work with the rich analytics tools that Pivotal, Cloudera, and Hortonworks provide. Pivotal and EMC have worked together to test, benchmark, and size the Data Lake Apache Hadoop solution.

The Isilon OneFS 7.2 OS will support newer, more current versions of Hadoop protocols, including HDFS 2.3 and HDFS 2.4, delivering faster time to insights. It will also add support for OpenStack Swift, covering both file and object storage - the unstructured data types that are growing the fastest.

EMC's DataLake Foundation makes it easy for enterprises to run their analytics tools, helps eliminate storage silos, and provides simpler ways to store and manage data - so they can focus more effort on gaining insights and value from their data.

Here's what the DataLake Foundation brings to the enterprise:

  1. Efficient Storage: Eliminates storage silos, simplifies management, and improves utilization.
  2. Massive Scalability: Built from scale-out architectures that are massively scalable and simple to manage.
  3. Increased Operational Flexibility: Multi-protocol and next-generation access capabilities support traditional and emerging applications.
  4. Enterprise Attributes: Protects data with efficient and resilient backup, disaster recovery and security options. Enterprise class data protection to maximize availability and security options to meet business requirements.
  5. In-Place Big Data Analytics: Leverages shared storage and support for protocols such as HDFS to deliver cost-efficient, in-place analytics with faster time to results.

Two products from EMC portfolio that form the Data Lake foundation are EMC Isilon and EMC Elastic Cloud Storage (ECS).

EMC Isilon provides an enterprise-scale, file-based Data Lake Foundation with the ability to run traditional and next-gen workloads. Starting at 2 PB and scaling up to 50 PB per cluster, Isilon provides a great balance of performance and capacity for analytics workloads.

EMC ECS is scalable object storage for the next generation of modern applications. ECS delivers geo-distributed high availability and nearly infinite capacity for big data analytics - on commodity storage.

With ECS and the new Isilon platform and features, customers have everything they need to store, protect, secure, manage, and analyze all their unstructured data now - and a system built to scale out for future needs.

Business Benefits

The EMC DataLake Foundation replicates the VBLOCK strategy: all the components needed for big data analytics come pre-configured, along with Pivotal HAWQ subscriptions and Pivotal HD.

This simplifies the deployment of big data analytics programs, while providing EMC's enterprise-grade support, nearly infinite scalability, and data security.



Monday, February 23, 2015

Role of Customer in New Product Development

In my previous article, First Steps in Developing New Software Products, I referred to the stage of defining product features. There are many ways to identify product features and functionality. One way is to involve a potential customer in the early stages of new product development.

The key advantage of involving customers at an early stage is minimizing the risk of developing features and functions that are of no value to customers. It also enables active customer feedback at the time of product definition, which helps in a BIG way to minimize the risk of product failure.

Improve the Odds by Working with the Customer

I have had the good fortune to work at a large technology company in Santa Clara, at a startup, and at a very large technology company. Throughout, I have been involved in product development from both the technology side and the business side. Based on my experience, I have seen the value of customer involvement at an early stage of product definition.

Normally, all new product development goes through cycles of  "IDEA" -> "Build Product" -> "Measure Customer Response" -> "Learn from Customer Data" -> "New Idea".

Involving customers at an early stage of product development has several benefits.

  1. It helps avoid mistakes and allows the developer to explore and iterate during the cheapest phase of development - before any code is written, when the product is still in the mockup stage. Customers can give valuable inputs and validate the initial assumptions.
  2. It also gives a clearer picture of customer needs and competitive alternatives. Talking to customers gives much deeper insight into actual customer usage models and needs - these are invaluable for defining new product features/functions. In my past experience, we had a case where a customer got so impressed with the product idea that they were willing to invest and co-develop the product. That customer, being a Fortune 500 company, ensured the product was an overnight success.
  3. It also helps uncover new opportunities for differentiation from the competition, supports clear market positioning, and helps develop the product launch and product marketing plans.
  4. It reduces or eliminates unnecessary features, which reduces the amount of product that needs to be built and speeds up time to market!
  5. It is always better to be first in the market - even with a minimum viable product. This reduces the cost of development and time to market.

Customers are eager to help 

It is surprising how eager most customers are to help and talk about their needs - even to companies that don't yet have a product.

During my interactions with customers, I have noticed that customers often request features far more ambitious than their current needs and usage, but are willing to accept a product that meets their minimum needs. As a result, we were able to release a new product in months - without many of the features that were initially planned.

Customers are also willing to wait for additional features, which helps in defining the product road map. Just asking customers to prioritize features helps identify the time scales for subsequent product releases, and this also helps shape the future direction of the product.

Customer involvement is actually an opportunity to build stronger relationships with some of our customers. We'll choose those most likely to be receptive, and we'll set expectations appropriately.

However, customer involvement does not mean building a custom product that meets the needs of only one customer. It's a process for gathering information, and it requires a skilled product manager to prioritize that information and figure out what to respond to and how - helping product management do its job more effectively.

Customer involvement gives information on how individual customers behave and buy. This type of insight cannot be captured from market research or usability testing. Market research and usability testing are still very important, and customer involvement does not eliminate them.

Customer involvement is the best way to validate assumptions on who the customer is, what he needs and what he'll buy.