Monday, July 24, 2017

Product Management 101 - Customer validation is the key


Recently, I was having lunch with the co-founder of a startup in Bangalore. They have a vision which sounds good on the surface: provide data loss protection on the cloud. Though this sounds like an old & proven idea, they have a very good secret sauce which gives them a unique value proposition: security, cost benefits & much better RPO/RTO than the competition.

Like most entrepreneurs, he started out by validating his product idea with customers: starting with a customer survey, asking customers about their pain points, and then asking them: "If this product solves your problem, will you buy it?"

Customer validation is a good place to start, but one must also be aware that such a survey can lead to several pitfalls.

  1. Customer needs could change with time, and customers may no longer be interested when the product is launched.
  2. The customer may have expressed his 'wants' and not his 'needs', and may not pay for the actual product.
  3. The customer has no stake in the product. Answering a few questions was easy - there was no commitment or risk.


All these risks imply that customer validation may result in false positives.

False positives are a known risk factor in new product development, and startups often take such risks. In the case of my friend's startup, he took that risk and decided to invest in developing a prototype.

Several months have gone by and his company is busy building the prototype. His biggest fear is that customers may not embrace his product, and he is constantly changing what should go into his MVP - Minimum Viable Product.


What is a Minimum Viable Product?


A minimum viable product (MVP) is the most pared down version of a product that can still be released. An MVP has three key characteristics:

It has enough value that people are willing to use it or buy it initially.
It demonstrates enough future benefit to retain early adopters.
It provides a feedback loop to guide future development.

The idea of the MVP is to develop a basic product which early adopters will buy, use & give valuable feedback on - feedback that can help guide the next iteration of product development.

In other words, MVP is the first version of a customer validated product.

The MVP does not generate profits; it is just a starting point for subsequent product development - which in turn results in rapid growth and profits.

Customers who buy the MVP are the innovators & early adopters, and no company can be profitable serving just the early adopters. But a successful MVP opens the pathway towards the next iterations of the product, which will be embraced by the majority of customers: 'Early Majority', 'Late Majority' and 'Laggards'.

MVP is also expensive for startups

For a lean startup, developing an MVP can be expensive. The MVP is based on the Build -> Measure -> Learn process - and when it is applied to the entire product in one pass, it behaves like a waterfall model.

There are two ways to reduce the risks associated with developing an MVP. The first is to avoid false positives.

While conducting market research during the customer validation process, one must ensure that the customer is invested in the product's development.

At first sight, it is not easy to get customers to invest in a new product's development. Customers can invest their time, reputation &/or money.

Getting customers to spend time on the potential solution to their problem is the first step.

The second step would be to get them to invest their reputation. Can the customer refer someone else who has the same problem/need? Is the customer willing to put his name down as a beta user of the product? Getting customers to invest their reputation most often eliminates the risk of false positives.

One good way to get customers to invest their reputation is to create a user group or community - where customers with similar needs can interact with each other and with the new product development team - while helping shape the new product.

In the case of B2B products, customers can also invest money in new product development. Getting customers to invest money is not as tough as it sounds; I have seen this happen on several occasions. I call this co-development with customers (see my blog on this topic).

Kickstarter-style programs have now taken hold, and today startups are successfully using them to get customers to invest money in their new product development.


Accelerating the Development Cycle & Lowering Development Costs


A lean startup should avoid developing unwanted features.

Once customers are invested in this new product, the startup will usually start developing the product and march towards creating the MVP.  However, it is common to develop a product and then notice that most customers do not use 50% of the features that are built!

The lean startup approach calls for reducing waste by not building unused features. The best way to do this is to run short tests and experiments on simulation models. First build a simulation model, ask customers to use it, and gather their feedback. Here we are still doing the Build -> Measure -> Learn process, but we are doing it on feature sets and not the entire product. This allows for a very agile product development process and minimizes waste.

Run this simulation model with multiple customers and create small experiments around it to learn the best possible usage behavior from customers. These experimental models are also termed Minimum Viable Experiments (MVE), and they form the blueprint for the actual MVP!

Running such small experiments has several advantages:

  • It ensures that potential customers are still invested in your new product.
  • Helps identify features that are more valuable/rewarding than others.
  • Helps build a differentiated product - one which competes on how customers use the product, rather than on having the largest set of features.
  • Helps you learn how users engage with your product.
  • Helps create more bang for the buck!


Closing Thoughts


In this blog, I have described the basic & must-do steps in lean product development, which are the fundamental aspects of product management.

Customer validation is the key to new product success. However, a basic validation with potential customers runs a big risk of false positives and of investing too much money in developing the MVP.

Running a smart customer validation minimizes these risks when creating a lean startup or practicing lean product development. A successful customer validation of a solution helps win paying customers who are innovators or early adopters. This is the first and most important step in any new product development - be it in a lean startup or a well-established company.

Friday, July 07, 2017

Effective workplace for Digital Startups

Earlier this week, I met with the CEO of a startup in Bangalore who wanted to set up a smart office for his startup. We had a very interesting discussion on the subject, and this blog is the gist of that discussion.

==

Startups need a work environment that fosters collaboration, productivity, and innovation - one that is able to attract and retain the best employees.

Office spaces are now turning into intelligent spaces - spaces that engage with employees to maximize effectiveness and let them connect seamlessly and securely anywhere, anytime.


How to build such a work place? 


Today, we have ultra-fast Wi-Fi, mobile "anywhere" communications, and Internet of Things (IoT) connectivity that connect physical workplaces to employees. For example, having a mobile app - which shows available meeting rooms, gives directions to them, and offers an easy one-click interface to book a room - can shave 5-10 minutes off the time an employee spends setting up a meeting. This alone translates into 42-85 man-hours of productivity per employee per year, freeing up time for innovation and collaboration.
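
A quick back-of-the-envelope check of that figure (a sketch only; the two-meetings-a-day and 250-working-days assumptions are mine, not from the post):

```python
# Rough arithmetic behind the 42-85 man-hour claim.
minutes_saved_per_meeting = (5, 10)   # range quoted above
meetings_per_day = 2                  # assumption
working_days_per_year = 250           # assumption

for minutes in minutes_saved_per_meeting:
    hours_per_year = minutes * meetings_per_day * working_days_per_year / 60
    print(f"{minutes} min saved per meeting -> ~{hours_per_year:.0f} hours/year")
# Prints roughly 42 and 83 hours/year - in line with the 42-85 figure.
```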

In the world of intelligent workspaces, technology becomes a key enabler. Every employee has fast and complete access to the applications and data they need and can use any mobile device to schedule space, operate electronic whiteboards or projectors, or set up video conference calls. Employees can be productive, seamlessly and securely, anywhere, anytime - whether in a quiet workspace, a conference room, a boardroom, or even an outdoor space such as a rooftop café.

In a startup office, basic Wi-Fi and video conferencing are now so common that they are taken for granted. Closed-door offices and cubicles have given way to open-space designs and casual meeting areas. Cafeterias & pantries are now seamlessly integrated with workspaces - to encourage open idea sharing and other collaborative exchanges among workers.


Understanding the requirements


Startups often share office spaces with other startups (typically non-competing, of course!).

A startup workplace must also be truly innovative. In most legacy workplaces, there is too much friction and inefficiency, which hampers office productivity. Talented & creative employees are still shackled to desks on which sit hardwired computers that act as their main and sometimes only access point to the applications, software, and data they need to do their work.

Startups do not have a hierarchical organization structure. Instead they tend to have a team-based organizational structure. Teams are formed and disbanded depending on the project at hand. Cross-functional teams are dynamically created when necessary. This means employees need the right tools to work in a fluid environment where they and their colleagues can collaborate whenever and wherever the need arises.

The good news is that today we have mobile-first, cloud-first and IoT technologies that enable such intelligent spaces. Facility managers will have to don an IT hat and ensure:

  1. Secure, untethered, and consistent connectivity anywhere.
     Security is of paramount importance in a multi-tenant workplace. Employees are no longer tethered to a wired desk; they need complete mobility within the workspace - and yet safe, consistent, high-bandwidth connectivity.
  2. Consistent workplace productivity solutions across all devices.
     Workplace productivity tools such as Slack, Skype, Google Meetups etc. are essential. These tools must work across devices: iPad, iPhone, Android phones, Windows laptops, Apple MacBooks etc.
  3. Collaboration solutions built on the cloud.
     For a consistent workplace, collaboration tools are all connected to the cloud. It's not just the productivity tools; even booking conference rooms or meeting rooms is handled via the cloud. In a multi-tenant workplace where conference rooms are shared, the solutions must be able to generate pay-per-use billing for all shared resources.
  4. Location-based services.
     Based on the number of people per floor, zone or area, smart facilities are turned on or off according to actual need. This means lighting, cooling/heating and Wi-Fi connectivity are all driven by the number of people in that area. This implies the use of intelligent sensors and smart analytics at the edge to minimize energy usage (a minimal sketch of such occupancy-based control follows this list).
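
A minimal sketch of such occupancy-based control (illustrative only - the zone names, thresholds and control actions are assumptions, not a real building-management API):

```python
def control_zone(zone: str, occupancy: int, min_occupancy: int = 1) -> dict:
    """Decide the desired state of smart facilities for one zone."""
    active = occupancy >= min_occupancy
    return {
        "zone": zone,
        "lighting": "on" if active else "off",
        "hvac": "comfort" if active else "eco",        # scale back, don't switch off
        "wifi_aps": "full-power" if active else "low-power",
    }

print(control_zone("3F-east", occupancy=12))
print(control_zone("3F-west", occupancy=0))
```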

Build Analytics into workplace  


Industrial IoT devices open up a whole new way of seeing how an existing facility is being used. Heat/motion sensors can track which areas of the office are heavily used and which areas are least used. Collected over a period of time, this data can be of immense value - to optimize the way office spaces are designed, and to plan cooling & lighting requirements and HVAC systems.

Smart building technologies - floor sensors, motion sensors, thermal scanners, CCTV, biometric scanners etc. - generate vast amounts of data which can be used in many ways to build a better workplace.
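
As a sketch of the kind of analysis this data enables (the event format and values are illustrative assumptions), motion-sensor events can be aggregated into per-zone utilization counts:

```python
from collections import Counter

# (zone, hour-of-day) pairs as they might arrive from motion sensors
motion_events = [
    ("meeting-room-1", 10), ("meeting-room-1", 11), ("cafeteria", 13),
    ("meeting-room-2", 10), ("cafeteria", 13), ("quiet-area", 15),
]

usage_by_zone = Counter(zone for zone, _ in motion_events)
for zone, hits in usage_by_zone.most_common():
    print(f"{zone}: {hits} motion events")
# Over weeks of data, the most- and least-used zones become obvious,
# which feeds directly into space, lighting and HVAC planning.
```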

Closing Thoughts 


When technology and facility design are done right, we can create workspaces that allow organizations, small or large, to orchestrate workflows for maximum efficiency and productivity. This will unleash the kind of innovation, creativity, and productivity needed to compete in the new digital economy.

Such a building would be a truly digital workplace, where technology becomes a strong but hidden foundation for a truly user-centered workplace.

To make employees happier and to attract and retain the most talented workers, invest in IT-enabled facilities that provide employees with a modern, digital environment where they can work efficiently and seamlessly.


Tuesday, June 27, 2017

Key Metrics to measure Cloud Services


As a business user, if you are planning to host your IT workloads on a public cloud and you want to know how to measure the performance of the cloud service, here are seven important metrics you should consider.

1. System Availability

A cloud service must be available 24x7x365. However, there can be downtime for various reasons. System availability is defined as the percentage of time that a service or system is available. For example, 99.9% availability corresponds to roughly 8.8 hours of downtime per year. A downtime of a few hours can potentially cause millions of dollars in losses.

Two nines (99%) means about 3.65 days of downtime per year, which is typical for non-redundant hardware if you include the time to reload the operating system and restore backups (if you have them) after a failure. Three nines (99.9%) is about 8.8 hours of downtime per year, four nines (99.99%) is about 53 minutes, and the holy grail of five nines (99.999%) is about 5 minutes.
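
These downtime figures follow directly from the availability percentage; a small sketch:

```python
# Downtime per year implied by each availability level ("the nines").
MINUTES_PER_YEAR = 365 * 24 * 60

for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime_minutes = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.3%} -> {downtime_minutes / 60:6.2f} hours "
          f"({downtime_minutes:7.1f} minutes) of downtime per year")
# 99%     -> ~87.6 hours (3.65 days)
# 99.9%   -> ~8.8 hours
# 99.99%  -> ~53 minutes
# 99.999% -> ~5.3 minutes
```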

2. Reliability - Mean Time Between Failures (MTBF) and Mean Time To Repair (MTTR)

Reliability is a function of two components: Mean Time Between Failures (MTBF) and Mean Time To Repair (MTTR) - i.e., the time taken to fix a problem. In the world of cloud services, MTTR is often defined as the average time required to bring a failed service back into production.
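
MTBF and MTTR tie directly back to the availability metric above; a minimal sketch with purely illustrative numbers:

```python
# Standard relationship: Availability = MTBF / (MTBF + MTTR)
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    return mtbf_hours / (mtbf_hours + mttr_hours)

print(f"{availability(mtbf_hours=10_000, mttr_hours=4):.4%}")    # ~99.96%
print(f"{availability(mtbf_hours=10_000, mttr_hours=0.5):.4%}")  # ~99.995%
```

The same failure rate looks very different depending on how quickly the service is repaired - which is why both numbers matter.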

Hardware failure of IT equipment can lead to a degradation in performance for end users and can result in losses to the business. For example, a failure of a hard drive in a storage system can slow down the read speed - which in turn causes delays in customer response times.

Today, most cloud systems are built with high levels of hardware redundancies - but this increases the cost of cloud service.        

3. Response Time

Response time is defined as the time it takes for a workload to place a request on the cloud system and for the cloud system to complete the request. Response time is heavily dependent on network latencies.

Today, if the user and the data center are located in the same region, the average overall response time is 50.35 milliseconds. When the user base and data centers are located in different regions, the response time increases significantly, to an average of 401.72 milliseconds.

Response Time gives a clear picture of the overall performance of the cloud. It is therefore very important to know the response times to understand the impact on application performance and availability - which in-turn impacts customer experience.

4. Throughput or Bandwidth

The performance of cloud services is also measured by throughput, i.e., the number of tasks completed by the cloud service over a specific period. For transaction processing systems, it is normally measured in transactions per second. For systems processing bulk data, such as audio or video servers, it is measured as a data rate (e.g., megabytes per second).

Web server throughput is often expressed as the number of supported users - though clearly this depends on the level of user activity, which is difficult to measure consistently. Alternatively, cloud service providers publish their throughput in terms of bandwidth - e.g., 300 MB/sec, 1 GB/sec etc. These bandwidth numbers most often exceed the rate of data transfer required by the software application.

In the case of mobile apps or IoT, there can be a very large number of apps or devices streaming data to or from the cloud system. It is therefore important to ensure that there is sufficient bandwidth to support the current user base.
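
A quick sizing sketch (the device count and per-device data rate are illustrative assumptions) shows how to check whether a provider's published bandwidth covers a given fleet:

```python
devices = 50_000            # concurrently streaming devices (assumed)
kbps_per_device = 64        # average data rate per device (assumed)

required_mbps = devices * kbps_per_device / 1_000
print(f"Aggregate bandwidth needed: ~{required_mbps:,.0f} Mbps "
      f"(~{required_mbps / 8 / 1_000:.1f} GB/s)")
# -> ~3,200 Mbps, i.e. ~0.4 GB/s of sustained throughput
```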

5. Security

For cloud services, security is often defined as the set of control-based technologies and policies designed to adhere to regulatory compliance rules and to protect the information, data, applications and infrastructure associated with cloud computing use. These processes should also include a business continuity and data backup plan in case of a cloud security breach.

Oftentimes, cloud security is categorized into multiple areas: security standards, access control, data protection (data unavailability & data loss prevention), and network security - protection against denial of service (DoS or DDoS) attacks.

6. Capacity

Capacity is the size of the workload compared to the available infrastructure for that workload in the cloud. For example, capacity requirements can be calculated by tracking the utilization of workloads with varying demand over time, and working from that distribution to find the capacity that handles 95% of all workloads. If the workload grows beyond a point, then one needs to add more capacity - which increases costs.
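
A minimal sketch of percentile-based capacity planning (the utilization samples are illustrative):

```python
import math

# Hourly utilization samples (% of current capacity) - illustrative data
samples = sorted([41, 38, 45, 52, 47, 55, 60, 49, 44, 58,
                  63, 51, 46, 57, 66, 70, 48, 54, 72, 98])

rank = math.ceil(0.95 * len(samples))        # nearest-rank 95th percentile
p95 = samples[rank - 1]

print(f"Mean utilization: {sum(samples) / len(samples):.1f}%")   # ~55.7%
print(f"95th percentile : {p95}%")                               # 72%
# Sizing for the 95th percentile (72%) covers almost all hours without
# paying for the one rare spike to 98%.
```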

7. Scalability

Scalability refers to the ability to service a theoretical number of users - the degree to which the service or system can support a defined growth scenario.

In cloud systems, scalability is often quoted as supporting tens of thousands, hundreds of thousands, millions, or even more simultaneous users. That means that at full capacity (usually marked at 80%), the system can handle that many users without failure to any user and without crashing as a whole because of resource exhaustion. The better an application's scalability, the more users the cloud system can handle simultaneously.

Closing Thoughts


Cloud service providers often publish their performance metrics - but one needs to dive in deeper and understand how these metrics can impact the applications being run on that cloud. 

Wednesday, June 14, 2017

How to Design a Successful Data Lake

Today, business leaders are continuously envisioning new and innovative ways to use data for operational reporting and advanced data analytics. The Data Lake, a next-generation data storage and management solution, was developed to meet the ever-increasing demands of business & data analytics.

In this article I will explore some of the existing challenges with the traditional enterprise data warehouse and other existing data management and analytic solutions. I will describe the necessary features of the Data Lake architecture and the capabilities required to leverage a Data and Analytics as a Service (DAaaS) model, characteristics of a successful Data Lake implementation and critical considerations for designing a Data Lake.

Current challenges with Enterprise Data Warehouse 

Business leaders are continuously demanding new and innovative ways to use data analysis to gain competitive advantages.

With the development of new data storage and data analytics tools, traditional enterprise data warehouse solutions have become inadequate: they impede the full use of data analytics and prevent users from maximizing their analytic capabilities.

Traditional data warehouse tools have the following shortcomings:

Timeliness 
Introducing new data types and content to an existing data warehouse is usually a time consuming and cumbersome process.

When users want quick access to data, processing delays can be frustrating and cause users to stop using data warehouse tools and instead develop alternate ad-hoc systems - which cost more, waste valuable resources and bypass proper security controls.

Quality
If users do not know the origin or source of the data stored in the data warehouse, they view such data with suspicion and may not trust it. Current data warehousing solutions often store processed data, in which the source information is lost.

Historical data often has parts that are missing or inaccurate, and the source of the data is usually not captured. All this leads to situations where analyses produce wrong or conflicting results.

Flexibility 
Today's on-demand world needs data to be accessed on-demand and results available in near real time. If users are not able to access this data in time, they lose the ability to analyze the data and derive critical insights when needed.

Traditional data warehouses "pull" data from different sources based on pre-defined business needs. This implies that users have to wait until the data is brought into the data warehouse, which seriously impacts the on-demand capability of business data analysis.

Searchability
In the world of Google, users demand a rapid and easy search across all their enterprise data. Many traditional data warehousing solutions do not provide easy search tools. Users cannot find the data they need, which limits their ability to make the best use of data warehouses for rapid, on-demand data analysis.

Today's Need


Modern data analytics - be it Big Data, BI or BW - requires a platform that can:


  1. Support multiple types (structured/unstructured) of data to be stored in its raw form - along with source details.
     
  2. Allow rapid ingestion of data - to support real time or near real time analysis
     
  3. Handle & manage very large data sets - both in terms of data streams and data sizes.
     
  4. Allow multiple users to search, access and use this data simultaneously from a well known secure place.
     


Looking at all the demands of modern business, the solution that fits all of the above criteria is the Data lake.

What is a Data Lake? 


A Data Lake is a data storage solution featuring scalable data stores that can hold vast amounts of data in various formats. Data from multiple sources - databases, web server logs, point-of-sale devices, IoT sensors, ERP/business systems, social media, third-party information sources etc. - is collected and curated into the data lake via an ingestion process. Data can flow into the Data Lake through either batch processing or real-time processing of streaming data.

The data lake holds both raw & processed data, along with all the metadata and lineage of the data, in a common searchable data catalog. Data is no longer restrained by initial schema decisions, and can be used more freely across the enterprise.
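
As a sketch of what one entry in such a catalog might hold (field names are illustrative, not from any specific catalog product):

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    dataset: str                                   # logical name users search for
    source: str                                    # where the raw data came from
    data_format: str                               # raw format as ingested
    ingested_at: str                               # ingestion timestamp
    lineage: list = field(default_factory=list)    # upstream datasets / jobs
    tags: list = field(default_factory=list)       # business glossary terms

entry = CatalogEntry(
    dataset="pos_transactions_raw",
    source="point-of-sale devices",
    data_format="json",
    ingested_at="2017-06-14T02:00:00Z",
    lineage=["pos-gateway-stream"],
    tags=["sales", "transactions"],
)
print(entry.dataset, entry.tags)
```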

A Data Lake is an architected data solution - to which all the common compliance & security policies are also applied.

Businesses can now use this data on demand to provide a Data and Analytics as a Service (DAaaS) model to various consumers (business users, data scientists, business analysts).

Note: Data Lakes are often built around strong, scalable, globally distributed storage systems. Please refer to my other articles on storage for Data Lakes:

Data Lake: Storage for Hadoop & Big Data Analytics

Understanding Data in Big Data

Uses of Data Lake

The Data Lake is the place where raw data is ingested, curated and transformed via ETL tools. Existing data warehouse tools can use this data for analysis, along with newer big data and AI tools.

Once a data lake is created, users can use a wide range of analytics tools of their choice to develop reports, develop insights and act on it. The data lake holds both raw data & transformed data along with all the metadata associated with the data.

The DAaaS model enables users to self-serve their data and analytics needs. Users browse the data lake's catalog to find and select the available data and fill a metaphorical "shopping cart" with data to work with.

Broadly speaking, there are six main uses of data lake:


  1. Discover: Automatically and incrementally "fingerprint" data at scale by analyzing source data.
     
  2. Organize: Use machine learning to automatically tag and match data fingerprints to glossary terms. Match the unmatched terms through crowd sourcing
     
  3. Curate: Human review accepts or rejects tags and automates data access control via tag based security
  4. Search: Search for data through the Waterline GUI or through integration via 3rd party applications
     
  5. Rate: Use objective profiling information along with subjective crowdsourced input to rate data quality
     
  6. Collaborate: Crowdsource annotations and ratings to collaborate and share "tribal knowledge" about your data

Characteristics of a Successful Data Lake Implementation


Data Lake enables users to analyze the full variety and volume of data stored in the lake. This necessitates features and functionalities to secure and curate the data, and then to run analytics, visualization, and reporting on it. The characteristics of a successful Data Lake include:


  1. Use of multiple tools and products. Extracting maximum value out of the Data Lake requires customized management and integration that are currently unavailable from any single open-source platform or commercial product vendor. The cross-engine integration necessary for a successful Data Lake requires multiple technology stacks that natively support structured, semi-structured, and unstructured data types.
     
  2. Domain specification. The Data Lake must be tailored to the specific industry. A Data Lake customized for biomedical research would be significantly different from one tailored to financial services. The Data Lake requires a business-aware data-locating capability that enables business users to find, explore, understand, and trust the data. This search capability needs to provide an intuitive means for navigation, including key word, faceted, and graphical search. Under the covers, such a capability requires sophisticated business processes, within which business terminology can be mapped to the physical data. The tools used should enable independence from IT so that business users can obtain the data they need when they need it and can analyze it as necessary, without IT intervention.
     
  3. Automated metadata management. The Data Lake concept relies on capturing a robust set of attributes for every piece of content within the lake. Attributes like data lineage, data quality, and usage history are vital to usability. Maintaining this metadata requires a highly-automated metadata extraction, capture, and tracking facility. Without a high-degree of automated and mandatory metadata management, a Data Lake will rapidly become a Data Swamp.
     
  4. Configurable ingestion workflows. In a thriving Data Lake, new sources of external information will be continually discovered by business users. These new sources need to be rapidly on-boarded to avoid frustration and to realize immediate opportunities. A configuration-driven ingestion workflow mechanism can provide a high level of reuse, enabling easy, secure, and trackable content ingestion from new sources (a minimal configuration sketch follows this list).
     
  5. Integrate with the existing environment. The Data Lake needs to meld into and support the existing enterprise data management paradigms, tools, and methods. It needs a supervisor that integrates and manages, when required, existing data management tools, such as data profiling, data mastering and cleansing, and data masking technologies.


Keeping all of these elements in mind is critical for the design of a successful Data Lake.


Designing the Data Lake


Designing a successful Data Lake is an intensive endeavor, requiring a comprehensive understanding of the technical requirements and the business acumen to fully customize and integrate the architecture for the organization's specific needs. Data Scientists and Engineers provide the expertise necessary to evolve the Data Lake to a successful Data and Analytics as a Service solution, including:

DAaaS Strategy Service Definition. Help data users define the catalog of services to be provided by the DAaaS platform, including data onboarding, data cleansing, data transformation, data catalogs, analytic tool libraries, and others.

DAaaS Architecture. Help data users create the right DAaaS architecture, including architecting the environment, selecting components, defining engineering processes, and designing user interfaces.

DAaaS PoC. Rapidly design and execute Proofs-of-Concept (PoC) to demonstrate the viability of the DAaaS approach. Key capabilities of the DAaaS platform are built and demonstrated using leading-edge databases and other selected tools.

DAaaS Operating Model Design and Rollout. Customize DAaaS operating models to meet the individual business users' processes, organizational structure, rules, and governance. This includes establishing DAaaS chargeback models, consumption tracking, and reporting mechanisms.

DAaaS Platform Capability Build-Out. Provide an iterative build-out of all data analytics platform capabilities, including design, development and integration, testing, data loading, metadata and catalog population, and rollout.

Closing Thoughts  


A Data Lake can be an effective data management solution for advanced analytics experts and business users alike. It allows users to analyze a large variety and volume of data when and how they want. The DAaaS model provides users with on-demand, self-serve data for all their analysis needs.

However, to be successful, a Data Lake needs to leverage a multitude of products while being tailored to the industry and providing users with extensive, scalable customization. In short, it takes a blend of technical expertise and business acumen to help organizations design and implement their perfect Data Lake.

Tuesday, June 13, 2017

Key Product Management Principle - People are the core Asset


2017 is turning out to be a tumultuous year for the IT industry worldwide. Large, established IT companies such as Cisco, HPE, Dell-EMC and IBM are seriously cutting down costs. Unfortunately, companies tend to look at people as "expenses", and layoffs have become common.

A product manager often answers three main questions:

1. Where is the Product Today?
2. Where do we want to take the product & by what time?
3. How can the team get the product there?

Therefore, product managers have a different view when it comes to employees. From a product development perspective, people are "assets" - especially the engineering teams and customer-facing teams. The success of new product development depends on people.

Product managers treat people as true assets because they determine the success of new products - which create future revenue for the company. Without people, a new product will never reach its intended goal.

In IT, engineers - their intellect, skills, knowledge, character and integrity - are the true value in any organization. Because of the nature of IT product development, it is vital that product managers treat their engineering colleagues as true assets. A product manager must spend time with the team. This means talking with them, listening to their concerns and fears about the current phase of the project, and occasionally taking them out for lunch. (Lunch is a truly amazing way to motivate people!)

Product managers have to make team members feel valued. That is when engineers care more about the product they are working on. Face time with the team also helps product managers understand individuals and personally assist them. Time spent with the team pays financial dividends as high-quality products make it to market on time and with enough vitality to excite the sales force.

Closing Thoughts

When product managers focus on the people with whom they work, the products succeed as a result.