Saturday, December 31, 2011

What ails Innovation in India


Recently Shankar Subramanian, a friend and a well-known trainer in Bangalore, pointed out a not-so-surprising fact: India is steadily increasing the number of patents filed each year, but it still trails the USA, Japan, China, South Korea & France. This is hardly surprising when we consider the basic inputs needed for innovation.

So what are the necessary inputs for innovation?

There are seven basic inputs needed for innovation, listed below in the order of India's strengths

  1. Technically competent people
  2. Technology
  3. Understanding customer Needs
  4. Leadership commitment
  5. Financial resources
  6. Brainstorming or Innovation sessions
  7. Strong patent filing & protection system

When we look at the seven basic inputs, India has great strength in the top three: People, Technology and Understanding customer needs. But we are weakest in protecting Intellectual Property Rights (IPR). Without adequate IPR protection, there is little motivation to file for patents or even innovate - which weakens the leadership commitment to innovate, which in turn has a cascading impact on the financial resources and time allocated for innovation.

Innovation is not easy; it takes several inputs to succeed, and having only a partial set will not help. Most Indian companies put their best efforts and talent into operations and spend very little on innovation. But where there is focus, we have seen rapid innovation happen. The Tata Eka supercomputer, the Mahindra XUV500 and the Tata Nano are the best examples of low-cost innovation.

In India, the biggest missing piece has been a strong patent filing & protection system. The current patent filing system and patent protection in India is still not world class. Without a world class patent filing & protection system, India will lag behind other countries.

Friday, December 30, 2011

Is VDI the killer App for Cloud Computing?

Virtual desktop technology has been around for more than a decade, yet it has not found wide user acceptance. Back in the 1990's, a virtual desktop was the only means by which a UNIX user could access a Windows application on his workstation, and a virtual desktop (VNC) was the only means by which a Windows user could access a Unix application on his desktop/laptop.

The usage of virtual desktops was limited to a small group of users - mainly computer experts - and the common man had no use for a virtual desktop.

The early versions of virtual desktops were accessible only over VPN tunnels or on the enterprise network. With the emergence of cloud in 2008, it became possible to access virtual desktops over the Internet. This had a big implication - virtual desktops could now be accessed by a huge percentage of the population. But there were still limitations - people had no use for them. Users still needed a computer to access a remote virtual desktop - it was an absurd idea to have a desktop in order to access another virtual desktop! Most computer users had no need for two desktops, and VDI remained a niche technology.

In 2010, the introduction of the iPad and other tablet computers started to change things a bit. People rushed in head first to buy the latest gadget. After a few weeks, people realized that they now had three gadgets - a smart phone, a laptop, and a tablet - and that there was no real application for the tablet. Users could do nearly everything on a tablet that they could do on their laptops, but not everything. The tablets had one major drawback - limited storage. Users wanted to access the files they had on their laptops, and started using applications such as "DropBox" or "TeamViewer". The tablets also had other shortcomings - limited compute power - so if one wanted to download & edit photos/videos, they still needed their laptops.

All this created additional complexity for the user.

Users now have some of their data on their phones, some on their tablets and some on their laptops. Having multiple devices means additional baggage to carry around, and synchronizing data across multiple devices/platforms is a headache.

Thus the need for a Virtual Desktop has grown beyond the geek world to the common man.

The need for a virtual desktop is being felt by various users, but the need is yet to be clearly defined. Apple, Microsoft and a host of startups are jumping in with their unique solutions in this space. iCloud, Skydrive, Office365, the Azure platform, Zoho etc. are all trying to address this need for the common user.

As an early adopter of various technologies, I have used iCloud, Office365, Zoho, Google Apps, DropBox etc., and found that none of these applications work seamlessly across all platforms. I need a solution which works across an Android cell phone or tablet, iPhone & iPad, Blackberry, and Windows XP/7.0. (I am still on XP and I am waiting for Windows 8.0)

A virtual desktop could be one solution that works across all platforms. The current VDI solutions from VMware, Citrix or Microsoft still do not meet my basic requirement of working across all platforms.

As the year 2011 ends, I wait to see if a true VDI solution emerges in 2012 - one which can work across multiple platforms and make life easier for the layman. If such a VDI solution is developed, users will embrace tablet computers in large numbers and cloud computing will have its killer app.

Developing a Product Brief


The key to new product development is to have good product ideas and turn them into an accurate product plan. The product plan defines the way in which the product is designed and developed. It outlines the high level product concept, product requirements, product scope, and constraints - mainly cost and time.

While developing a totally new product, it is best to have a broad & vague product plan. This is done deliberately so that the product can evolve as people start working on developing it. Starting with specifications that are wide and vaguely defined leads to creative solutions which can be truly path breaking.

While developing the next version of an existing product, the product plan must be very narrow in terms of requirements, the requirements must be descriptive and specific, and the constraints sharply defined. This helps to get a better focus on product development - which is aimed at improving the current product and gaining competitive advantage.

Defining Needs

The vague product ideas must be converted into a product plan - to develop a tangible product. The first step in this process is to define the user needs.

Documenting the user needs and the customer use cases - i.e., what the product is supposed to do for the customer or how the customer will use the product - is the most important stage in product design.

Defining needs is an art. There are no predefined ways to do it right. Every product type has its own set of complications and challenges, so to define needs, one has to blend a certain amount of imagination with people's necessity and then come up with the product needs.

Identifying customer needs is a lot tougher than it sounds. Customers are rarely aware of their needs, so they often disguise their needs as wants. Identifying the real needs involves a deeper study of customers' problems and/or observing customers very closely - an anthropology study. (see Customer Anthropology)

If one gets this stage wrong, it results in poorly designed products - products that look bad or work badly. Often the mistakes are not obvious, and that results in multiple iterations to get to the right product. If the product needs are defined wrongly, it costs a lot of money, time, resources and, most importantly, lost opportunities.

Examples of successful product designs

Great product designs are born from a deep understanding of product needs. Take a look around for some of the successful products and you can learn how the needs have been met in the product design.

Leatherman Tools


Tim Leatherman came up with a novel idea for a multipurpose tool while traveling across Europe in 1975. Back then, the Swiss Army Knife was the gold standard and the only player in multipurpose tools. But Tim Leatherman had a need for pliers, which the Swiss Army Knife did not have. Tim took that basic need for a multipurpose tool centered around a pair of pliers and developed a range of products by 1983. Today, Leatherman tools are just as popular as the Swiss Army Knife.

See:Outdoing the Swiss Army knife (http://money.cnn.com/2007/09/27/smbusiness/100123045.fsb/index.htm)

Bose Speakers



In 1956, Amar G. Bose, then a student at MIT, was disappointed with all the high end audio systems. Bose spent a lot of money on a high end audio system, yet he had to constantly adjust the equalizers to make the sound better every time he changed the room or the type of music. Bose spent the next eight years understanding the user needs; his search led him to the field of psychoacoustics - and that led to breakthrough speaker designs: the Bose 901 speakers, released in 1968.







See: http://en.wikipedia.org/wiki/Bose_Corporation

The stories of Tim Leatherman and Amar Bose lead us to ponder why it takes so much time to design new products. Tim Leatherman took nearly 8 years to develop his breakthrough product and Bose took 12 years!

The basic problem in developing product needs is that people do not know where to start or where to stop.

In today's corporate world, no product development project will be allowed to run for so long. Today the need is for fast product development. This is addressed by defining the product criteria.

Product criteria are a set of descriptions that define product requirements. Product criteria essentially define the end state of the product by describing what the product should do in clear and unambiguous terms. For example, when defining the size of the speakers, the product requirement should be very specific and avoid terms such as "small" or "big". Instead, the size of the speakers will be defined as "The size of the speakers must not exceed 24 inches in height, 18 inches in width, 10 inches in depth."

Many of the requirements in the product criteria may contradict each other - creating sharper constraints in design. For example, while defining the requirements for a car, the design criteria may call for a fuel efficiency of 30 Kms/Litre while also calling for a higher engine power of 150HP. These apparent contradictions in the product criteria call for new thinking and may lead to innovation.

It is therefore very important to establish the product requirement criteria at the beginning of the project and then constantly review the project against those criteria.

In any product development there will be opposing product criteria, and it is important to let the designers know the priority of these criteria. The criteria list must be prioritized from top to bottom, thus allowing engineers to make the appropriate tradeoffs. In our car design example, the criteria are prioritized as:

1. Fuel Economy of 30 Kms/Litre
2. Engine power of 150HP
3. Total Weight of the car should not exceed 500Kgs
...

Prioritization helps in making design decisions. If the prioritization is done as above, engineers are likely to compromise on other things to meet the fuel economy requirement, resulting in a car known for its fuel economy. If engine power were the main criterion, then engineers would design a car known for its power, which may not have great fuel economy.

By prioritizing, designers can put the right emphasis on the most important criteria and have some clarity when making difficult decisions.
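To make this concrete, here is a minimal sketch (in Python, with hypothetical criteria names and target values) of how a prioritized criteria list can be captured as structured data and checked against a design proposal in priority order:

    from dataclasses import dataclass

    @dataclass
    class Criterion:
        priority: int          # 1 = most important
        name: str
        metric: str            # always a standard unit, never "big" or "small"
        target: float
        higher_is_better: bool

    # Hypothetical criteria for the car example above
    CRITERIA = [
        Criterion(1, "Fuel economy", "km/litre", 30.0, True),
        Criterion(2, "Engine power", "HP", 150.0, True),
        Criterion(3, "Total weight", "kg", 500.0, False),
    ]

    def review(proposal):
        """Report, in priority order, whether a proposal meets each criterion."""
        for c in sorted(CRITERIA, key=lambda c: c.priority):
            value = proposal[c.name]
            ok = value >= c.target if c.higher_is_better else value <= c.target
            print(f"P{c.priority} {c.name}: {value} {c.metric} -> "
                  f"{'meets' if ok else 'misses'} target of {c.target}")

    review({"Fuel economy": 28.5, "Engine power": 160.0, "Total weight": 520.0})

Writing the criteria this way forces every review to walk the list from the top, so trade-off discussions always start with the highest-priority requirement.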

In short:

1. Product criteria must be descriptive.
2. They must sharply define a product's requirements.
3. They must be written down to communicate the requirements to the team.
4. Avoid vague or loose descriptions: big, small, strong, etc. Product criteria must be described in standard metrics.

Closing Thoughts

Successful product design starts with a deep understanding of the product needs. The product needs should then be converted into a written product brief which documents the product requirement criteria in a prioritized manner.

Monday, December 26, 2011

Why Mobile Smart devices are yet to gain enterprise acceptance


Today users are wholeheartedly embracing next-generation smart phones and tablets. Employees are bringing them to work and using these devices extensively. But corporate IT departments are yet to accept these devices. Most IT departments allow only corporate email on these smart devices. At most work places, these smart phones or tablets cannot be connected to the corporate network or even access corporate data through VPN.

The main challenge for corporate IT has been security management on these mobile devices. Unlike laptops, Apple iPhones, Apple iPads, or Google Android devices do not have software functions built for corporate level security management. As a result, these devices cannot be used on a corporate network - they pose a serious security risk. But not allowing smart phones or tablets poses a bigger risk to the business. Businesses that do not embrace new technology will become obsolete and lose their competitive edge. Having mobile smart devices in the hands of employees can create competitive advantages in terms of better customer service, faster response to issues and lower operating costs. So, in other words, corporate IT departments are forced to look at various solutions to this problem.

There are two possible solutions: Virtual Desktops, or an Information Security Management (ISM) app on the mobile device.

Virtual desktop technology is relatively new and disruptive. VDI forces massive changes in IT management and requires new server hardware, new networks and new storage. In short, VDI is expensive and needs a lot of planning and a staged, carefully executed implementation. In the long run VDI will be a perfect solution, but there is a need for an interim solution.

The interim solution is a simple ISM app running on the smart phone.

In my earlier blog titled "Product Management - How to beat iPad", I had written about creating a tablet with a strong built-in security system. In this article, I am taking that idea a little further - by defining what the security requirements are for a mobile smart device.

ISM Application Solution

Information Security Management software is an ideal solution for managing mobile smart devices. A similar ISM solution will be needed to manage the Virtual Desktop instances.

Let's take a look at what functions the ISM app is expected to provide.

1. Memory resident software
The application has to be 'memory resident' - i.e., the device OS is not permitted to shut down this application. The ISM app starts on boot-up and shuts down only when the device is shut down.

2. Uses a multi-part authentication system & then allocates privileges based on user rights.
A secure multi-part authentication system (as a service) is used to authenticate the user. Once the user is authenticated, the user privileges & security settings on the device are set accordingly. The ISM software implements policy based access control, thus allowing large scale yet flexible deployments. (Also see: Need for a Central Multi-Factor Authentication as a Service)

3. Repeated authentication failure (10 times) results in data wipe-off.
If the user fails the authentication checks for a preset number of attempts, the ISM software will wipe out all data on the device. Since the data gets synchronized with a remote server, loss of user data is minimized while corporate data is secured.
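As a rough sketch of how this policy could be enforced on the device (the threshold constant and the wipe/sync callbacks below are placeholders, not any vendor's actual API):

    MAX_FAILED_ATTEMPTS = 10   # preset by corporate policy

    class AuthGuard:
        """Counts failed authentications and triggers a wipe at the threshold."""
        def __init__(self, wipe_device, sync_to_server):
            self.failed = 0
            self.wipe_device = wipe_device        # callback: erase local data
            self.sync_to_server = sync_to_server  # callback: push pending data

        def on_auth_result(self, success):
            if success:
                self.failed = 0
                return
            self.failed += 1
            if self.failed >= MAX_FAILED_ATTEMPTS:
                # Data is mirrored to the server, so the user loses little;
                # the copy on the device is destroyed.
                self.sync_to_server()
                self.wipe_device()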

4. Run and manage compliance audits
The ISM app can be configured remotely to run compliance audits and report policy violations, application inventory, compliance status tracking etc. to the remote server.

5. Image lock - users cannot add/delete software programs or apps.
Based on user privilege settings, users can be allowed or disallowed to add/delete software programs/apps. For each user type, there will be one golden image which can be pushed remotely to reset the user device to a set configuration.

6. All local data is synchronized automatically when connected.
All user generated data is temporarily stored on the device and is synchronized automatically when the device is connected to the Internet. This data synchronization happens in the background without user intervention.

7. All local data on the device is encrypted to the AES-256 standard
Data stored on the device - emails, calendar, contacts, documents etc. - is encrypted using a secure key generated by the multi-part authentication system. Only the authentication system knows the decryption keys. This is a critical security requirement. Currently, only Blackberry has such a system.
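To illustrate the encryption requirement, here is a minimal Python sketch using AES-256-GCM from the 'cryptography' package; it assumes the 32-byte key is issued by the multi-part authentication service, and real key handling on a device would be considerably more involved:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt_record(key, plaintext):
        nonce = os.urandom(12)                      # unique nonce per record
        return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

    def decrypt_record(key, blob):
        nonce, ciphertext = blob[:12], blob[12:]
        return AESGCM(key).decrypt(nonce, ciphertext, None)

    key = AESGCM.generate_key(bit_length=256)       # stand-in for the issued key
    blob = encrypt_record(key, b"confidential contact list")
    assert decrypt_record(key, blob) == b"confidential contact list"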

8. Permits remote application management: upgrades & patches, software additions/deletion
Remote device configuration and application management is done in the background without disrupting the user.

9. SMS/MMS, IP traffic supervision
All data traffic on the device is monitored for potentially dangerous threats; logs are created and uploaded to the central server for analysis. The threat from hackers is significant, so to prevent major breaches, all data traffic is monitored for suspicious behavior and the logs are uploaded to security management solutions such as RSA enVision.

10. Location tracking and remote data wipe-off for lost/stolen devices
To prevent data loss in case of a lost or stolen device, the ISM software can be remotely triggered to wipe out all user data. The ISM app can also broadcast its geographic location to the server when triggered remotely. This functionality currently exists in Blackberry phones.

11. Remote administration and control for help desk.
In case the user calls the help desk, the help desk employee must be able to take control of the device and solve the user issue.

12. Internet firewall as per corporate policy
Organizations must be able to protect users from unauthorized or unsecure websites. The ISM firewall maintains lists of permitted and non-permitted websites, and controls access to the Internet based on corporate policies.
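A toy sketch of the policy check (the lists are hypothetical; a production firewall would enforce this at the network layer rather than per URL string):

    from urllib.parse import urlparse

    ALLOWED = {"intranet.example.com", "mail.example.com"}   # hypothetical policy
    BLOCKED = {"filesharing.example.net"}

    def is_permitted(url):
        host = urlparse(url).hostname or ""
        if host in BLOCKED:
            return False
        return host in ALLOWED          # default-deny outside the allowed list

    print(is_permitted("https://mail.example.com/inbox"))     # True
    print(is_permitted("http://filesharing.example.net/x"))   # False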

Control, compliance, and convenience are the keys to a successful ISM application. User functions must not be disrupted or hindered, while all the security the user and the organization need is still provided.

ISM Central server

To monitor & manage the mobile devices, the ISM app connects to the ISM server application over VPN/Internet. The ISM server is essential to help IT admins plan and maintain all the mobile deployments. Administrators can centrally implement management policies, make configuration changes, and monitor the mobile devices. The ISM server also provides a dashboard to help operators manage mobile devices.

The function of the ISM server is to manage the mobile devices. The functional features of the ISM server will be described in a later blog.

Closing Thoughts

Today there is a strong need for an ISM application - one which will enable all mobile devices to be used on a corporate network without compromising safety and security. In this article I have briefly outlined the functionality of such a software system. The system has two components: the client app that resides on the mobile device and a server that runs in a data center. I have limited the scope of this article to the client side app and will write about the server functionality in future.

Control, compliance, and convenience are the keys to a successful ISM application.


Apple and power of 'i'

Apple in the last decade has transformed the computing world into the world of 'i': iMac, iPod, iTunes, iPhone, iPad, iCloud, iOS. In this world of computing, 'i' stands for the individual.

Every 'i' product is designed for the single individual. Apple never concentrated on the enterprise; it has a laser sharp focus on the individual, which leads to the next Apple product: iTV.

iTV is an individualized TV experience with these main characteristics, all surrounding the 'i':

1. Internet based content delivery - through iTunes (Wi-Fi or LTE).
2. Individuals can rent/buy viewing rights for a program.
3. Individuals can watch any program any time, anywhere, at their convenience.
4. iCloud stores all the videos.
5. Individuals can watch the video programs on the 'i' products: iTV, iMac, iPad, iPhone, iPod or a PC.

Trends driving iTV

Today TV has become tailored for the individual. Families no longer watch TV together; instead each member of the household watches at their own convenience. This is the biggest factor driving the move towards individual entertainment.

The Internet is everywhere - high speed wireless broadband 3G & 4G (IPv6) networks will cover 80% of the population of the G12 countries, which represents 99% of Apple's potential market.

Wasted content - 80% of content gets wasted, and losses in the current distribution model are driving up the cost of cable TV. With iTV, the cost per user will go down, bringing Internet economies to TV/Video/Movies.

iCloud is a very efficient way to store videos. Since multiple users will have the same video, iCloud will store one copy and provide links/privileges to all those who have bought the program.

Cable TV has failed to innovate. Cable TV created Video-on-Demand, and the technology dates to the 1990's; since then, cable TV companies have not innovated on it and have slacked off. Cable TV companies had good broadband services as well - but failed to create a killer IPTV offering. The current opportunity for Apple is actually a missed opportunity for the cable TV companies.

Arguments for iTV

1. Why watch a program only when the broadcaster schedules it? I want to watch it at my convenience and on a device of my choice.

2. Why watch new movies only in the theater? I want to watch them at my place.

3. I will pay for what I use, and not for the unnecessary bundle of programs

4. The Internet will allow content producers to link directly to the viewer through automatic distribution.

5. The current distribution channels are inefficient and wasteful. Cable TV and satellite TV charge too much and are too restrictive.

Major players in the Internet TV era

Internet TV will create new winners. The leaders of the cable TV era - Comcast, DishTV, SkyTV etc. - will have to dramatically change their business models and develop new technologies to compete and survive. My guess is that most of the cable TV companies will die.

Internet TV calls for strong cloud computing capabilities - to deliver videos on demand over high speed Internet. The companies will also need strong relationships with content producers. In view of the two core requirements - cloud computing & access to content - the clear favorites in the Internet TV era are:

1. Apple
2. Amazon
3. Google/YouTube


Big winners

iTV is focused on the 'individual', therefore users will be the ultimate winners. The Internet provides the most efficient distribution system for content producers to reach consumers. Distribution of video content will no longer be held hostage by TV channels and cable TV companies.

The new content distribution technology will win big. Apple, Amazon & Google are currently positioned to take full advantage of this shift.

Finally, the other hidden winners will be the Internet service providers and the network/storage equipment makers (Cisco, EMC, Juniper, NetApp, Dell, HP etc.)

In short the winners are:

1. Individuals
2. Content producers
3. Internet based distributors - Apple, Amazon, Google
4. Network providers - ISP, and Network equipment providers

Death of TV Channels

The term TV channel - a streaming signal that broadcasts video programs - is dead. The new paradigm is programs. People do not watch a channel; people want to watch programs. In the iTV era, people can watch programs of their choice, when they want, where they want, and on the device of their choice. This leads to an explosion of programs available to viewers on demand. Millions of programs will be available in all languages - all on demand.

TV channels of today are like music CDs - each channel has one or two good programs and the rest is filled with junk or re-runs.

Today a basic version of such an iTV system exists with YouTube. But YouTube suffers from several disadvantages:

1. Lack of clear cut organization of content.
2. No uniform quality of content.
3. Too many amateur videos.
4. Lack of parental controls.
5. Spam content - where the title does not match with the content.

So if Google can get its act together and streamline its video content, it can become a major player in this market.

Biggest Losers

With the death of TV channels, cable TV and satellite TV service providers will have a tough time. Television set manufacturers will have to adapt and develop Internet ready televisions; those manufacturers who refuse to introduce Internet ready TVs will go the way of the Dodo.

iTV creates new opportunities

The Internet will create new markets for creative, niche content developers. We will see a wave of new content providers. The current distribution system makes it impossible for small and niche content developers to reach a wide audience - a reach that iTV can provide.

Closing Thoughts

Apple's iTunes/iTV can change things by organizing video content better and ensuring the quality of video programs. With Apple's seal of approval, users will be more comfortable with iTunes/iTV than with YouTube.

Internet finds the next killer App


Companies are rapidly developing new Internet delivery technologies, and as these new technologies reach mainstream users, customers will require new applications that can make use of the increasing Internet speeds.

The Internet evolved in stages, and at each stage there was a killer app which made customers embrace it. Now as 4G wireless Internet gains market acceptance, there is a need for a new killer app.

Looking at the evolution of Internet technology, here is how it evolved:

Rev1: Research lab connectivity, 2.6Kbps-9.6Kbps
Universities in the US started using the Internet as a novel way to share data. The main applications were Archie, Jughead & Veronica - the first versions of search technology.

Rev2: Early adopters, 14.4Kbps-33Kbps
With faster dial-up connectivity, techies and early adopters developed the Bulletin Board Service (BBS). BBS became the killer app. It allowed users to log into a computer, share files, and post information on bulletin boards.

Rev3: WWW & AOL: 56Kbps, eMail, AOL & WWW
The Internet took off in a big way with 56Kbps dial-up speeds and AOL eMail. American users embraced the Internet in large numbers as the World Wide Web swept the world. E-commerce was just coming up on the horizon when the next version of the Internet was developed.

Rev4: Cable Broadband & DSL. High speed 2-8 Mbps
This was the era of broadband Internet. E-commerce was the driving force which made people cancel their AOL accounts and sign up for DSL or cable broadband. The real killer apps for this high speed Internet were peer-to-peer networking and MP3. People all over the world embraced high speed Internet for free music and the convenience of e-commerce. Google, Amazon and iTunes emerged as the ultimate winners of this technology.

Rev5: 3G/WiMax, GPRS, WiFi. The era of Mobile Internet
Wireless broadband opened the doors for a huge portion of the world's population. People with smart phones could now access the Internet anywhere on their mobile devices. The mobility attracted billions of people all around the world, who embraced Web 2.0 in huge droves. Facebook became the killer app for the mobile Internet, and Google established itself as the king of search engines.

Rev6: 4G LTE,100Mbps: The era of ultra broadband Internet.
The year 2011 marks the beginning of the era of ultra broadband Internet. 17Mbps and faster Internet speeds meet the iPad & iTV (Apple TV). Cloud computing and Video on Demand will be the killer applications for this ultra-broadband Internet. High Definition TV content now has a distribution arm with cloud computing and ultra-broadband Internet.

Internet technology has been a game changer. Every 36 months, faster Internet speeds are introduced and application developers are quick to capitalize on the new bandwidth. Companies that make the best use of this will win; others will lose.

Developing New Product Ideas

Every year, people all over the world submit a little over 2 million patent applications. Most of these applications come from companies that are in the business of developing new products, and patenting is essential to the success and survival of these businesses.

These patent applications are just the tip of the iceberg of ideas. Hardly 1% of product ideas get converted into patents. So the question is: how do people get ideas?

As a product manager, innovator, engineer and a creative individual, I can share some insights. Essentially there are two distinct paths for getting new ideas.

1. Necessity
2. Imagination

Necessity

Necessity is the mother of all invention. Only when there is a need will people think of developing something new; otherwise people are happy using the same old stuff.

The best part is that almost everyone on this planet has a necessity for something that does not exist. So with 7 billion people on this planet, one can come up with at least 7 billion ideas!

Necessity - identifying people's needs - is the easiest way to come up with new product ideas. Just go around and ask people, and you will get ideas. Even a 5 year old child can talk about their need for a unique type of toy.

Most product companies (i.e., product managers) go around asking their current customers what they need in the future. Customers, being the main users of the product, always have ideas to improve or extend the current products. Collecting a bunch of such ideas and distilling them will lead to new product development.

New products that are developed by identifying people's needs are often incremental in functionality and design. These products are also low risk - as there is a greater chance that customers/users will accept them. Thus, in the overall market place, products that are developed to meet specific customer needs have a high value.

Imagination

Imagination is one capability that sets us humans apart from the other animals in the world. We are blessed with the power of imagination. The ability to think of something new - something which does not exist - is a powerful means of developing new products.

So on a planet of 7 billion people, every person who has imagination can potentially develop new product ideas - i.e., at least 7 billion ideas! Yes, it can be 7 billion ideas per day.

People's imagination is often triggered by what they see, feel & experience. Everyone is capable of imagination and can imagine new products.

What triggers human imagination?

There are many things that can trigger human imagination: personal experience, viewing things differently, playing and exploring, subversion/adaptation - taking a regular product and converting it for an unintended use - self-reflection, and knowledge of the world around us.

There is no specific technique that triggers imagination; imagination happens only when we put our thoughts into it.

Making an Idea into a product

But to really develop a product, it takes more than imagination - it takes persistence. A designer, artist or inventor can take a raw idea and develop a product out of it. This ability to persist is the key differentiator between people who merely think up ideas and those who develop products.

Only when persistence is combined with the right skills and knowledge do new products emerge.

Sunday, December 25, 2011

What's driving Virtualization and cloud computing

Every now and then people ask me this question: What's driving Virtualization and cloud computing?

As I have answered this question repeatedly over time, the answer to this question has been changing to accommodate the different reasons and value propositions for virtualization and cloud computing.

The first answer - which was valid four years ago - was cost saving. The cost saving due to virtualization was a sufficient reason to virtualize the IT infrastructure.

Cloud technology was primarily limited to SaaS applications and the value of cloud was mostly in the ease of use, and pay for what you use.

The second answer was Green IT. A virtualized IT infrastructure resulted in server consolidation, and that lowered the power bill. Having all the applications running in a centralized data center lowers the cost while increasing reliability; thus cloud computing was becoming cheaper than a hosted solution.

As time progressed, technology became more affordable and cell phones became smarter. People could now access the Internet from their cell phones. This created a need for cloud computing: SaaS applications could now be accessed from anywhere over a plethora of mobile devices. That was my third answer - mobile computing: anywhere, anytime, any device.

As year 2011 ends, I find two more reasons for virtualization & cloud computing.

1. Enormous compute power
2. Intelligent and easy to use mobile compute devices

When I look around my house, I notice that I have enough compute power to meet the e-learning needs of 30-50 school kids, but I am barely using a fraction of the installed capacity. The compute power in my quad core desktop and dual core laptops is all wasted. Looking at things from a slightly different perspective, I find that my iPad meets all my computing needs.

In the last three years, Intel & AMD have released ever more powerful CPUs. The 8 core and 12 core CPUs of today are way too powerful for any individual user. Desktop software applications do not need that amount of compute power. So, in other words, I can do away with all the laptops and desktops and replace them with a simple tablet computer.

These powerful computers are best utilized when they are virtualized, allowing multiple software workloads to run simultaneously - thus consuming all that enormous compute power. The best way to do that is to have these powerful computers run cloud applications in a virtualized environment, while end users use simple & easy to use end devices to access the cloud.

Over the last four years, smart phones have evolved enormously. The initial qwerty keyboard based phones with tiny screens have given way to larger multi-touch screens. The phones today can take voice commands, or one can even write using a stylus. The phones can serve as a projector, record videos, edit pictures and even edit videos. This rapid improvement in phone capability has essentially made them the primary computing devices for many people.

In the near future, we will see mobile devices that can project a large image on the wall and show 3D videos as well.

Tablet computers & netbooks now have the right form factor and compute power to replace laptops - when used with web applications. With virtual desktops and DaaS, users can still have the look and feel of a desktop and the security/reliability of the cloud on their mobile devices.

Having access to a virtual computer with unlimited capabilities is a very powerful reason for users to opt for cloud computing.

Challenges in Operations Management of Virtual Infrastructure

Corporate IT has embraced virtualization as a means to save on costs, modernize infrastructure, and offer a greater range of services. Virtualization resulted in consolidation of resources & workloads - which led to productivity gains, improved efficiency of application performance and IT infrastructure, and reduced operating costs.

Virtualization broke the traditional silos of dedicated computing resources for specific applications and also broke the silos of operations management by forcing IT administrators to look at servers (compute), networks and storage as a unified resource pool.

This has created new operations management challenges - which cannot be solved by traditional approaches. The new challenges are mostly in the following areas:

  1. Virtualization eliminated the IT silo boundaries between applications, network, compute and storage. This has made the IT stack more sensitive to changes in, and interference between, its components. For example, changing a network setting could have an adverse impact on applications or data store speeds, and changes in storage configuration could result in unresponsive applications.

  2. Virtualization can easily push resource utilization beyond safe operating boundaries, causing random performance issues or random hardware failures.

  3. Applications running in a virtualized environment see dynamic changes in the resources available to them. As one application starts to consume more resources, the other applications see a corresponding reduction in the resources available to them. This causes random performance variations and in many cases can disrupt entire business operations.

Managing Virtualized infrastructure needs new tools and technologies to handle these new factors of complexity. Given the dynamic nature of a virtualized IT Infrastructure, the new management tools must be: Scalable, Unified, Automated, Proactive and User friendly.

It is also very important to ensure that the cost of virtual infrastructure management tools is lower than the cost of failures. Though this sounds simple, in reality the cost of infrastructure management can potentially rise to the sky - so one needs to be cautious in choosing the right set of tools.

Traditional Operations Management

Ever since the beginning of IT operations, management of the IT infrastructure has been organized around resource silos, with a dedicated team to manage each:

1. Servers - Physical machines & Operating systems
2. Network - LAN, VLAN, WAN
3. Storage - SAN, NAS, Storage Arrays
4. Applications - CRM, ERP, Database, Exchange, Security etc. In large organizations there are teams to manage each application type.

Each of the resource management silos operated independently and had its own operations management cycle: monitor, analyze, control and change resources. Each team had its own set of tools, processes and procedures to manage the resources that came under its purview.

Since each group had little idea of the needs and requirements of the other groups, they often created excess capacity to handle growing business needs and peak loads.

This silo based approach led to inefficiencies and wastage. Virtualization eliminates such wastage and improves efficiency.

Virtualization disrupted Operations Management

Virtualization is a game changer for operations management. Virtualization eliminates the boundaries between the compute, storage and network resource silos, and views the entire IT estate as a single resource pool.

The hypervisor partitions the physical resources into Virtual Machines (VMs) that can process workloads. This resource sharing architecture dramatically improves resource utilization and allows for flexible scaling of workloads and of the resources available to those workloads.

Virtualization creates new operations management challenges:

1. Virtual machines share the physical resources. So when one VM increases its resource usage, it will impact the performance of applications running on other VMs that share the same resource. This interference can be random & sporadic - leading to complex performance management challenges.

2. The hypervisor has an abstract view of the real physical infrastructure. Often the real capacity of the underlying infrastructure is not what the hypervisor sees; as a result, when new VMs are added, resources get under-provisioned and major performance bottlenecks appear.

3. The hypervisor allows consolidation of workload streams to get higher resource utilization. But if the workloads are correlated - i.e., an increase in one workload creates a corresponding increase in another workload - then their peaks become compounded and the system runs out of resources and/or develops enormous bottlenecks (see the sketch after this list).

4. VMs need dynamic resource allocation in order for applications to meet performance & SLA requirements. This dynamic resource allocation requires active and automatic resource management.

5. Because the hypervisor's view of the physical infrastructure is abstract, configuration management appears overly simple at the hypervisor layer - but in reality, configuration changes have to be coordinated across the different resource types (compute, network, storage).

6. Virtualization removes the silo boundaries across the resource types (compute, network & storage). This creates cross-element interference on the applications. So when an application fails to respond, the root cause of the failure cannot be easily identified.
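To see why correlated workloads (point 3 above) matter, the toy calculation below compares the combined peak of two workloads that rise and fall together with that of two workloads whose peaks are staggered; all utilization numbers are made up for illustration:

    # Hourly utilization samples, as a fraction of one host's capacity
    sales      = [0.2, 0.3, 0.7, 0.9, 0.8, 0.3]
    correlated = [0.1, 0.2, 0.6, 0.8, 0.7, 0.2]   # peaks in the same hours
    staggered  = [0.8, 0.7, 0.2, 0.1, 0.2, 0.7]   # peaks in different hours

    def combined_peak(a, b):
        return max(x + y for x, y in zip(a, b))

    print(combined_peak(sales, correlated))   # 1.7 -> far beyond 100% of the host
    print(combined_peak(sales, staggered))    # 1.0 -> the host just copes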

Virtualization creates a new set of operations management challenges, but solving them with seamless, cross-domain management solutions will reduce costs by automating various management functions and eliminating the costly cross-silo coordination between different teams. Managing a virtualized infrastructure will need automated solutions that reduce the need for the labor intensive management systems of today.

Virtualization and Utilization

The greatest benefit of Virtualization is in resource optimization. IT administrators were able to retire the old and inefficient servers and move the applications to a virtualized server running on newer hardware. This optimization helped administrators reduce the operating costs, reduce energy utilization, and increase utilization of existing hardware.

Cost saving achieved by server consolidation and higher resource utilization was a prime driver for virtualization. The problem of over-provisioning had led to low server utilization. With virtualization, utilization can be raised to as high as 80%.

While the higher utilization rate may sound exciting, it also creates major performance problems.

Virtualization consolidates multiple workloads on a single physical server - thus increasing the utilization of that server. But workloads are never stable - they tend to have peaks and lows. So if one or more workloads hit a peak, utilization can quickly reach 100% and create gridlock for the other workloads, adversely affecting performance. Severe congestion can lead to data loss and even hardware failures.

For example, virtual machines typically use a virtual network: virtual network interfaces, subnets, and bridging to map the virtual interfaces to the physical interfaces. If a server has a limited number of physical network interfaces, then running multiple VMs with network intensive applications can easily choke the physical interfaces and cause massive congestion in the system. Similar congestion can occur with CPU, memory or storage I/O resources as well.
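A back-of-the-envelope check of that situation (all figures below are assumptions for illustration): add up the expected network demand of the VMs on a host and compare it with the capacity of the physical NICs.

    PHYSICAL_NICS = 2
    NIC_SPEED_GBPS = 1.0                      # per physical interface

    # Assumed peak network demand per VM, in Gbps
    vm_demand_gbps = {"web01": 0.4, "web02": 0.5, "db01": 0.8, "backup01": 0.9}

    capacity = PHYSICAL_NICS * NIC_SPEED_GBPS
    demand = sum(vm_demand_gbps.values())

    print(f"Aggregate VM demand {demand:.1f} Gbps on {capacity:.1f} Gbps of NICs "
          f"(oversubscription {demand / capacity:.2f}x)")
    if demand > capacity:
        print("Peak demand exceeds physical capacity - expect congestion.")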

The resource congestion problems can be intermittent and random, which makes it even harder to debug and solve resource contention issues.

To solve these performance problems, one has to first find the bottleneck behind each of them.

In a virtualized environment, finding these performance bottlenecks is a big challenge as the symptoms of congestion would show up in one area - while the real congestion could be somewhere else.

In the non-virtualized world, resource allocation was done in silos, with each silo accommodating all fluctuations of its workloads. This led to excess capacity - planned to handle peak loads - so performance management was never a major issue. But with virtualization, active performance management is critical. The virtual infrastructure must be constantly monitored for performance and corrective actions must be taken as needed - by moving VMs from a loaded server to a lightly loaded one, or by dynamically provisioning additional resources to absorb the peaks.

Dynamic provisioning requires a deeper understanding of resource utilization: which application consumes which resource, and when those resources are being used. To understand this better, consider this example:

In an enterprise there are several workloads, but a few have a marked peak behavior. The sales system has peak demand between 6 PM and 9 PM, the HR system between 1 PM and 5 PM, and the inventory management (ERP) system between 9 AM and 1 PM. On further analysis, it is found that the sales system's peak demand is actually on the network and storage IOPS, the ERP system's peak demand is on servers and storage, and the HR system's peak demand is on servers and storage IOPS.

Knowing this level of detail helps system administrators provision additional VMs for ERP between 9 AM and 1 PM by borrowing the VMs allocated to HR, while the VMs allocated to ERP can be moved to HR between 1 PM and 5 PM.
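A sketch of how that time-of-day knowledge could drive a simple rebalancing rule (the system names and peak windows come from the example above; everything else is assumed):

    # Peak windows on a 24-hour clock
    PEAK_WINDOWS = {
        "erp/inventory": (9, 13),   # 9 AM - 1 PM
        "hr":            (13, 17),  # 1 PM - 5 PM
        "sales":         (18, 21),  # 6 PM - 9 PM
    }

    def rebalance(hour):
        """Suggest which systems should receive spare VMs at a given hour."""
        peaking = [s for s, (start, end) in PEAK_WINDOWS.items() if start <= hour < end]
        idle = [s for s in PEAK_WINDOWS if s not in peaking]
        return {"give_vms_to": peaking, "borrow_vms_from": idle}

    print(rebalance(10))   # ERP/inventory is peaking; borrow from HR and sales
    print(rebalance(14))   # HR is peaking; borrow from ERP/inventory and sales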

Solving the sales peak load problem may require additional networking hardware and more bandwidth - which will result in lower utilization. It is better to have excess capacity sit idle during off-peak times than to have performance bottlenecks.

There can be more complex cases: the HR system issues many random writes while the sales system is issuing a series of sequential reads; in such a case the sales application will see delays or performance degradation even though both workloads are normal. Here the SAN network gets choked with writes from the HR system, but the performance problem is reported by the sales application administrator.

Resolving such correlated workload performance issues requires special tools that provide deeper insight into the system. Essentially, the IT administrators must be able to map the application to the resources it uses and then monitor the entire data path for performance management.

Fundamental Operations Management Issues with Virtualization

Virtualization creates several basic system management problems. These are new problems and these cannot be solved by silo based management tools.

  1. Fragmented Configuration Management

    Configuration & provisioning tools are still silo based: there are separate tools for server configuration, network configuration and storage configuration. In organizations, this has led to fragmented configuration management - which is not dynamic or fast enough to meet the demands of virtualization.

  2. Lack of Scalability in monitoring tools

    Fault and performance monitoring tools are still silo based, and as the infrastructure gets virtualized, the number of virtual entities increases exponentially. The number of virtual entities is also dynamic and varies with time. Silo based, domain specific management tools are intrinsically non-scalable for a virtual system.

  3. Hardware Faults due to high utilization

    Virtualization leads to higher resource utilization - which often stresses the underlying hardware beyond its safe operating limits and eventually causes hardware failures. Such stress cannot be detected by current monitoring systems, so administrators are forced to do breakdown repairs.

  4. Hypervisor complexities

    A typical virtualization environment will have multiple virtualization solutions: VMware, Xen etc. The hypervisor mechanisms themselves create management problems (see: http://communities.vmware.com/docs/DOC-4960 ). Taking a multi-vendor approach to virtualization increases hypervisor management complexity.

  5. Ambiguity

    Performance issues arising in a virtualized environment are often ambiguous. A fault or bottleneck seen in one system may have its root cause in another system. This makes it mandatory to have complete cross-domain (compute, network, storage) management tools to find the root cause.

  6. Interference

    VM workloads share resources. As a result, increasing one workload can interfere with the performance of another workload. These interference problems are very difficult to identify and manage.

Closing Thoughts

Virtualization is great for saving costs and improving resource utilization. However, with virtualization one may have to fundamentally change the way the IT infrastructure is managed. New workflows will have to be developed and new management tools will be needed.

Wednesday, December 21, 2011

Apple’s $500 Million acquisition of Anobit

Apple recently announced the acquisition of Anobit - a semiconductor design firm in Israel. Anobit is mainly into R&D of better flash memory chips, which is of great interest to Apple.
Apple is a big user of flash memory. All its products - iPod, iPad, iPhone, MacBook Air - use NAND flash for storage, and this acquisition is a clear indication of Apple's future strategy of using flash memory as the basic storage for all its products.

Currently, Apple buys a lot of NAND flash memory from a host of suppliers, but these are generic chips. Apple is acquiring Anobit to create unique intellectual property in NAND flash memory management - which will enhance the performance of Apple's products.
In 2010, Apple acquired Intrinsity, a CPU design company based in Austin, Texas. Today the A5 CPU designed by the folks at Apple's Austin CPU design center forms the computing core of its iPads and iPhones. Though the A5 is based on ARM CPU designs, the graphics processing capability built into the chip ensures superior video/graphics quality - a key product differentiator for iPhones and iPads.

Apple’s Future strategy with this acquisition

Today, there are a few major performance bottlenecks in iPhones and iPads - most of them related to memory latency and memory errors. By acquiring Anobit and leveraging its flash memory designs and technology, Apple can create unique competitive advantages for itself.

Acquisition of Anobit helps in two ways:

1. Improving memory latency. By integrating Anobit's memory management technology with its CPUs, Apple can create ultra fast chips - which will power tablets, the MacBook Air or other thin clients. Apple can also use the new SoC (CPU + GPU + memory manager) to power next generation servers.

2. Anobit’s technology helps improve the reliability of Flash Memory chips. By locking in this technology, Apple can build better products than its competition in Smart phones. The current multi-cell flash memory chips used in cell phones suffer from unreliability and short lifespans.

Apple has historically shown the world the advantages of tightly coupling the hardware with its software. So as Apple moves ahead in the post-PC world, Apple is once again announcing to the world that it will control both the hardware and the software, and through that tight coupling - Apple will create unique competitive advantages.

The current generation of iPhones and iPads is built from generic off the shelf parts which can be easily copied - and Samsung has done exactly that. So Apple intends to create a new breed of hardware - in which Apple owns all the IP and designs chips with a tightly integrated CPU, GPU and memory. Apple can then contract the manufacturing to other companies such as TSMC and reduce its dependency on Samsung for memory chips.

Is Apple planning to capture the Data Center?

Apple long ago abandoned the server market. Ever since Apple moved to Intel processors, it has steadily moved away from desktops and servers. In November 2010, Apple announced the end of life of its Xserve rack servers.

But as cloud technology enters the mainstream, there is new demand for energy efficient servers to run cloud services. Apple already has very energy efficient ARM CPU technology; by integrating it with efficient memory management technology, Apple could create a rack mounted server cluster consisting of 16, 32 or 64 individual servers. Such a cluster would be ideal for hosting web servers.

Today it is widely acknowledged that ARM architecture is ideal for low power super computers (see: http://www.cpu-world.com/news_2011/2011111601_Barcelona_Supercomputer_Center_to_build_ARM_Supercomputer.html )
With Apple already capturing a dominant market share in tablets and smart phones, Apple will have to look at other markets to keep Wall Street happy with its hyper growth, and web servers present one such golden opportunity.

Apple TV Solution

Apple has already announced its entry into the TV set market. Apple will have a unique technology in televisions - wireless IP streaming of videos with limited storage on the TV set. Such a TV technology will call for high reliability flash memory chips. TV programs are memory hogs, and TV sets tend to have a long life compared to computers. In that light, having the technology to create long lasting flash memory chips will be a key differentiator.

Closing Thoughts

Apple has huge amounts of cash, but it has been very frugal in acquisitions. Apple has always chosen to acquire niche hardware technology and merge it with its software to create unique products. The acquisition of Anobit is a step in the same direction, and Apple is telling the world that future computing performance gains will come from tightly integrating memory management with the CPU and GPU.

How Apple will translate these technologies into exciting products remains to be seen. One obvious benefit from this acquisition will be to create competitive advantages for Apple's existing product lines, and then leverage the technology to create new products such as TVs, servers etc.

Tuesday, December 20, 2011

Network Management in VDI

Virtual desktops are poised for prime time in 2012. After several years of feeling the pain of supporting multiple platforms and dealing with all the headaches, the IT departments of most major corporations will adopt VDI in a big way in 2012.


While IT administrators may have tested various aspects of VDI - implementation, integration with legacy applications and data management - there are several network issues that stay hidden until VDI is deployed, and these issues will have to be resolved for successful VDI deployments.



VDI technology essentially changes the network traffic in organizations. As VDIs are installed in data centers, the WAN traffic to and from the data centers explodes. Since the network and data centers were designed for PC clients, the network will be inadequate for VDI deployments. Moreover, the network problems will not be revealed immediately - rather they will be revealed in bits and pieces, forcing ad-hoc updates and upgrades to the networks.


Network issues will keep cropping up, and the deployment process will not be smooth sailing for an enterprise wide VDI rollout.


Managing VDI deployments


From the IT manager's perspective, having only one VDI platform would be the ideal solution, but in reality companies will have a heterogeneous (VMware, Citrix, Microsoft) environment. This adds to the complexity of managing the VDI environment, and a heterogeneous VDI environment creates the need for a unified infrastructure manager.
All this means a big need for network infrastructure management software - such as Ionix ITOI.


Major Network Challenges in VDI deployments


WAN Issues
VDI changes the data routing within an enterprise in a big way. Currently, individual PCs connect to the data center servers over the LAN - but with VDI, all end devices connect to the servers over the WAN, as all the virtual desktops now run on servers and users access them through the WAN gateway.



As the number of VDI users increases, WAN traffic could increase exponentially. The only way to manage the WAN traffic will be to opt for WAN optimization technology such as Cisco WAAS, Silver Peak, Blue Coat, F5, Expand Networks, Exinda etc.



QoS and bandwidth management can play a significant role in mitigating the WAN contention issues. Screen refresh, for example, is highly interactive and very sensitive to congestion. Video traffic is also very sensitive to congestion. QoS and bandwidth management can ensure that these applications perform well. While file transfer and print jobs are not very sensitive to congestion, they can induce congestion on the WAN and hence impact the other types of applications. QoS and bandwidth management can ensure that these applications do not interfere with applications that are sensitive to congestion.
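As a sketch of that idea (the class names and DSCP markings below are illustrative conventions, not any particular vendor's configuration), latency-sensitive VDI traffic is marked for higher priority than bulk transfers:

    # Illustrative QoS classification for VDI-related WAN traffic
    QOS_POLICY = {
        "screen_refresh": {"dscp": 46, "note": "interactive, congestion-sensitive"},
        "video":          {"dscp": 34, "note": "congestion-sensitive"},
        "file_transfer":  {"dscp": 10, "note": "bulk, can be throttled"},
        "print_job":      {"dscp": 10, "note": "bulk, can be throttled"},
    }

    def mark(traffic_type):
        """Return the DSCP code point a WAN edge device would set (0 = best effort)."""
        return QOS_POLICY.get(traffic_type, {"dscp": 0})["dscp"]

    for t in ("screen_refresh", "file_transfer", "web_browsing"):
        print(t, "-> DSCP", mark(t))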


VDI will help IT departments consolidate and optimize remote desktop management, but they need to spend time focusing on optimizing and controlling the WAN connections between the VDI clients in the branches and the VDI servers. Any bumps in the WAN translate into a bad user experience for the remote VDI user and a support call to IT.


Major WAN challenges are:



  • Hidden choke points that only become apparent when the network is stressed

  • Spikes in network traffic that are hard to predict before full roll-out

  • Intermittent network problems even when average bandwidth is high

WAN optimization in VDI


VDI clients will be mobile. Laptops, netbooks, tablets, and even smart phones will be the typical clients for VDI. This implies that the quality of the user's Internet connection cannot be guaranteed, and users on a low-bandwidth connection will experience severe performance degradation.

LAN Issues


VDI could also increase LAN traffic in subtle ways. As part of VDI, a virtual machine (VM) on a data center server hosts a complete user desktop including all its applications, configurations, and privileges. The client then accesses the desktop over the network, with the desktop and application objects delivered on demand from the virtual desktop servers via a remote display protocol, such as Microsoft Remote Desktop Protocol (RDP) and/or Citrix's ICA protocol. The RDP/ICA traffic can spike at times, creating choke points within the corporate LAN. In general, with VDI the RDP/ICA traffic will be much higher than the historical average.


LAN traffic in a traditional IT deployment.


In addition to the LAN RDP/ICA traffic, users' systems could also be running other applications (data/music/video/photos) over the LAN.
So, in a nutshell, VDI deployments bring a rapid increase in LAN traffic and create multiple choke points in the LAN. The problem gets worse as more seats are added to the system.


Storage Issues


VDI is essentially a hybrid approach where each end user has a thin client and connects to a private Windows XP or Vista image - a virtual machine hosted on VMware Virtual Infrastructure. This approach gives IT administrators the greater control over the user environment usually provided by Terminal Services or Citrix environments, by consolidating the Windows images on server-class hardware. It also allows the images to be stored and managed in the data center, while giving each user a full personal desktop that requires no introduction or explanation for the average user.


VDI relies on central data storage for both block and file type data, and it must handle both structured and unstructured data.


The VMware VDI Server Sizing and Scaling white paper compares the disk usage of light and heavy users for a group of approximately 20 VMware VDI virtual machines on a single VMware ESX host. It suggests that over 90% of the average information worker's disk I/O consists of read operations.




Before intelligent storage subsystem choices can be made, these throughput values need to be converted to the Input/output Operations Per Second (IOPS) values used by the SAN/NAS storage industry. A throughput rate can be converted to IOPS with the following formula:

IOPS = Throughput (MBytes/sec) × 1024 (KBytes/MByte) ÷ Block size (KBytes/IO)


Even though the standard NTFS file system allocation size is 4 KB, Windows XP uses a 64-KByte block size, and Windows Vista a 1-MByte block size, for disk I/O. Using the worst-case (heavy user) scenario of 7.0 MBytes/sec throughput and the smaller 64-KByte block size for a group of Windows XP machines, the generated load for approximately 20 virtual machines is 112 IOPS.
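The arithmetic is easy to reproduce; a minimal sketch, using the 7.0 MBytes/sec and 64-KByte figures quoted above as inputs:

# Reproducing the IOPS arithmetic above; the 7.0 MBytes/sec heavy-user figure
# and the 64-KByte block size are the assumptions quoted from the white paper.

def throughput_to_iops(throughput_mbytes_per_sec, block_size_kbytes):
    """Convert a throughput rate to IOPS: MBytes/sec * 1024 / (KBytes per IO)."""
    return throughput_mbytes_per_sec * 1024 / block_size_kbytes

iops = throughput_to_iops(7.0, 64)
print(f"~{iops:.0f} IOPS for the heavy-user group")  # ~112 IOPS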

ESX server supports Fibre Channel, FCoE, iSCSI, NFS & 10GbE.
As the number of VDI users increases, the storage system will face both capacity and performance issues. This type of VDI workload scenario can bring a traditional storage system to its knees.
VDI tends to create I/O spikes that may require a complete redesign of the storage system. Storage tiering - a 'tier 0' built on Flash or solid state drives (SSD) - can absorb the storage I/O spikes and also improve performance by keeping the most frequently used data on SSD. VDI deployments are typically read intensive (roughly 90% read & 10% write).
In short, VDI creates additional overhead for storage management and administration.


VDI Fault & Performance Management


One of the most common faults in a VDI environment is VMware losing its connection to storage. When a VM loses the connection to its datastore, the VM becomes unresponsive. This problem becomes more acute in VDI deployments that use vMotion.



In a VDI deployment, such a fault results in unresponsive desktops and a subsequent spike in IT tickets.


WAN Management
The basic requirements of WAN management are:
1. Discover all the WAN gateway network components, including all WAN optimization devices (Blue Coat, F5, Cisco WAAS, Silver Peak, Exinda, etc.)
2. Fault management of WAN gateway network devices (see the polling sketch after this list)
3. Performance monitoring of WAN gateway network devices. Monitor QoS & SLA parameters through the SLA & QoS MIBs.
4. Performance management via vCenter Operations Enterprise for the entire WAN gateway network
5. Remote configuration management of the WAN network for bandwidth/performance optimization.
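A minimal sketch of item 2, assuming the third-party pysnmp library (classic synchronous hlapi), SNMPv2c with a read-only community string, and placeholder device addresses; a real fault manager would poll many more MIB variables and feed the results into an event pipeline.

# Minimal availability poll of WAN gateway devices over SNMP, assuming the
# third-party pysnmp library. Device addresses and community are placeholders.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

SYS_UPTIME_OID = '1.3.6.1.2.1.1.3.0'    # standard MIB-II sysUpTime
WAN_DEVICES = ['10.0.0.1', '10.0.0.2']  # WAN gateways / optimizers (placeholders)

def poll_uptime(host, community='public'):
    error, status, index, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=1),                 # SNMPv2c
        UdpTransportTarget((host, 161), timeout=2, retries=1),
        ContextData(),
        ObjectType(ObjectIdentity(SYS_UPTIME_OID))))
    if error or status:
        return None                                          # treat as a fault event
    return var_binds[0][1]                                   # TimeTicks since boot

for device in WAN_DEVICES:
    uptime = poll_uptime(device)
    print(device, 'DOWN/unreachable' if uptime is None else f'up, uptime={uptime}')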



LAN Management



1. Discover & monitor the entire enterprise LAN - wired & Wi-Fi - for availability & performance (a reachability sketch follows this list).
2. Performance monitoring of Wi-Fi networks, as people increasingly use Wi-Fi to connect to the corporate network.
3. VPN tunnel connection monitoring. Monitor VPN tunnels & VPN gateways for any faults that cause VPN tunnels to go down.
4. Security & authentication management to detect any unauthorized log-ins and intruders on the LAN.
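As a minimal illustration of the availability piece, the sketch below sweeps a made-up inventory of LAN devices by attempting a TCP connection to a management port; production tools would use SNMP, ICMP, and the controllers' own APIs instead. The host names, addresses, and port are placeholders.

# A tiny reachability sweep for LAN / Wi-Fi / VPN gateway devices, assuming
# each exposes a management port (SSH here) - hosts and port are placeholders.
import socket

LAN_DEVICES = {'core-switch': '10.1.0.1',
               'wifi-controller': '10.1.0.5',
               'vpn-gateway': '10.1.0.9'}   # placeholder inventory

def reachable(host, port=22, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, ip in LAN_DEVICES.items():
    print(f"{name:16s} {'OK' if reachable(ip) else 'UNREACHABLE - raise alert'}")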


Storage Management
1. Discover & monitor SAN switches & the SAN network
2. Discover storage arrays, LUNs & WWNs
3. Discover all the VMs used for VDI and the WWNs associated with them
4. Correlate unresponsive VM/VDI events to the underlying storage or network failures (a toy correlation sketch follows this list)
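Item 4 is essentially a topology lookup: map each VM to its datastore and the WWN of the backing array, then match unresponsive-VM events against active storage faults. The sketch below uses invented topology data purely for illustration; a real system would discover this from vCenter and the SAN fabric.

# Toy event correlation: fold an "unresponsive VM" event into the storage
# fault that explains it. All names, WWNs, and faults below are invented.
vm_to_datastore = {'vdi-win7-042': 'ds-gold-01', 'vdi-win7-107': 'ds-silver-03'}
datastore_to_wwn = {'ds-gold-01': '50:06:01:60:3b:20:11:aa',
                    'ds-silver-03': '50:06:01:61:3b:20:22:bb'}
active_storage_faults = {'50:06:01:60:3b:20:11:aa': 'array port offline'}

def root_cause(vm_name):
    wwn = datastore_to_wwn.get(vm_to_datastore.get(vm_name, ''), '')
    fault = active_storage_faults.get(wwn)
    if fault:
        return f"{vm_name} unresponsive -> storage fault on {wwn}: {fault}"
    return f"{vm_name} unresponsive -> no storage fault found, check network/host"

print(root_cause('vdi-win7-042'))
print(root_cause('vdi-win7-107'))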


VDI Needs Better Storage Management


VDI brings humongous volumes of data under management. Since all user data will be centrally stored, the volume of data that needs to be managed will be HUGE. To understand this, consider the following:


The average user today has about 100GB of data on their desktop. So for 2000 users, the total volume of data will be 200TB. This translates to roughly 1000TB of raw disk space once RAID-5 overhead and active backup copies are accounted for. Note that this is additional data under management, which today sits unmanaged in physical desktops.
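The arithmetic behind those figures, as a sketch; the RAID-5 overhead factor and the number of backup copies are assumptions chosen to reproduce the 1000TB figure, and should be replaced with your own values.

# Rough capacity arithmetic behind the figures above; RAID-5 overhead and
# the number of backup copies are assumptions, not measured values.

def raw_capacity_tb(users, gb_per_user=100, raid5_overhead=1.25, backup_copies=3):
    usable_tb = users * gb_per_user / 1000          # 2000 users -> 200 TB usable
    return usable_tb * raid5_overhead * (1 + backup_copies)

print(f"{raw_capacity_tb(2000):.0f} TB raw disk")   # ~1000 TB with these assumptions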



Naturally, it makes sense to use data de-duplication technologies and implement a rule-based data management system to minimize the total volume of data under management and the disk requirements.
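A minimal illustration of the de-duplication idea: identical fixed-size blocks are stored once and referenced by their hash. Real de-duplication engines add variable-size chunking, compression, and garbage collection on top of this; the block size and example data here are arbitrary.

# Block-level de-duplication sketch: identical 4 KB blocks are stored once
# and referenced by their SHA-256 hash.
import hashlib

BLOCK_SIZE = 4096
store = {}            # hash -> block bytes (stored once)

def dedup_write(data: bytes):
    """Return the list of block hashes (the "recipe") that reconstructs the data."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)      # duplicate blocks cost nothing extra
        recipe.append(digest)
    return recipe

def dedup_read(recipe):
    return b''.join(store[d] for d in recipe)

# Two "users" save the same attachment: only one copy lands in the store.
doc = b'quarterly report ' * 1000
r1, r2 = dedup_write(doc), dedup_write(doc)
assert dedup_read(r1) == doc
print(f"logical bytes: {2 * len(doc)}, stored bytes: {sum(map(len, store.values()))}")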



Closing Thoughts


VDI deployments will create a big need for an automated fault, performance, & configuration management solution that can span across the virtual domain into the physical domain of servers, network and storage.

Successful VDI deployment will rely on an automated IT infrastructure management solution that can provide provisioning, automated root-cause analysis, alerts on potential provisioning issues, automatic fault identification, and active performance management.

In an ideal world, the IT infrastructure management solution should be able to correlate faults in the virtual domain (VDI) with the underlying hardware, network & storage problems, and alert IT administrators before the faults are noticed by end users.


References:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1009553

VDI sets the stage for DaaS

As Virtual Desktop Infrastructure gains user acceptance, a new service will emerge called DaaS - Desktop as a Service.

Today VDI technology has matured and is ready for prime time, but deploying VDI requires some heavy lifting and massive investments in infrastructure. So for medium-sized enterprises, DaaS offers an easier way to roll out VDI; those DaaS deployments can later be integrated back into in-house VDI offerings.

DaaS: The Cloud-based virtual desktops

DaaS can be consumed from any location on devices connected to the Internet. By hosting virtual desktops on a cloud, DaaS service providers can offer the best of both the VDI and cloud worlds.

Cloud-based apps such as Google Apps, Office365, etc. have one major disadvantage: all the user-created data resides with the service provider, so over time the volume of data effectively locks users in, preventing customers from moving to a new service. There is also a risk that data stored with the app provider can be hacked, stolen, or lost. To guard against such cases, or a disaster, it is better to have a DaaS provider integrate a VDI deployment with the cloud apps and deliver the desktop to users over the Internet.

DaaS helps large organizations

Very large organizations, such as government departments and manufacturing firms, are ideal customers for DaaS. Since not every employee in these organizations needs a desktop all the time, they can buy virtual desktops for their employees and have the data synchronized back to their own data centers.

For government departments, DaaS means greater savings through centralized IT services and eliminates the need for multiple IT departments. Centralizing IT services such as email, desktops, and data management can result in significant savings.

DaaS helps data management

Desktops create huge volumes of data. Each user will end up with about 100GB of data, which in a large organization can translate to several petabytes. Today, in traditional desktops, this data resides on the hard drives of user desktops and is not managed. With VDI, this data will have to be managed.

Since most of the user data will be copies of emails or files, a strong data de-duplication system will help reduce the volume of user data to a small fraction of the original. Having all the data in a central repository also means that rogue programs or data can be deleted and prevented from entering the system.

Since all the user data is stored in a central repository, the loss of user data can be totally prevented.

Large organizations also have a large number of contractors, part-time employees, and partner employees who need some level of IT access. For such 'floating' employees, DaaS offers an ideal way to provide a virtual desktop without giving access to the main corporate IT infrastructure.

DaaS will also attract individual users

In a fully cloud-enabled world, where all the end user devices are thin clients such as Chromebooks, tablets, etc., users will still need a traditional desktop for some of their personal work - editing pictures/videos, creating documents and spreadsheets, storing contact details, music, and so on. These services can be delivered as web services across multiple sites, or consolidated into a central VDI.

Just like today, where people have their home PC and Work PC, there is a need for a home VDI and a work VDI. Getting a private VDI from a DaaS provider will perfectly meet that need.

DaaS will create Data-as-a-Service

The most valuable entity in any IT system is data. Users consume data to create valuable information, so with VDI, users will need certain types of data to be delivered as a service. Today Data-as-a-Service is still in its infancy, but as the cloud matures, the user-created data in a VDI environment will create opportunities to build new services that take that data and offer it as a service to others.

The best example of Data-as-a-Service is Bloomberg's stock information data. Bloomberg delivers accurate stock quotes to user desktops, enabling users to trade stocks. With Data-as-a-Service, VDI users will consume data to create information or to make valuable decisions.

Closing thoughts

2012 will see a steady (but slow) roll-out of virtual desktops. Large organizations will roll out VDI as a service to the business, and new DaaS providers such as Desktone will emerge to offer the entire VDI stack as a service.

As the popularity of thin clients grows and employees are encouraged to bring their own PCs to work, VDI becomes a very attractive value proposition. The degree of data centralization and the simplicity of desktop management & control will drive VDI implementation in organizations. Many organizations trying out VDI will also try Desktop-as-a-Service as a stepping stone to a full-fledged VDI.

The ideal VDI deployment will be a hybrid of a completely private VDI deployment (for roughly 80% of needs) along with public DaaS (for the remaining 20%). A hybrid VDI deployment also works as buffer capacity to deal with sudden or seasonal demand.

DaaS will eventually create additional value added services for Data-as-a-Service. Right now, Data-as-a-Service is still in its infancy, but as VDI and cloud services become popular, Data becomes a valuable service.

Monday, December 19, 2011

Need for a Central Multi-Factor Authentication as a Service

Today there is a need for a safe and secure authentication system. The current method of authentication - the user ID & password combination - is inadequate and offers little real security. Already, every user is forced to remember multiple log-in IDs and passwords for several online services - e-mail (Gmail, Hotmail, Yahoo & Office), social networks (LinkedIn, Twitter, Facebook, & corporate intranet social networks), banking, e-services, Internet retail, etc. As the number of web services increases, users are finding it tough to remember and maintain all their passwords.

Once cloud computing becomes popular, the number of web sites that need authentication will explode. I already have half a dozen cloud service sites for which I need to remember my login IDs and passwords (Office365, Zoho, Dropbox, Google Docs, MegaCloud, Salesforce.com).

Today, every online user has at least 12+ web-based services that need log-in authentication, and this number is about to explode.

On the other hand, as the number of services that need user authentication increases, online security is being increasingly compromised. The world over, hackers are getting bolder - sitting in safe havens, they hack into secure sites such as Citibank, HSBC, etc. Even RSA's servers were hacked.

All this points to a need for a safe and secure and centralized multi-factor authentication as a service offering.

The current gold standard for authentication - RSA's two-factor authentication - is on its last legs. RSA's servers were hacked in March 2011, and several of its internal secrets were stolen. RSA acknowledged that the hackers went after its intellectual property and source code, but stopped short of revealing the extent of the theft. (see http://www.wired.com/threatlevel/2011/03/rsa-hacked/)

All this indicates that two-factor authentication based on private/public key tokens could be broken in the near future.

Authentication system today

Today, there is no central authentication system. To an extent, Google and Facebook provide a public authentication service, but it is too weak and insecure. So users are forced to remember multiple log-in IDs/passwords for each service they use.

LDAP falls well short of the requirement, as it does not support multi-part authentication. Kerberos & X.509 support multi-part authentication but do not scale to a global web-based authentication service. Kerberos dates back to the 1980s and originally relied on DES encryption, which is not sufficient for the cloud era. In addition, Kerberos suffers from a major drawback: it needs clock synchronization, which is not practical for a global web-based authentication service.

Need of the hour

There is a need for a safe, secure and single authentication service. The authentication service will be on a cloud and offers Authentication as a global service. The central authentication service can be multi-tiered:
1. A basic authentication service for simple web services - such as log-ins to public web sites
2. A geographic authentication service - basic authentication plus location information - for accessing personal information on the Internet, such as social networks or email
3. A high-security, encrypted authentication service for eCommerce, net banking, and other high-value services. The authentication system can generate encryption keys to encrypt all transactions between the user and the service provider, thus providing safe & secure web transactions.

The authentication service validates several aspects of the user and confirms that the Internet user really is the person he/she claims to be. The authentication service will validate and verify the following:

1. The identity of the person - age, sex, address, etc.
2. The privileges the person is entitled to for a particular web service.
3. A history of services the user has used in the past. This will require the web services to update the authentication service with user history.

The authentication service may or may not provide personal information to web service providers, based on the individual user's wishes. If a person does not wish to reveal his age to a web service provider, the web service provider can only check whether the user is of legal age or not, and such a check will be provided by the authentication service.

The authentication system incorporates one or more Unique identification services - such as Unique Identification Authority of India (UIDAI), or Social Security Number etc to establish the person's identity.

The central authentication service can also provide information regarding the user rights - i.e., tell the web service - the extent/level of rights the user has on the system. I.e., the authenticated user has clearances for a given set of functions. This information can be used by the web services to designate the level of authorization for the user and set the user privileges accordingly.

The multi-part authentication service can use:

1. A 6-10 character private key - only the user knows it
2. A user biometric or a unique ID code which is not in human-readable format
3. A dynamic public key, such as an RSA fob or a software fob system

This multi-part authentication service will be more secure than anything in use today. A three-part authentication system provides strong protection against hackers - hacking an individual's ID becomes practically impossible given the enormous number of possible combinations, which makes it highly resistant to brute-force attacks.
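As an illustration of the third factor, the "software fob" can be implemented with a time-based one-time password (TOTP, RFC 6238). A minimal sketch follows; the base32 secret is a placeholder and would, in practice, be provisioned securely per user and verified server-side with an allowance for clock drift.

# Minimal RFC 6238 TOTP (HMAC-SHA1) - the kind of code a software fob shows.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, period=30, digits=6):
    """Generate a time-based one-time password from a shared base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period            # moving factor: 30-second steps
    msg = struct.pack('>Q', counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack('>I', digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Placeholder secret shared between the user's software fob and the service.
SECRET = 'JBSWY3DPEHPK3PXP'
print('One-time code:', totp(SECRET))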

A multi-part authentication system of this kind will be several times harder to compromise than systems that rely on a single password, even when those systems use strong encryption standards such as 3DES or AES256.

Creating a centralized authentication as a service will enable pooling of resources to create a better identity management and security system as a service. This service will provide the first level of security for all Internet transactions - between user and service providers, and also between various service providers.

Ideally, there can be multiple authentication service providers providing a choice for customers.

Closing Thoughts

There is a need for a safe and secure authentication system. The current method of authentication - the user ID & password combination - is inadequate and broken. As the world moves towards web/cloud-based services, a strong identity management, authentication & security system becomes a vital building block for a safe and secure Internet.

In this article, I have just touched upon the basic idea of such a service. The business model and the operational details are yet to be developed.

Wednesday, November 16, 2011

Product Management - Nokia launches Lumia series

On November 14th, 2011, Nokia announced its new range of Windows Phone 7 Mango-enabled cell phones - the Lumia 800 & 710 models - in India.

With Nokia's market share in the doldrums, this product launch is vital for Nokia's survival. A whole lot of loyal Nokia users, industry pundits, and analysts were eagerly watching this launch, and Nokia managed to draw a BIG yawn as a response.

From a product marketing perspective, the actual launch was a non-event. The news media that covered the launch pushed it to the back pages, and the Internet, Twitter, the blogosphere, and Facebook were not ignited with enthusiasm.

Unlike Apple's famous product launches, Nokia's flagship was launched by D. Shivakumar and Bhaskar Pramanik. If you do not know who they are then you are doing just fine. The entire launch event managed to draw a BIG collective yawn from Indian audience.

I have been a loyal Nokia user for the last 8 years and was eagerly looking forward to a new Nokia smartphone to replace my aging E71. I had my first glimpse of the Lumia 800 in London, and for me it was a disappointment. The launch event was pathetic, the branding is worse, and worst of all, the new phone has no great features/functions.

In short, Nokia has failed in this product launch.

Nokia now faces a very, very steep uphill climb to regain its old glory - and it is wearing shoes dipped in grease.

A Failed Product Launch

Why do I call this a failed product launch?

Four reasons:

1. The product was launched by no-names (sorry Mr. D. Shivakumar and Mr. Bhaskar Pramanik, no one outside your organization will recognize you. You are no Steve Jobs - whom even people in Somalia will recognize.)

2. No media hype or extensive campaign for the product. Please learn from Shah Rukh Khan - who promoted Ra-One to record success. Lumia is a make-or-break product for Nokia, and Nokia bungled the product launch and branding. Lumia 800 and Lumia 710 do not have the same punch as HTC Titan. Nokia should have chosen a better brand name that stands out and does not confuse customers.

3. The product's functional features and pricing were too banal - a me-too product at a very high price. The Lumia 800 falls terribly short when compared to the iPhone 4S: it has no Siri- or FaceTime-like features. Worst of all, Lumia does not differentiate itself from the HTC Titan phones.

4. Overpriced. At Rs 29,000, Lumia is overpriced and underpowered, and has no real app market. Today's smart phones are all about apps, and in that space Windows phones have the fewest apps. From a pricing perspective, Lumia is priced comparably to the iPhone, but there are hundreds of Android phones out there that sell for a third of the price of Lumia.

Good things about Mango

Windows Mango OS brings a breath of fresh air into the mobile space. The new UI is bright, intuitive, and interesting. The active tiles are an excellent feature that differentiates Windows phones from other smart phones. Office application support - Exchange integration and the ability to view and edit MS Word, MS Excel, and MS PowerPoint files - has been the main selling point of Windows phones.

Xbox Live is another feature that will appeal to Xbox users.

Things that need improvements

Nokia needs to work on improving its Lumia phones by adding the following features:

1. Laptop/Netbook adapter: When Lumia is connected wirelessly to a laptop, netbook, or Chromebook, the laptop's monitor and keyboard become the input & output devices for the phone. This will enable users to easily edit documents, view videos, etc. Essentially this function will enable the cell phone to double as a VDI device.

2. HDMI output: Enable users to plug the cell phone into a TV and share/view videos on a bigger screen. With an 8MP camera and HD video recording capability, the cell phone becomes a primary camera, so having an HDMI output helps users view videos/photos on a big screen.

3. Encryption & security for data: Security has been a major selling point of BlackBerry phones. Microsoft has all the basic security technologies to create a secure Windows phone. All e-mail communications and data stored on the phone must be encrypted - preferably with 256-bit encryption to the AES256 standard.

4. Remote management facility: As cell phones become more complex and double as VDI devices, users will need support to configure/manage their phones. A remote management facility will help users & corporate IT manage their smart phones better.

5. Cloud mobility: Today's cell phones are designed to be connected to the Internet, and users should have all their data mirrored in a cloud. Apple has iCloud, Microsoft has SkyDrive, but to make Windows phones truly different, let users sync with any cloud.

6. Better camera and screen resolution

The current phone needs a front-facing camera for video calls and a true 1080p-resolution screen. Cell phones have become the primary camera for many folks, and Nokia/Windows needs to lead the pack. In the Lumia 800, the camera is very good - the best I have seen on a cell phone to date. Nokia needs to add a front-facing camera.

7. Multi-core CPU and expandable memory

While the 1.4 GHz CPU is adequate for today's phone applications, as apps grow there will be a need for more memory and more CPU power. So in the next generation of mobiles, a multi-core CPU and memory expandable up to 128GB will be a key differentiator.

Can Microsoft Keep up the Pace with Android & Apple?

This is a billion-dollar question. Currently Microsoft has several other business interests that will have to be cannibalized in order to win in the mobile space. The MS Office software will have to migrate to the mobile world and will have to be split as well - one version of Office for mobile and a full version for the PC. But this approach will eat into the MS Office PC market share. Will Microsoft allow that?

As mobile software evolves, it will soon become imperative to create a common platform for mobile and PC software. In other words, the Windows Mobile OS will have to supplant the Windows PC OS products to become the preferred VDI client. Apple has given early indications of its intention to merge iOS and OS X, and Microsoft will have to follow suit in order to keep up. This possibility raises lots of thorny questions at Microsoft, and how Microsoft resolves them remains to be seen.

As I can foresee things, Microsoft will have no choice but to create three distinct OS platforms:

1. Mobile OS - powering all end user devices (PCs, netbooks, cell phones, tablets, etc.)
2. Server OS - that powers the back end servers
3. Cloud OS - currently the Azure platform.

If Microsoft can integrate these three platforms for seamless user experience, then Microsoft will have a distinct advantage over Google and Apple.

Xbox Live is another feature which Microsoft has and others don't. Microsoft must develop the Windows Phone OS to make it a personal player and a console for the web-based platform.

What will Nokia do?

Apple is able to sell ~20 million cell phones per quarter because it is the only company that makes iPhones. In Nokia's case, there is competition from Samsung, HTC, and others who also make Windows phones. In addition, Microsoft keeps tight control over the hardware specifications of the handsets. This implies that Nokia will not have much control over the hardware and hence cannot use pricing as much of a differentiation strategy.

If Windows phones become successful, they will replicate the PC story of falling prices and lower margins. That demands a strong supply chain and excellent supply chain management, and Nokia's world-class factories are a competitive advantage here.

Nokia must develop strong product differentiation in the Windows phone segment in order to be successful in the long run. To differentiate, Nokia must bring in its OVI store and OVI platform and create value with its OVI offerings.

Another advantage Nokia has today is in its Industrial Design. Nokia has the skills to create beautiful designs for cell phones and that is a strong product differentiator. Lumia 800 is a classic design. If Nokia can create exciting designs and beat Apple in industrial design, then Nokia can regain its former glory.

Uphill climb ahead

Nokia needs to prove its relevance in the cell phone market by selling 12 million Lumia handsets before Christmas 2011, and then manage to sell at least 22+ million phones every quarter in 2012.

To meet such high numbers Nokia will need:

1. A strong app marketplace - beef up the OVI store operations
2. Strong product differentiation from other Windows phones
3. Strong carrier support in Europe & the US
4. A big marketing push in Asia.

Can Nokia deliver on all four? We will have to wait and watch what Nokia does over the next 6 months. If Nokia fails to sell more than 10 million Lumia phones by the end of 2011, that may signal the end of the road for Nokia.

Closing Thoughts

In today's brutally competitive marketplace, Nokia has to make it big in the smart phone market with its Windows phones, or else it will be relegated to a secondary player (like Sony Ericsson, Motorola, etc.), and that would force a massive downsizing or a sale of the company.

To succeed in the mobile space, Nokia needs to create next week's opportunity with tomorrow's technology - but the Lumia phones are designed to solve last week's problems with yesterday's technology.

Monday, October 31, 2011

Product Positioning of Amazon Kindle Fire

Amazon recently released the Android-based version of the Kindle - the Kindle Fire.

The world media went into a frenzy about Amazon's challenge to the Apple iPad. In one sense Amazon does compete with the iPad, but the Kindle Fire carves out a new segment for itself in the tablet space and avoids direct competition with the iPad.

I got some questions on how the Kindle Fire will fare against the iPad 2, and that question got me thinking. Here are the results.

  1. The Amazon Kindle is not a competitor to the iPad 2 (at least not in its current format).
    With a smaller screen & less memory, the Kindle is more of a competitor to the iPod Touch than to the iPad 2.

  2. The Kindle Fire is a direct competitor to the Nook, Kobo, and other eBook readers.

    Amazon can easily leverage its enormous clout and cloud technology to strike direct deals with authors and kill competition from other standalone ebook readers.

  3. The success of the Kindle Fire & the iPad will force other eBook readers out of business.

    Both Apple and Amazon are building strong cloud platforms to deliver content directly to users. Using technology to deliver content in multiple ways will create intense competition for the Nook, Kobo, and other standalone eBook readers, eventually forcing them out.

  4. Amazon will rake in billions in sales of music, books & videos, making Amazon a competitor to iTunes & iCloud.

    Amazon Prime, the Appstore & Amazon Cloud offer streaming audio/video and data storage. This is direct competition to Apple's offerings in iCloud & the iTunes store. For a long time iTunes had no real competition, and Amazon could easily become the 800-pound gorilla in this space.

  5. The biggest winner in this contest between the iPad & the Kindle Fire could be Samsung.

    Samsung also makes tablet computers, memory chips, CPUs, etc. The success of the Kindle would invariably create demand for tablet-like devices in Asia, Africa & Europe - where Apple & Amazon do not have a strong market position - and those markets will be captured by Samsung. In India & China, Samsung is already ahead of Apple in smart phones and tablets. The success of tablets will also bring in other hardware vendors, to whom Samsung can sell its chips and other hardware components. Thus Samsung could become the winner in this contest between Amazon & Apple.

Product Positioning of Kindle

Amazon's Kindle is essentially a device designed for content consumption: books, movies, music, web, and gaming. The Kindle is designed primarily for individual use - it is not a family device. It caters to individual consumption, just like the iPad.

Amazon has cleverly positioned the Kindle Fire away from head-on competition with the iPad and tied it tightly to the Amazon store, in an attempt to create a new market segment for itself. Not that Amazon is afraid of competition - it has taken on the Nook and Kobo head-on - but it has avoided direct competition with the iPad. With a smaller 7-inch screen and only 8 GB of memory, the Kindle is not in direct competition with the iPad.

Amazon is building an ecosystem centered on its main online store to make the Kindle an attractive product, with core cloud offerings to sell streaming audio & video content (along with eBooks) to Kindle devices. This makes the Amazon Kindle a major competitor to every cable TV service company and, to a lesser extent, to iTunes/iCloud.

As a personal media player, the Kindle solves one big problem with cable TV - the inability to watch any program at any time. Cable TV customers are forced to watch a program at a fixed time; the Kindle cuts this time dependency and lets viewers watch programs at their convenience.

The Kindle Fire is also a direct threat to book publishers. Current books & eBooks are outdated; tablet computers open up the possibility of interactive eBooks. A strong delivery platform will enable Amazon to become an eBook publisher and strike direct deals with authors for content, eliminating the traditional publishers and book distributors in the process. Android is an open platform which supports multiple languages, and this, coupled with Amazon's global delivery through its cloud, can create a global platform for publishing books in any language. If Amazon executes on becoming an ePublisher and leverages its cloud computing capabilities to help authors create interactive eBooks, Amazon can become a global leader in book publishing.


The Kindle Fire, Prime, Appstore, Cloud Drive, and Games Center create an exciting ecosystem for personal entertainment delivery. Amazon has built a complete ecosystem, and with it Amazon is now set to dominate a new market on a global scale.