- Technically competent people
- Technology
- Understanding customer needs
- Leadership commitment
- Financial resources
- Brainstorming or innovation sessions
- Strong patent filing & protection system
Challenges in Operations Management of Virtual Infrastructure
Virtualization broke the traditional silos of dedicated computing resources for specific applications, and it also broke the silos of operations management by forcing IT administrators to look at servers (compute), networks and storage as a unified resource pool.
This has created new operations management challenges that cannot be solved by traditional approaches. The new challenges fall mostly in the following areas:
- Virtualization eliminated the IT silo boundaries between applications, network, compute and storage. This has made the IT stack more sensitive to changes made to any component in the stack. For example, changing a network setting could have an adverse impact on applications or on datastore speeds, and changes in storage configuration could result in unresponsive applications.
- Virtualization can easily push resource utilization beyond safe operating boundaries, causing intermittent performance issues or random hardware failures.
- Applications running in a virtualized environment see dynamic changes in the resources available to them. As one application starts to consume more resources, other applications see a corresponding reduction in the resources available to them. This causes random performance variations and in many cases disrupts entire business operations.
Managing virtualized infrastructure needs new tools and technologies to handle these new factors of complexity. Given the dynamic nature of a virtualized IT infrastructure, the new management tools must be scalable, unified, automated, proactive and user friendly.
It is also very important to ensure that the cost of virtual infrastructure management tools stays lower than the cost of failures. Though this sounds simple, in reality the cost of infrastructure management can escalate quickly, so one needs to be careful in choosing the right set of tools.
Traditional Operations Management
Ever since the beginning of IT operations, management of the IT infrastructure has been organized into resource silos, with a dedicated team to manage each of:
1. Compute - physical servers and operating systems
2. Network - LAN, VLAN, WAN
3. Storage - SAN, NAS, Storage Arrays
4. Applications - CRM, ERP, databases, Exchange, security, etc. In large organizations there are teams to manage each application type.
Each of the resource management silos operated independently and had its own operations management cycle: monitor, analyze, control and change resources. Each team had its own set of tools, processes and procedures to manage the resources that came under its purview.
Since each group had little idea of the needs and requirements of the other groups, they often created excess capacity to handle growing business needs and peak loads.
This silo-based approach led to inefficiencies and wastage. Virtualization eliminates such wastage and improves efficiency.
Virtualization disrupted Operations Management
Virtualization is a game changer for operations management. It eliminates the boundaries between the compute, storage and network resource silos and views the entire set of IT resources as a single pool.
The hypervisor shares the physical resources among virtual machines (VMs) that process workloads. This resource-sharing architecture dramatically improves resource utilization and allows flexible scaling of workloads and of the resources available to those workloads.
Virtualization creates new operations management challenges by:
1. Virtual machines share the physical resources. When one VM increases its resource usage, it impacts the performance of applications running on other VMs that share the same resource. This interference can be random and sporadic, leading to complex performance management challenges.
2. The hypervisor has an abstract view of the real physical infrastructure. Often the real capacity of the underlying infrastructure is not what the hypervisor sees; as a result, when new VMs are added, resources become under-provisioned and major performance bottlenecks appear.
3. The hypervisor allows consolidation of workload streams to achieve higher resource utilization. But if the workloads are correlated, i.e., an increase in one workload creates a corresponding increase in another, then their peaks compound and the system runs out of resources and/or develops severe bottlenecks.
4. VMs need dynamic resource allocation in order for applications to meet their performance and SLA requirements. This dynamic resource allocation requires active and automatic resource management.
5. Because the hypervisor has an abstract view of the physical infrastructure, configuration management appears deceptively simple at the hypervisor layer; in reality, configuration changes have to be coordinated across the different resource types (compute, network, storage).
6. Virtualization removes the silo boundaries across the resource types (compute, network and storage). This creates cross-element interference on the applications, so when an application fails to respond, the root cause of the failure cannot be easily identified.
Virtualization creates a new set of operations management challenges, but solving them will result in seamless, cross-domain management solutions that reduce costs by automating various management functions and eliminating the costly cross-silo coordination between different teams. Managing a virtualized infrastructure will need automated solutions that reduce the need for today's labor-intensive management systems.
Virtualization and Utilization
The greatest benefit of virtualization is resource optimization. IT administrators were able to retire old and inefficient servers and move the applications to virtualized servers running on newer hardware. This optimization helped administrators reduce operating costs, reduce energy consumption, and increase the utilization of existing hardware.
The cost saving achieved by server consolidation and higher resource utilization was a prime driver for virtualization. The problem of over-provisioning had led to low server utilization. With virtualization, utilization can be raised to as high as 80%.
While the higher utilization rate may sound exciting, it also creates major performance problems.
Virtualization consolidates multiple workloads onto a single physical server, thus increasing the utilization of that server. But workloads are never stable - they have their peaks and lows. If one or more workloads hits a peak, utilization can quickly reach 100% and create gridlock for the other workloads, adversely affecting performance. Severe congestion can lead to data loss and even hardware failures.
For example, virtual machines typically use a virtual network: virtual network interfaces, subnets and bridging to map the virtual interfaces to the physical interfaces. If the server has a limited number of physical network interfaces, then running multiple VMs with network-intensive applications can easily choke the physical interfaces, causing massive congestion in the system. Similar congestion can occur with CPU, memory or storage I/O resources as well.
These resource congestion problems can be intermittent and random, which makes the resource contention issues even harder to debug and solve.
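As a rough illustration of how consolidated workload peaks compound, here is a minimal sketch (in Python, with entirely made-up hourly utilization figures) that sums per-workload CPU demand on one host and flags the intervals where the combined demand crosses a safe threshold, even though no single workload is anywhere near the limit on its own:

```python
# Illustrative only: hourly CPU demand (percent of one host) for three
# consolidated workloads. Individually each stays well below 60%.
workloads = {
    "sales_app": [15, 15, 20, 45, 55, 20, 15, 15],   # peaks mid-day
    "reporting": [10, 10, 15, 40, 50, 15, 10, 10],   # correlated with sales
    "email":     [20, 20, 20, 20, 20, 20, 20, 20],   # flat background load
}

SAFE_LIMIT = 80  # percent of host capacity considered safe

for hour in range(8):
    combined = sum(demand[hour] for demand in workloads.values())
    if combined > SAFE_LIMIT:
        print(f"hour {hour}: combined demand {combined}% exceeds the safe limit")
```

Because the sales and reporting workloads are correlated, hours 3 and 4 blow past the threshold even though each workload looks harmless in isolation - exactly the compounding effect described above.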
To solve these performance problems, one first has to find the bottleneck behind each of them.
In a virtualized environment, finding these performance bottlenecks is a big challenge because the symptoms of congestion may show up in one area while the real congestion is somewhere else.
In the non-virtualized world, resource allocation was done in silos, and each silo had to accommodate all fluctuations in its workloads. This led to excess capacity - planned for peak workloads - so performance management was never a major issue. But with virtualization, active performance management is critical. The virtual infrastructure must be constantly monitored for performance, and corrective actions must be taken as needed - by moving VMs from a loaded server to a lightly loaded one, or by dynamically provisioning additional resources to absorb the peaks.
Dynamic provisioning requires a deeper understanding of resource utilization: which application consumes which resource, and when those resources are being used. To understand this better, consider this example:
In an enterprise there are several workloads, but a few have marked peak behavior. The sales system has a peak demand between 6 PM and 9 PM, the HR system between 1 PM and 5 PM, and the inventory management system between 9 AM and 1 PM. On further analysis, it is found that the sales system's peak demand is on the network and storage IOPS, the ERP system's peak demand is on servers and storage, and the HR system's peak demand is on servers and storage IOPS.
Knowing this level of detail helps system administrators provision additional VMs for ERP by reallocating VMs assigned to HR between 9 AM and 1 PM, while VMs allocated to ERP can be moved to HR between 1 PM and 5 PM.
Solving the sales peak-load problem may require additional networking hardware and more bandwidth - which will result in lower utilization. It is better to have excess capacity sitting idle during off-peak times than to have performance bottlenecks.
There can be more complex cases. If the HR system generates many random writes while the sales system is issuing a series of sequential reads, the sales application will see delays or performance degradation even though both workloads are normal. In this case the SAN network gets choked with writes from the HR system, yet the performance problem is reported by the sales application administrator.
Resolving such correlated workload performance issues requires special tools that provide deeper insight into the system. Essentially, IT administrators must be able to map each application to the resources it uses and then monitor the entire data path for performance management.
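To make the idea of mapping an application to its data path concrete, the following minimal sketch (all component names and utilization figures are hypothetical) walks each application's path and reports the most heavily loaded shared element - usually the first place to look when two workloads interfere:

```python
# Hypothetical utilization (percent) of shared infrastructure elements.
element_load = {
    "esx-host-01": 65,
    "san-switch-a": 92,       # saturated element shared by both applications
    "storage-array-1": 70,
}

# Hypothetical data paths: application -> ordered list of elements it traverses.
data_paths = {
    "sales_app": ["esx-host-01", "san-switch-a", "storage-array-1"],
    "hr_app":    ["esx-host-01", "san-switch-a", "storage-array-1"],
}

def busiest_element(app):
    """Return the most heavily loaded element on the application's data path."""
    return max(data_paths[app], key=lambda element: element_load[element])

for app in data_paths:
    element = busiest_element(app)
    print(f"{app}: likely bottleneck is {element} ({element_load[element]}% utilized)")
```

In this toy setup both applications point to the same saturated SAN switch, which is why the sales administrator sees a problem that is actually caused by HR's write traffic.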
Fundamental Operations Management Issues with Virtualization
Virtualization creates several fundamental systems management problems. These problems are new and cannot be solved by silo-based management tools.
- Fragmented Configuration Management
Configuration & provisioning tools are still silo based: there are separate tools for server, network and storage configuration. This has led to fragmented configuration management, which is not dynamic or fast enough to meet the demands of virtualization.
- Lack of Scalability in monitoring tools
Fault and performance monitoring tools are also silo based, and as the infrastructure gets virtualized, the number of virtual entities increases exponentially. The number of virtual entities is also dynamic and varies with time. Silo-based, domain-specific management tools are intrinsically non-scalable for a virtual system.
- Hardware Faults due to high utilization
Virtualization leads to higher resource utilization, which often stresses the underlying hardware beyond safe operating limits and eventually causes hardware failures. Such high utilization cannot be detected by current monitoring systems, and administrators are forced into breakdown repairs.
- Hypervisor complexities
A typical virtualization environment will have multiple virtualization solutions: VMware, Xen, etc. The hypervisor mechanisms themselves create management problems (see: http://communities.vmware.com/docs/DOC-4960 ). A multi-vendor approach to virtualization further increases hypervisor management complexity.
- Ambiguity
Performance issues arising in a virtualized environment are often ambiguous. A fault or bottleneck seen in one system may have its root cause in another system. This makes complete cross-domain (compute, network, storage) management tools mandatory for finding root causes.
- Interference
VM workloads share resources. As a result, increasing one workload can interfere with the performance of another workload. These interference problems are very difficult to identify and manage.
Closing Thoughts
Virtualization is great for saving costs and improving resource utilization. However, the way the IT infrastructure is managed may have to change fundamentally with virtualization. New workflows will have to be developed, and new management tools will be needed.
Wednesday, December 21, 2011
Apple’s $500 Million acquisition of Anobit
Apple is a big user of flash memory. All its products - iPod, iPad, iPhone, MacBook Air - use NAND flash for storage, and this acquisition is a clear indication of Apple’s future strategy of using flash memory as the basic storage for all its products.
Currently, Apple buys large quantities of NAND flash memory from a host of suppliers, but these are generic chips. Apple is acquiring Anobit to create unique intellectual property in NAND flash memory management - which will enhance the performance of Apple’s products.
In 2010, Apple acquired Intrinsity, a CPU design company based in Austin, Texas. Today the A5 CPU designed by the folks at Apple’s Austin CPU design center forms the computing core of its iPads and iPhones. Though the A5 is based on ARM CPU designs, the graphics processing capability built into the chip ensures superior video/graphics quality - a key product differentiator for iPhones and iPads.
Apple’s Future strategy with this acquisition
Today, there are a few major performance bottlenecks in iPhones and iPads - most of them related to memory latency and memory errors. By acquiring Anobit and leveraging its flash memory designs and technology, Apple can create unique competitive advantages for itself.
Acquisition of Anobit helps in two ways:
1. Improve memory latency. By integrating Anobit’s memory management technology with its CPUs, Apple can create ultra-fast chips to power tablets, the MacBook Air and other thin clients. Apple could also use the new SoC (CPU + GPU + memory manager) to power next-generation servers.
2. Anobit’s technology helps improve the reliability of flash memory chips. By locking in this technology, Apple can build better smartphone products than its competition. The current multi-cell flash memory chips used in cell phones suffer from unreliability and short lifespans.
Apple has historically shown the world the advantages of tightly coupling the hardware with its software. So as Apple moves ahead in the post-PC world, Apple is once again announcing to the world that it will control both the hardware and the software, and through that tight coupling - Apple will create unique competitive advantages.
The current generation of iPhones and iPads is built from generic off-the-shelf parts which can be easily copied - and Samsung has done exactly that. So Apple intends to create a new breed of hardware in which it owns all the IP and designs chips with a tightly integrated CPU, GPU and memory. Apple can then contract the manufacturing to other companies such as TSMC and reduce its dependency on Samsung for memory chips.
Is Apple planning to capture the Data Center?
Apple long ago abandoned the server market. Ever since Apple moved to Intel processors, it has steadily moved away from desktops and servers. In November 2010, Apple announced the end of life of its Xserve rack servers.
But as cloud technology enters the mainstream, there is new demand for energy-efficient servers to run cloud services. Apple already has very energy-efficient ARM CPU technology; by integrating it with efficient memory management technology, Apple could create a rack-mounted server cluster consisting of 16, 32 or 64 individual servers. Such a cluster would be ideal for hosting web servers.
Today it is widely acknowledged that ARM architecture is ideal for low power super computers (see: http://www.cpu-world.com/news_2011/2011111601_Barcelona_Supercomputer_Center_to_build_ARM_Supercomputer.html )
With Apple already holding a dominant market share in tablets and smartphones, it will have to look at other markets to keep Wall Street happy with its hyper growth, and web servers present one such golden opportunity.
Apple TV Solution
Apple has already announced its entry into the TV set market. Apple will have a unique technology in televisions - wireless IP streaming of video with limited storage on the TV set itself. Such TV technology will call for highly reliable flash memory chips. TV programs are memory hogs, and TV sets tend to have a long life compared to computers. In that light, having the technology to create long-lasting flash memory chips will be a key differentiator.
Closing Thoughts
Apple has huge amounts of cash but has been very frugal with acquisitions. Apple has always chosen to acquire niche hardware technology and merge it with its software to create unique products. The acquisition of Anobit is a step in the same direction, and Apple is telling the world that future computing performance gains will come from tightly integrating memory management with the CPU and GPU.
How Apple will translate these technologies into exciting products remains to be seen. One obvious benefit of this acquisition will be to create competitive advantages for Apple’s existing product lines, and then to leverage the technology to create new products such as TVs and servers.
Tuesday, December 20, 2011
Network Management in VDI
Virtual desktops are poised for prime time in 2012. After several years of feeling the pain of supporting multiple platforms and dealing with all the headaches, the IT departments of most major corporations will adopt VDI in a big way in 2012.
While IT administrators may have tested various aspects of VDI - implementation, integration with legacy applications and data management - several network issues stay hidden until VDI is deployed, and these issues will have to be resolved for successful VDI deployments.
VDI technology essentially changes the network traffic in organizations. As VDI is installed in datacenters, the WAN traffic to and from the datacenters explodes. Since the network and datacenters were designed for PC clients, the network will be inadequate for VDI deployments. Moreover, the network problems will not be revealed immediately - they will be revealed in bits and pieces, forcing ad-hoc updates and upgrades to the networks.
These network issues will keep cropping up, and the deployment process will not be smooth sailing for enterprise-wide VDI deployments.
Managing VDI deployments
From the IT manager's perspective, having only one VDI platform would be the ideal solution, but in reality companies will have a heterogeneous (VMware, Citrix, Microsoft) environment. This adds to the complexity of managing the VDI environment, and creates the need for a unified infrastructure manager.
All this means a big need for network Infrastructure management software - such as Ionix ITOI.
Major Network Challenges in VDI deployments
WAN Issues
VDI changes the data routing within an enterprise in a big way. Currently, individual PCs connect to the data center servers over the LAN - but with VDI, all end devices connect to the servers over the WAN, since the virtual desktops are now running on the servers and users access them through the WAN gateway.
As the number of VDI users increases, WAN traffic could grow exponentially. The only way to manage the WAN traffic will be to opt for WAN optimization technology such as Cisco WAAS, Silver Peak, Blue Coat, F5, Expand Networks, Exinda, etc.
QoS and bandwidth management can play a significant role in mitigating the WAN contention issues. Screen refresh, for example, is highly interactive and very sensitive to congestion. Video traffic is also very sensitive to congestion. QoS and bandwidth management can ensure that these applications perform well. While file transfer and print jobs are not very sensitive to congestion, they can induce congestion on the WAN and hence impact the other types of applications. QoS and bandwidth management can ensure that these applications do not interfere with applications that are sensitive to congestion.
VDI will help IT departments consolidate and optimize remote desktop management, but they need to spend time optimizing and controlling the WAN connections between the VDI clients in the branches and the VDI servers. Any bumps in the WAN will translate into a bad user experience for the remote VDI user and a support call to IT.
Major WAN challenges are:
- Hidden choke points that only become apparent when stressed
- Spikes in network traffic that are hard-to-predict before full roll out
- Intermittent network congestion even when average bandwidth is high
WAN optimization in VDI
VDI clients will be mobile. Laptops, netbooks, tablets and even smartphones will be the typical VDI clients. This implies that the quality of the user's Internet connection cannot be guaranteed; users on a low-bandwidth connection will experience severe performance degradation.
LAN Issues
VDI can also increase LAN traffic in subtle ways. As part of VDI, a virtual machine (VM) on a data center server hosts a complete user desktop, including all its applications, configurations and privileges. The client then accesses the desktop over the network, with desktop and application objects delivered on demand from the virtual desktop servers via a remote display protocol such as Microsoft Remote Desktop Protocol (RDP) and/or Citrix's ICA protocol. The RDP/ICA traffic can spike at times, creating choke points within the corporate LAN. In general, with VDI the RDP/ICA traffic will be much higher than the average LAN traffic in a traditional IT deployment.
In addition to the LAN RDP/ICA traffic, users' sessions could also be running other applications (data, music, video, photos) over the LAN.
So, in a nutshell, VDI deployments will see a rapid increase in LAN traffic and will create multiple choke points in the LAN network. The problem gets worse as more seats are added to the system.
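As a back-of-the-envelope aid, the sketch below estimates the aggregate display-protocol traffic a LAN segment would carry for a given number of concurrent VDI sessions. The per-session bandwidth figures and the share of sessions at peak are assumptions for illustration only - actual RDP/ICA bandwidth varies widely with screen activity, multimedia use and protocol tuning:

```python
# Assumed per-session display-protocol bandwidth (kbps) - illustrative values.
TYPICAL_SESSION_KBPS = 150     # light office use
PEAK_SESSION_KBPS = 1500       # multimedia / screen-heavy activity

def lan_load_mbps(sessions, peak_fraction=0.2):
    """Estimate aggregate RDP/ICA load on a LAN segment in Mbps.

    peak_fraction is the assumed share of sessions running at peak
    bandwidth at any instant.
    """
    peak_sessions = sessions * peak_fraction
    typical_sessions = sessions - peak_sessions
    total_kbps = (peak_sessions * PEAK_SESSION_KBPS
                  + typical_sessions * TYPICAL_SESSION_KBPS)
    return total_kbps / 1000.0

for seats in (100, 500, 2000):
    print(f"{seats} seats -> ~{lan_load_mbps(seats):.0f} Mbps of RDP/ICA traffic")
```

Even with these conservative assumptions, a 2000-seat deployment pushes several hundred Mbps of display traffic through the LAN core - enough to saturate links that were sized for traditional client/server traffic.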
Storage Issues
VDI is essentially a hybrid approach, where each end user has a thin client and connects to a private Windows XP or Vista image - a virtual machine hosted on VMware virtual infrastructure. This approach gives IT administrators the greater control over the user environment usually provided by Terminal Services or Citrix environments, by consolidating the Windows images on server-class hardware. It also allows the images to be stored and managed in the datacenter, while giving each user a full personal copy that requires no introduction or explanation for a normal user.
VDI relies on central data storage for both block & file type data. VDI must handle both structured and unstructured data.
Data adapted from the VMware VDI Server Sizing and Scaling white paper compares the disk usage of light and heavy users for a large number of VMware VDI virtual machines (approximately 20) on a single VMware ESX host. It suggests that over 90% of the average information worker's disk I/O consists of read operations.
Before intelligent storage subsystem choices can be made, these throughput values need to be converted to Input/output operations Per Second (IOPS) values used by the SAN/NAS storage industry. A throughput rate can be converted to IOPS by the following formula:
IOPS = Throughput (MBytes/sec) × 1024 (KBytes/MByte) / Block size (KBytes/IO)
Even though the standard NTFS file system allocation size is 4 KB, Windows XP uses a 64-KByte block size and Windows Vista uses a 1-MByte block size for disk I/O. Using the worst-case (heavy user) scenario of 7.0 MBytes/sec throughput and the smaller block size of 64 KBytes for a full group of Windows XP machines, the generated IOPS for approximately 20 virtual machines is 112 IOPS.
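Expressed as code, the same conversion looks like this. The throughput and block-size values are the ones quoted above; the linear scaling to a 2000-desktop deployment is an added assumption purely for illustration:

```python
def throughput_to_iops(throughput_mbytes_per_sec, blocksize_kbytes):
    """Convert a throughput rate to IOPS: MBytes/sec * 1024 / block size (KBytes)."""
    return throughput_mbytes_per_sec * 1024 / blocksize_kbytes

# Heavy-user scenario: 7.0 MBytes/sec across ~20 Windows XP VMs doing 64 KB I/O.
group_iops = throughput_to_iops(7.0, 64)
print(f"~20 VMs:  {group_iops:.0f} IOPS")                 # ~112 IOPS

# Rough linear scaling to a hypothetical 2000-desktop deployment.
print(f"2000 VMs: {group_iops * (2000 / 20):.0f} IOPS")   # ~11,200 IOPS
```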
ESX server supports Fibre Channel, FCoE, iSCSI, NFS and 10GbE.
As the number of VDI users increases, the storage system will face both capacity and performance issues. This type of VDI workload scenario can bring a traditional storage system to its knees.
VDI tends to create I/O spikes, which may require a complete redesign of the storage system. Storage tiering - a 'tier 0' built on flash drives or solid state drives (SSDs) - can absorb the storage I/O spikes and also improve performance by keeping the most frequently used data on SSD. VDI deployments are typically read intensive (90% read and 10% write).
In short, VDI creates additional overhead for storage management and administration.
VDI Fault & Performance Management
One of the most common faults in a VDI environment is VMware losing its connection to storage. When a VM loses the connection to its datastore, the VM becomes unresponsive. This problem becomes more acute in VDI deployments that use vMotion.
In a VDI deployment, such a fault results in unresponsive desktops and a subsequent increase in IT tickets.
WAN Management
The basic requirements of WAN management are:
1. Discover all the WAN gateway network components, including all WAN optimization devices (Blue Coat, F5, Cisco WAAS, Silver Peak, Exinda, etc.)
2. Fault Management of WAN gateway network devices
3. Performance monitoring of WAN gateway network devices. Monitor QoS & SLA parameters through SLA & QoS MIBs (a small utilization sketch follows this list).
4. Performance Management via vCenter Operations Enterprise for the entire WAN gateway network
5. Remote Configuration management of WAN network for bandwidth/performance optimization.
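As a generic illustration of the kind of metric derivation behind item 3, the sketch below computes link utilization from two successive samples of the standard IF-MIB interface octet counters. The SNMP polling itself, counter-wrap handling and any vendor QoS MIBs are outside this sketch, and the sample values, link speed and threshold are invented:

```python
def interface_utilization(octets_t1, octets_t2, interval_sec, if_speed_bps):
    """Percent utilization of a link from two ifInOctets/ifOutOctets samples."""
    bits_transferred = (octets_t2 - octets_t1) * 8
    return 100.0 * bits_transferred / (interval_sec * if_speed_bps)

# Hypothetical samples from a 10 Mbps WAN link, polled 60 seconds apart.
util = interface_utilization(octets_t1=1_200_000, octets_t2=65_000_000,
                             interval_sec=60, if_speed_bps=10_000_000)
print(f"WAN link utilization: {util:.1f}%")
if util > 80:
    print("Threshold crossed - raise a performance alert")
```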
LAN Management
1. Discover & monitor the entire enterprise LAN - wired & Wi-Fi - for availability & performance
2. Performance monitoring of Wi-Fi networks, as people increasingly use Wi-Fi to connect to the corporate network
3. VPN tunnel connection monitoring. Monitor VPN tunnels & VPN gateways for any faults that cause VPN tunnels to go down
4. Security & authentication management to detect any unauthorized log-ins and intrusions on the LAN
Storage Management
1. Discover & monitor SAN switches & the SAN network
2. Discover storage arrays, LUNs & WWNs
3. Discover all VMs used for VDI and the WWNs associated with them
4. Correlate unresponsive VM/VDI events to the respective storage or network failures, as sketched below
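Item 4 above is essentially a path lookup: once each desktop VM's datastore, LUN and array are known, an unresponsive-VM event can be checked against the current storage fault list. A minimal sketch, with purely hypothetical names and mappings:

```python
# Hypothetical VM -> storage path mapping, as discovered from vCenter and the SAN.
vm_storage_path = {
    "vdi-desktop-042": {"datastore": "ds-vdi-01", "lun": "lun-7", "array": "array-A"},
    "vdi-desktop-117": {"datastore": "ds-vdi-02", "lun": "lun-9", "array": "array-B"},
}

# Current fault events reported by the storage and SAN monitoring tools.
active_storage_faults = {"lun-7", "san-switch-a-port-12"}

def probable_root_cause(vm_name):
    """Return the faulty storage element behind an unresponsive VM, if any."""
    for element in vm_storage_path.get(vm_name, {}).values():
        if element in active_storage_faults:
            return element
    return None

cause = probable_root_cause("vdi-desktop-042")
print(cause or "No matching storage fault - investigate network or compute instead")
```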
VDI Needs Better Storage Management
VDI creates humongous volumes of data under management. Since all the user data will be centrally stored, the volume of data that needs to be managed will be huge. To understand this, consider the following:
The average user today has about 100 GB of data on their desktop. So for 2000 users, the total volume of data will be 200 TB. This translates to roughly 1000 TB of disk space once RAID-5 and active backups are accounted for. Note that this is additional data under management - data that today sits unmanaged in physical desktops.
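The arithmetic behind that estimate can be sketched as follows. The RAID-5 overhead factor and the number of active backup copies are assumptions chosen only to show how raw capacity balloons; adjust them to match the actual protection scheme:

```python
users = 2000
data_per_user_gb = 100

primary_tb = users * data_per_user_gb / 1000.0   # ~200 TB of user data

# Assumed overheads: RAID-5 parity on every copy, plus retained backup copies.
RAID5_OVERHEAD = 1.25    # e.g. a 4+1 RAID-5 group (assumption)
BACKUP_COPIES = 3        # active backup copies retained (assumption)

raw_tb = primary_tb * RAID5_OVERHEAD * (1 + BACKUP_COPIES)
print(f"Primary data: {primary_tb:.0f} TB, raw disk capacity needed: ~{raw_tb:.0f} TB")
```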
Naturally, it makes sense to use data de-duplication technologies and implement a rule-based data management system to minimize the total volume of data under management and the disk requirements.
Closing Thoughts
VDI deployments will create a big need for an automated fault, performance and configuration management solution that spans the virtual domain and the physical domain of servers, network and storage.
Successful VDI deployment will rely on an automated IT infrastructure management solution that provides provisioning, automated root cause analysis, alerts on potential provisioning issues, automatic fault identification and active performance management.
In an ideal world, the IT infrastructure management solution should be able to correlate faults in the virtual domain (VDI) with the underlying hardware, network and storage problems, and alert IT administrators before the faults are detected by end users.
References:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1009553
VDI sets the stage for DaaS
Today VDI technology has matured and is ready for prime time, but deploying VDI requires some heavy lifting involving massive investments in infrastructure. So for medium-sized enterprises, DaaS offers an easier option to roll out VDI, and those DaaS deployments can later be integrated back into in-house VDI offerings.
DaaS: The Cloud-based virtual desktops
DaaS can be consumed from any location, on any device connected to the Internet. With virtual desktops hosted in the cloud, DaaS service providers can offer the best of both the VDI and cloud worlds.
Cloud-based apps such as Google Apps, Office 365, etc. have one major disadvantage: all the user-created data resides with the service provider, so over a period of time the volume of data effectively locks in the users and prevents customers from moving to a new service. There is also the risk that data stored with the app provider can be hacked, stolen or lost. To guard against such cases, or against a disaster, it is better to have a DaaS provider integrate a VDI deployment with cloud apps and deliver the desktop to users over the Internet.
DaaS helps large organizations
Very large organizations such as government departments and manufacturing firms are ideal customers for DaaS. Since not every employee in these organizations needs a desktop all the time, the organizations can buy virtual desktops for their employees and then have the data synchronized with their own data centers.
For government departments, DaaS means greater savings through centralized IT services and eliminates the need for multiple IT departments. Centralizing services such as email, desktops and data management can result in significant savings.
DaaS helps data management
Desktops create huge volumes of data. Each user will end up having about 100 GB of data - which in a large organization can translate to several petabytes. Today, in traditional desktops, this data resides on the hard drives of the user desktops and is not managed. With VDI, this data will have to be managed.
Since most of the user data will be copies of emails or files, a strong data de-duplication system will reduce the volume of user data to a small fraction of the original. Having all the data in a central repository also means that rogue programs or data can be deleted and prevented from entering the system.
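A toy sketch of how block-level de-duplication achieves that reduction: identical chunks are stored once, keyed by a content hash. The fixed chunk size and the sample "mailboxes" are arbitrary; real de-duplication engines add variable-length chunking, metadata and garbage collection:

```python
import hashlib

CHUNK_SIZE = 4096  # bytes; arbitrary fixed-size chunking for this sketch

def dedup_ratio(files):
    """Return (logical_bytes, stored_bytes) for a list of byte strings."""
    store = {}                                   # content hash -> chunk
    logical = 0
    for data in files:
        logical += len(data)
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            store[hashlib.sha256(chunk).hexdigest()] = chunk
    stored = sum(len(chunk) for chunk in store.values())
    return logical, stored

# Two "mailboxes" sharing the same large attachment - a common pattern in VDI data.
attachment = b"A" * 40_000
files = [b"mail-1 " + attachment, b"mail-2 " + attachment]
logical, stored = dedup_ratio(files)
print(f"logical {logical} bytes, stored {stored} bytes "
      f"(ratio {logical / stored:.1f}:1)")
```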
Since all the user data is stored in a central repository, loss of user data can largely be prevented.
Large organizations also have a large number of contractors, part-time employees and partner employees who need some level of IT access. For such 'floating' employees, DaaS offers an ideal solution: a virtual desktop without access to the main corporate IT infrastructure.
DaaS will also attract individual users
In a fully cloud-enabled world, where all the end-user devices are thin clients such as Chromebooks and tablets, users will still need a traditional desktop for some of their personal work - editing pictures and videos, creating documents and spreadsheets, storing contact details, music, etc. These services can be consumed as web services across multiple sites, or consolidated into a central VDI.
Just like today, where people have a home PC and a work PC, there will be a need for a home VDI and a work VDI. Getting a private VDI from a DaaS provider will perfectly meet that need.
DaaS will create Data-as-a-Service
The most valuable entity in any IT system is data. Users consume data to create valuable information, so with VDI, users will need certain types of data to be delivered as a service. Today Data-as-a-Service is still in its infancy, but as the cloud ecosystem matures, the user-created data in a VDI environment will create new opportunities to build services that take user-created data and offer it as a service to others.
The best example of Data-as-a-Service is Bloomberg's stock information data. Bloomberg provides accurate stock quotes to user desktops, enabling users to trade stocks. With Data-as-a-Service, VDI users will consume data to create information or to make valuable decisions.
Closing thoughts
2012 will see a steady (but slow) roll-out of virtual desktops. Large organizations will roll out VDI as a service to the business, and new DaaS providers such as Desktone will emerge, offering the entire VDI stack as a service.
As the popularity of thin clients grows and employees are encouraged to bring their own PCs to work, VDI becomes a very attractive value proposition. The amount of data centralization and the simplicity of desktop management and control will drive VDI implementation in organizations. Many organizations trying out VDI will also try Desktop-as-a-Service as a stepping stone to a full-fledged VDI.
The ideal VDI deployment will be a hybrid: a completely private VDI deployment (for roughly 80% of the need) along with public DaaS (for the remaining 20%). A hybrid VDI deployment will also work as a buffer to deal with sudden or seasonal demand.
DaaS will eventually create additional value added services for Data-as-a-Service. Right now, Data-as-a-Service is still in its infancy, but as VDI and cloud services become popular, Data becomes a valuable service.
Monday, December 19, 2011
Need for a Central Multi-Factor Authentication as a Service
Once cloud computing becomes popular, the number of web sites that need authentication will explode. I already have half a dozen cloud service sites for which I need to remember my login IDs and passwords (Office365, Zoho, Dropbox, Google Docs, MegaCloud, Salesforce.com).
Today, every online user has at least 12+ web-based services that need log-in authentication, and this number is about to explode.
On the other hand, as the number of services that need user authentication increases, online security is being increasingly compromised. The world over, hackers are getting bolder - sitting in safe havens, they hack into secure sites such as Citibank, HSBC, etc. Even RSA's servers were hacked.
All this points to a need for a safe and secure and centralized multi-factor authentication as a service offering.
The current gold standard for authentication - RSA's two-factor authentication - is on its last legs. RSA's servers were hacked in March 2011, and several of its internal secrets were stolen. RSA acknowledged that the hackers went after its intellectual property and source code, but stopped short of revealing the extent of the theft. (see http://www.wired.com/threatlevel/2011/03/rsa-hacked/)
All this indicates that two-factor authentication based on private/public keys can be broken in the near future.
Authentication system today
Today, there is no central authentication system. To an extent, Google and Facebook provide a public authentication service, but it is too weak and insecure. So users are forced to remember multiple login ID/password pairs for each of the services they use.
LDAP falls way short of the requirement, as it does not support multi-part authentication. Kerberos and X.509 support multi-part authentication but do not scale for a global web-based authentication service. Kerberos has historically relied on DES encryption, which is not sufficient for the cloud era. In addition, Kerberos suffers a major drawback in that it needs clock synchronization - which is not practical in a global web-based authentication service.
Need of the hour
There is a need for a safe, secure and single authentication service. The authentication service will run in the cloud and offer authentication as a global service. The central authentication service can be multi-tiered:
1. A basic authentication service for basic web services - such as log-in to public web sites
2. Geographic authentication service - which provides basic authentication along with locational information for accessing personal information on Internet - such as social network or emails.
3. A high-security, encrypted authentication service for eCommerce, net banking and other high-value services. The authentication system can generate encryption keys to encrypt all transactions between the user and the service provider, thus providing safe and secure web transactions.
The authentication service validates several aspects of the user and confirms that the Internet user really is the person he/she claims to be. The authentication service will validate and verify the following:
1. Identity of the person - Age, Sex, Address, etc.
2. Validate the privileges the person is entitled to for a particular web service.
3. Provide a history of services the user has used in the past. This will require the web services to update the authentication service with user history.
The authentication service may or may not provide personal information to web service providers, based on the individual user's wishes. If a person does not wish to reveal his age to a web service provider, then the provider can only check whether the user is of legal age, and such a check will be offered by the authentication service.
The authentication system incorporates one or more unique identification services - such as the Unique Identification Authority of India (UIDAI) or the Social Security Number - to establish the person's identity.
The central authentication service can also provide information regarding user rights - i.e., tell the web service the extent and level of rights the user has on the system: the authenticated user has clearances for a given set of functions. This information can be used by the web services to designate the level of authorization for the user and set user privileges accordingly.
The multi-part authentication service can use:
1. A 6-10 character private key that only the user knows
2. A user biometric or a unique ID code that is not in human-readable format
3. A dynamic public key, such as an RSA fob or a software fob system
This multi-part authentication service will be more secure than anything in use today. A three-part authentication system provides strong protection against hackers: the enormous number of possible combinations makes hacking an individual's ID practically impossible and makes the system highly resistant to brute-force attacks.
Such a multi-part authentication system will be considerably more secure than today's password-based systems, even when those are protected with 3DES or AES-256 encryption.
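A highly simplified sketch of the three-factor check described above. All names, secrets and the way the one-time code is derived are illustrative; a production service would use a vetted standard such as TOTP (RFC 6238), salted password hashing and a proper biometric/UID verification back end:

```python
import hashlib
import hmac
import time

def one_time_code(shared_secret: bytes, step_seconds: int = 30) -> str:
    """Derive a 6-digit time-based code from a shared secret (TOTP-like)."""
    counter = int(time.time() // step_seconds).to_bytes(8, "big")
    digest = hmac.new(shared_secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    value = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return f"{value % 1_000_000:06d}"

def authenticate(user, password, uid_token, code, user_db):
    """Return True only if all three factors match the stored record."""
    record = user_db.get(user)
    if record is None:
        return False
    password_ok = hmac.compare_digest(
        hashlib.sha256(password.encode()).hexdigest(), record["password_hash"])
    uid_ok = hmac.compare_digest(uid_token, record["uid_token"])
    code_ok = hmac.compare_digest(code, one_time_code(record["otp_secret"]))
    return password_ok and uid_ok and code_ok

# Hypothetical user record - all secrets below are placeholders, not real data.
user_db = {
    "alice": {
        "password_hash": hashlib.sha256(b"correct-horse").hexdigest(),  # factor 1
        "uid_token": "UID-9F3A-EXAMPLE",         # factor 2: biometric/UIDAI-derived token
        "otp_secret": b"per-user-shared-secret", # factor 3: seeds the dynamic code
    }
}

submitted_code = one_time_code(b"per-user-shared-secret")   # from the user's fob/app
print(authenticate("alice", "correct-horse", "UID-9F3A-EXAMPLE", submitted_code, user_db))
```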
Creating a centralized authentication as a service will enable pooling of resources to create a better identity management and security system as a service. This service will provide the first level of security for all Internet transactions - between user and service providers, and also between various service providers.
Ideally, there can be multiple authentication service providers providing a choice for customers.
Closing Thoughts
There is a need for a safe and secure authentication system. The current method of authentication - the user ID and password combination - is inadequate and broken. As the world moves towards web/cloud-based services, a strong identity management, authentication and security system becomes a vital building block for a safe and secure Internet.
In this article, I have just touched upon the basic idea of such a service. The business model and the operational details are yet to be developed.