
Thursday, August 16, 2018

Steps in Cloud Adoption at Large Enterprises

Large enterprises face bigger challenges when it comes to migrating applications to the cloud. In most large enterprises, migration to the cloud is an evolutionary process and typically a four-step process - but not necessarily a sequential one; the steps can happen in sequence or in parallel.

Moving to cloud requires a complete buy-in from all business & IT teams: developers, compliance experts, procurement, and security.

The first step is all about becoming aware of cloud technologies and their implications. The IT team will need to understand:

1. What are the benefits - agility, cost savings, scalability, etc.?
2. What is the roadmap for moving to the cloud?
3. What skills will each team member need?
4. How will legacy applications work in the future?
5. Who are the partners in this journey?

The second step is all about experimentation and learning from small experiments. These are typically PoC projects which demonstrate the capabilities & benefits. The PoC projects are needed to get key stakeholder buy-in.

The third step is essentially a migration of existing apps to the cloud - for example, moving email to the cloud or moving office apps to the Office 365 cloud. These projects are becoming the norm for large enterprises, which have a rich legacy.

The fourth step demonstrates full cloud maturity. In this stage, companies deploy all new apps on the cloud, and these are cloud-only apps.

Wednesday, July 04, 2018

Skills Needed To Be A Successful Data Scientist

Data Scientist, the most in-demand job of the 21st century, requires multidisciplinary skills – a mix of Math, Statistics, Computer Science, Communication & Business Acumen.


Thursday, June 14, 2018

Securing Containers and Microservices with HPE ProLiant Servers



Cloud-native software built on container technologies and microservices architectures is rapidly modernizing applications and infrastructure, and containers are the preferred means of deploying microservices. Cloud-native applications and infrastructure require a radically different approach to security. Cloud-native applications commonly employ a service-oriented architecture based on microservices; these microservices run in containers, and each container has to be individually secured.

This calls for new ways to secure applications. One needs to start with comprehensive secure infrastructure, a container management platform, and tools to secure cloud-native software in order to address the new security paradigm.

This article proposes one such solution. Running VMware Instantiated Containers on HPE ProLiant Gen10 DL325 & DL385 servers with AMD EPYC processors can address these security challenges.

HPE ProLiant Gen10 DL325 & DL385 servers with AMD EPYC processors provide a solid security foundation. HPE's silicon root of trust, the FIPS 140-2 Level 1 certified platform, and AMD's Secure Memory Encryption provide the foundation layer for a secure IT infrastructure.

About AMD EPYC Processor

The AMD EPYC processor is AMD's x86-architecture server processor, designed to meet the needs of today's software-defined data centers. The AMD EPYC SoC bridges the gaps with innovations designed from the ground up to efficiently support existing and future data center requirements.

The AMD EPYC SoC brings a new balance to your data center. The highest core count in an x86-architecture server processor, the largest memory capacity, the most memory bandwidth, and the greatest I/O density are all brought together with the right ratios to deliver the best performance.

AMD Secure Memory Encryption

The AMD EPYC processor incorporates a hardware AES encryption engine for inline encryption & decryption of DRAM. The AMD EPYC SoC uses a 32-bit microcontroller (ARM Cortex-A5) that provides cryptographic functionality for secure key generation and key management.
Encrypting main memory keeps data private from malicious intruders who have access to the hardware, so Secure Memory Encryption protects against physical memory attacks. A single key is used for encryption of system memory and can be used on systems running VMs or containers. The hypervisor chooses which pages to encrypt via page tables, giving users control over which applications use memory encryption.

Secure Memory Encryption allows running a secure OS/kernel so that encryption is transparent to applications, with minimal performance impact. Other hardware devices such as storage, network, and graphics cards can access encrypted pages seamlessly through Direct Memory Access (DMA).
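As a quick sanity check before relying on memory encryption, a Linux host exposes the relevant CPU feature flags in /proc/cpuinfo. Below is a minimal Python sketch, assuming a Linux host and the flag names (`sme`, `sev`) commonly reported for EPYC parts:

```python
import re

def cpu_memory_encryption_flags(cpuinfo_path="/proc/cpuinfo"):
    """Return whether the CPU advertises the AMD memory-encryption flags (sme, sev)."""
    with open(cpuinfo_path) as f:
        text = f.read()
    # The 'flags' line lists CPU features; SME/SEV appear as 'sme' and 'sev' on EPYC parts.
    match = re.search(r"^flags\s*:\s*(.*)$", text, re.MULTILINE)
    flags = set(match.group(1).split()) if match else set()
    return {name: (name in flags) for name in ("sme", "sev")}

if __name__ == "__main__":
    print(cpu_memory_encryption_flags())   # e.g. {'sme': True, 'sev': True}
```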

VMware virtualization solutions

VMware virtualization solutions, including NSX-T, NSX-V & vSAN, along with VMware Instantiated Containers, provide network virtualization with security in the form of micro-segmentation and virtual firewalls for each container, delivering runtime security.

Other VMware components include vRealize Suite for continuous monitoring and container visibility. This enhanced visibility helps in automated detection, prevention & response to security threats.

Securing container builds and deployment

Security starts at the build and deploy phase. Only tested & approved builds are held in the container registry, from which all container images are pulled for production deployment. Each container image has to be digitally verified prior to deployment. Signing images with private keys provides cryptographic assurance that each image used to launch containers was created by a trusted party.
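One way to enforce this gate is to refuse deployment of any image that lacks trust data. The sketch below assumes a Docker CLI with content trust support (`docker trust inspect`); the registry and image name are placeholders:

```python
import json
import subprocess

def image_is_signed(image: str) -> bool:
    """Check whether an image has Docker Content Trust signatures before allowing deployment."""
    result = subprocess.run(
        ["docker", "trust", "inspect", image],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        return False                      # no trust data available -> treat as unsigned
    trust_data = json.loads(result.stdout)
    # Each entry lists SignedTags with the signers that vouched for the image digest.
    return any(entry.get("SignedTags") for entry in trust_data)

if __name__ == "__main__":
    image = "registry.example.com/payments:1.4.2"   # hypothetical image
    if not image_is_signed(image):
        raise SystemExit(f"Refusing to deploy unsigned image: {image}")
```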

Harden & restrict access to the host OS. Since containers running on a host share the same OS, it is important to ensure that they start with an appropriately restricted set of capabilities. This can be achieved using kernel security features together with secure boot and secure memory encryption.
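A minimal sketch of launching a container with a restricted privilege set, using the Docker SDK for Python; the image, command, and port are illustrative only:

```python
import docker  # Docker SDK for Python (pip install docker)

client = docker.from_env()

# Launch a container with a deliberately minimal privilege set: all Linux
# capabilities dropped, no privilege escalation, read-only root filesystem,
# and a tmpfs for scratch space.
container = client.containers.run(
    "python:3.11-alpine",
    ["python", "-m", "http.server", "8080"],
    detach=True,
    cap_drop=["ALL"],                       # start from zero capabilities
    security_opt=["no-new-privileges"],     # block setuid-style escalation
    read_only=True,                         # immutable root filesystem
    tmpfs={"/tmp": "rw,size=64m"},          # writable scratch space only
    ports={"8080/tcp": 8080},
)
print("started:", container.short_id)
```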

Secure data generated by containers. Data encryption starts at the memory level, even before data is written to disk. Secure memory encryption on HPE DL325 & DL385 servers allows seamless integration with vSAN, so that all data is encrypted according to global standards such as FIPS 140-2. In addition, kernel security features and modules such as seccomp, AppArmor, and SELinux can also be used.

Specify application-level segmentation policies.  Network traffic between microservices can be segmented to limit how they connect to each other. However, this needs to be configured based on application-level attributes such as labels and selectors, abstracting away the complexity of dealing with traditional network details such as IP addresses. The challenge with segmentation is having to define policies upfront that restrict communications without impacting the ability of containers to communicate within and across environments as part of their normal activity.
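One common way to express such label-based segmentation is a Kubernetes NetworkPolicy, assuming the microservices run on Kubernetes with a CNI plugin that enforces policies; the namespace, service names, labels, and port below are made up for illustration:

```python
import yaml  # pip install pyyaml

# Only pods labelled app=orders may talk to app=payments, and only on TCP 8443.
policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "allow-orders-to-payments", "namespace": "shop"},
    "spec": {
        "podSelector": {"matchLabels": {"app": "payments"}},
        "policyTypes": ["Ingress"],
        "ingress": [{
            "from": [{"podSelector": {"matchLabels": {"app": "orders"}}}],
            "ports": [{"protocol": "TCP", "port": 8443}],
        }],
    },
}

# Write out the manifest; it would then be applied with `kubectl apply -f policy.yaml`.
print(yaml.safe_dump(policy, sort_keys=False))
```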

Securing containers at runtime

Runtime phase security encompasses all the functions—visibility, detection, response, and prevention—required to discover and stop attacks and policy violations that occur once containers are running. Security teams need to triage, investigate, and identify the root causes of security incidents in order to fully remediate them. Here are the key aspects of successful runtime phase security:

Instrument the entire environment for continuous visibility.  Being able to detect attacks and policy violations starts with being able to capture all activity from running containers in real time to provide an actionable "source of truth." Various instrumentation frameworks exist to capture different types of container-relevant data. Selecting one that can handle the volume and speed of containers is critical.
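As a small illustration of capturing container activity in real time, the sketch below streams lifecycle events from a local Docker engine using the Docker SDK for Python; in practice these events would be shipped to a central analytics pipeline rather than printed:

```python
import docker  # Docker SDK for Python (pip install docker)

client = docker.from_env()

# Stream container lifecycle events (create, start, die, ...) as they happen.
for event in client.events(decode=True):
    if event.get("Type") == "container":
        name = event["Actor"]["Attributes"].get("name", "<unknown>")
        print(event["time"], event["Action"], name)
```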

Correlate distributed threat indicators.  Containers are designed to be distributed across compute infrastructure based on resource availability. Given that an application may be composed of hundreds or thousands of containers, indicators of compromise may be spread out across large numbers of hosts, making it harder to pinpoint those that are related as part of an active threat. Large-scale, fast correlation is needed to determine which indicators form the basis of particular attacks.

Analyze container and microservices behavior. Microservices and containers enable applications to be broken down into minimal components that perform specific functions and are designed to be immutable. This makes it easier to understand normal patterns of expected behavior than in traditional application environments. Deviations from these behavioral baselines may reflect malicious activity and can be used to detect threats with greater accuracy.

Augment threat detection with machine learning. The volume and speed of data generated in container environments overwhelms conventional detection techniques. Automation and machine learning can enable far more effective behavioral modeling, pattern recognition, and classification to detect threats with increased fidelity and fewer false positives. Beware solutions that use machine learning simply to generate static whitelists used to alert on anomalies, which can result in substantial alert noise and fatigue.
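As a toy illustration of behavioral modeling rather than static whitelisting, the sketch below fits an unsupervised anomaly detector (scikit-learn's IsolationForest) on synthetic per-container metrics; the feature choices and numbers are invented for the example:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy feature vectors per container sample: [cpu %, memory MB, outbound connections/min].
# In a real pipeline these would come from the instrumentation layer described above.
rng = np.random.default_rng(0)
baseline = rng.normal(loc=[20, 300, 40], scale=[5, 30, 8], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

new_samples = np.array([
    [22, 310, 38],    # looks like normal behaviour
    [85, 900, 400],   # sudden spike -> likely anomaly
])
print(model.predict(new_samples))   # 1 = normal, -1 = anomaly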

Intercept and block unauthorized container engine commands. Commands issued to the container engine, e.g., Docker, are used to create, launch, and kill containers as well as run commands inside of running containers. These commands can reflect attempts to compromise containers, meaning it is essential to disallow any unauthorized ones.
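A hypothetical sketch of the allowlist idea: in a real deployment this logic would sit in a proxy or authorization plugin in front of the container engine API, but the gate itself is simple. All names and roles below are made up:

```python
# Hypothetical allowlist of container engine actions per caller identity.
ALLOWED_ACTIONS = {
    "deploy-bot": {"create", "start", "stop"},
    "ops-oncall": {"create", "start", "stop", "kill", "exec"},
}

def is_authorized(user: str, action: str) -> bool:
    """Return True only if this user may issue this engine action."""
    return action in ALLOWED_ACTIONS.get(user, set())

assert is_authorized("ops-oncall", "exec")
assert not is_authorized("deploy-bot", "exec")   # deploy pipeline may not exec into containers
```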

Automate actions for response and forensics. The ephemeral life spans of containers mean that they often leave very little information available for incident response and forensics. Further, cloud-native architectures typically treat infrastructure as immutable, automatically replacing impacted systems with new ones, meaning containers may be gone by the time of investigation. Automation can ensure information is captured, analyzed, and escalated quickly enough to mitigate the impact of attacks and violations.

Closing Thoughts

Faced with these new challenges, security professionals will need to build new secure IT infrastructure that supports the required levels of security for their cloud-native technologies. Secure IT infrastructure must address the entire lifecycle of cloud-native applications: build/deploy & runtime. Each of these phases has a different set of security considerations, which must be addressed to form a comprehensive security program.

Thursday, May 24, 2018

Most Common Security Threats for Cloud Services


Cloud computing continues to transform the way organizations use, store, and share data, applications, and workloads. It has also introduced a host of new security threats and challenges. As more data and applications move to the cloud, the security threats also increase.

With so much data residing in the cloud, particularly the public cloud, these services have become natural targets for cyber security attacks.

The main responsibility for protecting corporate data in the public cloud lies not with the service provider but with the cloud customer. Enterprise customers are now learning about the risks and spending money to secure their data and applications.


Tuesday, May 22, 2018

5 Aspects of Cloud Management


If you have to migrate an application to a public cloud, there are five aspects that you need to consider before migrating.



1. Cost Management
The cost of a public cloud service must be clearly understood, and chargeback to each application must be accurate. Look out for hidden costs and demand-based costs, as these can burn a serious hole in your budget.

2. Governance & Compliance
Compliance with regulatory standards is mandatory. In addition, you may have your own compliance requirements. Service providers must proactively adhere to these standards.

3. Performance & Availability
Application performance is key. The availability/uptime and performance of the underlying IT infrastructure must be monitored continuously. In addition, application performance monitoring, both through direct methods and via synthetic transactions, is critical to know what customers are experiencing.

4. Data & Application Security
Data security is a must. Data must be protected against theft, loss, and unavailability. Applications must also be secured from unauthorized access and DDoS attacks. Having an active security system is a must for apps running in the cloud.

5. Automation & Orchestration
Automation for rapid application deployment via DevOps, rapid configuration changes, and new application deployment is a must. Offering IT infrastructure as code enables flexibility for automation and DevOps (see the sketch below). Orchestration of various third-party cloud services and the ability to use multiple cloud services together is mandatory.
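To make "infrastructure as code" concrete, here is a minimal sketch using Pulumi's Python SDK, assuming Pulumi and an AWS provider are configured; the resource name, instance size, and AMI ID are placeholders, and other IaC tools work on the same declarative principle:

```python
import pulumi
import pulumi_aws as aws

# Declare a small piece of infrastructure as code; `pulumi up` creates or updates it
# idempotently, which is what makes it easy to drive from a DevOps pipeline.
web = aws.ec2.Instance(
    "web-server",
    instance_type="t3.micro",
    ami="ami-0123456789abcdef0",            # placeholder AMI
    tags={"app": "demo", "managed-by": "pulumi"},
)

pulumi.export("public_ip", web.public_ip)
```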

Friday, May 18, 2018

Software Defined Security for Secure DevOps



The core idea of DevOps is to build & deploy more applications, and to do it a whole lot faster. However, there are several security-related challenges that need to be addressed before a new application is deployed.

Software Defined Security addresses this challenge of making applications more secure - while keeping pace with business requirements for a DevOps deployment.

The fundamental concept of software defined security is to codify all security parameters/requirements into modules which can be snapped onto any application. For example, micro-segmentation, data security, encryption policies, activity monitoring, DMZ security posture, etc. are all coded into distinct modules and offered through a service catalog.

A small team of security experts can develop this code, review & validate it and make these security modules generally available for all application developers.

Application developers can select the required security modules at the time of deployment, as sketched below. This gives a tremendous time-to-deployment advantage, as it automates several security checks and audits that are done before deployment.
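A minimal sketch of the service-catalog idea: security requirements are codified once as reusable modules, and a deployment simply declares which modules it needs. The module names and settings below are illustrative, not a real product catalog:

```python
# Pre-approved security modules, maintained by the security team.
SECURITY_CATALOG = {
    "micro-segmentation":  {"default_policy": "deny-all", "allow": ["same-app"]},
    "data-encryption":     {"at_rest": "AES-256", "in_transit": "TLS1.2+"},
    "activity-monitoring": {"log_level": "audit", "forward_to": "siem"},
    "dmz-posture":         {"ingress_ports": [443]},
}

def build_security_profile(app_name, module_names):
    """Assemble an app's security profile from catalog modules, rejecting unknown ones."""
    unknown = set(module_names) - SECURITY_CATALOG.keys()
    if unknown:
        raise ValueError(f"Modules not in catalog: {unknown}")
    return {"app": app_name, "modules": {m: SECURITY_CATALOG[m] for m in module_names}}

profile = build_security_profile("payments-api", ["micro-segmentation", "data-encryption"])
print(profile)
```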

Security code review & security testing are done once at the security module level, and thus the individual security code review of each application can be automated. This saves a tremendous amount of time during application testing, leading to faster deployment.

Software security is ever-changing, so when a new standard arrives or a security posture has to be modified, only the security modules are changed and applications pick up the new modules - thus automating security updates across a whole lot of individual applications. This leads to tremendous effort savings in the operations management of deployed apps.


Tuesday, May 08, 2018

Build Modern Data Center for Digital Banking



Building a digital bank needs a modern data center. The dynamic nature of fintech and digital banking calls for a new data center which is highly dynamic, scalable, agile, highly available, and offers all compute, network, storage, and security services as programmable objects with unified management.

A modern data center enables banks to respond quickly to the dynamic needs of the business.
Rapid IT responsiveness is architected into the design of a modern infrastructure that abstracts traditional infrastructure silos into a cohesive virtualized, software-defined environment that supports both legacy and cloud-native applications and seamlessly extends across private and public clouds.

A modern data center can deliver infrastructure as code to application developers for even faster provisioning of both test & production deployments via rapid DevOps.

Modern IT infrastructure is built to deliver automation - to rapidly configure, provision, deploy, test, update, and decommission infrastructure and applications (legacy, cloud-native, and microservices).

Modern IT infrastructure is built with security as a solid foundation to help protect data, applications, and infrastructure in ways that meet all compliance requirements, and also offer flexibility to rapidly respond to new security threats.

Monday, May 07, 2018

Product Management - Managing SaaS Offerings



If you are a product manager of a SaaS product, then there are additional things you need to do to ensure a successful customer experience - managing the cloud deployment.

Guidelines for choosing the best data center or cloud-based platform for a SaaS offering

1. Run the latest software. 

In the data center or in the IaaS cloud, run the latest versions of all supporting software: OS, hypervisors, security tools, core libraries, etc. Having the latest software stack will help build the most secure ecosystem for your SaaS offerings.

2. Run on the latest hardware. 

Assuming you're running in your own data center, run the SaaS application on the latest servers - like HPE ProLiant Gen10 servers - to take advantage of the latest Intel Xeon processors. As of mid-2018, use servers running the Xeon E5 v3 or later, or E7 v4 or later. If you use anything older than that, you're not getting the most out of the applications or taking advantage of the hardware chipset.

3. Optimize your infrastructure for best performance.

Choose the VM sizing (vCPU & Memory) for the best software performance. More memory almost always helps. Yes, memory is the lowest hanging of all the low-hanging fruit. You could start out with less memory and add more later with a mouse click. However, the maximum memory available to a virtual server is limited to whatever is in the physical server.

4. Build Application performance monitoring into your SaaS platform

In the cloud, application performance monitoring is vital to determining customer experience. Application performance monitoring has to be from the customer's perspective - i.e., how customers experience the software.

This implies constant server, network, and storage performance monitoring, VM monitoring, and application performance monitoring via synthetic transactions.
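A minimal sketch of a synthetic transaction: periodically hit an endpoint exactly as a customer would and record latency and status. The URL is a placeholder; in practice this would run on a schedule from several regions and feed the alerting system:

```python
import time
import requests  # pip install requests

def synthetic_check(url, timeout=5.0):
    """Run one synthetic transaction and report latency and status as a customer would see it."""
    start = time.monotonic()
    try:
        response = requests.get(url, timeout=timeout)
        latency_ms = (time.monotonic() - start) * 1000
        return {"url": url, "status": response.status_code, "latency_ms": round(latency_ms, 1)}
    except requests.RequestException as exc:
        return {"url": url, "status": "error", "error": str(exc)}

print(synthetic_check("https://app.example.com/health"))   # hypothetical endpoint
```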

Application performance also determines the location of cloud services. If customers are on the East Coast, then the servers/data centers should be on the East Coast. Identify where customers are using the software and locate the data centers closer to those customers to maximize user experience.

5. Build for DR and Redundancy

A SaaS operation must be available 24x7x365, so every component of the SaaS platform must be designed for high availability (multiple redundancy) and active DR. If the SaaS application is hosted on big name-brand hosting services (AWS, Azure, Google Cloud, etc.), then opt for multi-site resilience with automatic failover.

6. Cloud security

Regardless of your application, you'll need to decide if you'll use your cloud vendor's native security tools or leverage your own for deterrent, preventative, detective and corrective controls. Many, though not all, concerns about security in the cloud are overblown. At the infrastructure level, the cloud is often more secure than private data centers. And because managing security services is complex and error-prone, relying on pre-configured, tested security services available from your cloud vendor may make sense. That said, some applications and their associated data have security requirements that cannot be met exclusively in the cloud. Plus, for applications that need to remain portable between environments, it makes sense to build a portable security stack that provides consistent protection across environments.

Hybrid SaaS Offering

Not all parts of your SaaS application can reside in one cloud. There may be cases where your SaaS app runs on one cloud but pulls data from another cloud. This calls for interconnects between multiple cloud services from various cloud providers.

In such a hybrid environment, one needs to know how the apps communicate and how to optimize that data communication. Latency will be a critical concern, and in such cases one needs to build cloud interconnect services into the solution.

Cloud Interconnect: Speed and security for critical apps

If the SaaS App needs to access multiple cloud locations, you might consider using a cloud interconnect service. This typically offers lower latency and when security is a top priority, cloud interconnect services offer an additional security advantage.

Closing Thoughts

SaaS offerings have several unique requirements and need continuous improvement. Product managers need to make important decisions about how the applications are hosted in the cloud environment and how customers experience them. Making the right decisions leads to a successful SaaS offering.

Finally, measure continuously. Measure real-time performance, after deployment, examining all relevant factors, such as end-to-end performance, user response time, and individual components. Be ready to make changes if performance drops unexpectedly or if things change. Operating system patches, updates to core applications, workload from other tenants, and even malware infections can suddenly slow down server applications.

Friday, March 30, 2018

Build World Class SaaS Operations

The process of building a SaaS operation includes these five major stages:

1. Architect
2. Secure
3. Comply
4. Operate
5. Optimize

Architect
Start with right architecture for scalability, performance, security and efficiency.

Secure
Harden IT systems to protect against attacks and security exploits. Implement active security monitoring to detect & mitigate security breaches and address evolving security threats.

Comply
SaaS operations must comply with regulatory norms, industry standards, and customer requirements, e.g., PCI, HIPAA, DISA, etc.

Operate
Establish best practices that must be followed to provide a high-performance, highly reliable SaaS.

Optimize
After the initial deployment, the SaaS platform must be optimized to reduce operational costs & improve efficiency.

Tuesday, February 13, 2018

Storage Versatility with SDS



HPE offers a complete range of Software Defined Storage solutions that work on HPE ProLiant servers to provide cost-optimized, high-performance storage for all your cloud application needs.

SDS enables IT to provide multiple volume types, which differ in performance characteristics and cost points, so that you can tailor storage performance and cost to the needs of your applications. These volume types fall into the following categories (a short worked example of the IOPS-versus-throughput trade-off follows the list):

1. SSD-backed volumes optimized for transactional workloads involving frequent read/write operations with small I/O size, where the dominant performance attribute is IOPS.
Ex: vSAN, Storage Spaces Direct

2. A mix of SSD & HDD-backed volumes optimized for large streaming workloads where throughput (measured in MiB/s) is a better performance measure than IOPS.
Ex: vSAN, Storage Spaces Direct

3. A mix of SSD & HDD-backed volumes optimized for file systems, providing simple, scalable file storage for your cloud apps. Build elastic storage capacity with scalable file systems, so your applications have the storage they need, when they need it.
Ex: Lustre File System

4. HDD-backed object storage optimized to securely collect, store, and analyze data at massive scale, and to store and retrieve any amount of data from anywhere over the Internet.
Ex: Scality Ring Object Store
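To make the IOPS-versus-throughput distinction above concrete, here is a small worked example; at a fixed I/O size, throughput scales linearly with IOPS, so small random I/O is IOPS-bound while large streaming I/O is throughput-bound. The numbers are illustrative:

```python
def throughput_mib_per_s(iops: int, io_size_kib: int) -> float:
    """Throughput (MiB/s) = IOPS * I/O size (KiB) / 1024."""
    return iops * io_size_kib / 1024

print(throughput_mib_per_s(20000, 4))     # transactional: 20k IOPS at 4 KiB  ~ 78 MiB/s
print(throughput_mib_per_s(2000, 1024))   # streaming: 2k IOPS at 1 MiB       = 2000 MiB/s
```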

Thursday, January 18, 2018

Software Defined Security

 
In the virtual world, an organization might have thousands of virtual machines, and it cannot manage them manually. That's where Software Defined Security comes in handy: applying security policy uniformly and automatically in all environments.
 
With runtime virtualization, different containers/VMs can reside together in the same cloud infrastructure and have different security protections. Each application can have a different security profile. A software-based security solution helps automate secure deployments in clouds and allows customization of protection across different applications.
 

In hybrid cloud environments, applications can span multiple clouds and yet have uniform security settings and responses. Automate responses to security events to minimize damage, and increase vigilance with automated monitoring.


Wednesday, November 29, 2017

Managing Hybrid IT with HPE OneSphere


 HPE OneSphere simplifies multi-cloud management for enterprises. With HPE OneSphere:
 1. One can deliver everything “as-a-service”
  a. Present all resources as ready-to-deploy services
  b. Build VM Farm that spans across private and public clouds
  c. Dynamically scale resources
  d. Lower Opex
 2. Control IT spend and utilization of public cloud services
  a. Manage subscription-based consumption
  b. Optimize app placement using insights/reports
  c. Get visibility into cross-cloud resource utilization & costs
 3. Respond faster by enabling fast app deployment
  a. Provide a quota based project work spaces
  b. Provide self-service access to curated tools, resources & templates
  c. Streamline DevOps process

Wednesday, November 22, 2017

Why Use the Lustre File System?


The Lustre File System is designed for large-scale, high-performance data storage. Lustre was designed for High Performance Computing requirements and scales linearly to meet the most stringent and demanding requirements of media applications.

Because Lustre file systems have high-performance capabilities and open-source licensing, they are often used in supercomputers. Since June 2005, Lustre has consistently been used by at least half of the top ten, and more than 60 of the top 100, fastest supercomputers in the world, including the world's No. 2 and No. 3 ranked TOP500 supercomputers in 2014, Titan & Sequoia.

 Lustre file systems are scalable and can be part of multiple computer clusters with tens of thousands of client nodes, hundreds of petabytes (PB) of storage on thousands of servers, and more than a terabyte per second (TB/s) of aggregate I/O throughput.  This makes Lustre file systems a popular choice for businesses with large data centers, including those in industries such as Media Service, Finance, Research, Life sciences, and Oil & Gas.

Why Is Object Store the Ideal Data Storage for Media Apps?




We are seeing a tremendous explosion of media content on the Internet. Today, it's not just YouTube distributing video; there are millions of mobile apps that distribute media - audio & video content - over the Internet.

Users today expect on-demand audio/video, anywhere, anytime access from any device. This increases the number of transcoded copies - to accommodate devices with various screen sizes. 

Companies are now using video and audio as major means of distributing information on their websites. This media content is cataloged online and is always available to users.

Even content creation is adding new challenges for data storage. The advent of new audio & video technologies is making raw content capture much larger: 3D, 4K/8K, High Dynamic Range, High Frame Rates (120 fps, 240 fps), Virtual and Augmented Reality, etc.

The content creation workflow has changed from file-based workflows to cloud-based workflows for production, post-production processing such as digital effects, rendering, or transcoding, as well as distribution and archiving. This has created a need for real-time collaboration in distributed environments, with teams scattered all over the globe across many locations and time zones.

All these changes in how media is created and consumed have resulted in such massive dataset sizes that traditional storage architectures just can't keep up any longer in terms of scalability.

Traditional storage array technologies such as RAID are no longer capable of serving the new data demands. For instance, routine RAID rebuilds take far too long after a failure, heightening the risk of data loss from additional failures during that dangerously long time window. Furthermore, even if current storage architectures could technically keep up, they are cost-prohibitive, especially considering the impending data growth tsunami about to hit. To top it off, they just can't offer the agility, efficiency, and flexibility new business models have come to expect in terms of instant and unfettered access, rock-solid availability, capacity elasticity, deployment time, and so on.

Facing such daunting challenges, the good news is that a solution does exist and is here today: Object Storage.

Object Storage is based on sophisticated storage software algorithms running on a distributed, interconnected cluster of high-performance yet standard commodity hardware nodes, delivering an architected solution suitable for the stringent performance, scalability, and cost-savings requirements of massive data footprints. The technology has been around for some time but is now coming of age.

The Media and Entertainment industry is well aware of the benefits Object Storage provides, which is why many players are moving toward object storage and away from traditional file system storage. These benefits include:


  • Virtually unlimited scalability: scale out by adding new server nodes
  • Low cost with leverage of commodity hardware
  • Flat and global namespace, with no locking or volume semantics
  • Powerful embedded metadata capabilities (native as well as user-defined)
  • Simple and low-overhead RESTful API for ubiquitous, straightforward access over HTTP from any client anywhere (see the access sketch after this list)
  • Self-healing capabilities with sophisticated and efficient data protection through erasure coding (local or geo-dispersed)
  • Multi-tenant management and data access capabilities (ideal for service providers)
  • Reduced complexity (of initial deployment/staging as well as ongoing data management)
  • No forklift upgrades, and no need for labor-intensive data migration projects
  • Software-defined storage flexibility and management
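As a small illustration of that RESTful access pattern, the sketch below uses boto3 against an S3-compatible endpoint (Scality RING exposes an S3-style API); the endpoint URL, bucket, keys, credentials, and file name are placeholders:

```python
import boto3  # pip install boto3

# Connect to an S3-compatible object store.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.com",   # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Upload a rendered video segment, then fetch it back over HTTPS.
with open("segment-0001.mp4", "rb") as f:
    s3.put_object(Bucket="media-archive", Key="episodes/ep01/segment-0001.mp4", Body=f)

obj = s3.get_object(Bucket="media-archive", Key="episodes/ep01/segment-0001.mp4")
print(obj["ContentLength"], "bytes retrieved")
```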


HPE, a leading seller of servers and hyperconverged systems, offers several low-cost, high-performance Object Storage solutions on its servers using Software Defined Storage:

1. Object Store with Scality Ring
2. Lustre File System

Scality Ring Object Store is a paid SDS offering from Scality Inc. that is ideal for enterprise customers.

The Lustre file system is an open-source, parallel file system that supports many requirements of leadership-class HPC simulation environments. Born from a research project at Carnegie Mellon University, the Lustre file system has grown into a file system supporting some of the Earth's most powerful supercomputers. The Lustre file system provides a POSIX-compliant file system interface and can scale to thousands of clients, petabytes of storage, and hundreds of gigabytes per second of I/O bandwidth. The key components of the Lustre file system are the Metadata Servers (MDS), the Metadata Targets (MDT), the Object Storage Servers (OSS), the Object Storage Targets (OST), and the Lustre clients.

In short, Lustre is ideal for the large-scale storage needs of service providers and large enterprises.

Thursday, November 16, 2017

Why Use Containers for Microservices?



Microservices deliver three benefits: speed to market, scalability, and flexibility.

Speed to Market
Microservices are small, modular pieces of software. They are built independently. As such, development teams can deliver code to market faster. Engineers iterate on features, and incrementally deliver functionality to production via an automated continuous delivery pipeline.

Scalability
At web scale, it's common to have hundreds or thousands of microservices running in production. Each service can be scaled independently, offering tremendous flexibility. For example, let's say you are running IT for an insurance firm. You may scale up enrollment microservices during a month-long open enrollment period. Similarly, you may scale member-inquiry microservices at a different time, e.g., during the first week of the coverage year, as you anticipate higher call volumes from subscribed members. This type of scalability is very appealing, as it directly helps a business boost revenue and support a growing customer base.
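A minimal sketch of scaling just one service, assuming the microservices run as Kubernetes deployments and using the official Kubernetes Python client; the deployment name, namespace, and replica count are illustrative:

```python
from kubernetes import client, config  # pip install kubernetes

# Scale only the enrollment microservice, leaving every other service untouched.
config.load_kube_config()
apps = client.AppsV1Api()

apps.patch_namespaced_deployment_scale(
    name="enrollment-service",
    namespace="insurance",
    body={"spec": {"replicas": 20}},   # burst capacity for the open-enrollment month
)
```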

Flexibility
With microservices, developers can make simple changes easily. They no longer have to wrestle with millions of lines of code. Microservices are smaller in scale. And because microservices interact via APIs, developers can choose the right tool (programming language, data store, and so on) for improving a service.

Consider a developer updating a security authorization microservice. The dev can choose to host the authorization data in a document store. This option offers more flexibility for adding and removing authorizations than a relational database. If another developer wants to implement an enrollment service, they can choose a relational database as its backing store. New open-source options appear daily. With microservices, developers are free to use new tech as they see fit.
Each service is small, independent, and follows a contract. This means development teams can choose to rewrite any given service, without affecting the other services, or requiring a full-fledged deployment of all services.

This is incredibly valuable in an era of fast-moving business requirements.