Thursday, August 30, 2018

Interesting Careers in Big Data


Big Data & data analytics have opened up a wide range of new & interesting career opportunities. There is an urgent need for Big Data professionals in organizations.

Not all of these careers are new; many are remappings or enhancements of older job functions. For example, statisticians were formerly employed mostly in government organizations, or in sales and manufacturing for sales forecasting and financial analysis; today, statisticians have moved to the center of business operations. Similarly, business analysts have become key to data analytics, as they play a critical role in understanding business processes and identifying solutions.


Here are 12 interesting & fast-growing careers in Big Data.

1. Big Data Engineer
Big Data engineers architect, build, and maintain the IT systems used for storing & analyzing big data; for example, they are responsible for designing the Hadoop clusters used for data analytics. These engineers need a good understanding of computer architecture and must develop the complex IT systems needed to run analytics.

2. Data Engineer
Data engineers understand the source, volume, and destination of data, and build solutions to handle it. This could include setting up databases for structured data, setting up data lakes for unstructured data, securing all of the data, and managing data throughout its lifecycle.

3. Data Scientist
Data scientist is a relatively new role. Data scientists are primarily mathematicians who can build complex models from which meaningful insights can be extracted.

4. Statistician
Statisticians are masters at crunching structured numerical data & developing models that can test business assumptions, enhance business decisions, and make predictions.
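
As a rough illustration (the tooling and numbers here are hypothetical, not part of the post), a statistician might test a business assumption such as "the new checkout page converts better than the old one" with a simple chi-square test:

from scipy.stats import chi2_contingency

# Hypothetical conversion counts: rows are old page / new page,
# columns are converted / did not convert.
observed = [[120, 880],
            [150, 850]]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"p-value = {p_value:.3f}")
if p_value < 0.05:
    print("Evidence that the two pages convert at different rates.")
else:
    print("No statistically significant difference detected.")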

5. Business Analyst
Business analysts are the conduit between the big data team and the business. They understand business processes, gather business requirements, and identify solutions to help the business. Business analysts work with data scientists, analytics solution architects, and business stakeholders to create a common understanding of the problem and the proposed solution.

6. AI/ML Scientist
This is a relatively new role in data analytics. Historically, this work was confined to large government R&D programs; today, AI/ML scientists are becoming the rock stars of data analytics.

7. Analytics Solution Architects
Solution architects are the programmers who develop the software solutions that deliver automation and reporting for faster, better decisions.

8. BI Specialist
BI specialists understand data warehouses and structured data, and create reporting solutions. They also work with the business to evangelize BI solutions within the organization.

9. Data Visualization Specialist
This is a relatively new career. Big data presents a major challenge: how to make sense of such vast amounts of data. Data visualization specialists have the skills to convert large amounts of data into simple charts & diagrams that visualize various aspects of the business. This helps business leaders understand what’s happening in real time and make better, faster decisions.
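
As a minimal sketch of the kind of output such a specialist produces (library choice and figures are hypothetical, not from the post), a few lines of matplotlib can turn summarized revenue numbers into a chart for business leaders:

import matplotlib.pyplot as plt

# Hypothetical monthly revenue figures, already summarized from a larger data set
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
revenue_millions = [1.2, 1.4, 1.1, 1.8, 2.0, 2.3]

plt.plot(months, revenue_millions, marker="o")
plt.title("Monthly revenue (hypothetical data)")
plt.xlabel("Month")
plt.ylabel("Revenue ($M)")
plt.tight_layout()
plt.show()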

10. AI/ML Engineer
These are elite programmers who can build AI/ML software based on the algorithms developed by AI/ML scientists. In addition, AI/ML engineers need to monitor AI solutions, review the outputs and decisions made by AI systems, and take corrective action when needed.

11. BI Engineer
BI Engineers build, deploy, & maintain data warehouse solutions, manage structured data through its lifecycle and develop BI reporting solutions as needed.

12. Analytics Manager
This is a relatively new role created to help business leaders understand and use data analytics and AI/ML solutions. Analytics managers work with business leaders to smooth solution deployment and act as the liaison between the business and the analytics team throughout the solution lifecycle.

Wednesday, August 29, 2018

Customer Journey Towards Digital Banking



The bank branch as we know it, with tellers behind windows and bankers huddled in cubicles with desktop computers, is in need of a massive transformation.

Today, most customers carry a bank in their pockets in the form of a smartphone app, and visiting an actual branch is rarely necessary. Yet banks all over the world are still holding on to their traditional brick-and-mortar branches.

That said, many banks are closing branches. In 2017 alone, SBI, India's largest bank, closed 716 branches!

Today, despite all the modern mobile technologies, physical branches remain an essential part of banks' operations and customer advisory functions. Brick-and-mortar locations are still one of the leading sales channels, and even in digitally advanced European nations, between 30 and 60 percent of customers prefer doing at least some of their banking at branches.

While banks would like to move customers to mobile banking platforms, changing customer behavior has become a major challenge. The diagram shows the five distinct stages of customer behavior, and banks must nudge customers along this journey.

Friday, August 24, 2018

Four Key Aspects of API Management

Today, APIs are transforming businesses. APIs are at the core of creating new apps, customer-centric development, and the development of new business models.

APIs are also at the core of the drive towards digitization, IoT, mobile-first, fintech, and hybrid cloud. This focus on APIs implies having a solid API management system in place.

API Management is based on four rock solid aspects:

1. API Portal
Online portal to promote APIs.
This is essentially the first place users will come to register, obtain API documentation, and enroll in online communities & support groups.
In addition, it is good practice to provide an online API testing platform to help customers build and test their API ecosystems.

2. API Gateway
Securely open access to your APIs.
Use policy-driven security to secure & monitor API access, protecting your APIs from unregistered usage and malicious attacks. Enable DMZ-strength security between the consumer apps using your APIs & your internal servers.
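
As a minimal sketch, not tied to any particular gateway product (the key, limit, and function names below are made up), policy-driven security boils down to checks like API-key validation and rate limiting applied before a request ever reaches the internal servers:

import time

REGISTERED_KEYS = {"demo-key-123"}   # hypothetical registered consumers
RATE_LIMIT_PER_MIN = 100             # hypothetical policy: 100 requests/minute/key
_request_log = {}                    # api_key -> list of recent request timestamps

def gateway_policy_check(api_key):
    """Apply simple security policies before forwarding to an internal server."""
    if api_key not in REGISTERED_KEYS:
        return "401 Unauthorized: unregistered API key"

    now = time.time()
    recent = [t for t in _request_log.get(api_key, []) if now - t < 60]
    if len(recent) >= RATE_LIMIT_PER_MIN:
        return "429 Too Many Requests: rate limit exceeded"

    recent.append(now)
    _request_log[api_key] = recent
    return "200 OK: forward request to internal server"

print(gateway_policy_check("demo-key-123"))
print(gateway_policy_check("unknown-key"))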

3. API Catalog
API lifecycle management: manage the entire process of designing, developing, deploying, versioning & retiring APIs.
Build & maintain the right APIs for your business. Track the complex interdependencies of APIs on various services and applications.
Design and configure the policies to be applied to your APIs at runtime.

4. API Monitoring
API consumption management:
Track the consumption of APIs for governance, performance & compliance.
Monitor customer experience and develop a comprehensive API monetization plan.
Define, publish, and track usage of API subscriptions and charge-back services.
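
A minimal sketch of consumption tracking (the subscribers, endpoints, and per-call price are invented for illustration), showing the kind of usage data a charge-back or monetization plan is built on:

from collections import Counter

PRICE_PER_CALL = 0.002  # hypothetical charge-back rate, in dollars

# Hypothetical API access log: (subscriber, endpoint)
access_log = [
    ("acme-corp", "/v1/orders"),
    ("acme-corp", "/v1/orders"),
    ("globex",    "/v1/customers"),
    ("acme-corp", "/v1/customers"),
]

calls_by_subscriber = Counter(subscriber for subscriber, _ in access_log)
for subscriber, calls in calls_by_subscriber.items():
    print(f"{subscriber}: {calls} calls, charge-back ${calls * PRICE_PER_CALL:.3f}")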

Thursday, August 23, 2018

Common Options for Disaster Recovery


Disaster recovery (DR) planning is typically based on three standard types of DR site.

In this article, let's take a look at the differences between hot, warm, and cold sites in disaster recovery.

Hot site 

In a hot site approach, the organization duplicates its entire environment as the basis of its DR strategy, an approach which, as you’d expect, costs a lot in terms of investment and upkeep. Even with data duplication, keeping hot site servers and other components in sync is time consuming. A typical hot site consists of servers, storage systems, and network infrastructure that together comprise a logical duplication of the main processing site. Servers and other components are maintained and kept at the same release and patch level as their primary counterparts. Data at the primary site is usually replicated over a WAN link to the hot site. Failover may be automatic or manual, depending on business requirements and available resources.

Organizations can run their sites in “active-active” or “active-passive” mode. In active-active mode, applications at the primary and recovery sites are live all the time, and data is replicated bi-directionally so that all databases are in sync. In active-passive mode, one site acts as the primary, and data is replicated to the passive standby sites.
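
To make the active-active vs. active-passive distinction concrete, here is a toy sketch in which two in-memory dictionaries stand in for the databases at the primary and recovery sites; real hot sites replicate at the database or storage layer over a WAN link.

# Toy model: two dicts stand in for the primary-site and recovery-site databases.
primary_db = {}
recovery_db = {}

def write_active_passive(key, value):
    """All writes land on the primary; data is copied one way to the standby."""
    primary_db[key] = value
    recovery_db[key] = value   # one-directional replication

def write_active_active(origin_db, other_db, key, value):
    """Writes may land on either live site; data is replicated bi-directionally."""
    origin_db[key] = value
    other_db[key] = value      # keep both live databases in sync

write_active_passive("order-1001", "shipped")
write_active_active(recovery_db, primary_db, "order-1002", "pending")
print(primary_db == recovery_db)   # True: both sites hold the same data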

Warm site 

With a warm site approach, the organization essentially takes the middle road between the expensive hot site and the empty cold site. Perhaps there are servers in the warm site, but they might not be current. It takes a lot longer (typically a few days or more) to recover an application to a warm site than a hot site, but it’s also a lot less expensive.

Cold site 

Effectively a non‐plan, the cold site approach proposes that, after a disaster occurs, the organization sends backup media to an empty facility, in hopes that the new computers they purchase arrive in time and can support their applications and data. This is a desperate effort guaranteed to take days if not weeks. I don’t want to give you the impression that cold sites are bad for this reason. Based on an organization’s recoverability needs, some applications may appropriately be recovered to cold sites. Another reason that organizations opt for cold sites is that they are effectively betting that a disaster is not going to occur, and thus investment is unnecessary. 


Tuesday, August 21, 2018

Fundamentals of Data Management in the Age of Big Data

In the age of GDPR, and with new data regulations being put in place, companies now have to be prudent and cautious in their data management policies.

Data management, data privacy, and security risks pose a great management challenge. To address these challenges, companies need to put proper data management policies in place. Here are eight fundamental data management policies that all companies need to adhere to.


Friday, August 17, 2018

4 Types of Data Analytics


Data analytics can be classified into four types based on complexity & value. In general, the most valuable analytics are also the most complex.

1. Descriptive analytics

Descriptive analytics answers the question:  What is happening now?

For example, in IT management, it tells you how many applications are running at that instant and how well those applications are performing. Tools such as Cisco AppDynamics and SolarWinds NPM collect huge volumes of data, analyze it, and present it in an easy-to-read format.

Descriptive analytics compiles raw data from multiple data sources to give valuable insight into what is happening now & what happened in the past. However, it does not say what is going wrong or explain why; it simply helps trained managers and engineers understand the current situation.
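
As a minimal illustration of the idea (the metrics and numbers are made up, and pandas is chosen purely for convenience), descriptive analytics boils raw measurements down to a summary of the current state:

import pandas as pd

# Hypothetical raw monitoring data: response times (ms) for three applications
data = pd.DataFrame({
    "app": ["billing", "billing", "crm", "crm", "portal", "portal"],
    "response_ms": [120, 135, 480, 510, 95, 102],
})

# Compile the raw data into a summary of what is happening now
summary = data.groupby("app")["response_ms"].agg(["count", "mean", "max"])
print(summary)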

2. Diagnostic analytics

Diagnostic analytics uses real-time and historical data to automatically deduce what has gone wrong and why. Typically, diagnostic analytics is used for root cause analysis, to understand why things have gone wrong.

Large amounts of data are used to find dependencies and relationships, and to identify patterns that give deep insight into a particular problem. For example, the Dell EMC Service Assurance Suite can provide fully automated root cause analysis of IT infrastructure. This helps IT organizations rapidly troubleshoot issues & minimize downtime.
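
A toy sketch of the idea behind diagnostic analytics (the metrics and values are invented): look for relationships in the data that point towards a root cause.

import pandas as pd

# Hypothetical metrics collected during an incident window
metrics = pd.DataFrame({
    "error_rate":    [0.1, 0.2, 0.1, 2.5, 3.1, 2.8],
    "db_latency_ms": [20, 22, 21, 310, 350, 330],
    "cpu_percent":   [35, 40, 38, 42, 39, 41],
})

# Correlate the symptom (error_rate) with candidate causes;
# the strong correlation with db_latency_ms suggests where to look first.
print(metrics.corr()["error_rate"].drop("error_rate").sort_values(ascending=False))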

3. Predictive analytics

Predictive analytics tells what is likely to happen next.

It uses historical data to identify patterns of events and predict what will happen next. Descriptive and diagnostic analytics are used to detect tendencies, clusters, and exceptions, and predictive analytics is built on top of them to predict future trends.

Advanced algorithms such as forecasting models are used to make these predictions. It is essential to understand that a forecast is just an estimate whose accuracy depends heavily on data quality and on the stability of the situation, so it requires careful treatment and continuous optimization.

For example, HPE InfoSight can predict what will happen to IT systems based on current & historical data. This helps IT organizations manage their infrastructure and prevent future disruptions.
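
A deliberately simple sketch of the forecasting idea, fitting a linear trend with numpy to invented storage figures; production tools use far richer models, and the caveat above about forecast accuracy applies.

import numpy as np

# Hypothetical storage consumption (TB) over the last six months
months = np.arange(1, 7)
used_tb = np.array([40, 43, 47, 50, 54, 57])

# Fit a linear trend and forecast next month's usage
slope, intercept = np.polyfit(months, used_tb, 1)
next_month = 7
forecast = slope * next_month + intercept
print(f"Forecast for month {next_month}: {forecast:.1f} TB")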



4. Prescriptive analytics

Prescriptive analytics is used to literally prescribe what action to take when a problem occurs.

It uses vast data sets and intelligence to analyze the outcomes of possible actions and then select the best option. This state-of-the-art type of data analytics requires not only historical data but also external knowledge from human experts (as in expert systems) in its algorithms to choose the best possible decision.

Prescriptive analytics uses sophisticated tools and technologies, like machine learning, business rules, and algorithms, which makes it complex to implement and manage.

For example, IBM Runbook Automation helps IT operations teams simplify and automate repetitive tasks. Runbooks are typically created by technical writers working for top-tier managed service providers. They include procedures for every anticipated scenario and generally use step-by-step decision trees to determine the effective response to a particular scenario.
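
A toy sketch of the prescriptive idea, with a handful of runbook-style business rules standing in for the much richer machine learning and expert-system logic real tools use (the scenarios and actions are invented):

# Hypothetical runbook rules mapping a diagnosed problem to a prescribed action
RUNBOOK = {
    "disk_full":        "Expand the volume and purge logs older than 30 days",
    "memory_leak":      "Restart the affected service and open a defect ticket",
    "db_latency_spike": "Fail over to the read replica and page the on-call DBA",
}

def prescribe(diagnosis):
    """Choose the best available action for a diagnosed problem."""
    return RUNBOOK.get(diagnosis, "Escalate to a human operator")

print(prescribe("db_latency_spike"))
print(prescribe("unknown_condition"))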

Thursday, August 16, 2018

Successful IoT Deployment Requires Continuous Monitoring


Growth of the IoT has created new challenges for business. The massive volume of IoT devices and the deluge of data they create becomes a challenge, particularly when IoT is a key part of business operations. These challenges can be mitigated with real-time monitoring tools tied into ITIL workflows for rapid diagnostics and remediation.

Failure to monitor IoT devices leads to a failed IoT deployment.
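
As a minimal sketch of what such monitoring looks like (device names, thresholds, and telemetry values are hypothetical), each reading is checked against limits and any violation is raised as an incident that would feed the ITIL workflow:

# Hypothetical telemetry from a small fleet of IoT devices
telemetry = [
    {"device": "sensor-001", "temp_c": 41.0, "last_seen_min": 1},
    {"device": "sensor-002", "temp_c": 88.5, "last_seen_min": 2},
    {"device": "sensor-003", "temp_c": 39.2, "last_seen_min": 45},
]

TEMP_LIMIT_C = 80.0        # assumed safe operating limit
HEARTBEAT_LIMIT_MIN = 15   # assumed maximum silence before a device is "offline"

def check(reading):
    """Return the incidents a reading should raise into the ITIL workflow."""
    incidents = []
    if reading["temp_c"] > TEMP_LIMIT_C:
        incidents.append(f"{reading['device']}: over-temperature")
    if reading["last_seen_min"] > HEARTBEAT_LIMIT_MIN:
        incidents.append(f"{reading['device']}: missed heartbeat")
    return incidents

for reading in telemetry:
    for incident in check(reading):
        print("RAISE INCIDENT:", incident)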

Steps in Cloud Adoption at Large Enterprises

Large enterprises face bigger challenges when it comes to migrating applications to the cloud. Migration to the cloud is an evolutionary process in most large enterprises and is typically a four-step process, though not necessarily a sequential one; the steps can happen in sequence or in parallel.

Moving to the cloud requires complete buy-in from all business & IT teams: developers, compliance experts, procurement, and security.

The first step is all about becoming aware of cloud technologies and their implications. The IT team will need to understand:

1. What are the benefits: agility, cost savings, scalability, etc.?
2. What is the roadmap for moving to the cloud?
3. What skills will each team member need?
4. How will legacy applications work in the future?
5. Who are the partners in this journey?

The second step is all about experimentation and learning from those small experiments. These are typically PoC projects that demonstrate the capability & benefits; PoC projects are needed to win key stakeholder buy-in.

The third step is essentially the migration of existing apps to the cloud, for example moving email to the cloud or moving office apps to Office 365. These projects are becoming the norm for large enterprises, which have a rich legacy.

The fourth step demonstrates final cloud maturity. In this stage, companies deploy all new apps in the cloud, and these are cloud-only apps.