Thursday, August 30, 2018

Interesting Careers in Big Data

Big Data & data analytics have opened a wide range of new & interesting career opportunities, and organizations have an urgent need for Big Data professionals.

Not all these careers are new; many are remappings or enhancements of older job functions. For example, statisticians were formerly deployed mostly in government organizations, or in sales/manufacturing for sales forecasting and financial analysis; today, statisticians have become central to business operations. Similarly, business analysts have become key to data analytics, as they play the critical role of understanding business processes and identifying solutions.

Here are 12 interesting & fast-growing careers in Big Data.

1. Big Data Engineer
Big Data engineers architect, build & maintain the IT systems used for storing & analyzing big data. For example, they are responsible for designing the Hadoop clusters used for data analytics. These engineers need a good understanding of computer architectures to develop the complex IT systems needed to run analytics.

2. Data Engineer
Data engineers understand the source, volume and destination of data, and have to build solutions to handle this volume of data. This could include setting up databases for handling structured data, setting up data lakes for unstructured data, securing all the data, and managing data throughout its lifecycle.

3. Data Scientist
Data Scientist is a relatively new role. Data scientists are primarily mathematicians who can build complex models from which meaningful insights can be extracted.

4. Statistician
Statisticians are masters in crunching structured numerical data & developing models that can test business assumptions, enhance business decisions and make predictions.

5. Business Analyst
Business analysts are the conduits between the big data team and the business. They understand business processes and business requirements, and identify solutions to help the business. Business analysts work with data scientists, analytics solution architects and business stakeholders to create a common understanding of the problem and the proposed solution.

6. AI/ML Scientist
This is a relatively new role in data analytics. Historically, this work was part of large government R&D programs; today, AI/ML scientists are becoming the rock stars of data analytics.

7. Analytics Solution Architects
Solution architects are the programmers who develop software solutions, which lead to automation and to reports that enable faster/better decisions.

8. BI Specialist
BI specialists understand data warehouses and structured data, and create reporting solutions. They also work with the business to evangelize BI solutions within the organization.

9. Data Visualization Specialist
This is a relatively new career. Big data presents a big challenge in terms of how to make sense of such vast data. Data visualization specialists have the skills to convert large amounts of data into simple charts & diagrams that visualize various aspects of the business. This helps business leaders understand what’s happening in real time and make better/faster decisions.

10. AI/ML Engineer
These are elite programmers who build AI/ML software based on algorithms developed by AI/ML scientists. In addition, AI/ML engineers need to monitor AI solutions for the outputs & decisions made by AI systems, and take corrective action when needed.

11. BI Engineer
BI Engineers build, deploy, & maintain data warehouse solutions, manage structured data through its lifecycle and develop BI reporting solutions as needed.

12. Analytics Manager
This is a relatively new role, created to help business leaders understand and use data analytics and AI/ML solutions. Analytics managers work with business leaders to smooth solution deployment and act as the liaison between the business and the analytics team throughout the solution lifecycle.

Wednesday, August 29, 2018

Customer Journey Towards Digital Banking

The bank branch as we know it, with tellers behind windows and bankers huddled in cubicles with desktop computers, is in need of a massive transformation.

Today, most customers carry a bank in their pockets in the form of a smartphone app, and visiting an actual branch is rarely necessary. Yet banks all over the world are still holding on to traditional brick-and-mortar branches.

That said, many banks are closing these branches. In 2017 alone, SBI, India's largest bank, closed 716 branches!

Today, despite all the modern mobile technologies, physical branches remain an essential part of banks' operations and customer advisory functions. Brick-and-mortar locations are still one of the leading sales channels, and even in digitally advanced European nations, between 30 and 60 percent of customers prefer doing at least some of their banking at branches.

While banks would like to move customers to mobile banking platforms, changing customer behavior has become a major challenge. The diagram shows the five distinct stages of customer behavior, and banks must nudge customers along this journey.

Friday, August 24, 2018

Four Key Aspects of API Management

Today, APIs are transforming businesses. They are at the core of new app creation, customer-centric development and the development of new business models.

APIs are at the core of the drive towards digitization, IoT, mobile-first, fintech and hybrid cloud. This focus on APIs implies having a solid API management system in place.

API Management is based on four rock solid aspects:

1. API Portal
An online portal to promote APIs.
This is essentially the first place users come to register, get all the API documentation, and enroll in online communities & support groups.
In addition, it is good practice to provide an online API testing platform to help customers build/test their API ecosystems.

2. API Gateway
Securely open access to your APIs.
Use policy-driven security to secure & monitor API access, protecting your APIs from unregistered usage and malicious attacks. Enable DMZ-strength security between the consumer apps using your APIs & your internal servers.
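As a minimal illustration of policy-driven access control, the sketch below checks an incoming request against a registered API key and an allowed-path policy. The key store, client names and paths here are illustrative assumptions, not the API of any particular gateway product.

```python
# Minimal sketch of a policy-driven gateway check (assumed data, not a real product).
# Each registered key carries a policy listing the paths that client may call.
API_KEYS = {
    "key-123": {"client": "mobile-app", "allowed_paths": ["/accounts", "/payments"]},
}

def authorize(api_key: str, path: str) -> bool:
    """Return True only if the key is registered and the path is permitted."""
    policy = API_KEYS.get(api_key)
    if policy is None:
        return False  # unregistered usage is rejected outright
    return path in policy["allowed_paths"]

print(authorize("key-123", "/accounts"))  # True: registered key, permitted path
print(authorize("bad-key", "/accounts"))  # False: unregistered key
```

A real gateway would layer rate limiting, TLS termination and threat detection on top of this kind of check.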

3. API Catalog
API Lifecycle Management.
Manage the entire process of designing, developing, deploying, versioning & retiring APIs.
Build & maintain the right APIs for your business. Track the complex interdependencies of APIs on various services and applications.
Design and configure the policies to be applied to your APIs at runtime.
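One way to picture lifecycle management is as a small state machine over the stages named above. The specific states and allowed transitions below are an assumption for illustration, not an industry standard.

```python
from enum import Enum

class ApiState(Enum):
    DESIGNED = 1
    DEVELOPED = 2
    DEPLOYED = 3
    DEPRECATED = 4
    RETIRED = 5

# Allowed forward transitions in this illustrative lifecycle.
TRANSITIONS = {
    ApiState.DESIGNED: {ApiState.DEVELOPED},
    ApiState.DEVELOPED: {ApiState.DEPLOYED},
    ApiState.DEPLOYED: {ApiState.DEPRECATED},
    ApiState.DEPRECATED: {ApiState.RETIRED},
    ApiState.RETIRED: set(),
}

def advance(current: ApiState, target: ApiState) -> ApiState:
    """Move an API to the next stage, rejecting skipped or backward steps."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"cannot move from {current.name} to {target.name}")
    return target
```

Enforcing transitions like this is what prevents, say, retiring an API that consumers still depend on without first deprecating it.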

4. API Monitoring
API Consumption Management.
Track consumption of APIs for governance, performance & compliance.
Monitor the customer experience and develop a comprehensive API monetization plan.
Define, publish and track usage of API subscriptions and charge-back services.
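A toy sketch of how consumption tracking feeds quotas and charge-back; the subscriber name, quota figure and per-call price (in cents) are made-up examples.

```python
from collections import defaultdict

class UsageMeter:
    """Counts API calls per subscriber; supports quota checks and charge-back."""
    def __init__(self, monthly_quota: int):
        self.monthly_quota = monthly_quota
        self.calls = defaultdict(int)

    def record(self, subscriber: str) -> None:
        self.calls[subscriber] += 1

    def over_quota(self, subscriber: str) -> bool:
        return self.calls[subscriber] > self.monthly_quota

    def bill_cents(self, subscriber: str, price_per_call_cents: int) -> int:
        return self.calls[subscriber] * price_per_call_cents

meter = UsageMeter(monthly_quota=2)
for _ in range(3):
    meter.record("acme-corp")
print(meter.over_quota("acme-corp"))          # True: 3 calls exceed the quota of 2
print(meter.bill_cents("acme-corp", 2))       # 6: three calls at 2 cents each
```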

Thursday, August 23, 2018

Common Options for Disaster Recovery

Disaster recovery (DR) planning is typically based on three standard types of DR site: hot, warm and cold.

In this article, let's take a look at the differences between hot, warm and cold sites in disaster recovery.

Hot site 

In a hot site approach, the organization duplicates its entire environment as the basis of its DR strategy, an approach which, as you'd expect, costs a lot in terms of investment and upkeep. Even with data duplication, keeping hot site servers and other components in sync is time consuming.

A typical hot site consists of servers, storage systems, and network infrastructure that together comprise a logical duplication of the main processing site. Servers and other components are maintained and kept at the same release and patch level as their primary counterparts. Data at the primary site is usually replicated over a WAN link to the hot site. Failover may be automatic or manual, depending on business requirements and available resources.

Organizations can run their sites in "active-active" or "active-passive" mode. In active-active mode, applications at the primary and recovery sites are live all the time, and data is replicated bi-directionally so that all databases are in sync. In active-passive mode, one site acts as primary, and data is replicated to the passive standby sites.
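The two replication modes can be pictured with a toy sketch. The `Site` class, the dict-based "database" and the record names are assumptions purely for illustration, not how real replication software works.

```python
# Toy illustration of active-passive vs active-active hot-site replication.
class Site:
    def __init__(self, name: str):
        self.name = name
        self.db: dict = {}

def replicate(src: Site, dst: Site) -> None:
    """One-way copy: push the source site's records to the destination."""
    dst.db.update(src.db)

# Active-passive: writes land on the primary only; data flows one way.
primary, standby = Site("primary"), Site("standby")
primary.db["acct-1"] = 100
replicate(primary, standby)   # standby now mirrors the primary

# Active-active: both sites accept writes; replication runs both ways.
site_a, site_b = Site("A"), Site("B")
site_a.db["x"] = 1
site_b.db["y"] = 2
replicate(site_a, site_b)
replicate(site_b, site_a)     # after both passes, the databases converge
```

Real systems add conflict resolution for active-active writes to the same record, which this sketch deliberately omits.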

Warm site 

With a warm site approach, the organization essentially takes the middle road between the expensive hot site and the empty cold site. Perhaps there are servers in the warm site, but they might not be current. It takes a lot longer (typically a few days or more) to recover an application to a warm site than a hot site, but it’s also a lot less expensive.

Cold site 

Effectively a non‐plan, the cold site approach proposes that, after a disaster occurs, the organization sends backup media to an empty facility, in hopes that the new computers they purchase arrive in time and can support their applications and data. This is a desperate effort guaranteed to take days if not weeks. I don’t want to give you the impression that cold sites are bad for this reason. Based on an organization’s recoverability needs, some applications may appropriately be recovered to cold sites. Another reason that organizations opt for cold sites is that they are effectively betting that a disaster is not going to occur, and thus investment is unnecessary.