Eating an Elephant: Analyzing Big Data in the Call Center

Spoken | August 9, 2016

A sneak peek at session content from the Boost user conference

One of the most requested sessions for the upcoming Boost user conference tackles that two-word buzzword no one can easily pin down: big data. What the heck is it, and what are we supposed to do with it?

Big data is a buzzword everywhere, not just in the call center space. And call centers have traditionally collected mounds of data on call metrics, agent performance and even real-time analytics. But how often do organizations draw actionable insights from all that data? That question, along with others, will be posed at the Boost panel Eating an Elephant: Analyzing Big Data in the Call Center.

In advance of Boost, meet the panelists and read what they had to say about big data:

[Photo: big data panelists]

How do you define “big data”?

Volume, velocity, variety, veracity It's a buzzword, but here are two definitions: first, data so huge that it can't be handled by regular hardware or accessed through regular database tools like SQL, and that requires a lot of CPU to process. Second, the four Vs of big data: volume (a lot of data collected due to social networking and mobile), velocity (streaming data, which is crucial for companies), variety (different types of data, including structured and unstructured) and veracity (especially with mobile phones, where data security and accuracy are uncertain). |Yshay

Too large for normal processing Big data is a data set that is too large to be stored or processed in a single system, for example, a computer, a database or a spreadsheet. It typically accumulates at a very high rate and requires a strategy for data collection, transport, storage and retirement. |Gilad

Complex, fast and hard to manage A collection of complex, mainly unstructured and typically hard-to-manage data. “Big data” is a loose term; the idea is there’s a lot of data in terms of volume and complexity, and it’s coming at you very fast. Further, that data might be unstructured and tough to manage or analyze.|David

Complex, unstructured and difficult to analyze Data that is fairly large and complex and not easy to analyze or interpret using regular data analytics techniques (including the various statistical tools and tests readily available to us these days). Big data usually varies and expands rapidly and is mostly unstructured, so it is very difficult to maintain in traditional databases. |Gaurav

What are the biggest mistakes organizations make with big data analysis?

Not having a goal and use case A lot of companies want to have a big data strategy, but they don't know what they want from it. So they put effort into technology instead of a use case. They haven't thought about what insight they're chasing; they just acquire hardware and software. |Yshay

Misunderstanding data relevance and quality Big data analysis can provide a lot of insights, but are they material? Without a solid business case for what you're trying to achieve and an idea of how to measure the contribution of big data analysis to success (or failure), an organization can spend a lot of time and money without return.

A famous case of big data analysis failure is Google's flu trend predictor. Despite its promising 2008 launch, it later became clear that it overpredicted flu cases around the world. Common speculation for that failure laid the blame on data reporting manipulation by health organizations and private companies driven by political or business interest. |Gilad

Ignoring it The biggest mistake people make with big data is ignoring it. Traditionally, data was hard to acquire or analyze, so people just ignored it. It's not intentional, but it's easier to use the three standard KPIs that seem to work. Another big mistake companies make is assuming "we have all the data we need to understand our business." That statement is a huge trap. For example, how on earth would knowing the temperature of the CPU on the agent desktop make a difference in how we conduct business? That lack of desire to explore data, and the tendency to stick with what was always done before, is a big danger. |David

Mistaking correlation for causation The most common mistake in analyzing big data is very similar to one in regular data analytics: mistaking correlation for causation. Big data is extremely efficient at pointing out correlations: two data streams moving together in the same direction. But what correlation does NOT point out is causation. Two independent variables can move together, but that doesn't always imply that one is being caused by the other (or vice versa). It is extremely important to apply business sense once big data analysis points toward various correlations in the data. Some are meaningful, and some have no relevance at all. |Gaurav
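To make Gaurav's point concrete, here is a minimal Python sketch (ours, not the panel's): two hypothetical, independently generated series, monthly call volume and ice cream sales, both trend upward over time and therefore correlate strongly, even though neither causes the other. All names and numbers are made up for illustration.

```python
import random

random.seed(0)

months = list(range(24))
# Hypothetical monthly call volume: grows steadily, plus random noise.
call_volume = [1000 + 50 * m + random.gauss(0, 40) for m in months]
# Hypothetical monthly ice cream sales: also grows steadily, unrelated to calls.
ice_cream_sales = [200 + 12 * m + random.gauss(0, 15) for m in months]

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Prints a correlation close to 1.0, yet call volume does not cause
# ice cream sales (or vice versa); both simply trend upward over time.
print(round(pearson(call_volume, ice_cream_sales), 3))
```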

What results can an organization realistically expect from big data analysis and in what time frame?

It depends on the use case you’ve developed There are three phases of data mining, assuming you have the infrastructure for big data in place. The first phase is exploration: extracting information from big data and getting raw insights. The second phase is dashboards and visualization for the salesperson. The third phase is advanced analytics, which circles back to the original use case. |Yshay

A month to a year In the call center, deploying big data collection and analysis is very complex due to the many varied technologies and systems used in this environment, as well as the human factor. With expert help, and especially if the call center systems were designed with big data in mind, some quick improvements might become visible within a month. However, a realistic time frame is about one year. |Gilad

Going beyond preconceived ideas: an example An example might be best here. Many companies try to reduce power usage in their data centers. The traditional way of looking at power is to list your equipment, note that it requires X amount of power at rest and Y amount of power during activity, and then figure the minimum hardware requirements. But when Facebook started collecting big data through its OCP project and got out of its preconceived notions, the team learned that the typical load-balancing approach is in fact very wasteful. It's actually better to overload servers than to spread the load out equally.

The quick math: say each machine consumes X amount of power at rest. At 50% CPU, it would consume 2X. But at 90% CPU it only consumes 2.1X. The organization implemented a power-friendly load balancer initiative that was only made possible by exploring the data outside of their preconceived ideas, and saved $1.2 billion in infrastructure costs over the last three years. |David
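The arithmetic behind that claim, as a rough sketch using only the figures in David's example (idle = X, 50% CPU = 2X, 90% CPU = 2.1X; the consolidation scenario and variable names are ours for illustration, not Facebook's actual model):

```python
IDLE = 1.0       # power at rest, in units of X
AT_50_PCT = 2.0  # power at 50% CPU
AT_90_PCT = 2.1  # power at 90% CPU

# Option A: spread one workload evenly across two servers at 50% CPU each.
spread_out = 2 * AT_50_PCT        # 4.0X

# Option B: consolidate the workload onto one server at ~90% CPU, leave one idle.
consolidated = AT_90_PCT + IDLE   # 3.1X

savings = 1 - consolidated / spread_out
print(f"spread out:   {spread_out:.1f}X")
print(f"consolidated: {consolidated:.1f}X  (~{savings:.0%} less power)")
```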

Consider both short term and ongoing I’d categorize big data initiatives into two categories: first, short-term projects with a specific problem statement, and second, ongoing predictive-model-type initiatives. Big data projects, like any projects, have specific timelines based on the business requirements and complexity of the project, so duration can range anywhere from a few months to a few years. Predictive models, on the other hand, are ongoing. Analyzing market trends, profitability forecasting and analyzing customer behavior through social media are great examples of ongoing big data initiatives. |Gaurav

Boost is open to Spoken and HyperQuality clients and prospects by invitation. To register for the Boost user conference, as well as to explore the agenda, sessions, welcome videos and special events, visit the Boost14 event page.
