Data Insights for Everyone — The Semantic Layer to the Rescue

What is a semantic layer? That’s a good question, but let’s first explain semantics. The way that I explained it to my data science students years ago was like this. In the early days of web search engines, those engines were primarily keyword search engines. If you knew the right keywords to search and if the content providers also used the same keywords on their website, then you could type the words into your favorite search engine and find the content you needed. So, I asked my students what results they would expect from such a search engine if I typed the following words into the search box: “How many cows are there in Texas?” My students were smart. They realized that the search results would probably not provide an answer to my question, but the results would simply list websites that included my words on the page or in the metadata tags: “Texas”, “Cows”, “How”, etc. Then, I explained to my students that a semantic-enabled search engine (with a semantic meta-layer, including ontologies and similar semantic tools) would be able to interpret my question’s meaning and then map that meaning to websites that can answer the question.

This was a good opening for my students to the wonderful world of semantics. I brought them deeper into the world by pointing out how much more effective and efficient data professionals' lives would be if our data repositories had a similar semantic meta-layer. We would be able to go far beyond searching for correctly spelled column headings in databases or specific keywords in data documentation to find the data we needed (assuming we even knew the correct labels, metatags, and keywords used by the dataset creators). We could search for data with common business terminology, regardless of the specific choice or spelling of the data descriptors in the dataset. Even more than that, we could easily start discovering and integrating, on-the-fly, data from totally different datasets that used different descriptors. For example, if I am searching for customer sales numbers, different datasets may label that "sales", or "revenue", or "customer_sales", or "Cust_sales", or any number of other such unique identifiers. What a nightmare that would be! But what a dream the semantic layer becomes!
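
To make the idea concrete, here is a minimal sketch of that kind of mapping. The column labels and the hand-built synonym table are hypothetical; a real semantic layer product maintains and governs this mapping for you, at enterprise scale:

```python
# Hypothetical synonym table: every raw column label that means "sales",
# regardless of how the dataset creator spelled it.
CANONICAL_TERMS = {
    "sales": {"sales", "revenue", "customer_sales", "cust_sales"},
}

def resolve_business_term(column_label):
    """Map a dataset-specific column label to its canonical business term."""
    label = column_label.lower()
    for term, synonyms in CANONICAL_TERMS.items():
        if label in synonyms:
            return term
    return None

print(resolve_business_term("Cust_sales"))  # sales
print(resolve_business_term("Revenue"))     # sales
```

With a lookup like this in the middle, a business user can ask for "sales" and find every matching column, no matter how it was spelled at the source.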

When I was teaching those students so many years ago, the semantic layer itself was just a dream. Now it is a reality. We can now achieve the benefits, efficiencies, and data superhero powers that we previously could only imagine. But wait! There’s more.

Perhaps the greatest achievement of the semantic layer is to provide different data professionals with easy access to the data needed for their specific roles and tasks. The semantic layer is the representation of data that helps different business end-users discover and access the right data efficiently, effectively, and effortlessly using common business terms. The data scientists need to find the right data as inputs for their models — they also need a place to write back the outputs of their models to the data repository for other users to access. The BI (business intelligence) analysts need to find the right data for their visualization packages, business questions, and decision support tools — they also need the outputs from the data scientists' models, such as forecasts, alerts, classifications, and more. The semantic layer achieves this by mapping heterogeneously labeled data into familiar business terms, providing a unified, consolidated view of data across the enterprise.

The semantic layer delivers data insights discovery and usability across the whole enterprise, with each business user empowered to use the terminology and tools that are specific to their role. How data are stored, labeled, and meta-tagged in the data cloud is no longer a bottleneck to discovery and access. The decision-makers and data science modelers can fluidly share inputs and outputs with one another, to inform their role-specific tasks and improve their effectiveness. The semantic layer takes the user-specific results out of being a “one-off” solution on that user’s laptop to becoming an enterprise analytics accelerant, enabling business answer discovery at the speed of business questions.

Insights discovery for everyone is achieved. The semantic layer becomes the arbiter (multi-lingual data translator) for insights discovery between and among all business users of data, within the tools that they are already using. The data science team may be focused on feature importance metrics, feature engineering, predictive modeling, model explainability, and model monitoring. The BI team may be focused on KPIs, forecasts, trends, and decision-support insights. The data science team needs to know and to use that data which the BI team considers to be most important. The BI team needs to know and to use which trends, patterns, segments, and anomalies are being found in those data by the data science team. Sharing and integrating such important data streams has never been such a dream.

The semantic layer bridges the gaps between the data cloud, the decision-makers, and the data science modelers. The key results from the data science modelers can be written back to the semantic layer, to be sent directly to consumers of those results in the executive suite and on the BI team. Data scientists can focus on their tools; the BI users and executives can focus on their tools; and the data engineers can focus on their tools. The enterprise data science, analytics, and BI functions have never been so enterprisey. (Is “enterprisey” a word? I don’t know, but I’m sure you get my semantic meaning.)

That’s empowering. That’s data democratization. That’s insights democratization. That’s data fluency/literacy-building across the enterprise. That’s enterprise-wide agile curiosity, question-asking, hypothesizing, testing/experimenting, and continuous learning. That’s data insights for everyone.

Are you ready to learn more about how you can bring these advantages to your organization? Be sure to watch the AtScale webinar "How to Bridge Data Science and Business Intelligence", where I join a panel in a multi-industry discussion on how a semantic layer can help organizations make smarter data-driven decisions at scale. Among the several speakers, I will be presenting "Model Monitoring in the Enterprise — Filling the Gaps", specifically focused on "Filling the Communication Gaps Between BI and Data Science Teams With a Semantic Data Layer."

Register to attend and view the webinar at

EX is the New CX

(This article is a continuation of my earlier article “When the Voice of the Customer Actually Talks.”)

I recently attended (virtually) CX Summit 2021, presented by Five9, which focused on “CX Reimagined.” At first this title for the event seemed a bit grandiose to me – Reimagined! After attending the event, I now think the title was perfect, and it could have gone even further. I saw how the “art of the possible” in CX (Customer Experience) and EX (Employee Experience) in the Contact Center is already being realized and is being taken to new realms of possibility through AI and Cloud empowerments.

The evolved (reimagined) Contact Center now comes with more options for digital channels to accommodate the customer, more voice data-powered services that serve both the customer and the contact center representative, and more seamless actions on both ends of the call line, even for complex inquiries. This is all enabled by the 3 A’s: AI, Automation, and voice Analytics. We have heard it before: “happy employee, happy customer!” That now looks like this: “EX is the new CX.” Boom!

In an information-packed talk, "Getting Past the Hype in Customer Service", Gartner analyst Drew Kraus reviewed just how much hype there is in the customer service and support technologies market. It became clear to me that Five9 delivers on the needs, not the hype.

Another informative and data-packed presentation came from Five9 SVP Scott Kolman and COMMfusion analyst Blair Pleasant, who did a deep dive into the Five9 survey "2021 Customer Service Index – Learn how customers have reimagined the customer service experience." I won't go too deep here (you should watch the whole session on-demand). Some interesting highlights include:

  1. Five9 surveyed 2048 consumers, with participants from 7 countries, representing ages 19 to early 70’s. They also completed a similar survey in 2020. Side-by-side comparisons of the survey results (by age, by year of survey, and by country) for the different survey questions were quite informative and potentially quite useful for any contact center operation. If that’s what you or your business does, then you should investigate these “Voice of the Customer” results.
  2. Across all demographics, only 25% of respondents felt that their contact center experience got worse (either "much worse" or "slightly worse") from 2020 to 2021. We might have expected a different result with the pandemic raging. The remaining 75% felt that their experience got better or much better, or they had no opinion.
  3. Some very surprising results appeared (with significant differences between countries) when people were asked to rate the keys to "Good Service Experience". Highly rated categories were "Rep gets right answer, even if it takes more time" (33%); "Rep can answer my question quickly" (26%); and "Don't have to wait long to reach rep" (20%).
  4. Similarly, there were some significant differences by country when people were asked to rate the keys to "Bad Service Experience". Top responses included: "Get passed from one rep to another" (34%); "Have to wait long to reach rep" (26%); and a tie for third place (at 13%) for "Queue/on-hold system not helpful" and "Rep cannot answer my question quickly". (Remember that, despite these seriously bad experiences, only 25% of respondents generally saw a drop in customer service experience in the past year.)
  5. One of the more enlightening survey results appeared when asked, “How likely are you to do business with a company if you have a Poor Service Experience?” The USA responses were significantly different than responses from the other 6 countries in the survey in one category: over 11% of USA respondents were “very likely” to continue doing business, versus 3-6% of non-USA respondents being “very likely”. However, in the “somewhat likely” category, all countries were in the range 10-16%, with the USA respondents close to the midpoint, near 14%. In my opinion (not expressed by the session presenters), a reason for these seemingly incompatible responses is that there are two sentiments being conflated in this one question. On the one hand, you have the bad experience on “this” call. On the other hand, you have perhaps the much worse (time-consuming) future experience of switching providers and dealing with the corresponding onboarding (for whatever service this is about). I might be “somewhat likely” to switch providers after one bad call experience, but I would not be “very likely” to go through the pain of switching providers and all that entails.

There were many interesting and powerful sessions in addition to this one, which I focused on here because it presented lots of survey data, and I love data! Another great session was the presentation by former astronaut (now Professor) Michael Massimino – brilliant and inspiring, with numerous words of wisdom, leadership advice, and life’s lessons learned. Of course, I admit that I was drawn into his NASA space stories, including the Hubble Telescope repair mission that almost went wrong, because I worked with the Hubble Space Telescope project for 10 years and I worked an additional 10 years at NASA’s Goddard Space Flight Center where many of the telescope’s instruments were tested.

My big takeaway from the Five9 CX Summit is how cloud, AI, automation, and voice analytics are rapidly driving change in the positive direction for contact center representatives and for customers who call in. Maybe that's why the customer experience didn't change much from 2020 to 2021, because a lot of those technologies have already been deployed in the past couple of years, particularly for Five9's clients.

Chatbots and conversational AI are just part of the story – there’s so much more. Five9’s new cloud-enabled, AI-powered, voice data-driven solutions and services described at the summit are definitely worth exploring and investigating for your contact center: IVA (Intelligent Virtual Agents), VoiceStream, Agent Assist, Studio7, Practical AI, WFO (Work Flow Optimization), Conversation Architect, and UC (unified communications) integration into the contact center VX (Voice Experience) workflow.

Learn more about CX Reimagined and the roles of AI, Automation, Cloud, Voice Analytics, and Omnichannel Customer Engagement in the modern contact center at CX Summit 2021, presented by Five9. (Even if you missed the live event, the sessions are recorded, so you can watch them on-demand at any time you wish.) See for yourself where the Reimagined becomes the Realized in CX. And learn why EX is the new CX.

Note: This article was sponsored. The opinions expressed here are my own and do not represent the opinions of any other person, company, or entity.

#Five9CXSummit #CXNation

When the Voice of the Customer Actually Talks

For many years, organizations (mostly consumer-facing) have placed the “voice of the customer” (VoC) high on their priority list of top sources for customer intelligence. The goals of such activities are to improve customer service, customer interactions, customer engagement, and customer experience (CX) through just-in-time customer assistance, personalization, and loyalty-building activities. In recent years, even government agencies have increased their attention on Citizen Experience (CX) and Voice of the Citizen (VoC), to inform and guide their citizen services.

CX has become increasingly data-informed and data-driven, with VoC data being one of the key data sources. Other data sources include purchase patterns, online reviews, online shopping behavior analytics, and call center analytics. As good as these data analytics have been, collecting data and then performing pattern-detection and pattern-recognition analytics can be taken so much further now with AI-enabled customer interactions. 

AI is great for pattern recognition, product and service recommendations, anomaly detection, next-best action and next-best decision recommendations, and providing an insights power-boost to all of those. AI can be considered as Accelerated, Actionable, Amplified, Assisted, Augmented, even Awesome Intelligence, both for the customer and for the call center staff.

Consequently, VoC and AI have wonderfully come together in conversational AI applications, including chatbots. Chatbots can be deployed to answer FAQs, to direct calls to the right service representative, to detect customer sentiment, to monitor call center employee performance, to recall and recognize patterns in the customer’s prior history with the business, to confirm customer identity, to identify up-sell and cross-sell opportunities, to streamline data entry by automatically capturing intricate details of a customer’s preferences, concerns, requests, critical information, and callback expectations, and to detect when it’s time to direct the call to a human agent (the right human agent).

In short, the VoC reaches its peak value when it captures the full depth and breadth of what the customer is talking about and what they are trying to communicate. AI-enabled chatbots are thus bringing great value and effectiveness in consumers’ call center experiences. 

From the call center representative's perspective, AI-enabled chatbots are a tremendous efficiency and effectiveness boost as well. Many details of the initial customer interaction can be automatically captured, recorded, indexed, and made searchable even before the call is directed to the representative, increasing the likelihood that it is the right representative for that customer's specific needs. Not only is the CX amplified, but so is the EX (Employee Experience). Surveys and reports have documented that the strong improvement in call center staff EX is a source of significant value to the entire organization.

One dimension of this EX amplification that should not be overlooked is when advanced case management is required from a human call center agent. In cases like that, the agent is engaged in their best (most satisfying) capacity as the expert and most knowledgeable source to help the customer, in sharp contrast to other calls where they are engaged in answering the standard FAQs, or in quoting customer account information from a database that a chatbot could easily have retrieved, or in asking the customer to repeat the same information that the customer gave to a previous agent. Everybody wins when all those latter activities are handled swiftly, accurately, and non-redundantly prior to the person-to-person engagement that can then provide the best human touch in the entire caller experience.

Chatbots employ a suite of data-driven technologies, including: machine learning (for pattern detection and recognition, sentiment and emotion detection), natural language processing (covering natural language understanding, or NLU, and natural language generation, or NLG), voice assistants (for voice search and autonomous action-enablement), cloud computing (to activate actions, services, document creation and document processing), AI (for auto-transcribing conversations, creating real-time action lists, and adding information to appropriate fields automatically), and more.

When the Voice of the Customer talks, the modern AI-powered Call Center listens and responds. 

Learn more about the modern Call Center and CX Reimagined at CX Summit 2021, presented by Five9. The Summit’s 5 tracks and multiple sessions will focus on the transformation of the contact center through the evolution of digital channels, AI, Automation and Analytics. By applying the power of data and the cloud we can reimagine CX and realize results in a rapidly changing marketplace. At the Summit, you can connect and network with contact center professionals, ecosystem partners and peers, you can learn to optimize your Five9 implementation to superpower your contact center, you can hear customer stories and product updates, and you can learn how Five9 can help you deliver a whole new level of customer service. Register here for CX Summit 2021 and see for yourself where the Reimagined becomes the Realized in CX:

Learn more about the Summit, some highlights and key takeaways, in my follow-up article “EX is the New CX.”

Note: This article was sponsored. The opinions expressed here are my own and do not represent the opinions of any other person, company, or entity.

#Five9CXSummit #CXNation

Are You Content with Your Organization’s Content Strategy?

In this post, we will examine ways that your organization can separate useful content into separate categories that amplify your own staff’s performance. Before we start, I have a few questions for you.

What attributes of your organization’s strategies can you attribute to successful outcomes? How long do you deliberate before taking specific deliberate actions? Do you converse with your employees about decisions that might be the converse of what they would expect? Is a process modification that saves a minute in someone’s workday considered too minute for consideration? Do you present your employees with a present for their innovative ideas? Do you perfect your plans in anticipation of perfect outcomes? Or do you project foregone conclusions on a project before it is completed?

If you have good answers to these questions, that is awesome! I would not contest any of your answers since this is not a contest. In fact, this is actually something quite different. Before you combine all these questions in a heap and thresh them in a combine, and before you buffet me with a buffet of skeptical remarks, stick with me and let me explain. Do not close the door on me when I am so close to giving you an explanation.

What you have just experienced is a plethora of heteronyms. Heteronyms are words that are spelled identically but have different meanings when pronounced differently. If you include the title of this blog, you were just presented with 13 examples of heteronyms in the preceding paragraphs. Can you find them all?

Seriously now, what do these word games have to do with content strategy? I would say that they have a great deal to do with it. Specifically, in the modern era of massive data collections and exploding content repositories, we can no longer rely on keyword searches alone. In the case of a heteronym, a keyword search would return both uses of the word, even though their meanings are quite different. In "information retrieval" language, we would say that we have high RECALL, but low PRECISION. In other words, we can find most occurrences of the word (recall), but not all of the results correspond to the meaning of our search (precision). That is no longer good enough when the volume is so high.
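
The recall/precision distinction can be made concrete with a toy calculation (the numbers here are made up purely for illustration):

```python
# Toy numbers: suppose 40 documents in the corpus use the word in the sense
# we want (relevant), and a keyword search returns 100 documents, 38 relevant.
relevant_in_corpus = 40
retrieved = 100
relevant_retrieved = 38

recall = relevant_retrieved / relevant_in_corpus  # 0.95: found nearly all of them
precision = relevant_retrieved / retrieved        # 0.38: most results are noise

print(f"recall={recall:.2f}, precision={precision:.2f}")
```

High recall with low precision is exactly the haystack problem: the needle is in there somewhere, but so is a lot of hay.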

The key to success is to start enhancing and augmenting content management systems (CMS) with additional features: semantic content and context. This is accomplished through tags, annotations, and metadata (TAM). TAM management, like content management, begins with business strategy.

Strategic content management focuses on business outcomes, business process improvement, efficiency (precision – i.e., "did I find only the content that I need, without a lot of noise?"), and effectiveness (recall – i.e., "did I find all the content that I need?"). Just because people can request a needle in the haystack does not mean it is a good idea to deliver the whole haystack that contains that needle. Clearly, such a content delivery system is not good for business productivity. So, there must be a strategy regarding who, what, when, where, why, and how the organization's content is to be indexed, stored, accessed, delivered, used, and documented. The content strategy should emulate a digital library strategy: labeling, indexing, ease of discovery, and ease of access are essential if end-users are to find and benefit from the collection.

My favorite approach to TAM creation, and to modern data management in general, is AI and machine learning (ML). That is, use AI and machine learning techniques on digital content (databases, documents, images, videos, press releases, forms, web content, social network posts, etc.) to infer topics, trends, sentiment, context, and content, to perform named entity identification and numerical content extraction (including the units on those numbers), and to detect negations. Do not forget the negations. A document that states "this form should not be used for XYZ" is exactly the opposite of a document that states "this form must be used for XYZ". Similarly, a social media post that states "Yes. I believe that this product is good" is quite different from a post that states "Yeah, sure. I believe that this product is good. LOL."
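
The negation point can be illustrated with a deliberately naive check. (A production system would scope negation with NLP parsing rather than a regular expression, and sarcasm like "Yeah, sure... LOL" would still slip through; this sketch only shows why the two example sentences must be tagged differently.)

```python
import re

# Deliberately naive: flags a sentence if it contains a common negation cue.
NEGATION_CUES = re.compile(r"\b(not|never|no)\b|n't\b", re.IGNORECASE)

def is_negated(sentence):
    """Real systems scope negation with parsing; this is only a cue check."""
    return bool(NEGATION_CUES.search(sentence))

print(is_negated("this form should not be used for XYZ"))  # True
print(is_negated("this form must be used for XYZ"))        # False
```

A keyword index treats both sentences as matches for "form" and "XYZ"; a negation-aware tag is what keeps them from being served up as equivalent.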

Contextual TAM enhances a CMS with knowledge-driven search and retrieval, not just keyword-driven. Contextual TAM includes semantic TAM, taxonomic indexing, and even usage-based tags (digital breadcrumbs of the users of specific pieces of content, including the key words and phrases that people used to describe the content in their own reports). Adding these to your organization’s content makes the CMS semantically searchable and usable. That’s far more laser-focused (high-precision) than keyword search.

One implementation of a content strategy that is specific to data collections is the data catalog. Data catalogs are very useful and important. They become even more useful and valuable if they include granular search capabilities. For example, the end-user may only need the piece of the dataset that has the content their task requires, rather than being delivered the full dataset. Tagging and annotating those subcomponents and subsets (i.e., granules) of the data collection for fast search, access, and retrieval is also important for efficient orchestration and delivery of the data that fuels AI, automation, and machine learning operations.
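
Here is a minimal sketch of that kind of granular, tag-based retrieval (the granule names and tags are hypothetical; a real data catalog adds governance, lineage, and access control on top):

```python
# Hypothetical catalog entries: each granule (subset of a dataset) carries tags.
catalog = [
    {"granule": "sales_2020_q1.parquet", "tags": {"sales", "2020", "quarterly"}},
    {"granule": "sales_2020_q2.parquet", "tags": {"sales", "2020", "quarterly"}},
    {"granule": "hr_headcount.parquet",  "tags": {"hr", "headcount"}},
]

def find_granules(required_tags):
    """Return only the granules whose tags cover the request."""
    return [entry["granule"] for entry in catalog
            if required_tags <= entry["tags"]]

print(find_granules({"sales", "2020"}))  # just the two sales granules
```

The point is that the request returns only the matching granules, not the entire collection: the needle without the haystack.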

One way to describe this is “smart content” for intelligent digital business operations. Smart content includes labeled (tagged, annotated) metadata (TAM). These labels include content, context, uses, sources, and characterizations (patterns, features) associated with the whole content and with individual content granules. Labels can be learned through machine learning, or applied by human experts, or proposed by non-experts when those labels represent cognitive human-discovered patterns and features in the data. Labels can be learned and applied in existing CMS, in massive streaming data, and in sensor data (collected in devices at the “edge”).

Some specific tools and techniques that can be applied to CMS to generate smart content include these:

  • Natural language understanding and natural language generation
  • Topic modeling (including topic drift and topic emergence detection)
  • Sentiment detection (including emotion detection)
  • AI-generated and ML-inferred content and context
  • Entity identification and extraction
  • Numerical quantity extraction
  • Automated structured (searchable) database generation from textual (unstructured) document collections (for example: Textual ETL).

Consequently, smart content thrives at the convergence of AI and content. Labels are curated and stored with the content, thus enabling curation, cataloguing (indexing), search, delivery, orchestration, and use of content and data in AI applications, including knowledge-driven decision-making and autonomous operations. Techniques that both enable (contribute to) and benefit from smart content are content discovery, machine learning, knowledge graphs, semantic linked data, semantic data integration, knowledge discovery, and knowledge management. Smart content thus meets the needs for digital business operations and autonomous (AI and intelligent automation) activities, which must devour streams of content and data – not just any content, but smart content – the right (semantically identified) content delivered at the right time in the right context.

The four tactical steps in a smart content strategy include:

  1. Characterize and contextualize the patterns, events, and entities in the content collection with semantic (contextual) tags, annotation, and metadata (TAM).
  2. Collect, curate, and catalog (i.e., index) each TAM component to make it searchable, accessible, and reusable.
  3. Deliver the right content at the right time in the right context to the decision agent.
  4. Decide and act on the delivered insights and knowledge.

Remember, do not be content with your current content management strategy. Instead, discover and deliver the perfect smart content that perfects your digital business outcomes. Smart content strategy can save end-users countless minutes in a typical workday, and that type of business process improvement certainly is not too minute for consideration.

Top 10 Data Innovation Trends During 2020

The year 2020 was remarkably different in many ways from previous years. In at least one way, it was not different, and that was in the continued development of innovations that are inspired by data. This steady march of data-driven innovation has been a consistent characteristic of each year for at least the past decade. These data-fueled innovations come in the form of new algorithms, new technologies, new applications, new concepts, and even some “old things made new again”.

I provide below my perspective on what was interesting, innovative, and influential in my watch list of the Top 10 data innovation trends during 2020.

1) Automated Narrative Text Generation tools became incredibly good in 2020, being able to create scary good “deep fake” articles. The GPT-3 (Generative Pretrained Transformer, 3rd generation text autocomplete) algorithm made headlines since it demonstrated that it can start with a very thin amount of input (a short topic description, or a question), from which it can then generate several paragraphs of narrative that are very hard (perhaps impossible) to distinguish from human-generated text. However, it is far from perfect, since it certainly does not have reasoning skills, and it also loses its “train of thought” after several paragraphs (e.g., by making contradictory statements at different places in the narrative, even though the statements are nicely formed sentences).

2) MLOps became the expected norm in machine learning and data science projects. MLOps takes the modeling, algorithms, and data wrangling out of the experimental “one off” phase and moves the best models into deployment and sustained operational phase. MLOps “done right” addresses sustainable model operations, explainability, trust, versioning, reproducibility, training updates, and governance (i.e., the monitoring of very important operational ML characteristics: data drift, concept drift, and model security).

3) Concept drift by COVID – as mentioned above, concept drift is being addressed in machine learning and data science projects by MLOps, but concept drift is so much bigger than MLOps. Specifically, it feels to many of us like a decade of business transformation was compressed into the one year 2020. How and why businesses make decisions, customers make decisions, and anybody else makes decisions became conceptually and contextually different in 2020. Customer purchase patterns, supply chain, inventory, and logistics represent just a few domains where we saw new and emergent behaviors, responses, and outcomes represented in our data and in our predictive models. The old models were not able to predict very well based on the previous year’s data since the previous year seemed like 100 years ago in “data years”. Another example was in new data-driven cybersecurity practices introduced by the COVID pandemic, including behavior biometrics (or biometric analytics), which were driven strongly by the global “work from home” transition, where many insecurities in networks, data-sharing, and collaboration / communication tools were exposed. Behavior biometrics may possibly become the essential cyber requirement for unique user identification, finally putting weak passwords out of commission. Data and network access controls have similar user-based permissions when working from home as when working behind the firewall at your place of business, but the security checks and usage tracking can be more verifiable and certified with biometric analytics. This is critical in our massively data-sharing world and enterprises.
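
As a toy illustration of spotting such drift in a monitored feature (the numbers are made up, and real MLOps tooling uses more robust statistics, such as population stability index or Kolmogorov-Smirnov tests):

```python
import statistics

def drift_score(reference, current):
    """Crude drift signal: shift of the mean, in units of the reference stdev."""
    shift = abs(statistics.mean(current) - statistics.mean(reference))
    return shift / statistics.stdev(reference)

# Pre-pandemic daily purchase amounts vs. pandemic-era amounts (made up):
before = [100, 105, 98, 102, 101, 99, 103]
during = [60, 70, 55, 65, 58, 62, 68]

print(round(drift_score(before, before), 2))  # 0.0: no drift against itself
print(round(drift_score(before, during), 2))  # large: the old model is stale
```

When the score jumps, the "previous year's data" is no longer a trustworthy basis for prediction, and retraining (or rethinking the model) is due.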

4) AIOps increasingly became a focus in AI strategy conversations. While it is similar to MLOps, AIOps is less focused on the ML algorithms and more focused on automation and AI applications in the enterprise IT environment – i.e., focused on operationalizing AI, including data orchestration, the AI platform, AI outcomes monitoring, and cybersecurity requirements. AIOps appears in discussions related to ITIM (IT infrastructure monitoring), SIEM (security information and event management), APM (application performance monitoring), UEBA (user and entity behavior analytics), DevSecOps, Anomaly Detection, Root Cause Analysis, Alert Generation, and related enterprise IT applications.

5) The emergence of Edge-to-Cloud architectures clearly began pushing Industry 4.0 forward (with some folks now starting to envision what Industry 5.0 will look like). The Edge-to-Cloud architectures are responding to the growth of IoT sensors and devices everywhere, whose deployments are boosted by 5G capabilities that are now helping to significantly reduce data-to-action latency. In some cases, the analytics and intelligence must be computed and acted upon at the edge (Edge Computing, at the point of data collection), as in autonomous vehicles. In other cases, the analytics and insights may have more significant computation requirements and less strict latency requirements, thus allowing the data to be moved to larger computational resources in the cloud. The almost forgotten “orphan” in these architectures, Fog Computing (living between edge and cloud), is now moving to a more significant status in data and analytics architecture design.

6) Federated Machine Learning (FML) is another “orphan” concept (formerly called Distributed Data Mining a decade ago) that found new life in modeling requirements, algorithms, and applications in 2020. To some extent, the pandemic contributed to this because FML enforces data privacy by essentially removing data-sharing as a requirement for model-building across multiple datasets, multiple organizations, and multiple applications. FML model training is done incrementally and locally on the local dataset, with the meta-parameters of the local models then being shared with a centralized model-inference engine (which does not see any of the private data). The centralized ML engine then builds a global model, which is communicated back to the local nodes. Multiple iterations in parameter-updating and hyperparameter-tuning can occur between local nodes and the central inference engine, until satisfactory model performance is achieved. All through these training stages, data privacy is preserved, while allowing for the generation of globally useful, distributable, and accurate models.
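The local-training / parameter-sharing loop described above can be sketched in a few lines of plain Python. This is a toy illustration with a one-parameter linear model and made-up data on two hypothetical nodes, not any particular FML framework; the key property is that only model parameters, never the private data, reach the central aggregator:

```python
def local_update(weights, data, lr=0.1):
    """One local gradient step for the model y = w*x on a node's private data
    (the data itself is never shared)."""
    w = weights
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_average(local_weights, sample_counts):
    """Server-side aggregation: a weighted average of model parameters only."""
    total = sum(sample_counts)
    return sum(w * n for w, n in zip(local_weights, sample_counts)) / total

# Two nodes, each holding private (x, y) pairs generated by roughly y = 2x
node_a = [(1.0, 2.0), (2.0, 4.0)]
node_b = [(1.0, 2.1), (3.0, 6.0), (2.0, 3.9)]

w_global = 0.0
for _ in range(50):  # iterate: local training -> share parameters -> global average
    w_a = local_update(w_global, node_a)
    w_b = local_update(w_global, node_b)
    w_global = federated_average([w_a, w_b], [len(node_a), len(node_b)])

print(round(w_global, 2))  # -> 1.99 (close to the true slope of 2.0)
```

Real FML systems add secure aggregation, differential privacy, and far more elaborate models, but the privacy-preserving division of labor is the same as in this sketch.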

7) Deep learning (DL) may not be “the one algorithm to dominate all others” after all. There was some research published earlier in 2020 that found that traditional, less complex algorithms can be nearly as good as, or better than, deep learning on some tasks. This could be yet another demonstration of the “no free lunch theorem”, which basically states that there is no single universal algorithm that is the best for all problems. Consequently, the results of the new DL research may not be so surprising, but they certainly prompt us with necessary reminders that sometimes simplicity is better than complexity, and that the old saying is often still true: “perfect is the enemy of good enough.”

8) RPA (Robotic Process Automation) and intelligent automation were not new in 2020, but the surge in their use and in the number of providers was remarkable. While RPA is more rule-based (informed by business process mining, to automate work tasks that have very little variation), intelligent automation is more data-driven, adaptable, and self-learning in real-time. RPA mimics human actions, by repetition of routine tasks based on a set of rules. Intelligent automation simulates human intelligence, which responds and adapts to emergent patterns in new data, and which is capable of learning to automate non-routine tasks. Keep an eye on the intelligent automation space for new and exciting developments to come in the near future around hyperautomation and enterprise intelligence, such as the emergence of learning business systems that learn and adapt their processes based on signals in enterprise data across numerous business functions: finance, marketing, HR, customer service, production, operations, sales, and management.

9) The Rise of Data Literacy initiatives, imperatives, instructional programs, and institutional awareness in 2020 was one of the two most exciting things that I witnessed during the year. (The other one of the two is next on my list.) I have said for nearly 20 years that data literacy must become a key component of education at all levels and an aptitude of nearly all employees in all organizations. The world is data, revolves around data, produces and consumes massive quantities of data, and drives innovative emerging technologies that are inspired by, informed by, and fueled by data: augmented reality (AR), virtual reality (VR), autonomous vehicles, computer vision, digital twins, drones, robotics, AI, IoT, hyperautomation, virtual assistants, conversational AI, chatbots, natural language understanding and generation (NLU, NLG), automatic language translation, 4D-printing, cyber resilience, and more. Data literacy is essential for the future of work, future innovation, work from home, and everyone who touches digital information. Studies have shown that organizations that are not adopting data literacy programs are not only falling behind their competition, but may stay behind. Get on board with data literacy! Now!

10) Observability emerged as one of the hottest and (for me) most exciting developments of the year. Do not confuse observability with monitoring (specifically, with IT monitoring). The key difference is this: monitoring is what you do, and observability is why you do it. Observability is a business strategy: what you monitor, why you monitor it, what you intend to learn from it, how it will be used, and how it will contribute to business objectives and mission success. But the power, value, and imperative of observability do not stop there. Observability meets AI – it is part of the complete AIOps package: “keeping an eye on the AI.” Observability delivers actionable insights, context-enriched data sets, early warning alert generation, root cause visibility, active performance monitoring, predictive and prescriptive incident management, real-time operational deviation detection (6-Sigma never had it so good!), tight coupling of cyber-physical systems, digital twinning of almost anything in the enterprise, and more. And the goodness doesn’t stop there. The emergence of standards, like OpenTelemetry, can unify all aspects of your enterprise observability strategy: process instrumentation, sensing, metrics specification, context generation, data collection, data export, and data analysis of business process performance and behavior monitoring in the cloud. This plethora of benefits is a real game-changer for open-source self-service intelligent data-driven business process monitoring (BPM) and application performance monitoring (APM), feedback, and improvement. As mentioned above, monitoring is “what you are doing”, and observability is “why you are doing it.” If your organization is not having “the talk” about observability, now is the time to start – to understand why and how to produce business value through observability into the multitude of data-rich digital business applications and processes all across the modern enterprise.
Don’t drown in those deep seas of data. Instead, develop an Observability Strategy to help your organization ride the waves of data, to help your business innovation and transformation practices move at the speed of data.
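To make the “real-time operational deviation detection” mentioned above concrete, here is a minimal control-limit check in plain Python. The latency numbers are invented for illustration, and this stands in for what real observability platforms do with far richer telemetry:

```python
import statistics

def detect_deviations(baseline, stream, n_sigma=3.0):
    """Flag stream values outside baseline mean +/- n_sigma stdevs
    (a 6-Sigma-style control limit on a monitored metric)."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return [x for x in stream if abs(x - mu) > n_sigma * sigma]

# Baseline: request latencies (ms) observed during healthy operation (illustrative)
baseline = [102, 98, 100, 101, 99, 100, 103, 97]

# Live stream: mostly healthy, with one operational deviation
alerts = detect_deviations(baseline, [100, 104, 250, 99])
print(alerts)  # -> [250]
```

Monitoring is the mechanical part of this sketch (collecting the numbers); observability is knowing why those particular numbers are being watched and what the business does when the alert fires.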

In summary, my top 10 data innovation trends from 2020 are:

  • GPT-3
  • MLOps
  • Concept Drift by COVID
  • AIOps
  • Edge-to-Cloud and Fog Computing
  • Federated Machine Learning
  • Deep Learning meets the “no free lunch theorem”
  • RPA and Intelligent Automation
  • Rise of Data Literacy
  • Observability

If I were to choose the hottest trend of 2020, it would not be a single item in this top 10 list. The hottest trend would be a hybrid (convergence) of several of these items. That hybrid would include: Observability, coupled with Edge and the ever-rising ubiquitous IoT (sensors on everything), boosted by 5G and cloud technologies, fueling ever-improving ML and DL algorithms, all of which are enabling “just-in-time” intelligence and intelligent automation (for data-driven decisions and action, at the point of data collection), deployed with a data-literate workforce, in a sustainable and trusted MLOps environment, where algorithms, data, and applications work harmoniously and are governed and secured by AIOps.

If we learned anything from the year 2020, it should be that trendy technologies do not comprise a menu of digital transformation solutions to choose from, but there really is only one combined solution, which is the hybrid convergence of data innovation technologies. From my perspective, that was the single most significant data innovation trend of the year 2020.

Analytics Insights and Careers at the Speed of Data

How to make smarter data-driven decisions at scale:

The determination of winners and losers in the data analytics space is a much more dynamic proposition than it ever has been. One CIO said it this way, “If CIOs invested in machine learning three years ago, they would have wasted their money. But if they wait another three years, they will never catch up.”  Well, that statement was made five years ago! A lot has changed in those five years, and so has the data landscape.

The dynamic changes of the business requirements and value propositions around data analytics have been increasingly intense in depth (in the number of applications in each business unit) and in breadth (in the enterprise-wide scope of applications in all business units in all sectors). But more significant has been the acceleration in the number of dynamic, real-time data sources and corresponding dynamic, real-time analytics applications.

We no longer should worry about “managing data at the speed of business,” but worry more about “managing business at the speed of data.”

One of the primary drivers for the phenomenal growth in dynamic real-time data analytics today and in the coming decade is the Internet of Things (IoT) and its sibling the Industrial IoT (IIoT). With its vast assortment of sensors and streams of data that yield digital insights in situ in almost any situation, the IoT / IIoT market has a projected market valuation of $1.5 trillion by 2030. The accompanying technology Edge Computing, through which those streaming digital insights are extracted and then served to end-users, has a projected valuation of $800 billion by 2028.

With dynamic real-time insights, this “Internet of Everything” can then become the “Internet of Intelligent Things”, or as I like to say, “The Internet used to be a thing. Now things are the Internet.” The vast scope of this digital transformation in dynamic business insights discovery from entities, events, and behaviors is on a scale that is almost incomprehensible. Traditional business analytics approaches (on laptops, in the cloud, or with static datasets) will not keep up with this growing tidal wave of dynamic data.

Another dimension to this story, of course, is the Future of Work discussion, including creation of new job titles and roles, and the demise of older job titles and roles. One group has declared, “IoT companies will dominate the 2020s: Prepare your resume!” This article quotes an older market projection (from 2019), which estimated “the global industrial IoT market could reach $14.2 trillion by 2030.”

In dynamic data-driven applications, automation of the essential processes (in this case, data triage, insights discovery, and analytics delivery) can give a power boost to ride that tidal wave of fast-moving data streams. One can prepare for and improve skill readiness for these new business and career opportunities in several ways:

  • Focus on the automation of business processes: e.g., artificial intelligence, robotics, robotic process automation, intelligent process automation, chatbots.
  • Focus on the technologies and engineering components: e.g., sensors, monitoring, cloud-to-edge, microservices, serverless, insights-as-a-service APIs, IFTTT (IF-This-Then-That) architectures.
  • Focus on the data science: e.g., machine learning, statistics, computer vision, natural language understanding, coding, forecasting, predictive analytics, prescriptive analytics, anomaly detection, emergent behavior discovery, model explainability, trust, ethics, model monitoring (for data drift and concept drift) in dynamic environments (MLOps, ModelOps, AIOps).
  • Focus on specific data types: e.g., time series, video, audio, images, streaming text (such as social media or online chat channels), network logs, supply chain tracking (e.g., RFID), inventory monitoring (SKU / UPC tracking).
  • Focus on the strategies that aim these tools, talents, and technologies at reaching business mission and goals: e.g., data strategy, analytics strategy, observability strategy (i.e., why and where are we deploying the data-streaming sensors, and what outcomes should they achieve?).
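One of the engineering patterns named in the list above, the IF-This-Then-That architecture, can be sketched as a tiny event-driven rule engine. This is a hypothetical example; the sensor names, event fields, and thresholds are all made up:

```python
# A minimal IF-This-Then-That rule engine: incoming sensor events trigger actions.
def make_engine(rules):
    """rules: list of (condition, action) pairs applied to each incoming event."""
    def handle(event):
        return [action(event) for condition, action in rules if condition(event)]
    return handle

handle = make_engine([
    (lambda e: e["temp_c"] > 80, lambda e: f"ALERT: overheating at {e['sensor']}"),
    (lambda e: e["temp_c"] < 0,  lambda e: f"ALERT: freezing at {e['sensor']}"),
])

print(handle({"sensor": "pump-7", "temp_c": 92}))  # -> ['ALERT: overheating at pump-7']
print(handle({"sensor": "pump-7", "temp_c": 21}))  # -> []
```

In a production edge deployment, the conditions would be evaluated at the point of data collection (on the device or gateway), which is exactly what keeps data-to-action latency low.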

Insights discovery from ubiquitous data collection (via the tens of billions of connected devices that will be measuring, monitoring, and tracking nearly everything internally in our business environment and contextually in the broader market and global community) is ultimately about value creation and business outcomes. Embedding real-time dynamic analytics at the edge, at the point of data collection, or at the moment of need will dynamically (and positively) change the slope of your business or career trajectory. Dynamic sense-making, insights discovery, next-best-action response, and value creation is essential when data is being acquired at an enormous rate. Only then can one hope to realize the promised trillion-dollar market value of the Internet of Everything.

For more advice, check out this upcoming webinar panel discussion, sponsored by AtScale, with data and analytics leaders from Wayfair, Cardinal Health, and Slickdeals: “How to make smarter data-driven decisions at scale.” Each panelist will share an overview of their data & analytics journey, and how they are building a self-service, data-driven culture that scales. Join us on Wednesday, March 31, 2021 (11:00am PT | 2:00pm ET). Save your spot here: I hope that you find this event useful. And I hope to see you there!

Please follow me on LinkedIn and follow me on Twitter at @KirkDBorne.

RPA and IPA – Their Similarities are Different, but Their Rapid Growth Trajectories are the Same

When I was growing up, friends at school would occasionally ask me if my older brother and I were twins. We were not, though we looked twin-like! As I grew tired of answering that question, one day I decided to give a more thoughtful answer to the question (beyond a simple “No”). I replied: “Yes, we are twins. We were born 20 months apart!” My response caused the questioner to pause and think about what I said, and perhaps reframe their own thinking.

This story reminds me that two things may appear very much the same, but they may be found to be not so similar after deeper inspection and reflection. RPA and IPA are like that. Their similarities are different!

What are RPA and IPA? RPA is Robotic Process Automation, and IPA is Intelligent Process Automation. Sound similar? Yes, in fact, their differences are similar!

In the rest of this article, we will refer to IPA as intelligent automation (IA), which is simply short-hand for intelligent process automation.

Process automation is relatively clear – it refers to an automatic implementation of a process, specifically a business process in our case. One can automate a very complicated and time-consuming process, even for a one-time bespoke application – the ROI must be worth it, to justify doing this only once. Robotic Process Automation is for “more than once” automation. RPA then refers specifically to the robotic repetition of a business process. Repetition implies that the same steps are repeated many times: for example, claims processing, business form completion, invoice processing and submission, or more data-specific activities, such as data extraction from documents (such as PDFs), data entry, data validation, and report preparation.

The benefits of RPA accrue from its robotic repetition of a well-defined process (specifically some sort of digital process with defined instructions, inputs, rules, and outputs), which is to be repeated without error, thus removing potential missteps in the process, or accidental omission of steps, or employee fatigue. The process must be simple, stable, repetitive, and routine, and it must be carried out the same way hundreds or thousands (or more) times. This robotic repetition of the process assures that the steps are replicated identically, correctly, and rapidly from one application to the next, thus yielding higher business productivity. Consequently, as organizations everywhere are undergoing significant digital transformation, we have been witnessing increases both in the use of RPA in organizations and in the number of RPA products in the market.
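The rule-based repetition described above can be shown with a minimal sketch: the same fixed rulebook is applied to every item, identically and without fatigue. The invoice fields, rules, and vendor list here are hypothetical, purely for illustration:

```python
# Each "bot" run applies the same fixed rules to every invoice, identically, every time.
RULES = [
    ("missing invoice id",  lambda inv: bool(inv.get("id"))),
    ("amount not positive", lambda inv: inv.get("amount", 0) > 0),
    ("unknown vendor",      lambda inv: inv.get("vendor") in {"Acme", "Globex"}),
]

def process_invoice(invoice):
    """Validate one invoice against the rulebook; route it to 'approved' or 'exceptions'."""
    errors = [msg for msg, check in RULES if not check(invoice)]
    return ("approved", []) if not errors else ("exceptions", errors)

print(process_invoice({"id": "INV-1", "amount": 250.0, "vendor": "Acme"}))
# -> ('approved', [])
print(process_invoice({"id": "INV-2", "amount": -5.0, "vendor": "Initech"}))
# -> ('exceptions', ['amount not positive', 'unknown vendor'])
```

Notice that the rules never change at runtime – that is what makes this RPA rather than IA. Intelligent automation would add a learning loop that refines or discovers the rules from data and feedback.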

So, what about Intelligent Automation? IA refers to the addition of “intelligence” to the RPA – transforming it into “smart RPA” or even “cognitive RPA”. This is accomplished by adding more data-driven, machine learning, and AI (artificial intelligence) components to the process discovery, process mining, and process learning stages. IA incorporates feedback, learning, improvement, and optimization in the automation loop. The market and business appetite for IA is growing rapidly, particularly as more organizations are seeking to add AI to their enterprise functions and to step up the value derived from their process automation activities.

The pivot from RPA to IA right now has spurred Automation Anywhere, Inc. (AAI) to conduct a survey of businesses and organizations in numerous sectors and industries, to assess the current and future (Now and Next) state of RPA in the enterprise. AAI’s recently published “Now and Next State of RPA” report presents detailed results of that survey.

The AAI report covers these industries: energy/utilities, financial/insurance, government, healthcare, industrial/manufacturing, life sciences, retail/consumer, services/consulting, technology, telecom, and transportation/airlines.

The first part of the report addresses the “Now” – the present-day impact of RPA and IA – how organizations are deploying process automation, what their priorities are, how much they are investing, and what benefits are being achieved. The “Next” part of the report probes organizations’ forward-looking strategies and goals over the next one to two years. Of course, both the “now” and “next” sections are affected and informed by the COVID-19 pandemic.

Some top trends from the report include:

  • Cloud is becoming the platform of choice for RPA deployments. Cloud is top of mind for leaders everywhere, and that includes cloud migration and cloud security.
  • Organizations are thinking big about RPA, with millions of bots already implemented by AAI customers. Some AAI customers have individually deployed tens of thousands of bots.
  • The pandemic has rapidly increased demand for RPA, especially in customer-facing functions, such as call centers.
  • Most organizations (63%) are deploying or scaling their efforts, while 27% are still evaluating RPA solutions, and 10% have no plans at this time.
  • Spending on RPA is increasing, with an estimated doubling of the number of bots deployed in the next 12 months in most organizations who are already actively using RPA bots.
  • Productivity improvement is the single most important driver for adopting RPA, IA, and AI.
  • The average ROI from RPA/IA deployments is 250%.
  • Interest in AI is high and growing, specifically in the areas of smart analytics, customer-centricity, chatbots, and predictive modeling. Many organizations are seeing a strong alignment of their AI and RPA (hence, IA) projects.
  • The top barriers to adoption are (in order of significance): lack of expertise, insufficient budget, lack of resources, lack of use cases, limited ROI, organizational resistance, scalability concerns, and security concerns.
  • Top use cases in the back office include finance (general ledger data reconciliation), accounts payable (invoice processing and payment), and HR (new employee onboarding).
  • Top use cases in the front office include customer records management (including account updates), customer service (request handling and call center support), and sales order processing.
  • RPA education and training are on the rise (specifically online training). This reflects the corresponding increase in interest among many organizations in data literacy, analytics skills, AI literacy, and AI skills training. Education initiatives address the change management, employee reskilling, and worker upskilling that organizations are now prioritizing.
  • Organizations report an increased excitement for RPA/IA, an awareness of significant new opportunities for adoption, and a desire for more mature RPA – that is, intelligent automation!

There are many more details, including specific insights for different sectors, in the 21-page “Now and Next State of RPA” report from Automation Anywhere, Inc. You can download the full report here: Download it for free and learn more about how the robotic twinning and repetition of business processes is the most intelligent step in accelerating digital transformation and boosting business productivity.

Productivity is the top driver of RPA and AI.
Open-ended survey questions gave insights into organizations’ planned AI projects.
Training initiatives are a high priority for increasing ROI from RPA and AI projects.

How We Teach The Leaders of Tomorrow To Be Curious, Ask Questions and Not Be Afraid To Fail Fast To Learn Fast

I recently enjoyed recording a podcast with Joe DosSantos (Chief Data Officer at Qlik). This was one in a series of #DataBrilliant podcasts by Qlik, which you can also access here (Apple Podcasts) and here (Spotify). I summarize below some of the topics that Joe and I discussed in the podcast. Be sure to listen to the full recording of our lively conversation, which covered Data Literacy, Data Strategy, Data Leadership, and more.

The Age of Hype Cycles

The data age has been marked by numerous “hype cycles.” First, we heard how Big Data, Data Science, Machine Learning (ML) and Advanced Analytics would have the honor to be the technologies that would cure cancer, end world hunger and solve the world’s biggest challenges. Then came third-generation Artificial Intelligence (AI), Blockchain and soon Quantum Computing, with each one seeking that honor.

From all this hope and hype, one constant has always been there: a focus on value creation from data. As a scientist, I have always recommended a scientific approach: State your problem first, be curious (ask questions), collect facts to address those questions (acquire data), investigate, analyze, ask more questions, include a sensible serving of skepticism, and (above all else) aim to fail fast in order to learn fast. As I discussed with Joe DosSantos when I spoke with him for the latest episode of Data Brilliant, you don’t need to be a data scientist to follow these principles. These apply to everyone, in all organizations and walks of life, in every sector.

One characteristic of science that is especially true in data science and implicit in ML is the concept of continuous learning and refining our understanding. We build models to test our understanding, but these models are not “one and done.” They are part of a cycle of learning. In ML, the learning cycle is sometimes called backpropagation, where the errors (inaccurate predictions) of our models are fed back into adjusting the model’s internal parameters (its weights) in a way that aims to improve the output accuracy. A more colloquial expression for this is: good judgment comes from experience, and experience comes from bad judgment.
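That feedback loop of learning from errors can be shown with a one-parameter toy model in plain Python. The data here are invented, and real backpropagation applies this same error-feedback idea across many layers of weights rather than a single one:

```python
# One parameter, one feedback loop: the prediction error is fed back to adjust the weight.
def train(data, epochs=100, lr=0.05):
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            error = w * x - y     # how wrong the current model is on this example
            w -= lr * error * x   # feed the error back to adjust the weight
    return w

data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # generated by y = 3x
w = train(data)
print(round(w, 2))  # -> 3.0
```

Each pass is a small experiment: make a prediction, measure how wrong it was, and adjust – exactly the fail-fast-to-learn-fast cycle, expressed in arithmetic.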

Data Literacy For All

I know that for some, the term data and some of the other terminology I’ve mentioned already can be scary. But they shouldn’t be. We are all surrounded by – and creating – masses of data every single day. As Joe and I talked about, one of the first hurdles in data literacy is getting people to recognize that everything is data. What you see with your eyes? That’s data. What you hear with your ears? Data. The words that come out of your mouth that other people hear? That’s all data. Images, text, documents, audio, video and all the apps on your phone, all the things you search for on the internet? Yet again, that’s data.

Every single day, everyone around the world is using data and the principles I mention above, many without realizing it. So, now we need to bring this value to our businesses.

How To Build A Successful Enterprise Data Strategy

In my chat with Joe, we talked about many data concepts in the context of enterprise digital transformation. As always, but especially during the race toward digital transformation that has been accelerated by the 2020 pandemic, a successful enterprise data strategy that leads to business value creation can benefit from first addressing these six key questions:

(1) What mission objective and outcomes are you aiming to achieve?

(2) What is the business problem, expressed in data terminology? Specifically, is it a detection problem (fraud or emergent behavior), a discovery problem (new customers or new opportunities), a prediction problem (what will happen) or an optimization problem (how to improve outcomes)?

(3) Do you have the talent (key people representing diverse perspectives), tools (data technologies) and techniques (AI and ML knowledge) to make it happen?

(4) What data do you have to fuel the algorithms, the training and the modeling processes?

(5) Is your organizational culture ready for this (for data-informed decisions; an experimentation mindset; continuous learning; fail fast to learn fast; with principled AI and data governance)?

(6) What operational technology environment do you have to deploy the implementation (cloud or on-premise platform)?

Data Leadership

As Joe and I discussed, your ultimate business goal is to build a data-fueled enterprise that delivers business value from data. Therefore, ask questions, be purposeful (goal-oriented and mission-focused), be reasonable in your expectations and remain reasonably skeptical – because as famous statistician, George Box, once said “all models are wrong, but some are useful.”

Now, listen to the full podcast here.

Key Strategies and Senior Executives’ Perspectives on AI Adoption in 2020

Artificial intelligence (AI) has become one of the most significant emerging technologies of the past few years. Some market estimates anticipate that AI will contribute 16 trillion dollars to the global GDP (gross domestic product) by 2030. While there has been accelerating interest in implementing AI as a technology, there has been concurrent growth in interest in implementing successful AI strategies. Some key elements of such strategies that have emerged include explainable AI, trusted AI, AI ethics, operationalizing AI, scaling sustainable AI operations, workforce development (training), and how to speed up all of this development.

The 2020 year of the pandemic has forced organizations to speed up their digital transformation and advanced technology adoption plans, essentially compressing several years of anticipated developments into several months. These accelerated developments cover a wide scope, including: technology-enabled remote work solutions, technology-enhanced health and safety programs, AI-powered implementations of “all of the above”, and sharpened focus and attention on their workforce: future of work in the age of AI, AI-assisted human work and process enhancements, and training initiatives (including data literacy and AI).

In the recent 2020 RELX Emerging Tech Study, results were presented from a survey of over 1000 U.S. senior executives across eight industries: agriculture, banking, exhibitions, government, healthcare, insurance, legal, and science/medical. The survey, carried out by RELX (a global provider of information-based analytics and decision tools for professional and business customers), focused on the state of interest, investment, and implementations in AI tech during the pandemic period. But it was not just a snapshot on the state of AI in 2020. The survey also had forward-looking questions, as well as historical comparisons and trends from results of similar “state of AI in the enterprise” surveys in 2018 and 2019.

Some of the most remarkable findings in the 3-year trending data include these changes from 2018 to 2020:

1) The percentage of senior executives who stated that AI technologies are being utilized in their business dramatically increased from 48% to 81%.

2) The percentage who are concerned about other countries being more advanced than the U.S. in AI technology and implementation increased from 70% to 82%.

3) The percentage who believe that government programs should assist in AI workforce development increased from 45% to 59%.

4) There was a slight increase (though still a minority) in those who believe that the government should leave the promotion of AI technologies to the private sector, growing from 30% to 36%.

5) There is a continued (and growing) strong belief that U.S. companies should invest in the future AI workforce through educational initiatives such as university partnerships, increasing from 92% to 95%.

6) The percentage who are offering training opportunities in AI technologies to their workforce increased significantly from 46% to 75%.

Some of the interesting findings specifically in the 2020 survey results include:

1) Across all sectors, a strong majority (86%) of survey respondents believe that ethical considerations are a strategic priority in the design and implementation of their AI systems, ranging from 77% in government and 80% in the legal sectors, up to 93% in banking and 94% in the insurance sectors, with healthcare and science/medical near the middle of that range.

2) The majority (68%) of respondents stated that they increased their investment in AI technologies, 48% of which invested in new AI technologies. The sectors with the greatest increases in investment were insurance, banking, and agriculture, followed closely by healthcare and science/medical.

3) 82% stated that AI technologies are most likely to be used to increase efficiencies and worker productivities.

4) Only 26% of respondents reported that AI technologies are being used to replace human labor.

Regarding the last point, this minority view regarding AI’s potential negative impact on employment is consistent with a 2019 worldwide survey of 19,000 employers, which found that 87% plan to increase or maintain the size of their staff as a result of automation and AI, and that just 9% of companies across the globe and 4% in the U.S. anticipate cutting jobs. Another report stated, “Rather than being replaced, humans will be redeployed into higher-order jobs requiring more cognitive skills.”

The broader takeaways and insights drawn from the 2020 RELX Emerging Tech survey are these:

(a) COVID-19 drove increased AI tech investment and adoption.

(b) The use of AI has increased across all sectors that were polled.

(c) Ethical AI is viewed as both a priority and a competitive advantage.

(d) International competition remains a concern for U.S. organizations.

(e) AI workforce training and development is a major component of AI strategy, though AI implementations consistently outpace training initiatives.

Vijay Raghavan, Executive Vice President and Chief Technology Officer for the Risk & Business Analytics division of RELX, has summarized the survey results very well in the following statements:

  • “Businesses’ response to COVID-19 has confirmed the view of US business leaders that artificial intelligence has the power to create smarter, more agile and profitable businesses.”
  • “Businesses face more complex challenges every day and AI technologies have become a mission-critical resource in adapting to, if not overcoming, these types of unforeseen obstacles and staying resilient.”
  • “Companies that do not dedicate the necessary resources to training existing employees on new AI technologies risk leaving growth opportunities on the table and using biased or otherwise flawed systems to make and enforce major decisions.”

Therefore, when we consider AI strategy, a global perspective is no less important than local organization-specific mission objectives. Understanding your competition, the marketplace, and both the current expectations and future needs of your stakeholders (customers, employees, citizens, and shareholders) is vital. Furthermore, non-technology considerations should be incorporated alongside technology implementation and operationalization requirements. Performance metrics and goals associated with AI governance, ethics, talent, and training must be on the same balance sheet as AI tools, techniques, and technologies. How to bring all of these pieces together in a successful AI strategy has become clearer with the results and insights revealed in the 2020 RELX Emerging Tech survey.

You can find and read the full RELX survey report here:

Key Finding in Emerging Tech Survey
Key Findings in RELX 2020 Emerging Tech Survey

Data Science Blogs-R-Us

[UPDATED September 7, 2021]

I have written articles in many places. I will be collecting links to those sources here. The list is not complete and will be constantly evolving. There are some older blogs that I will be including in the list below as I remember them and find them. Also included are some interviews in which I provided detailed answers to a variety of questions.

In 2019, I was listed as the #1 Top Data Science Blogger to Follow on Twitter.

And then there’s this — not a blog, but a link to my 2013 TedX talk: “Big Data, Small World.” (Many more videos of my talks are available online. That list will be compiled in another place soon.)

  1. Rocket-Powered Data Science (the website that you are now reading).