Author Archives: Kirk Borne

About Kirk Borne

Dr. Kirk D. Borne is a transdisciplinary Data Scientist and Professor of Astrophysics & Computational Science at George Mason University (since 2003). He conducts research, teaching, and doctoral student advising in the theory and practice of data science. He is also an active consultant to numerous organizations in data science and big data analytics across a wide variety of disciplines, domains, and sectors. He previously spent nearly 20 years supporting large scientific data systems for NASA astrophysics missions, including the Hubble Space Telescope. He was identified in 2013 as the Worldwide #1 Big Data influencer on Twitter at @KirkDBorne.

AI Readiness is Not an Option

This year, artificial intelligence (AI) has become a major conversation centerpiece at home, in the park, at the gym, at work, everywhere. This is not due entirely to ChatGPT and LLMs (large language models), though those have been the main drivers. The AI conversations, especially in technical circles, have focused intensively on generative AI: the creation of written content, images, videos, marketing copy, software code, speeches, and countless other things. For a short introduction to generative AI, see my article “Generative AI – Chapter 1, Page 1”.

While there has been huge public interest in generative AI (specifically, ChatGPT) by individuals, there has been a transformative impact on organizations everywhere, both in strategy conversations and tactical deployments. Businesses and others are seeking to leverage generative AI to increase productivity (efficiencies and effectiveness) in nearly all aspects of their enterprise.

To support essential enterprise AI strategy conversations, here are 12 key points for organizations to consider within the context of “AI readiness is not an option, but an imperative”:

[continue reading the full article here]

Built for AI – https://purefla.sh/41oS2Dp

Generative AI – Chapter 1, Page 1

Anyone who has been watching the AI space this year, even peripherally, will have noticed the flaming hot story of the year—ChatGPT and related chatbot applications. These AI applications are essentially deep machine learning models that are trained on hundreds of gigabytes of text and that can provide detailed, grammatically correct, and “mostly accurate” text responses to user inputs (questions, requests, or queries, which are called prompts). Specifically, these are LLMs—large language models. It is imperative, not an option, for organizations (and for most individuals) to be aware of what is going on here—not only because it is all over the news, but because it could affect your future self.

When I said “mostly accurate,” I meant that the ChatGPT responses sometimes go way off target; people refer to these as “hallucinations,” which are basically a reflection of the statistical basis of the models (see below). The application will generate plausible-sounding, grammatically correct statements that are complete falsehoods, such as “Leonardo da Vinci painted the Mona Lisa in 1815” (a real example of an observed ChatGPT hallucination).

I tested ChatGPT with my own account, and I was impressed with the results. I prompted it with various requests, including: Write a short story on a specific topic, provide a layperson’s explanations of some complex deep machine learning concepts, create a lesson plan to learn a tough subject, create an outline for a blog on a particular topic (no, not this one), and provide some financial advice on particular investments (no, it did not provide specific advice, but it did offer warnings like NFA “Not Financial Advice” and DYOR “Do Your Own Research”). You can find my results on my Medium blog site.

LLMs are so responsive and grammatically correct (even over many paragraphs of text) that some people worry that they are sentient. Guess what? They aren’t. An LLM is merely a very large statistical model that provides the most likely sequence of words in response to a prompt. It is effectively a galaxy-sized, statistically rich version of the text autocomplete on your smartphone’s text messaging app, which already delivers some highly probable guesses for the missing words in a text message like this one: “Due to a client deadline, I will be working late at the ____ this ____, so I will be home late for ____.” LLMs can respond to much more complex (but well-posed) prompts, such as lesson plans for education, content for a business presentation, code for a software task, workflow steps for an IT project, and much more.
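
To make the “statistical autocomplete” idea concrete, here is a minimal toy sketch of picking the most probable next words from a hand-built probability table; the table and phrases are invented purely for illustration, and real LLMs learn these distributions with neural networks over billions of tokens rather than with a lookup table.

```python
# Toy illustration of "most likely next word" prediction.
# The probability table is invented for illustration only; real LLMs learn
# these distributions from massive training corpora, not a hand-written dict.

next_word_probs = {
    "I will be working late at the": {"office": 0.62, "hospital": 0.11, "library": 0.07},
    "so I will be home late for":    {"dinner": 0.71, "supper": 0.12, "bedtime": 0.05},
}

def autocomplete(prompt: str) -> str:
    """Return the highest-probability continuation for a known prompt."""
    candidates = next_word_probs.get(prompt, {})
    if not candidates:
        return "<unknown>"
    return max(candidates, key=candidates.get)

print(autocomplete("I will be working late at the"))  # -> office
print(autocomplete("so I will be home late for"))     # -> dinner
```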

To help people create well-posed prompts, the new discipline of prompt engineering has arisen. It’s not hard to find online guides to prompt engineering, including guides for very specific industries, business tasks, workplace applications, and context-dependent scenarios. You don’t need prompt engineering to find those guides; a simple web search should do the trick. And guess what? When web search engines were first created, it took a while for us to learn how to submit well-posed keyword searches. That scenario is playing out again with ChatGPT and prompt engineering, but now our queries are aimed at a much more language-based, AI-powered, statistically rich application. If you understand Bayes’ Theorem and Bayesian statistics, then you will understand me when I say that we are talking here about an enormously more enriched set of priors, likelihoods, and evidence to feed the LLMs, so it should not be surprising that the posteriors are shockingly good for large text outputs (most of the time).
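
For readers who want the formula behind that analogy, Bayes’ Theorem in its standard form is shown below; mapping the symbols onto prompts and responses is my loose paraphrase of the analogy, not a literal description of how the models are trained.

```latex
% Bayes' Theorem: the posterior is the likelihood times the prior, normalized by the evidence.
% In the loose analogy above: H ~ a candidate response, E ~ the prompt (plus the training corpus),
% and the "shockingly good posteriors" are the model's output distributions.
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```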

LLMs are a subset of the deep learning field of natural language processing (NLP), which includes natural language understanding (NLU) and natural language generation (NLG). Think of chatbots and you get the idea, just expanded to a much, much larger domain of AI-based conversation.

Computer vision (CV) is another subset of deep learning, specifically aimed at object/pattern detection, recognition, and classification in images (including still images and video sequences). ChatGPT and LLMs are examples of generative AI using NLP for text generation. Stable Diffusion, Midjourney, and Dall-E are examples of generative AI using CV for image generation. Oh, by the way, I asked the generative AI at Stable Diffusion to create some images to go with my short story (which you can find on my Medium blog).

Beyond the individual examples of generative AI that we can all experiment with (ChatGPT, Stable Diffusion, and the rest), the applications in the enterprise can be tremendously impactful and transformative for organizations and the future of work. Those next chapters in the story are being written right now.

Continue reading about Enterprise AI in these posts:

  1. AI Readiness is Not an Option
  2. Top 9 Considerations for Enterprise AI

Business Strategies for Deploying Disruptive Tech: Generative AI and ChatGPT

Generative AI is the biggest and hottest trend in AI (Artificial Intelligence) at the start of 2023. While generative AI has been around for several years, the arrival of ChatGPT (a conversational AI tool for all business occasions, built and trained from large language models) has been like a brilliant torch brought into a dark room, illuminating many previously unseen opportunities.

Every business wants to get on board with ChatGPT: to implement it, operationalize it, and capitalize on it. It is important to realize that the usual “hype cycle” rules prevail in cases like this. First, don’t do something just because everyone else is doing it; there needs to be a valid business reason for your organization to be doing it, at the very least because you will need to explain it objectively to your stakeholders (employees, investors, clients). Second, doing something new (especially something “big” and disruptive) must align with your business objectives; otherwise, you may be steering your business into deep, uncharted waters that you don’t have the resources or talent to navigate. Third, any commitment to a disruptive technology (including data-intensive and AI implementations) must start with a business strategy.

I suggest that the simplest business strategy starts with answering three basic questions: What? So what? Now what? That is: (1) What is it you want to do and where does it fit within the context of your organization? (2) Why should your organization be doing it and why should your people commit to it? (3) How do we get started, when, who will be involved, and what are the targeted benefits, results, outcomes, and consequences (including risks)? In short, you must be willing and able to answer the six WWWWWH questions (Who? What? When? Where? Why? and How?).

Another strategy perspective on technology-induced business disruption (including generative AI and ChatGPT deployments) is to consider the three F’s that affect (and can potentially derail) such projects. Those F’s are: Fragility, Friction, and FUD (Fear, Uncertainty, Doubt).

Fragility occurs when a built system is easily “broken” when some component is changed. These changes may include requirements drift, data drift, model drift, or concept drift. The first one (requirements drift) is a challenge in any development project (when the desired outcomes are changed, sometimes without notifying the development team), but the latter three are more apropos to data-intensive product development activities (which certainly describes AI projects). A system should be sufficiently agile and modular that changes can be made with as little impact to the overall system design and operations as possible, thus keeping the project off the pathway to failure. Since ChatGPT is built from large language models that are trained against massive data sets within your organization (mostly business documents, internal text repositories, and similar resources), attention must be given to the stability, accessibility, and reliability of those resources.
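
As a concrete illustration of the drift monitoring mentioned above, here is a minimal sketch of a data-drift check on a single numeric feature, using a two-sample Kolmogorov-Smirnov test; SciPy and NumPy are assumed to be available, and the feature, threshold, and synthetic data are illustrative choices, not a standard recipe.

```python
# Minimal data-drift check for one numeric feature, using a two-sample
# Kolmogorov-Smirnov test. The threshold and the synthetic data below are
# illustrative assumptions, not a prescribed standard.
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(reference: np.ndarray, current: np.ndarray,
                    alpha: float = 0.01) -> bool:
    """Flag drift when the two samples are unlikely to share a distribution."""
    result = ks_2samp(reference, current)
    return result.pvalue < alpha

# Example with synthetic data: a training-time feature vs. a shifted production feature.
rng = np.random.default_rng(42)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
prod_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)  # the mean has drifted

print(feature_drifted(train_feature, prod_feature))  # True -> investigate before it breaks the model
```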

Friction occurs when there is resistance to change, or to success, somewhere in the project lifecycle or management chain. This can be overcome with small victories (MVPs: minimum viable products; or MLPs: minimum lovable products) and by instilling (i.e., encouraging and rewarding) a culture of experimentation across the organization. When people are encouraged to experiment, and small failures are acceptable (i.e., there can be objective assessments of failure, lessons learned, and subsequent improvements), then friction can be minimized, fear of failure can be alleviated, and innovation can flourish. A business-disruptive ChatGPT implementation definitely fits into this category: focus first on the MVP or MLP.

FUD occurs when there is too much hype and “management speak” in the discussions. FUD can open a pathway to failure wherever there is: (a) Fear that the organization’s data-intensive, machine learning, AI, and ChatGPT activities are driven by FOMO (fear of missing out, sparked by concerns that your competitors are outpacing your business); (b) Uncertainty in what the AI / ChatGPT advocates are talking about (a “Data Literacy” or “AI Literacy” challenge); or (c) Doubt that there is real value in the disruptive technology activities (due to a lack of quick-win MVP or MLP examples).

I have developed a few rules to help drive quick wins and facilitate success in data-intensive and AI (e.g., Generative AI and ChatGPT) deployments. These rules are not necessarily “Rocket Science” (despite the name of this blog site), but they are common business sense for most business-disruptive technology implementations in enterprises. Most of these rules focus on the data, since data is ultimately the fuel, the input, the objective evidence, and the source of informative signals that are fed into all data science, analytics, machine learning, and AI models.

Here are my 10 rules (i.e., Business Strategies for Deploying Disruptive Data-Intensive, AI, and ChatGPT Implementations):

  1. Honor business value above all other goals.
  2. Begin with the end in mind: goal-oriented, mission-focused, and outcomes-driven, while being data-informed and technology-enabled.
  3. Think strategically, but act tactically: think big, start small, learn fast.
  4. Know thy data: understand what it is (formats, types, sampling, who, what, when, where, why), encourage the use of data across the enterprise, and enrich your datasets with searchable (semantic and content-based) metadata (labels, annotations, tags). The latter is essential for AI implementations.
  5. Love thy data: data are never perfect, but all the data may produce value, though not immediately. Clean it, annotate it, catalog it, and bring it into the data family (connect the dots and see what happens). For example, outliers are often dismissed as random fluctuations in data, but they may be signaling at least one of these three different types of discovery: (a) data quality problems, associated with errors in the data measurement and capture processes; (b) data processing problems, associated with errors in the data pipeline and transformation processes; or (c) surprise discovery, associated with real previously unseen novel events, behaviors, or entities arising in your data stream.
  6. Do not covet thy data’s correlations: a random six-sigma event is one-in-a-million. So, if you have 1 trillion data points (e.g., a Terabyte of data), then there may be on the order of one million such “random events” that will tempt any decision-maker into ascribing too much significance to this natural randomness (see the short calculation after this list).
  7. Validation is a virtue, but generalization is vital: a model may work well once, but not on the next batch of data. We must monitor for overfitting (fitting the natural variance in the data), underfitting (bias), data drift, and model drift. An over-specified, over-engineered model will likely not be applicable to previously unseen data or to new circumstances in which the model will be deployed (see the sketch after this list). A lack of generalization is a big source of fragility and dilutes the business value of the effort.
  8. Honor thy data-intensive technology’s “easy buttons” that enable data-to-discovery (D2D), data-to-“informed decision” (D2ID), data-to-“next best action” (D2NBA), and data-to-value (D2V). These “easy buttons” are: Pattern Detection (D2D), Pattern Recognition (D2ID), Pattern Exploration (D2NBA), and Pattern Exploitation (D2V).
  9. Remember to Keep it Simple and Smart (the “KISS” principle). Create a library of composable, reusable building blocks and atomic business logic components for integration within various generative AI implementations: microservices, APIs, cloud-based functions-as-a-service (FaaS), and flexible user interfaces. (Suggestion: take a look at MACH architecture.)
  10. Keep it agile, with short design, develop, test, release, and feedback cycles: keep it lean, and build on incremental changes. Test early and often. Expect continuous improvement. Encourage and reward a Culture of Experimentation that learns from failure, such as “Test, or get fired!”
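
To put a number on rule #6, here is the back-of-the-envelope arithmetic, taking the rule’s stated one-in-a-million rate at face value:

```python
# Back-of-the-envelope arithmetic for rule #6, taking the stated
# one-in-a-million rate for a "random six-sigma event" at face value.
rare_event_rate = 1e-6   # one-in-a-million, as stated in the rule
n_data_points = 1e12     # 1 trillion data points

expected_random_hits = rare_event_rate * n_data_points
print(f"{expected_random_hits:,.0f} spurious 'discoveries' from randomness alone")  # 1,000,000
```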
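
And for rule #7, here is a minimal sketch of the training-versus-holdout comparison that exposes a model that will not generalize; scikit-learn and the synthetic dataset are assumptions made purely for illustration.

```python
# Minimal overfitting check for rule #7: compare training accuracy to held-out
# accuracy. A large gap signals a model that will not generalize.
# scikit-learn and the synthetic dataset are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# An unconstrained tree can memorize the training data (an over-specified model).
model = DecisionTreeClassifier(max_depth=None, random_state=0).fit(X_train, y_train)

train_acc = model.score(X_train, y_train)
test_acc = model.score(X_test, y_test)
print(f"train={train_acc:.2f}  holdout={test_acc:.2f}")  # a big gap = overfitting, poor generalization
```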

Finally, I offer a very similar (shorter and slightly different) set of Business Strategies for Deploying Disruptive Data-Intensive, AI, and ChatGPT Implementations, from the article “The breakthrough that is ChatGPT: How much does it cost to build?”. Here is the list from that article’s “C-Suite’s Guide to Developing a Successful AI Chatbot”:

  1. Define the business requirements.
  2. Conduct market research.
  3. Choose the right development partner.
  4. Develop a minimum viable product (MVP).
  5. Test and refine the chatbot.
  6. Launch the chatbot.

Editorial Review of “Building Industrial Digital Twins”

I was asked by the publisher to provide an editorial review of the book “Building Industrial Digital Twins: Design, develop, and deploy digital twin solutions for real-world industries using Azure Digital Twins”, by Shyam Varan Nath and Pieter van Schalkwyk. For this, I received a complimentary copy of the book and no other compensation.

Let us begin…

This book is a very timely contribution to the world of industrial digital transformation. The digital twin is more than a data collector. It is an insight engine, providing not only data for descriptive and diagnostic analytics applications, but also essential data for predictive and prescriptive analytics applications. This is all fueled and facilitated by data flows across processes, products, and people’s activities, used in synergy with computational models and simulations of the system being digitally twinned. To help an organization get started with a DT (digital twin), this book outlines the process of building the MVT (Minimum Viable Twin). All phases of the MVT process are discussed: strategy, design, pilot, implementation, testing, validation, operations, and monitoring.

This book knows, and forcefully proves, what the enabler and value producer of digital anything (especially and most emphatically the DT) really is: it is all about the data and the simulation. That is business modeling at its finest, incorporating the best of technology (physical assets, sensors, and cloud), techniques (analytics, algorithms, and modeling), and talent (culture, people, and strategic plans).

There were many themes and topics (both broad and specific) that fascinated me and kept me engaged in discovering serendipitous knowledge nuggets throughout this book. Here are a few: 

1) Azure DT, whose cloud-based PaaS (Platform-as-a-Service) provides a viable, scalable, and accessible launchpad for DTaaS (Digital Twins as a Service) in any organization.

2) Streaming sensor data from the IoT (Internet of Things) and IIoT (Industrial IoT) become the source for an IoC (Internet of Context), ultimately delivering Insights-aaS, Context-aaS, and Forecasting-aaS.

3) The consistent emphasis on and elaboration of key DT value propositions, requirements, and KPI tracking.

4) The DT Canvas (chapter 4)!

5) Helpful discussions of phased DT deployments, prototypes, pilots, feedback, and validation.

6) Specific Industry 4.0 examples, with constant reminders that it’s all about the data plus analytics!

7) Forward-looking DTs in the industrial enterprise.

Beyond being a technical how-to manual (though it is definitely that), this book delivers so much more! It is truly a business digital transformation manual.

My top learning and pondering moments at Splunk .conf22

I recently attended the Splunk .conf22 conference. While the event was live in-person in Las Vegas, I attended virtually from my home office. Consequently, I missed the incredible in-person experience of the brilliant speakers on the main stage, the technodazzle of hundreds of exhibitors’ offerings in the exhibit arena, and the smooth hip hop sounds from the special guest entertainer — guess who?

What I missed in-person was more than compensated for by the incredible online presentations by Splunk leaders, developers, and customers. If you have ever attended a major expo at one of the major Vegas hotels, you know that there is a lot of walking between different sessions — literally, miles of walking per day. That’s good for you, but it often means that you don’t attend all of the sessions that you would like because of the requisite rushing from venue to venue. None of that was necessary on the Splunk .conf22 virtual conference platform. I was able to see a lot, learn a lot, be impressed a lot, and ponder a lot about all of the wonderful features, functionalities, and future plans for the Splunk platform.

One of the main attractions of this event for me was the primary descriptor of the Splunk Platform: it is appropriately called the Splunk Observability Cloud, which includes an impressive suite of Observability and Monitoring products and services. I have written and spoken frequently and passionately about Observability in the past couple of years. For example, I wrote this in 2021:

“Observability emerged as one of the hottest and (for me) most exciting developments of the year. Do not confuse observability with monitoring (specifically, with IT monitoring). The key difference is this: monitoring is what you do, and observability is why you do it. Observability is a business strategy: what you monitor, why you monitor it, what you intend to learn from it, how it will be used, and how it will contribute to business objectives and mission success. But the power, value, and imperative of observability does not stop there. Observability meets AI – it is part of the complete AIOps package: ‘keeping an eye on the AI.’ Observability delivers actionable insights, context-enriched data sets, early warning alert generation, root cause visibility, active performance monitoring, predictive and prescriptive incident management, real-time operational deviation detection (6-Sigma never had it so good!), tight coupling of cyber-physical systems, digital twinning of almost anything in the enterprise, and more. And the goodness doesn’t stop there.”

Continue reading my thoughts on Observability at http://rocketdatascience.org/?p=1589

The dominant references to Observability everywhere were just the start of the awesome brain food offered at Splunk’s .conf22 event. Here is a list of my top moments, learnings, and musings from this year’s Splunk .conf:

  1. Observability for Unified Security with AI (Artificial Intelligence) and Machine Learning on the Splunk platform empowers enterprises to operationalize data for use-case-specific functionality across shared datasets. (Reference)
  2. The latest updates to the Splunk platform address the complexities of multi-cloud and hybrid environments, enabling cybersecurity and network big data functions (e.g., log analytics and anomaly detection) across distributed data sources and diverse enterprise IT infrastructure resources. (Reference)
  3. Splunk Enterprise 9.0 is here, now! Explore and test-drive it (with a free trial) here.
  4. The new Splunk Enterprise 9.0 release enables DevSecOps users to gain more insights from Observability data with Federated Search, with the ability to correlate ops with security alerts, and with Edge Management, all in one platform. (Reference)
  5. Security information and event management (SIEM) on the Splunk platform is enhanced with end-to-end visibility and platform extensibility, with machine learning and automation (AIOps), with risk-based alerting, and with Federated Search (i.e., Observability on-demand). (Reference)
  6. Customer success story: As a customer-obsessed bank with ultra-rapid growth, Nubank turned to Splunk to optimize data flows, analytics applications, customer support functions, and insights-obsessed IT monitoring. (Reference)
  7. The key characteristics of the Splunk Observability Cloud are Resilience, Security, Scalability, and EXTENSIBILITY. The latter specifically refers to the ease with which developers can extend Splunk’s capabilities to other apps, applying their AIOps and DevSecOps best practices and principles! Developers can start here.
  8. The Splunk Observability Cloud has many functions for data-intensive IT, Security, and Network operations, including Anomaly Detection Service, Federated Search, Synthetic Monitoring, Incident Intelligence, and much more. Synthetic monitoring is essentially digital twinning of your network and IT environment, providing insights through simulated risks, attacks, and anomalies via predictive and prescriptive modeling. (Reference)
  9. Splunk Observability Cloud’s Federated Search capability activates search and analytics regardless of where your data lives — on-site, in the cloud, or from a third party. (Reference)
  10. The new release of the Splunk Data Manager provides a simple, modern, automated experience of data ingest for Splunk Cloud admins, which reduces the time it takes to configure data collection (from hours/days to minutes). (Reference)
  11. Splunk works on data, data, data, but the focus is always on customer, customer, customer — because delivering best outcomes for customers is job #1. Explore Splunk’s amazing Partner ecosystem (Partnerverse) and the impressive catalog of partners’ solutions here.
  12. Splunk .conf22 Invites Organizations to Unlock Innovation With Data.

In summary, here is my list of key words and topics that illustrate the diverse capabilities and value-packed features of the Splunk Observability Cloud Platform that I learned about at the .conf22 event:

– Anomaly Detection Assistant
– Risk-based Alerting (powered by AI and Machine Learning scoring algorithms)
– Federated Search (Observability on-demand)
– End-to-End Visibility
– Platform Extensibility
– Massive(!) Scalability of the Splunk Observability Cloud (to billions of transactions per day)
– Insights-obsessed Monitoring (“We don’t need more information. We need more insights.”)
– APIs in Action (to Turn Data into Doing™)
– Splunk Incident Intelligence
– Synthetic Monitoring (Digital Twin of Network/IT infrastructure)
– Splunk Data Manager
– The Splunk Partner Universe (Partnerverse)

My closing thought — Cybersecurity is basically Data Analytics: detection, prediction, prescription, and optimizing for unpredictability. This is what Splunk lives for!

Follow me on LinkedIn here and on Twitter at @KirkDBorne.

Disclaimer: I was compensated as an independent freelance media influencer for my participation at the conference and for this article. The opinions expressed here are entirely my own and do not represent those of Splunk or of any Splunk partners. Any misrepresentations of the products and services mentioned in my statements are entirely my own responsibility. Nothing here should be construed as an offer to sell or as financial advice of any kind. My comments are entirely of a technical nature, focused on the technical capabilities of the items mentioned in the article.

Data Insights for Everyone — The Semantic Layer to the Rescue

What is a semantic layer? That’s a good question, but let’s first explain semantics. The way that I explained it to my data science students years ago was like this. In the early days of web search engines, those engines were primarily keyword search engines. If you knew the right keywords to search and if the content providers also used the same keywords on their website, then you could type the words into your favorite search engine and find the content you needed. So, I asked my students what results they would expect from such a search engine if I typed the following words into the search box: “How many cows are there in Texas?” My students were smart. They realized that the search results would probably not provide an answer to my question, but the results would simply list websites that included my words on the page or in the metadata tags: “Texas”, “Cows”, “How”, etc. Then, I explained to my students that a semantic-enabled search engine (with a semantic meta-layer, including ontologies and similar semantic tools) would be able to interpret my question’s meaning and then map that meaning to websites that can answer the question.

This was a good opening for my students to the wonderful world of semantics. I brought them deeper into the world by pointing out how much more effective and efficient the data professionals’ life would be if our data repositories had a similar semantic meta-layer. We would be able to go far beyond searching for correctly spelled column headings in databases or specific keywords in data documentation, to find the data we needed (assuming we even knew the correct labels, metatags, and keywords used by the dataset creators). We could search for data with common business terminology, regardless of the specific choice or spelling of the data descriptors in the dataset. Even more than that, we could easily start discovering and integrating, on-the-fly, data from totally different datasets that used different descriptors. For example, if I am searching for customer sales numbers, different datasets may label that “sales”, or “revenue”, or “customer_sales”, or “Cust_sales”, or any number of other such unique identifiers. What a nightmare that would be! But what a dream the semantic layer becomes!
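
A toy sketch of what that semantic mapping might look like under the hood appears below; the business term, dataset names, and column labels are all invented for illustration, and a real semantic layer carries far richer metadata (ontologies, lineage, governance) than a simple dictionary.

```python
# Toy sketch of a semantic mapping from a common business term to the
# differently labeled physical columns that hold the same concept.
# All names here are invented for illustration; real semantic layers carry
# much richer metadata (ontologies, lineage, governance rules).
SEMANTIC_LAYER = {
    "customer sales": [
        ("crm_db.orders", "sales"),
        ("finance_db.q3_report", "revenue"),
        ("legacy_dw.accounts", "customer_sales"),
        ("regional_mart.summary", "Cust_sales"),
    ],
}

def resolve(business_term: str):
    """Return every (dataset, column) pair that answers to a business term."""
    return SEMANTIC_LAYER.get(business_term.lower(), [])

for dataset, column in resolve("Customer Sales"):
    print(f"{dataset}.{column}")  # one business term, four physical homes for the data
```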

When I was teaching those students so many years ago, the semantic layer itself was just a dream. Now it is a reality. We can now achieve the benefits, efficiencies, and data superhero powers that we previously could only imagine. But wait! There’s more.

Perhaps the greatest achievement of the semantic layer is to provide different data professionals with easy access to the data needed for their specific roles and tasks. The semantic layer is the representation of data that helps different business end-users discover and access the right data efficiently, effectively, and effortlessly, using common business terms. The data scientists need to find the right data as inputs for their models; they also need a place to write back the outputs of their models to the data repository for other users to access. The BI (business intelligence) analysts need to find the right data for their visualization packages, business questions, and decision support tools; they also need the outputs from the data scientists’ models, such as forecasts, alerts, classifications, and more. The semantic layer achieves this by mapping heterogeneously labeled data into familiar business terms, providing a unified, consolidated view of data across the enterprise.

The semantic layer delivers data insights discovery and usability across the whole enterprise, with each business user empowered to use the terminology and tools that are specific to their role. How data are stored, labeled, and meta-tagged in the data cloud is no longer a bottleneck to discovery and access. The decision-makers and data science modelers can fluidly share inputs and outputs with one another, to inform their role-specific tasks and improve their effectiveness. The semantic layer takes the user-specific results out of being a “one-off” solution on that user’s laptop to becoming an enterprise analytics accelerant, enabling business answer discovery at the speed of business questions.

Insights discovery for everyone is achieved. The semantic layer becomes the arbiter (multi-lingual data translator) for insights discovery between and among all business users of data, within the tools that they are already using. The data science team may be focused on feature importance metrics, feature engineering, predictive modeling, model explainability, and model monitoring. The BI team may be focused on KPIs, forecasts, trends, and decision-support insights. The data science team needs to know and to use that data which the BI team considers to be most important. The BI team needs to know and to use which trends, patterns, segments, and anomalies are being found in those data by the data science team. Sharing and integrating such important data streams has never been such a dream.

The semantic layer bridges the gaps between the data cloud, the decision-makers, and the data science modelers. The key results from the data science modelers can be written back to the semantic layer, to be sent directly to consumers of those results in the executive suite and on the BI team. Data scientists can focus on their tools; the BI users and executives can focus on their tools; and the data engineers can focus on their tools. The enterprise data science, analytics, and BI functions have never been so enterprisey. (Is “enterprisey” a word? I don’t know, but I’m sure you get my semantic meaning.)

That’s empowering. That’s data democratization. That’s insights democratization. That’s data fluency/literacy-building across the enterprise. That’s enterprise-wide agile curiosity, question-asking, hypothesizing, testing/experimenting, and continuous learning. That’s data insights for everyone.

Are you ready to learn more about how you can bring these advantages to your organization? Be sure to watch the AtScale webinar “How to Bridge Data Science and Business Intelligence”, where I join a panel in a multi-industry discussion on how a semantic layer can help organizations make smarter data-driven decisions at scale. There will be several speakers, including me. I will be speaking about “Model Monitoring in the Enterprise — Filling the Gaps”, specifically focused on “Filling the Communication Gaps Between BI and Data Science Teams With a Semantic Data Layer.”

Register to attend and view the webinar at https://bit.ly/3ySVIiu.


EX is the New CX

(This article is a continuation of my earlier article “When the Voice of the Customer Actually Talks.”)

I recently attended (virtually) CX Summit 2021, presented by Five9, which focused on “CX Reimagined.” At first this title for the event seemed a bit grandiose to me – Reimagined! After attending the event, I now think the title was perfect, and it could have gone even further. I saw how the “art of the possible” in CX (Customer Experience) and EX (Employee Experience) in the Contact Center is already being realized and is being taken to new realms of possibility through AI and Cloud empowerments.

The evolved (reimagined) Contact Center now comes with more options for digital channels to accommodate the customer, more voice data-powered services that serve both the customer and the contact center representative, and more seamless actions on both ends of the call line, even for complex inquiries. This is all enabled by the 3 A’s: AI, Automation, and voice Analytics. We have heard it before: “happy employee, happy customer!” That now looks like this: “EX is the new CX.” Boom!

In an information-packed talk, “Getting Past the Hype in Customer Service”, Gartner analyst Drew Kraus reviewed just how much hype there is in the customer service and support technologies market. It became clear to me that Five9 delivers on the needs, not the hype.

Another informative and data-packed presentation came from Five9 SVP Scott Kolman and COMMfusion analyst Blair Pleasant, who presented a deep dive into the Five9 survey “2021 Customer Service Index – Learn how customers have reimagined the customer service experience.” I won’t go too deep here (you should watch the whole session on-demand). Some interesting highlights include:

  1. Five9 surveyed 2048 consumers, with participants from 7 countries, representing ages 19 to the early 70s. They also completed a similar survey in 2020. Side-by-side comparisons of the survey results (by age, by year of survey, and by country) for the different survey questions were quite informative and potentially quite useful for any contact center operation. If that’s what you or your business does, then you should investigate these “Voice of the Customer” results.
  2. Across all demographics, only 25% of respondents felt that their contact center experience got worse (either “much worse” or “slightly worse”) from 2020 to 2021. We might have expected a different result with the pandemic raging. Everyone else (75%) felt that their experience got better or much better, or they had no opinion.
  3. Some very surprising results appeared (with significant differences between countries) when people were asked to rate the keys to “Good Service Experience”. Highly rated categories were “Rep gets right answer, even if it takes more time” (33%); “Rep can answer my question quickly” (26%); and “Don’t have to wait long to reach rep” (20%).
  4. Similarly, there were some significant differences by country when people were asked to rate the keys to “Bad Service Experience”. Top responses included: “Get passed from one rep to another” (34%); “Have to wait long to reach rep” (26%); and a tie for third place (at 13%) between “Cue/on hold system not helpful” and “Rep cannot answer my question quickly”. (Remember that despite these seriously bad experiences, only 25% of respondents generally saw a drop in customer service experience in the past year.)
  5. One of the more enlightening survey results appeared when respondents were asked, “How likely are you to do business with a company if you have a Poor Service Experience?” The USA responses were significantly different from the responses from the other 6 countries in the survey in one category: over 11% of USA respondents were “very likely” to continue doing business, versus 3-6% of non-USA respondents being “very likely”. However, in the “somewhat likely” category, all countries were in the range 10-16%, with the USA respondents close to the midpoint, near 14%. In my opinion (not expressed by the session presenters), a reason for these seemingly incompatible responses is that two sentiments are being conflated in this one question. On the one hand, you have the bad experience on “this” call. On the other hand, you have the perhaps much worse (time-consuming) future experience of switching providers and dealing with the corresponding onboarding (for whatever service this is about). I might be “somewhat likely” to switch providers after one bad call experience, but I would not be “very likely” to go through the pain of switching providers and all that entails.

There were many interesting and powerful sessions in addition to this one, which I focused on here because it presented lots of survey data, and I love data! Another great session was the presentation by former astronaut (now Professor) Michael Massimino – brilliant and inspiring, with numerous words of wisdom, leadership advice, and life’s lessons learned. Of course, I admit that I was drawn into his NASA space stories, including the Hubble Telescope repair mission that almost went wrong, because I worked with the Hubble Space Telescope project for 10 years and I worked an additional 10 years at NASA’s Goddard Space Flight Center where many of the telescope’s instruments were tested.

My big takeaway from the Five9 CX Summit is how cloud, AI, automation, and voice analytics are rapidly driving change in the positive direction for contact center representatives and for customers who call in. Maybe that’s why the customer experience didn’t change much from 2020 to 2021, because a lot of those technologies have already been deployed in the past couple of years, particularly for Five9’s clients.

Chatbots and conversational AI are just part of the story – there’s so much more. Five9’s new cloud-enabled, AI-powered, voice data-driven solutions and services described at the summit are definitely worth exploring and investigating for your contact center: IVA (Intelligent Virtual Agents), VoiceStream, Agent Assist, Studio7, Practical AI, WFO (Work Flow Optimization), Conversation Architect, and UC (unified communications) integration into the contact center VX (Voice Experience) workflow.

Learn more about CX Reimagined and the roles of AI, Automation, Cloud, Voice Analytics, and Omnichannel Customer Engagement in the modern contact center at CX Summit 2021, presented by Five9. (Even if you missed the live event, the sessions are recorded, so you can watch them on-demand at any time you wish.) See for yourself where the Reimagined becomes the Realized in CX. And learn why EX is the new CX.

Note: This article was sponsored. The opinions expressed here are my own and do not represent the opinions of any other person, company, or entity.

#Five9CXSummit #CXNation

When the Voice of the Customer Actually Talks

For many years, organizations (mostly consumer-facing) have placed the “voice of the customer” (VoC) high on their priority list of top sources for customer intelligence. The goals of such activities are to improve customer service, customer interactions, customer engagement, and customer experience (CX) through just-in-time customer assistance, personalization, and loyalty-building activities. In recent years, even government agencies have increased their attention on Citizen Experience (CX) and Voice of the Citizen (VoC), to inform and guide their citizen services.

CX has become increasingly data-informed and data-driven, with VoC data being one of the key data sources. Other data sources include purchase patterns, online reviews, online shopping behavior analytics, and call center analytics. As good as these data analytics have been, collecting data and then performing pattern-detection and pattern-recognition analytics can be taken so much further now with AI-enabled customer interactions. 

AI is great for pattern recognition, product and service recommendations, anomaly detection, next-best action and next-best decision recommendations, and providing an insights power-boost to all of those. AI can be considered as Accelerated, Actionable, Amplified, Assisted, Augmented, even Awesome Intelligence, both for the customer and for the call center staff.

Consequently, VoC and AI have wonderfully come together in conversational AI applications, including chatbots. Chatbots can be deployed to:

  • answer FAQs;
  • direct calls to the right service representative;
  • detect customer sentiment;
  • monitor call center employee performance;
  • recall and recognize patterns in the customer’s prior history with the business;
  • confirm customer identity;
  • identify up-sell and cross-sell opportunities;
  • streamline data entry by automatically capturing intricate details of a customer’s preferences, concerns, requests, critical information, and callback expectations; and
  • detect when it’s time to direct the call to a human agent (the right human agent).

In short, the VoC reaches its peak value when it captures the full depth and breadth of what the customer is talking about and what they are trying to communicate. AI-enabled chatbots are thus bringing great value and effectiveness in consumers’ call center experiences. 

From the call center representative’s perspective, AI-enabled chatbots are a tremendous efficiency and effectiveness boost as well. Many details of the initial customer interaction can be automatically captured, recorded, indexed, and made searchable even before the call is directed to the representative, increasing the likelihood that it is the right representative for that customer’s specific needs. Not only is the CX amplified, but so is the EX (Employee Experience). Surveys and reports have documented that the strong improvement in call center staff EX is a source of significant value to the entire organization.

One dimension of this EX amplification that should not be overlooked is when advanced case management is required from a human call center agent. In cases like that, the agent is engaged in their best (most satisfying) capacity as the expert and most knowledgeable source to help the customer, in sharp contrast to other calls where they are engaged in answering the standard FAQs, or in quoting customer account information from a database that a chatbot could easily have retrieved, or in asking the customer to repeat the same information that the customer gave to a previous agent. Everybody wins when all those latter activities are handled swiftly, accurately, and non-redundantly prior to the person-to-person engagement that can then provide the best human touch in the entire caller experience.

Chatbots employ a suite of data-driven technologies, including: machine learning (for pattern detection and recognition, sentiment and emotion detection), natural language processing (covering natural language understanding NLU and natural language generation NLG), voice assistants (for voice search and autonomous action-enablement), cloud computing (to activate actions, services, document creation and document processing), AI (for auto-transcribing conversations, creating real-time action lists, and adding information to appropriate fields automatically), and more.

When the Voice of the Customer talks, the modern AI-powered Call Center listens and responds. 

Learn more about the modern Call Center and CX Reimagined at CX Summit 2021, presented by Five9. The Summit’s 5 tracks and multiple sessions will focus on the transformation of the contact center through the evolution of digital channels, AI, Automation and Analytics. By applying the power of data and the cloud we can reimagine CX and realize results in a rapidly changing marketplace. At the Summit, you can connect and network with contact center professionals, ecosystem partners and peers, you can learn to optimize your Five9 implementation to superpower your contact center, you can hear customer stories and product updates, and you can learn how Five9 can help you deliver a whole new level of customer service. Register here for CX Summit 2021 and see for yourself where the Reimagined becomes the Realized in CX: https://five9cxsummit.com/insix

Learn more about the Summit, some highlights and key takeaways, in my follow-up article “EX is the New CX.”

Note: This article was sponsored. The opinions expressed here are my own and do not represent the opinions of any other person, company, or entity.

#Five9CXSummit #CXNation

Are You Content with Your Organization’s Content Strategy?

In this post, we will examine ways that your organization can separate useful content into separate categories that amplify your own staff’s performance. Before we start, I have a few questions for you.

What attributes of your organization’s strategies can you attribute to successful outcomes? How long do you deliberate before taking specific deliberate actions? Do you converse with your employees about decisions that might be the converse of what they would expect? Is a process modification that saves a minute in someone’s workday considered too minute for consideration? Do you present your employees with a present for their innovative ideas? Do you perfect your plans in anticipation of perfect outcomes? Or do you project foregone conclusions on a project before it is completed?

If you have good answers to these questions, that is awesome! I would not contest any of your answers since this is not a contest. In fact, this is actually something quite different. Before you combine all these questions in a heap and thresh them in a combine, and before you buffet me with a buffet of skeptical remarks, stick with me and let me explain. Do not close the door on me when I am so close to giving you an explanation.

What you have just experienced is a plethora of heteronyms. Heteronyms are words that are spelled identically but have different meanings when pronounced differently. If you include the title of this blog, you were just presented with 13 examples of heteronyms in the preceding paragraphs. Can you find them all?

Seriously now, what do these word games have to do with content strategy? I would say that they have a great deal to do with it. Specifically, in the modern era of massive data collections and exploding content repositories, we can no longer simply rely on keyword searches to be sufficient. In the case of a heteronym, a keyword search would return both uses of the word, even though their meanings are quite different. In “information retrieval” language, we would say that we have high RECALL, but low PRECISION. In other words, we can find most occurrences of the word (recall), but not all the results correspond to the meaning of our search (precision). That is no longer good enough when the volume is so high.
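
For readers who have not met those two terms before, here is a small sketch that computes precision and recall for a toy keyword search; the document IDs and relevance judgments are invented for illustration.

```python
# Precision and recall for a toy keyword search on a heteronym.
# Document IDs and relevance judgments are invented for illustration.
retrieved = {"doc1", "doc2", "doc3", "doc4", "doc5"}  # every page containing the keyword
relevant = {"doc2", "doc5"}                           # pages matching the intended meaning

true_positives = retrieved & relevant
precision = len(true_positives) / len(retrieved)  # did I find only the content I need?
recall = len(true_positives) / len(relevant)      # did I find all the content I need?

print(f"precision={precision:.2f}  recall={recall:.2f}")  # precision=0.40  recall=1.00 -> the haystack problem
```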

The key to success is to start enhancing and augmenting content management systems (CMS) with additional features: semantic content and context. This is accomplished through tags, annotations, and metadata (TAM). TAM management, like content management, begins with business strategy.

Strategic content management focuses on business outcomes, business process improvement, efficiency (precision – i.e., “did I find only the content that I need, without a lot of noise?”), and effectiveness (recall – i.e., “did I find all the content that I need?”). Just because people can request a needle in the haystack, it is not a good thing to deliver the whole haystack that contains that needle. Clearly, such a content delivery system is not good for business productivity. So, there must be a strategy regarding who, what, when, where, why, and how the organization’s content is to be indexed, stored, accessed, delivered, used, and documented. The content strategy should emulate a digital library strategy. Labeling, indexing, ease of discovery, and ease of access are essential if end-users are to find and benefit from the collection.

My favorite approach to TAM creation and to modern data management in general is AI and machine learning (ML). That is, use AI and machine learning techniques on digital content (databases, documents, images, videos, press releases, forms, web content, social network posts, etc.) to infer topics, trends, sentiment, context, content, named entity identification, numerical content extraction (including the units on those numbers), and negations. Do not forget the negations. A document that states “this form should not be used for XYZ” is exactly the opposite of a document that states “this form must be used for XYZ”. Similarly, a social media post that states “Yes. I believe that this product is good” is quite different from a post that states “Yeah, sure. I believe that this product is good. LOL.”
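
As a tiny, deliberately naive illustration of two of those enrichments (numerical content extraction with units, and negation detection), consider the sketch below; a production system would use trained NLP models rather than regular expressions and keyword lists.

```python
# Deliberately naive illustration of two TAM-style enrichments:
# numeric quantity extraction (with units) and negation detection.
# Production systems would use trained NLP models, not regexes and keyword lists.
import re

NEGATION_CUES = {"not", "no ", "never", "should not", "must not"}

def extract_quantities(text: str):
    """Pull out number + unit pairs such as '30 days' or '15 mg'."""
    return re.findall(r"(\d+(?:\.\d+)?)\s*([A-Za-z]+)", text)

def is_negated(text: str) -> bool:
    """Crude negation check: does any negation cue appear in the sentence?"""
    lowered = text.lower()
    return any(cue in lowered for cue in NEGATION_CUES)

doc_a = "This form should not be used for XYZ submissions after 30 days."
doc_b = "This form must be used for XYZ submissions after 30 days."

for doc in (doc_a, doc_b):
    print(is_negated(doc), extract_quantities(doc))
# True  [('30', 'days')]   <- opposite meanings, identical keywords
# False [('30', 'days')]
```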

Contextual TAM enhances a CMS with knowledge-driven search and retrieval, not just keyword-driven. Contextual TAM includes semantic TAM, taxonomic indexing, and even usage-based tags (digital breadcrumbs of the users of specific pieces of content, including the key words and phrases that people used to describe the content in their own reports). Adding these to your organization’s content makes the CMS semantically searchable and usable. That’s far more laser-focused (high-precision) than keyword search.

One type of content strategy implementation that is specific to data collections is the data catalog. Data catalogs are very useful and important. They become even more useful and valuable if they include granular search capabilities. For example, the end-user may only need the piece of the dataset that has the content that their task requires, versus being delivered the full dataset. Tagging and annotating those subcomponents and subsets (i.e., granules) of the data collection for fast search, access, and retrieval is also important for efficient orchestration and delivery of the data that fuels AI, automation, and machine learning operations.

One way to describe this is “smart content” for intelligent digital business operations. Smart content includes labeled (tagged, annotated) metadata (TAM). These labels include content, context, uses, sources, and characterizations (patterns, features) associated with the whole content and with individual content granules. Labels can be learned through machine learning, or applied by human experts, or proposed by non-experts when those labels represent cognitive human-discovered patterns and features in the data. Labels can be learned and applied in existing CMS, in massive streaming data, and in sensor data (collected in devices at the “edge”).
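
Here is one minimal way to picture such labeled content granules and a tag-driven lookup; the record fields, tags, and file names are illustrative assumptions, not a standard schema.

```python
# Minimal picture of "smart content": content granules carrying tags,
# annotations, and metadata (TAM), retrieved by label rather than by raw
# keywords. Fields, tags, and file names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ContentGranule:
    granule_id: str
    source: str
    text: str
    tags: set = field(default_factory=set)       # learned by ML or applied by experts
    context: dict = field(default_factory=dict)  # e.g., sentiment, topic, negation flag

catalog = [
    ContentGranule("g1", "press_release_2023_07.pdf", "...",
                   tags={"product-launch", "positive-sentiment"}),
    ContentGranule("g2", "support_forms/form_17.docx", "...",
                   tags={"policy", "negated-instruction"}),
]

def find_by_tag(tag: str):
    """Tag-driven retrieval instead of keyword-only search."""
    return [g for g in catalog if tag in g.tags]

print([g.granule_id for g in find_by_tag("negated-instruction")])  # ['g2']
```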

Some specific tools and techniques that can be applied to CMS to generate smart content include these:

  • Natural language understanding and natural language generation
  • Topic modeling (including topic drift and topic emergence detection)
  • Sentiment detection (including emotion detection)
  • AI-generated and ML-inferred content and context
  • Entity identification and extraction
  • Numerical quantity extraction
  • Automated structured (searchable) database generation from textual (unstructured) document collections (for example: Textual ETL).

Consequently, smart content thrives at the convergence of AI and content. Labels are curated and stored with the content, thus enabling curation, cataloguing (indexing), search, delivery, orchestration, and use of content and data in AI applications, including knowledge-driven decision-making and autonomous operations. Techniques that both enable (contribute to) and benefit from smart content are content discovery, machine learning, knowledge graphs, semantic linked data, semantic data integration, knowledge discovery, and knowledge management. Smart content thus meets the needs for digital business operations and autonomous (AI and intelligent automation) activities, which must devour streams of content and data – not just any content, but smart content – the right (semantically identified) content delivered at the right time in the right context.

The four tactical steps in a smart content strategy include:

  1. Characterize and contextualize the patterns, events, and entities in the content collection with semantic (contextual) tags, annotation, and metadata (TAM).
  2. Collect, curate, and catalog (i.e., index) each TAM component to make it searchable, accessible, and reusable.
  3. Deliver the right content at the right time in the right context to the decision agent.
  4. Decide and act on the delivered insights and knowledge.

Remember, do not be content with your current content management strategy. Instead, discover and deliver the perfect smart content that perfects your digital business outcomes. A smart content strategy can save end-users countless minutes in a typical workday, and that type of business process improvement certainly is not too minute for consideration.