
My top learning moments at Splunk .conf23

I recently attended the Splunk .conf23 conference in Las Vegas. Well, the conference was in Vegas, while I was far away in my home office watching the live conference keynote sessions online. Despite the thousands of miles (and kilometers) of separation, I could feel the excitement in the room as numerous announcements were made, individuals were honored, customer success stories were presented, and new solutions and product features were revealed. I will summarize many of those here (specifically my major learning moments), though this report cannot be a complete review of .conf23: I attended only the two main keynote sessions, and even those contained far more remarkable things to hear and learn than I could record in one succinct report.

When I reviewed highlights from last year’s Splunk .conf22 conference in my summary report at that time, I focused a lot on the Splunk Observability Cloud and its incredible suite of Observability and Monitoring products and services. That reflected my strong interest in observability, an interest that hasn’t diminished, and neither have Splunk’s developments and product releases in that space, as seen in observability’s prominent mention within many of Splunk’s announcements at this year’s .conf23 event. For a detailed report on the current state of observability this year, you can access and download “The State of Observability 2023” report from Splunk. Here are four specific metrics from the report, highlighting the potentially huge enterprise system benefits of implementing Splunk’s observability and monitoring products and services:

  1. Leaders who implement observability strategies are four times as likely to resolve unplanned downtime in just minutes, not hours or days.
  2. Leaders report one-third the number of outages per year, on average, compared to organizations that do not implement observability and monitoring.
  3. Leaders are 7.9x as likely to say that their ROI on observability tools far exceeded expectations.
  4. 89% of leaders are completely confident in their ability to meet their application availability and performance requirements, versus just 43% of others.

Here are my top learnings from .conf23: 

  • Splunk announced a new strategic partnership with Microsoft Azure, adding another major cloud provider to its existing cloud partnerships and bringing Splunk products and services into more enterprises through the Azure Marketplace. The partnership also extends hybrid cloud capabilities that will enhance organizations’ digital resilience, while enabling transformation, modernization, migration, and growth across enterprise digital systems with confidence, trust, and security. 
  • Digital resilience was a major common theme across all of the Splunk announcements this week. As I heard someone say in the keynote session, “You had me at resilience!” By providing real-time data insights into all aspects of business and IT operations, Splunk’s comprehensive visibility and observability offerings enhance digital resilience across the full enterprise. Organizations are able to monitor integrity, quality drift, performance trends, real-time demand, SLA (service level agreement) compliance metrics, and anomalous behaviors (in devices, applications, and networks) to provide timely alerting, early warnings, and other confidence measures. From these data streams, real-time actionable insights can feed decision-making and risk mitigations at the moment of need. Such prescriptive capabilities can be more proactive, automated, and optimized, making digital resilience an objective fact for businesses, not just a business objective. I call that “digital resilience for the win!”
  • Several Splunk customer success stories were presented, with interesting details of their enterprise systems, the “back stories” that led them to Splunk, the transformations that have occurred since Splunk integration, and the metrics to back up the success stories. Customers presenting at .conf23 included FedEx, Carnival Corporation & plc, Inter IKEA, and VMware. Here are a few of the customer performance metrics presented (measuring performance changes following the Splunk integration into the customers’ enterprise systems): 3X Faster Response Time, 90% Faster Mean Time to Remediation, and 60X Faster Insights.
  • Splunk has brought greater integration and ease of use to its offerings through a Unified Security and Observability Platform. This unified operations center (Splunk Mission Control) amplifies the efficiency (time to problem resolution) and effectiveness (number of time-critical problems resolved) of ITOps and DevOps teams, requiring fewer manual steps to correlate data streams from multiple systems in order to determine the root cause of an incident. Further enhancing the user experience, the unified platform provides end-to-end enterprise systems visibility and federated search across those systems.
  • Splunk Mission Control (just mentioned above) deserves its own entry, and Splunk describes it best: “Splunk Mission Control brings together Splunk’s industry-leading security technologies that help customers take control of their detection, investigation and response processes. Splunk’s security offerings include security analytics (Splunk Enterprise Security), automation and orchestration (Splunk SOAR), and threat intelligence capabilities. In addition, Splunk Mission Control offers simplified security workflows with processes codified into response templates. With Splunk Mission Control, security teams can focus on mission-critical objectives, and adopt more proactive, nimble security operations.”
  • Model-Assisted Threat Hunts, also known as Splunk M-ATH, is Splunk’s brand name for machine learning-assisted threat hunting and mitigation. M-ATH is part of the PEAK (Prepare, Execute, and Act with Knowledge) Framework, which consists of three types of hunts: (1) Hypothesis-driven (i.e., testing for hypothesized threats, behaviors, and activities), (2) Baseline (i.e., searching for deviations from normal behaviors through EDA: Exploratory Data Analysis), and (3) M-ATH (i.e., automation of the first two types of hunts, using AI and machine learning). M-ATH includes ML-assisted adaptive thresholding and outlier handling for improved alerts (i.e., faster alerting with fewer false positives and false negatives); a small illustrative sketch of adaptive thresholding appears right after this list.
  • “Don’t be a SOAR loser!” Okay, that’s what one of the Splunk speakers said at .conf23. By that, he was referring to being a winner with Splunk SOAR: Security Orchestration, Automation and Response. SOAR orchestrates, prioritizes, and automates security teams’ (SecOps) workflows and tasks, enabling more accurate, more complete, smarter, and faster response to security incidents. As Splunk says, “Automate so you can innovate.” Isn’t that always a business truth? If you can free your analyst teams to think outside the box, hypothesize, innovate, and test new methods and solutions, then that is the sure path to being a SAFE (Security Analytics For the Enterprise) winner: soar with SOAR! While SIEM (Security Information and Event Management) aims to manage the data flows, logging, audits, alerted events, and incident responses, SOAR automates these security activities (using machine learning and AI), monitors the data and events for anomalous behaviors, classifies (prioritizes) the events, and then orchestrates optimized security operations and incident responses (using playbooks).
  • Saving my best two .conf23 learning moments for last, first up is Splunk Edge Hub. This is a physical device, in the IoT (Internet of Things) family of sensors, that collects and streams data from the edge (i.e., from edge devices, cameras, streaming data sources, monitoring systems, and sensors of all types) into Splunk systems that go to work on those data: security operations, anomaly detection, event classification, trend detection, drift detection, behavior detection, and any other edge application that requires monitoring and observability, with an injection of machine learning and AI for intelligent data understanding, classification, prioritization, optimization, and automation. Since business thrives at the edge (through insights discovery and actionable analytics at the point—time and place—of data collection), an edge hub is just what a business needs to mitigate risk, ensure visibility, escalate incidents for review, optimize the operational response, and monitor the associated activities (causes and effects). 
  • Splunk AI Assistant. Boom! This is the brilliant and innovative introduction of an AI assistant into Splunk products, services, and user workflows. This includes the latest and best of AI — generative AI and natural language interfaces integrated within the Splunk platform. This product release most definitely enables and “catalyzes digital resilience in cybersecurity and observability.” This is not just a product release. It is a “way of life” and “a way of doing business” with Splunk products and services. AI is not just a tacked-on feature, but it is a fundamental characteristic and property of those products’ features. Splunk AI increases productivity, efficiency, effectiveness, accuracy, completeness, reliability, and (yes!) resilience across all enterprise SecOps, ITOps, and AIOps functions, tasks, and workflows that are powered by Splunk. Generative AI enables the Splunk SecOps and ITOps tasks, workflows, processes, insights, alerts, and recommended actions to be domain-specific and customer-specific. It automatically detects anomalies and focuses attention where it’s needed most, for that business in that domain, while providing full control and transparency on which data and how data are used to train the AI, and how much control is assigned to the AI (by maintaining “human in the loop” functionality). With regard to the natural language features, Splunk AI Assistant leverages generative AI to provide an interactive chat experience and helps users create SPL (Splunk Processing Language) queries using natural language. This feature not only improves time-to-value, but it “helps make SPL more accessible, further democratizing an organization’s access to, and insights from, its data” – and that includes automated recommendations to the user for “next best action”, which is a great learning prompt for new Splunk users and SecOps beginners.
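As promised in the M-ATH bullet above, here is a minimal, illustrative Python sketch of the general idea behind ML-assisted adaptive thresholding. This is my own toy example, not Splunk code and not how Splunk implements it internally: it flags outliers against a rolling, drift-tolerant baseline, and the metric, window size, and cutoff are assumptions chosen purely for illustration.

```python
import pandas as pd
import numpy as np

def adaptive_threshold_alerts(series: pd.Series, window: int = 60, k: float = 3.5) -> pd.DataFrame:
    """Flag points that deviate from a rolling robust baseline.

    Uses a rolling median and MAD (median absolute deviation) so the
    threshold adapts to slow drift in the metric instead of relying on
    a single static cutoff.
    """
    baseline = series.rolling(window, min_periods=window // 2).median()
    mad = (series - baseline).abs().rolling(window, min_periods=window // 2).median()
    # 1.4826 scales MAD to be comparable to a standard deviation for normal data
    robust_z = (series - baseline) / (1.4826 * mad.replace(0, np.nan))
    return pd.DataFrame({
        "value": series,
        "baseline": baseline,
        "robust_z": robust_z,
        "alert": robust_z.abs() > k,
    })

# Example: hypothetical latency samples (in ms) with one injected spike
rng = np.random.default_rng(42)
latency = pd.Series(rng.normal(120, 10, 500))
latency.iloc[400] = 400   # the anomaly we expect to catch
alerts = adaptive_threshold_alerts(latency)
print(alerts[alerts["alert"]].head())
```

The benefit over a fixed threshold is that the baseline and deviation estimates move with the data, so slow drift does not trigger false alarms while genuine spikes still do.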


For a peek into my peak real-time experiences at .conf23, see my #splunkconf23 social thread on Twitter at https://bit.ly/3DjI5NU. Actually, go there and explore, because there is so much more to see there than I could cover in this one report.

Closing thoughts – AI (particularly generative AI) has been the hottest tech topic of the year, and Splunk .conf23 did not disappoint in its coverage of this topic. The agendas for some events are filled with generic descriptions that sing the praises of generative AI. This Splunk .conf23 event provided something far more beneficial and practical: demonstrably valuable business applications of generative AI embedded in Splunk products, which deliver a convincing Splunk-specific productivity enhancer for new and existing users of Splunk products. When the tech hype train is moving as fast as it has been this year, it is hard for a business to quickly innovate, incorporate, and deliver substantially new features that use the new tech within their legacy products and services, but Splunk has done so, earning top marks for those achievements.

Disclaimer: I was compensated as an independent freelance media influencer for my participation at the conference and for this article. The opinions expressed here are entirely my own and do not represent those of Splunk or of any Splunk partners. Any misrepresentations of the products and services mentioned in my statements are entirely my own responsibility. Nothing here should be construed as an offer to sell or as financial advice of any kind. My comments are entirely of a technical nature, focused on the technical capabilities of the items mentioned in the article.

Low-Latency Data Delivery and Analytics Product Delivery for Business Innovation and Enterprise AI Readiness

This article has now been divided into two parts:

Read other articles in this series on the importance of low-latency enterprise data infrastructure for business analytics:

Other related articles on the importance of data infrastructure for enterprise AI initiatives:

The Data Space-Time Continuum for Analytics Innovation and Business Growth

We discussed in another article the key role of enterprise data infrastructure in enabling a culture of data democratization, data analytics at the speed of business questions, analytics innovation, and business value creation from those innovative data analytics solutions. Now, we drill down into some of the special characteristics of data and enterprise data infrastructure that ignite analytics innovation.

First, a little history – years ago, at the dawn of the big data age, there was frequent talk of the three V’s of big data (data’s three biggest challenges): volume, velocity, and variety. Though those discussions are now considered “ancient history” in the current AI-dominated era, the challenges have not vanished. In fact, they have grown in importance and impact.

While massive data volumes appear less frequently now in strategic discussions and are being tamed with excellent data infrastructure solutions from Pure Storage, the data velocity and data variety challenges remain in their own unique “sweet spot” of business data strategy conversations. We addressed the data velocity challenges and solutions in our previous article: “Solving the Data Daze – Analytics at the Speed of Business Questions”. We will now take a look at the data variety challenge, and then we will return to modern enterprise data infrastructure solutions for handling all big data challenges.

Okay, data variety—what is there about data variety that makes it such a big analytics challenge? This challenge often manifests itself when business executives ask a question like this: “what value and advantages will all that diversity in data sources, venues, platforms, modalities, and dimensions actually deliver for us in order to outweigh the immense challenges that high data variety brings to our enterprise data team?”

Because nearly all organizations collect many types of data from many different sources for many business use cases, applications, apps, and development activities, nearly every organization is facing this dilemma.

[continue reading the full article here]

Solving the Data Daze – Analytics at the Speed of Business Questions

Data is more than just another digital asset of the modern enterprise. It is an essential asset. And data is now a fundamental feature of any successful organization. Beyond the early days of data collection, where data was acquired primarily to measure what had happened (descriptive) or why something happened (diagnostic), data collection now drives predictive models (forecasting the future) and prescriptive models (optimizing for “a better future”). Business leaders need more than backward-looking reports, though those are still required for some stakeholders and regulators. Leaders now require forward-looking insights for competitive market advantage and advancement.

So, what happens when the data flows are not quarterly, or monthly, or even daily, but streaming in real-time? The business challenges then become manifold: talent and technologies now must be harnessed, choreographed, and synchronized to keep up with the data flows that carry and encode essential insights flowing through business processes at light speed. Insights discovery (powered by analytics, data science, and machine learning) drives next-best decisions, next-best actions, and business process automation.

In the early days of the current data analytics revolution, one would often hear business owners say that they need their data to move at the speed of business. Well, it soon became clear that the real problem was the reverse: how can we have our business move at the speed of our data? Fortunately, countless innovative products and services in the data analytics world have helped organizations in that regard, through an explosion in innovation around data analytics, data science, data storytelling, data-driven decision support, talent development, automation, and AI (including the technologies associated with machine learning, deep learning, generative AI, and ChatGPT).

[continue reading the full article here]

My top learning and pondering moments at Splunk .conf22

I recently attended the Splunk .conf22 conference. While the event was live in-person in Las Vegas, I attended virtually from my home office. Consequently, I missed the incredible in-person experience of the brilliant speakers on the main stage, the technodazzle of hundreds of exhibitors’ offerings in the exhibit arena, and the smooth hip hop sounds from the special guest entertainer — guess who?

What I missed in-person was more than compensated for by the incredible online presentations by Splunk leaders, developers, and customers. If you have ever attended a major expo at one of the major Vegas hotels, you know that there is a lot of walking between different sessions — literally, miles of walking per day. That’s good for you, but it often means that you don’t attend all of the sessions that you would like because of the requisite rushing from venue to venue. None of that was necessary on the Splunk .conf22 virtual conference platform. I was able to see a lot, learn a lot, be impressed a lot, and ponder a lot about all of the wonderful features, functionalities, and future plans for the Splunk platform.

One of the first major attractions of this event for me was the primary descriptor of the Splunk Platform — it is appropriately called the Splunk Observability Cloud, which includes an impressive suite of Observability and Monitoring products and services. I have written and spoken frequently and passionately about Observability in the past couple of years. For example, I wrote this in 2021:

“Observability emerged as one of the hottest and (for me) most exciting developments of the year. Do not confuse observability with monitoring (specifically, with IT monitoring). The key difference is this: monitoring is what you do, and observability is why you do it. Observability is a business strategy: what you monitor, why you monitor it, what you intend to learn from it, how it will be used, and how it will contribute to business objectives and mission success. But the power, value, and imperative of observability does not stop there. Observability meets AI – it is part of the complete AIOps package: ‘keeping an eye on the AI.’ Observability delivers actionable insights, context-enriched data sets, early warning alert generation, root cause visibility, active performance monitoring, predictive and prescriptive incident management, real-time operational deviation detection (6-Sigma never had it so good!), tight coupling of cyber-physical systems, digital twinning of almost anything in the enterprise, and more. And the goodness doesn’t stop there.”

Continue reading my thoughts on Observability at http://rocketdatascience.org/?p=1589

The dominant references everywhere to Observability were just the start of the awesome brain food offered at Splunk’s .conf22 event. Here is a list of my top moments, learnings, and musings from this year’s Splunk .conf:

  1. Observability for Unified Security with AI (Artificial Intelligence) and Machine Learning on the Splunk platform empowers enterprises to operationalize data for use-case-specific functionality across shared datasets. (Reference)
  2. The latest updates to the Splunk platform address the complexities of multi-cloud and hybrid environments, enabling cybersecurity and network big data functions (e.g., log analytics and anomaly detection) across distributed data sources and diverse enterprise IT infrastructure resources. (Reference)
  3. Splunk Enterprise 9.0 is here, now! Explore and test-drive it (with a free trial) here.
  4. The new Splunk Enterprise 9.0 release enables DevSecOps users to gain more insights from Observability data with Federated Search, with the ability to correlate ops with security alerts, and with Edge Management, all in one platform. (Reference)
  5. Security information and event management (SIEM) on the Splunk platform is enhanced with end-to-end visibility and platform extensibility, with machine learning and automation (AIOps), with risk-based alerting, and with Federated Search (i.e., Observability on-demand). (Reference)
  6. Customer success story: As a customer-obsessed bank with ultra-rapid growth, Nubank turned to Splunk to optimize data flows, analytics applications, customer support functions, and insights-obsessed IT monitoring. (Reference)
  7. The key characteristics of the Splunk Observability Cloud are Resilience, Security, Scalability, and EXTENSIBILITY. The latter specifically refers to the ease with which developers can extend Splunk’s capabilities to other apps, applying their AIOps and DevSecOps best practices and principles! Developers can start here.
  8. The Splunk Observability Cloud has many functions for data-intensive IT, Security, and Network operations, including Anomaly Detection Service, Federated Search, Synthetic Monitoring, Incident Intelligence, and much more. Synthetic monitoring is essentially digital twinning of your network and IT environment, providing insights through simulated risks, attacks, and anomalies via predictive and prescriptive modeling. (Reference)
  9. Splunk Observability Cloud’s Federated Search capability activates search and analytics regardless of where your data lives — on-site, in the cloud, or from a third party. (Reference)
  10. The new release of the Splunk Data Manager provides a simple, modern, automated experience of data ingest for Splunk Cloud admins, which reduces the time it takes to configure data collection (from hours/days to minutes). (Reference)
  11. Splunk works on data, data, data, but the focus is always on customer, customer, customer — because delivering best outcomes for customers is job #1. Explore Splunk’s amazing Partner ecosystem (Partnerverse) and the impressive catalog of partners’ solutions here.
  12. Splunk .conf22 Invites Organizations to Unlock Innovation With Data.

In summary, here is my list of key words and topics that illustrate the diverse capabilities and value-packed features of the Splunk Observability Cloud Platform that I learned about at the .conf22 event:

– Anomaly Detection Assistant
– Risk-based Alerting (powered by AI and Machine Learning scoring algorithms)
– Federated Search (Observability on-demand)
– End-to-End Visibility
– Platform Extensibility
– Massive(!) Scalability of the Splunk Observability Cloud (to billions of transactions per day)
– Insights-obsessed Monitoring (“We don’t need more information. We need more insights.”)
– APIs in Action (to Turn Data into Doing™)
– Splunk Incident Intelligence
– Synthetic Monitoring (Digital Twin of Network/IT infrastructure)
– Splunk Data Manager
– The Splunk Partner Universe (Partnerverse)

My closing thought — Cybersecurity is basically Data Analytics: detection, prediction, prescription, and optimizing for unpredictability. This is what Splunk lives for!

Follow me on LinkedIn here and on Twitter at @KirkDBorne.

Disclaimer: I was compensated as an independent freelance media influencer for my participation at the conference and for this article. The opinions expressed here are entirely my own and do not represent those of Splunk or of any Splunk partners. Any misrepresentations of the products and services mentioned in my statements are entirely my own responsibility. Nothing here should be construed as an offer to sell or as financial advice of any kind. My comments are entirely of a technical nature, focused on the technical capabilities of the items mentioned in the article.

EX is the New CX

(This article is a continuation of my earlier article “When the Voice of the Customer Actually Talks.”)

I recently attended (virtually) CX Summit 2021, presented by Five9, which focused on “CX Reimagined.” At first this title for the event seemed a bit grandiose to me – Reimagined! After attending the event, I now think the title was perfect, and it could have gone even further. I saw how the “art of the possible” in CX (Customer Experience) and EX (Employee Experience) in the Contact Center is already being realized and is being taken to new realms of possibility through AI and Cloud empowerments.

The evolved (reimagined) Contact Center now comes with more options for digital channels to accommodate the customer, more voice data-powered services that serve both the customer and the contact center representative, and more seamless actions on both ends of the call line, even for complex inquiries. This is all enabled by the 3 A’s: AI, Automation, and voice Analytics. We have heard it before: “happy employee, happy customer!” That now looks like this: “EX is the new CX.” Boom!

In an information-packed talk from Gartner Analyst Drew Kraus on “Getting Past the Hype in Customer Service”, where he reviewed just how much hype there is in the customer service and support technologies market, it became clear to me that Five9 delivers on the needs, not the hype.

Another informative and data-packed presentation came from Five9 SVP Scott Kolman and COMMfusion analyst Blair Pleasant, who did a deep dive into the Five9 survey “2021 Customer Service Index – Learn how customers have reimagined the customer service experience.” I won’t go too deep here (you should watch the whole session on-demand). Some interesting highlights include:

  1. Five9 surveyed 2048 consumers, with participants from 7 countries, representing ages 19 to the early 70s. They also completed a similar survey in 2020. Side-by-side comparisons of the survey results (by age, by year of survey, and by country) for the different survey questions were quite informative and potentially quite useful for any contact center operation. If that’s what you or your business does, then you should investigate these “Voice of the Customer” results.
  2. Across all demographics, only 25% of respondents felt that their contact center experience got worse (either “much worse” or “slightly worse”) from 2020 to 2021. We might have expected a different result with the pandemic raging. The remaining 75% said their experience got better or much better, or expressed no opinion.
  3. Some very surprising results appeared (with significant differences between countries) when people were asked to rate the keys to “Good Service Experience”. Highly rated categories were “Rep gets right answer, even if it takes more time” (33%); “Rep can answer my question quickly” (26%); and “Don’t have to wait long to reach rep” (20%).
  4. Similarly, there were some significant differences by country when people were asked to rate the keys to “Bad Service Experience”. Top responses included: “Get passed from one rep to another” (34%); “Have to wait long to reach rep” (26%); and a tie for third place (at 13%) for “Queue/on hold system not helpful” and “Rep cannot answer my question quickly”. (Remember that, despite these seriously bad experiences, only 25% of respondents generally saw a drop in customer service experience in the past year.)
  5. One of the more enlightening survey results appeared when asked, “How likely are you to do business with a company if you have a Poor Service Experience?” The USA responses were significantly different than responses from the other 6 countries in the survey in one category: over 11% of USA respondents were “very likely” to continue doing business, versus 3-6% of non-USA respondents being “very likely”. However, in the “somewhat likely” category, all countries were in the range 10-16%, with the USA respondents close to the midpoint, near 14%. In my opinion (not expressed by the session presenters), a reason for these seemingly incompatible responses is that there are two sentiments being conflated in this one question. On the one hand, you have the bad experience on “this” call. On the other hand, you have perhaps the much worse (time-consuming) future experience of switching providers and dealing with the corresponding onboarding (for whatever service this is about). I might be “somewhat likely” to switch providers after one bad call experience, but I would not be “very likely” to go through the pain of switching providers and all that entails.

There were many interesting and powerful sessions in addition to this one, which I focused on here because it presented lots of survey data, and I love data! Another great session was the presentation by former astronaut (now Professor) Michael Massimino – brilliant and inspiring, with numerous words of wisdom, leadership advice, and life’s lessons learned. Of course, I admit that I was drawn into his NASA space stories, including the Hubble Telescope repair mission that almost went wrong, because I worked with the Hubble Space Telescope project for 10 years and I worked an additional 10 years at NASA’s Goddard Space Flight Center where many of the telescope’s instruments were tested.

My big takeaway from the Five9 CX Summit is how cloud, AI, automation, and voice analytics are rapidly driving change in the positive direction for contact center representatives and for customers who call in. Maybe that’s why the customer experience didn’t change much from 2020 to 2021, because a lot of those technologies have already been deployed in the past couple of years, particularly for Five9’s clients.

Chatbots and conversational AI are just part of the story – there’s so much more. Five9’s new cloud-enabled, AI-powered, voice data-driven solutions and services described at the summit are definitely worth exploring and investigating for your contact center: IVA (Intelligent Virtual Agents), VoiceStream, Agent Assist, Studio7, Practical AI, WFO (Workforce Optimization), Conversation Architect, and UC (unified communications) integration into the contact center VX (Voice Experience) workflow.

Learn more about CX Reimagined and the roles of AI, Automation, Cloud, Voice Analytics, and Omnichannel Customer Engagement in the modern contact center at CX Summit 2021, presented by Five9. (Even if you missed the live event, the sessions are recorded, so you can watch them on-demand at any time you wish.) See for yourself where the Reimagined becomes the Realized in CX. And learn why EX is the new CX.

Note: This article was sponsored. The opinions expressed here are my own and do not represent the opinions of any other person, company, or entity.

#Five9CXSummit #CXNation

When the Voice of the Customer Actually Talks

For many years, organizations (mostly consumer-facing) have placed the “voice of the customer” (VoC) high on their priority list of top sources for customer intelligence. The goals of such activities are to improve customer service, customer interactions, customer engagement, and customer experience (CX) through just-in-time customer assistance, personalization, and loyalty-building activities. In recent years, even government agencies have increased their attention on Citizen Experience (CX) and Voice of the Citizen (VoC), to inform and guide their citizen services.

CX has become increasingly data-informed and data-driven, with VoC data being one of the key data sources. Other data sources include purchase patterns, online reviews, online shopping behavior analytics, and call center analytics. As good as these data analytics have been, collecting data and then performing pattern-detection and pattern-recognition analytics can be taken so much further now with AI-enabled customer interactions. 

AI is great for pattern recognition, product and service recommendations, anomaly detection, next-best action and next-best decision recommendations, and providing an insights power-boost to all of those. AI can be considered as Accelerated, Actionable, Amplified, Assisted, Augmented, even Awesome Intelligence, both for the customer and for the call center staff.

Consequently, VoC and AI have wonderfully come together in conversational AI applications, including chatbots. Chatbots can be deployed to answer FAQs, to direct calls to the right service representative, to detect customer sentiment, to monitor call center employee performance, to recall and recognize patterns in the customer’s prior history with the business, to confirm customer identity, to identify up-sell and cross-sell opportunities, to streamline data entry by automatically capturing intricate details of a customer’s preferences, concerns, requests, critical information, and callback expectations, and to detect when it’s time to direct the call to a human agent (the right human agent).

In short, the VoC reaches its peak value when it captures the full depth and breadth of what the customer is talking about and what they are trying to communicate. AI-enabled chatbots are thus bringing great value and effectiveness in consumers’ call center experiences. 

From the call center representative’s perspective, AI-enabled chatbots are a tremendous efficiency and effectiveness boost as well. Many details of the initial customer interaction can be automatically captured, recorded, indexed, and made searchable even before the call is directed to the representative, increasing the likelihood that it is the right representative for that customer’s specific needs. Not only is the CX amplified, but so is the EX (Employee Experience). Surveys and reports have documented that the strong improvement in call center staff EX is a source of significant value to the entire organization. 

One dimension of this EX amplification that should not be overlooked is when advanced case management is required from a human call center agent. In cases like that, the agent is engaged in their best (most satisfying) capacity as the expert and most knowledgeable source to help the customer, in sharp contrast to other calls where they are engaged in answering the standard FAQs, or in quoting customer account information from a database that a chatbot could easily have retrieved, or in asking the customer to repeat the same information that the customer gave to a previous agent. Everybody wins when all those latter activities are handled swiftly, accurately, and non-redundantly prior to the person-to-person engagement that can then provide the best human touch in the entire caller experience.

Chatbots employ a suite of data-driven technologies, including: machine learning (for pattern detection and recognition, sentiment and emotion detection), natural language processing (covering natural language understanding NLU and natural language generation NLG), voice assistants (for voice search and autonomous action-enablement), cloud computing (to activate actions, services, document creation and document processing), AI (for auto-transcribing conversations, creating real-time action lists, and adding information to appropriate fields automatically), and more.

When the Voice of the Customer talks, the modern AI-powered Call Center listens and responds. 

Learn more about the modern Call Center and CX Reimagined at CX Summit 2021, presented by Five9. The Summit’s 5 tracks and multiple sessions will focus on the transformation of the contact center through the evolution of digital channels, AI, Automation and Analytics. By applying the power of data and the cloud we can reimagine CX and realize results in a rapidly changing marketplace. At the Summit, you can connect and network with contact center professionals, ecosystem partners and peers, you can learn to optimize your Five9 implementation to superpower your contact center, you can hear customer stories and product updates, and you can learn how Five9 can help you deliver a whole new level of customer service. Register here for CX Summit 2021 and see for yourself where the Reimagined becomes the Realized in CX: https://five9cxsummit.com/insix

Learn more about the Summit, some highlights and key takeaways, in my follow-up article “EX is the New CX.”

Note: This article was sponsored. The opinions expressed here are my own and do not represent the opinions of any other person, company, or entity.

#Five9CXSummit #CXNation

Are You Content with Your Organization’s Content Strategy?

In this post, we will examine ways that your organization can separate useful content into separate categories that amplify your own staff’s performance. Before we start, I have a few questions for you.

What attributes of your organization’s strategies can you attribute to successful outcomes? How long do you deliberate before taking specific deliberate actions? Do you converse with your employees about decisions that might be the converse of what they would expect? Is a process modification that saves a minute in someone’s workday considered too minute for consideration? Do you present your employees with a present for their innovative ideas? Do you perfect your plans in anticipation of perfect outcomes? Or do you project foregone conclusions on a project before it is completed?

If you have good answers to these questions, that is awesome! I would not contest any of your answers since this is not a contest. In fact, this is actually something quite different. Before you combine all these questions in a heap and thresh them in a combine, and before you buffet me with a buffet of skeptical remarks, stick with me and let me explain. Do not close the door on me when I am so close to giving you an explanation.

What you have just experienced is a plethora of heteronyms. Heteronyms are words that are spelled identically but have different meanings when pronounced differently. If you include the title of this blog, you were just presented with 13 examples of heteronyms in the preceding paragraphs. Can you find them all?

Seriously now, what do these word games have to do with content strategy? I would say that they have a great deal to do with it. Specifically, in the modern era of massive data collections and exploding content repositories, we can no longer simply rely on keyword searches to be sufficient. In the case of a heteronym, a keyword search would return both uses of the word, even though their meanings are quite different. In “information retrieval” language, we would say that we have high RECALL, but low PRECISION. In other words, we can find most occurrences of the word (recall), but not all the results correspond to the meaning of our search (precision). That is no longer good enough when the volume is so high.
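To put some toy numbers on that precision-versus-recall trade-off, here is a minimal Python sketch. The document counts are invented purely for illustration, using the heteronym “bass” as the search term (the fish versus the instrument):

```python
def precision_recall(retrieved: set, relevant: set) -> tuple:
    """Compute precision and recall for one keyword search."""
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical search for "bass": 100 documents returned, but only 40 of the
# 50 documents about the fish (not the instrument) are among them.
retrieved = set(range(100))        # doc ids returned by the keyword search
relevant = set(range(60, 110))     # doc ids that match the intended meaning
p, r = precision_recall(retrieved, relevant)
print(f"precision={p:.2f}, recall={r:.2f}")   # precision=0.40, recall=0.80
```

High recall with low precision means the searcher still has to wade through the noise by hand, which is exactly the productivity drain that semantic tags, annotations, and metadata are meant to remove.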

The key to success is to start enhancing and augmenting content management systems (CMS) with additional features: semantic content and context. This is accomplished through tags, annotations, and metadata (TAM). TAM management, like content management, begins with business strategy.

Strategic content management focuses on business outcomes, business process improvement, efficiency (precision – i.e., “did I find only the content that I need, without a lot of noise?”), and effectiveness (recall – i.e., “did I find all the content that I need?”). Just because people can request a needle in the haystack does not make it a good thing to deliver the whole haystack that contains that needle. Clearly, such a content delivery system is not good for business productivity. So, there must be a strategy regarding who, what, when, where, why, and how the organization’s content is to be indexed, stored, accessed, delivered, used, and documented. The content strategy should emulate a digital library strategy. Labeling, indexing, ease of discovery, and ease of access are essential if end-users are to find and benefit from the collection.

My favorite approach to TAM creation and to modern data management in general is AI and machine learning (ML). That is, use AI and machine learning techniques on digital content (databases, documents, images, videos, press releases, forms, web content, social network posts, etc.) to infer topics, trends, sentiment, context, named entities, numerical content (including the units on those numbers), and negations. Do not forget the negations. A document that states “this form should not be used for XYZ” is exactly the opposite of a document that states “this form must be used for XYZ”. Similarly, a social media post that states “Yes. I believe that this product is good” is quite different from a post that states “Yeah, sure. I believe that this product is good. LOL.”
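As one small illustration of this approach (and only an illustration: it uses the open-source spaCy library as a stand-in for whatever NLP stack your CMS actually employs, and a deliberately crude negation check), the sketch below extracts named entities, numbers, and a negation flag from a piece of content:

```python
import spacy

# Requires: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def extract_tam(text: str) -> dict:
    """Return simple tags/annotations/metadata (TAM) for one piece of content."""
    doc = nlp(text)
    return {
        "entities": [(ent.text, ent.label_) for ent in doc.ents],
        "negated": any(tok.dep_ == "neg" for tok in doc),  # crude negation flag
        "numbers": [tok.text for tok in doc if tok.like_num],
    }

print(extract_tam("This form should not be used for XYZ submissions after 2023."))
print(extract_tam("This form must be used for XYZ submissions."))
```

Even this toy pipeline distinguishes the “should not be used” document from the “must be used” document, which is precisely the distinction a plain keyword search would miss.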

Contextual TAM enhances a CMS with knowledge-driven search and retrieval, not just keyword-driven. Contextual TAM includes semantic TAM, taxonomic indexing, and even usage-based tags (digital breadcrumbs of the users of specific pieces of content, including the key words and phrases that people used to describe the content in their own reports). Adding these to your organization’s content makes the CMS semantically searchable and usable. That’s far more laser-focused (high-precision) than keyword search.

One type of content strategy implementation that is specific to data collections is the data catalog. Data catalogs are very useful and important. They become even more useful and valuable if they include granular search capabilities. For example, the end-user may only need the piece of the dataset that has the content that their task requires, rather than being delivered the full dataset. Tagging and annotating those subcomponents and subsets (i.e., granules) of the data collection for fast search, access, and retrieval is also important for efficient orchestration and delivery of the data that fuels AI, automation, and machine learning operations.

One way to describe this is “smart content” for intelligent digital business operations. Smart content includes labeled (tagged, annotated) metadata (TAM). These labels include content, context, uses, sources, and characterizations (patterns, features) associated with the whole content and with individual content granules. Labels can be learned through machine learning, or applied by human experts, or proposed by non-experts when those labels represent cognitive human-discovered patterns and features in the data. Labels can be learned and applied in existing CMS, in massive streaming data, and in sensor data (collected in devices at the “edge”).

Some specific tools and techniques that can be applied to a CMS to generate smart content include these (a small topic-modeling sketch follows the list):

  • Natural language understanding and natural language generation
  • Topic modeling (including topic drift and topic emergence detection)
  • Sentiment detection (including emotion detection)
  • AI-generated and ML-inferred content and context
  • Entity identification and extraction
  • Numerical quantity extraction
  • Automated structured (searchable) database generation from textual (unstructured) document collections (for example: Textual ETL).
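Here is the promised topic-modeling sketch, using scikit-learn on a four-document toy corpus. The documents, the choice of two topics, and the library itself are illustrative assumptions, not a recommendation of a specific stack; real collections need far more data, tuning, and evaluation.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "network outage alert latency spike in the data center",
    "customer invoice payment terms and billing dispute",
    "latency monitoring alert thresholds for the network",
    "billing statement and customer payment reminder",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

# Two topics for this toy corpus; the topic-word weights become candidate tags
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_terms = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"topic {i}: {top_terms}")
```

The inferred topic terms are exactly the kind of machine-generated tags that can be attached to each document as part of its TAM.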

Consequently, smart content thrives at the convergence of AI and content. Labels are curated and stored with the content, thus enabling curation, cataloguing (indexing), search, delivery, orchestration, and use of content and data in AI applications, including knowledge-driven decision-making and autonomous operations. Techniques that both enable (contribute to) and benefit from smart content are content discovery, machine learning, knowledge graphs, semantic linked data, semantic data integration, knowledge discovery, and knowledge management. Smart content thus meets the needs for digital business operations and autonomous (AI and intelligent automation) activities, which must devour streams of content and data – not just any content, but smart content – the right (semantically identified) content delivered at the right time in the right context.

The four tactical steps in a smart content strategy include the following (a minimal data-structure sketch follows the list):

  1. Characterize and contextualize the patterns, events, and entities in the content collection with semantic (contextual) tags, annotation, and metadata (TAM).
  2. Collect, curate, and catalog (i.e., index) each TAM component to make it searchable, accessible, and reusable.
  3. Deliver the right content at the right time in the right context to the decision agent.
  4. Decide and act on the delivered insights and knowledge.
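As a minimal sketch of what steps 1 through 3 could look like in code, consider a tiny in-memory smart-content catalog. The class and field names here are hypothetical, invented only to make the idea concrete; a real system would sit on a proper search index or catalog service.

```python
from dataclasses import dataclass, field

@dataclass
class ContentGranule:
    """One searchable unit of content with its tags, annotations, and metadata (TAM)."""
    granule_id: str
    source: str
    text: str
    tags: set = field(default_factory=set)           # semantic/contextual tags (step 1)
    annotations: dict = field(default_factory=dict)  # e.g., sentiment, entities
    metadata: dict = field(default_factory=dict)     # e.g., owner, created date

class SmartContentCatalog:
    """Step 2: collect, curate, and index TAM so granules are findable by meaning."""
    def __init__(self):
        self._by_tag: dict[str, list[ContentGranule]] = {}

    def add(self, granule: ContentGranule) -> None:
        for tag in granule.tags:
            self._by_tag.setdefault(tag, []).append(granule)

    def find(self, tag: str) -> list[ContentGranule]:
        # Step 3: deliver only the granules whose context matches the request
        return self._by_tag.get(tag, [])

catalog = SmartContentCatalog()
catalog.add(ContentGranule("doc-1#sec-2", "policy-repo",
                           "This form must be used for XYZ.",
                           tags={"form-xyz", "mandatory"}))
print([g.granule_id for g in catalog.find("form-xyz")])
```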

Remember, do not be content with your current content management strategy. Instead, discover and deliver the perfect smart content that perfects your digital business outcomes. Smart content strategy can save end-users countless minutes in a typical workday, and that type of business process improvement certainly is not too minute for consideration.

The Power of Graph Databases, Linked Data, and Graph Algorithms

In 2019, I was asked to write the Foreword for the book Graph Algorithms: Practical Examples in Apache Spark and Neo4j, by Mark Needham and Amy E. Hodler. I wrote an extensive piece on the power of graph databases, linked data, graph algorithms, and various significant graph analytics applications. In their wisdom, the editors of the book decided that I wrote “too much”. So, they correctly shortened my contribution by about half in the final published version of my Foreword for the book.

The book is awesome, an absolute must-have reference volume, and it is free (for now, downloadable from Neo4j).

Graph Algorithms book

Now, for the first time, the full unabridged (and unedited) version of my initial contribution as the Foreword for the book is published here. (You should still get the book because it is a fantastic 250-page masterpiece for data scientists!) Any omissions, errors, or viewpoints in the piece below are entirely my own. I publish this in its original form in order to capture the essence of my point of view on the power of graph analytics.

As you read this, just remember the most important message: the natural data structure of the world is not rows and columns, but a graph. And this: perhaps the most powerful node in a graph model for real-world use cases might be “context”. How does one express “context” in a data model? Ahh, that’s the topic for another article. But this might help you get there: https://twitter.com/KirkDBorne/status/1232094501537275904

“All the World’s a Graph”

What do marketing attribution analysis, anti-money laundering, customer journey modeling, safety incident causal factor analysis, literature-based discovery, fraud network analysis, Internet search, the map app on your mobile phone, the spread of infectious diseases, and the theatrical performance of a William Shakespeare play all have in common? No, it is not something as trivial as the fact that all these phrases contain nouns and action verbs! What they have in common is that all these phrases proved that Shakespeare was right when he declared, “All the world’s a graph!”

Okay, the Bard of Avon did not actually say “Graph” in that sentence, but he did say “Stage” in the sentence. However, notice that all the examples mentioned above involve entities and the relationships between them, including both direct and indirect (transitive) relationships — a graph! Entities are the nodes in the graph — these can be people, events, objects, concepts, or places. The relationships between the nodes are the edges in the graph. Therefore, isn’t the very essence of a Shakespearean play the live action portrayal of entities (the nodes) and their relationships (the edges)? Consequently, maybe Shakespeare could have said “Graph” in his famous declaration.

What makes graph algorithms and graph databases so interesting and powerful isn’t the simple relationship between two entities: A is related to B. After all, the standard relational model of databases instantiated these types of relationships in its very foundation decades ago: the ERD (Entity-Relationship Diagram). What makes graphs so remarkably different and important are directional relationships and transitive relationships. In directional relationships, A may cause B, but not the opposite. In transitive relationships, A can be directly related to B and B can be directly related to C, while A is not directly related to C, so that consequently A is transitively related to C.

Because of transitive relationships, particularly when they are numerous and diverse with many possible relationship (network) patterns and many possible degrees of separation between the entities, the graph model enables discovery of relationships between two entities that otherwise may seem wholly disconnected, unrelated, and difficult (if not impossible) to discover in a relational database. Hence, the graph model can be applied productively and effectively in numerous network analysis use cases.

Consider this marketing attribution use case: person A sees the marketing campaign, person A talks about it on their social media account, person B is connected to person A and sees the comment, and subsequently person B buys the product. From the marketing campaign manager’s perspective, the standard relational model would fail to identify the attribution, since B did not see the campaign and A did not respond to the campaign. The campaign looks like a failure. But it is not a failure — its actual success (and positive ROI) is discovered by the graph analytics algorithm through the transitive relationship between the marketing campaign and the final customer purchase, through an intermediary (entity-in-the-middle)!

Next, consider the anti-money laundering (AML) use case: person A and person C are under suspicion for illicit trafficking. Any interaction between the two (e.g., a financial transaction in a financial database) would be flagged by the authorities, and the interactions would come under great scrutiny. However, if A and C never transact any business together, but instead conduct their financial dealings through a safe, respected, unflagged financial authority B, then who would ever notice the transaction? Well, the graph analytics algorithm would notice! The graph engine would discover that there was a transitive relationship between A and C through the intermediary B (the entity-in-the-middle).
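Here is a minimal sketch of that entity-in-the-middle pattern using the open-source networkx library (the node names and relationship labels are invented for illustration). The point is that the graph finds the transitive path even though no direct edge exists between the endpoints:

```python
import networkx as nx

G = nx.DiGraph()
# Marketing attribution: campaign -> person A -> person B -> purchase
G.add_edge("campaign", "person_A", rel="viewed_by")
G.add_edge("person_A", "person_B", rel="shared_with")
G.add_edge("person_B", "purchase", rel="completed")
# AML: A and C never transact directly, only through intermediary B
G.add_edge("account_A", "intermediary_B", rel="transfers_to")
G.add_edge("intermediary_B", "account_C", rel="transfers_to")

# No direct edge, but a transitive path exists: that is the graph's advantage
print(G.has_edge("campaign", "purchase"))             # False
print(nx.shortest_path(G, "campaign", "purchase"))    # ['campaign', 'person_A', 'person_B', 'purchase']
print(nx.shortest_path(G, "account_A", "account_C"))  # ['account_A', 'intermediary_B', 'account_C']
```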

Similar descriptions of the power of graph can be given for the other use cases mentioned in the opening paragraph above, all of which are examples of network analysis through graph algorithms. Each of those cases deeply involves entities (people, objects, events, actions, concepts, and places) and their relationships (touch points, both causal and simple associations). Because of their great interest and power, we highlight two more of those use cases: Internet search and Literature-Based Discovery (LBD).

In Internet search, a hyperlinked network (graph-based) algorithm is used by a major search engine to find the central authoritative node across the entire Internet for any given set of search words. The directionality of the edge is most important in this use case since the authoritative node in the network is the one that many other nodes point toward.

LBD is a knowledge network (graph-based) application in which significant discoveries are enabled across the knowledgebase of thousands (and even millions) of research journal articles — the discovery of “hidden knowledge” is only made through the connection between two published research results that may have a large number of degrees of separation (transitive relationships) between them. LBD is being applied to cancer research studies, where the massive semantic medical knowledgebase of symptoms, diagnoses, treatments, drug interactions, genetic markers, short-term results, and long-term consequences may be “hiding” previously unknown cures or beneficial treatments of the most impenetrable cases. The knowledge is already in the network, if only we were to connect the dots to discover it.

The book Graph Algorithms: Practical Examples in Apache Spark and Neo4j is aimed at broadening our knowledge and capabilities around these types of graph analyses, including algorithms, concepts, and practical machine learning applications of the algorithms. From basic concepts to fundamental algorithms, to processing platforms and practical use cases, the authors have compiled an instructive and illustrative guide to the wonderful world of graphs.

Chapter 1 provides a beautiful introduction to graphs, graph analytics algorithms, network science, and graph analytics use cases. In the discussion of power-law distributions, we see again another way that graphs differ from more familiar statistical analyses that assume a normal distribution of properties in random populations. Prepare yourself for some unexpected insights when you realize that power-law distributions are incredibly common in the natural world — graph analytics is a great tool for exploring those scale-free structures and their long tails. By the way, I always love a discussion that mentions the Pareto distribution.

Chapter 2 steps up our graph immersion by introducing us to the many different types of graphs that represent the rich variety of informative relationships that can exist between nodes, including directed and undirected, cyclic and acyclic, trees, and more. If you have always wondered what a DAG was, now you have no more excuses for not knowing. It’s all here. The chapter ends with a quick summary of things to come in greater detail in future chapters, by defining the three major categories of graph algorithms: pathfinding, centrality, and community detection.

Chapter 3 focuses on the graph processing platforms that are mentioned in the subtitle to the book: Apache Spark and Neo4j. In the Apache Spark section, you will find information about the Spark Graph Project, GraphFrames, and Cypher (the graph query language). In the Neo4j section, you will learn about its APOC library: Awesome Procedures On Cypher. Brief instructions on installing these graph engines are included, to prepare you for the use cases and sample applications that are provided later in the book.

Chapters 4, 5, and 6 then dive into the three major graph algorithm categories mentioned earlier. For example, the map app on your mobile phone employs a version of the pathfinding algorithm. Root cause analysis, customer journey modeling, and the spread of infectious diseases are other example applications of pathfinding. Internet search and influencer detection in social networks are example applications of the centrality algorithm. Fraud network analysis, AML, and LBD are example applications of community detection.

Marketing attribution is a use case that may productively incorporate applications of all three graph analytics algorithm categories, depending on the specific question being asked: (1) how did the marketing message flow from source to final action? (pathfinding); (2) was there a dominant influencer who initiated the most ROI from the marketing campaign? (centrality); or (3) is there a community (a set of common personas) that are most responsive to the marketing campaign? (community detection).
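For readers who want to see all three categories in action on a single toy graph, here is a brief networkx sketch. networkx is simply a convenient stand-in for illustration; the book itself works through these algorithms in Apache Spark and Neo4j.

```python
import networkx as nx
from networkx.algorithms import community

G = nx.karate_club_graph()   # classic small social network bundled with networkx

# (1) Pathfinding: how does influence flow from node 0 to node 33?
print("path:", nx.shortest_path(G, 0, 33))

# (2) Centrality: which node is the dominant influencer?
pagerank = nx.pagerank(G)
print("top influencer:", max(pagerank, key=pagerank.get))

# (3) Community detection: which personas cluster together?
communities = community.greedy_modularity_communities(G)
print("communities found:", len(communities))
```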

Let’s not forget one more example application — a well-conceived theatrical masterpiece will almost certainly be an instantiation of community detection (co-conspirators, love triangles, and all that). That masterpiece will undoubtedly include a major villain or a central hero (representing centrality). Such a masterpiece is probably also a saga (the story of a journey), containing intrigues, strategies, and plots that move ingeniously, methodically, and economically (in three acts or less) toward some climactic ending (thus representing pathfinding).

In Chapter 7, we find many code samples for example applications of graph algorithms, thus rendering all the above knowledge real and useful to the reader. In this manner, the book becomes an instructional tutorial as well as a guide on the side. Putting graph algorithms into practice through these examples is one of the most brilliant contributions of this book — giving you the capability to do it for yourself and to start reaping the rewards of exploring the most natural data structure to describe the world: not rows and columns, but a graph! You will be able to connect the dots that aren’t connected in traditional data structures, build a knowledge graph, explore the graph for insights, and exploit it for value. Let’s put this another way: your graph-powered team will be able to increase the value of your organization’s data assets in ways that others may not have ever imagined. Your team will become graph heroes.

Finally, in Chapter 8, the connection between graph algorithms and machine learning that was implicit throughout the book now becomes explicit. The training data and feature sets that feed machine learning algorithms can now be immensely enriched with tags, labels, annotations, and metadata that were inferred and/or provided naturally through the transformation of your repository of data into a graph of data. Any node and its relationship to a particular node becomes a type of contextual metadata for that particular node. All of that “metadata” (which is simply “other data about your data”) enables rich discovery of shortest paths, central nodes, and communities.

Graph modeling of your data set thus enables more efficient and effective feature extraction and selection (also described in Chapter 8), as the graph exposes the most important, influential, representative, and explanatory attributes to be included in machine learning models that aim to predict a particular target outcome variable as accurately as possible.
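As a hedged sketch of that idea (my own toy example, not taken from the book), graph-derived measures for the same small social graph used above can be packed into a tabular feature set that a downstream machine learning model could consume:

```python
import networkx as nx
import pandas as pd

G = nx.karate_club_graph()

# Each node becomes a row; each graph-derived measure becomes a candidate ML feature
features = pd.DataFrame({
    "degree": dict(G.degree()),
    "pagerank": nx.pagerank(G),
    "betweenness": nx.betweenness_centrality(G),
    "clustering": nx.clustering(G),
})
print(features.head())
```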

When considering the power of graph, we should keep in mind that perhaps the most powerful node in a graph model for real-world use cases might be “context”, including the contextual metadata that we already mentioned. Context may include time, location, related events, nearby entities, and more. Incorporating context into the graph (as nodes and as edges) can thus yield impressive predictive analytics and prescriptive analytics capabilities.

When all these pieces and capabilities are brought together, the graph analytics engine is thereby capable of exploring deep relationships between events, actions, people, and other things across both spatial and temporal (as well as other contextual) dimensions. Consequently, a graph algorithm-powered analysis tool may be called a Spatial-Temporal Analytics Graph Engine (STAGE!). Therefore, if Shakespeare was alive today, maybe he would agree with that logic and would still say “All the world’s a STAGE.” In any case, he would want to read this book to learn how to enrich his stories with deeper insights into the world and with more interesting relationships.