Category Archives: Strategy

Low-Latency Data Delivery and Analytics Product Delivery for Business Innovation and Enterprise AI Readiness

This article has been divided into two parts.


The Data Space-Time Continuum for Analytics Innovation and Business Growth

We discussed in another article the key role of enterprise data infrastructure in enabling a culture of data democratization, data analytics at the speed of business questions, analytics innovation, and business value creation from those innovative data analytics solutions. Now, we drill down into some of the special characteristics of data and enterprise data infrastructure that ignite analytics innovation.

First, a little history – years ago, at the dawn of the big data age, there was frequent talk of the three V’s of big data (data’s three biggest challenges): volume, velocity, and variety. Though those discussions are now considered “ancient history” in the current AI-dominated era, the challenges have not vanished. In fact, they have grown in importance and impact.

While massive data volumes appear less frequently now in strategic discussions and are being tamed with excellent data infrastructure solutions from Pure Storage, the data velocity and data variety challenges remain in their own unique “sweet spot” of business data strategy conversations. We addressed the data velocity challenges and solutions in our previous article: “Solving the Data Daze – Analytics at the Speed of Business Questions”. We will now take a look at the data variety challenge, and then we will return to modern enterprise data infrastructure solutions for handling all big data challenges.

Okay, data variety—what is it about data variety that makes it such a big analytics challenge? This challenge often manifests itself when business executives ask a question like this: “what value and advantages will all that diversity in data sources, venues, platforms, modalities, and dimensions actually deliver for us in order to outweigh the immense challenges that high data variety brings to our enterprise data team?”

Because nearly all organizations collect many types of data from many different sources for many business use cases, applications, apps, and development activities, nearly every organization is facing this dilemma.


Solving the Data Daze – Analytics at the Speed of Business Questions

Data is more than just another digital asset of the modern enterprise. It is an essential asset. And data is now a fundamental feature of any successful organization. Beyond the early days of data collection, where data was acquired primarily to measure what had happened (descriptive) or why something is happening (diagnostic), data collection now drives predictive models (forecasting the future) and prescriptive models (optimizing for “a better future”). Business leaders need more than backward-looking reports, though those are still required for some stakeholders and regulators. Leaders now require forward-looking insights for competitive market advantage and advancement.

So, what happens when the data flows are not quarterly, or monthly, or even daily, but streaming in real-time? The business challenges then become manifold: talent and technologies now must be harnessed, choreographed, and synchronized to keep up with the data flows that carry and encode essential insights flowing through business processes at light speed. Insights discovery (powered by analytics, data science, and machine learning) drives next-best decisions, next-best actions, and business process automation.

In the early days of the current data analytics revolution, one would often hear business owners say that they need their data to move at the speed of business. Well, it soon became clear that the real problem was the reverse: how can we have our business move at the speed of our data? Fortunately, countless innovative products and services in the data analytics world have helped organizations in that regard, through an explosion in innovation around data analytics, data science, data storytelling, data-driven decision support, talent development, automation, and AI (including the technologies associated with machine learning, deep learning, generative AI, and ChatGPT).


Business Strategies for Deploying Disruptive Tech: Generative AI and ChatGPT

Generative AI is the biggest and hottest trend in AI (Artificial Intelligence) at the start of 2023. While generative AI has been around for several years, the arrival of ChatGPT (a conversational AI tool for all business occasions, built and trained from large language models) has been like a brilliant torch brought into a dark room, illuminating many previously unseen opportunities.

Every business wants to get on board with ChatGPT, to implement it, operationalize it, and capitalize on it. It is important to realize that the usual “hype cycle” rules prevail in such cases as this. First, don’t do something just because everyone else is doing it – there needs to be a valid business reason for your organization to be doing it, at the very least because you will need to explain it objectively to your stakeholders (employees, investors, clients). Second, doing something new (especially something “big” and disruptive) must align with your business objectives – otherwise, you may be steering your business into deep uncharted waters that you haven’t the resources and talent to navigate. Third, any commitment to a disruptive technology (including data-intensive and AI implementations) must start with a business strategy.

I suggest that the simplest business strategy starts with answering three basic questions: What? So what? Now what? That is: (1) What is it you want to do and where does it fit within the context of your organization? (2) Why should your organization be doing it and why should your people commit to it? (3) How do we get started, when, who will be involved, and what are the targeted benefits, results, outcomes, and consequences (including risks)? In short, you must be willing and able to answer the six WWWWWH questions (Who? What? When? Where? Why? and How?).

Another strategy perspective on technology-induced business disruption (including generative AI and ChatGPT deployments) is to consider the three F’s that affect (and can potentially derail) such projects. Those F’s are: Fragility, Friction, and FUD (Fear, Uncertainty, Doubt).

Fragility occurs when a built system is easily “broken” when some component is changed. These changes may include requirements drift, data drift, model drift, or concept drift. The first one (requirements drift) is a challenge in any development project (when the desired outcomes are changed, sometimes without notifying the development team), but the latter three are more apropos to data-intensive product development activities (which certainly describes AI projects). A system should be sufficiently agile and modular such that changes can be made with as little impact to the overall system design and operations as possible, thus keeping the project off the pathway to failure. Since ChatGPT is built from large language models that are trained against massive data sets (mostly business documents, internal text repositories, and similar resources) within your organization, attention must be given to the stability, accessibility, and reliability of those resources.
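The drift risks above can be guarded against with simple batch monitoring. Here is a minimal sketch, assuming a single numeric feature and a three-sigma alert threshold (both illustrative choices, not a standard):

```python
from statistics import mean, stdev

def drift_score(baseline, batch):
    """Shift of the live batch mean, measured in baseline standard deviations."""
    return abs(mean(batch) - mean(baseline)) / stdev(baseline)

# Hypothetical numeric feature: training-time baseline vs. two live batches
baseline = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]
stable_batch = [10.1, 9.9, 10.4]
drifted_batch = [14.8, 15.2, 15.5]

for name, batch in [("stable", stable_batch), ("drifted", drifted_batch)]:
    score = drift_score(baseline, batch)
    status = "ALERT: possible data drift" if score > 3.0 else "ok"
    print(f"{name}: score={score:.2f} -> {status}")
```

In production, per-feature statistical tests (e.g., Kolmogorov–Smirnov) and model-output monitoring would replace this single-feature heuristic.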

Friction occurs when there is resistance to change or to success somewhere in the project lifecycle or management chain. This can be overcome with small victories (MVPs, minimum viable products, or MLPs, minimum lovable products) and by instilling (i.e., encouraging and rewarding) a culture of experimentation across the organization. When people are encouraged to experiment, where small failures are acceptable (i.e., there can be objective assessments of failure, lessons learned, and subsequent improvements), then friction can be minimized, failure can be alleviated, and innovation can flourish. A business-disruptive ChatGPT implementation definitely fits into this category: focus first on the MVP or MLP.

FUD occurs when there is too much hype and “management speak” in the discussions. FUD can open a pathway to failure wherever there is: (a) Fear that the organization’s data-intensive, machine learning, AI, and ChatGPT activities are driven by FOMO (fear of missing out, sparked by concerns that your competitors are outpacing your business); (b) Uncertainty in what the AI / ChatGPT advocates are talking about (a “Data Literacy” or “AI Literacy” challenge); or (c) Doubt that there is real value in the disruptive technology activities (due to a lack of quick-win MVP or MLP examples).

I have developed a few rules to help drive quick wins and facilitate success in data-intensive and AI (e.g., Generative AI and ChatGPT) deployments. These rules are not necessarily “Rocket Science” (despite the name of this blog site), but they are common business sense for most business-disruptive technology implementations in enterprises. Most of these rules focus on the data, since data is ultimately the fuel, the input, the objective evidence, and the source of informative signals that are fed into all data science, analytics, machine learning, and AI models.

Here are my 10 rules (i.e., Business Strategies for Deploying Disruptive Data-Intensive, AI, and ChatGPT Implementations):

  1. Honor business value above all other goals.
  2. Begin with the end in mind: goal-oriented, mission-focused, and outcomes-driven, while being data-informed and technology-enabled.
  3. Think strategically, but act tactically: think big, start small, learn fast.
  4. Know thy data: understand what it is (formats, types, sampling, who, what, when, where, why), encourage the use of data across the enterprise, and enrich your datasets with searchable (semantic and content-based) metadata (labels, annotations, tags). The latter is essential for AI implementations.
  5. Love thy data: data are never perfect, but all the data may produce value, though not immediately. Clean it, annotate it, catalog it, and bring it into the data family (connect the dots and see what happens). For example, outliers are often dismissed as random fluctuations in data, but they may be signaling at least one of these three different types of discovery: (a) data quality problems, associated with errors in the data measurement and capture processes; (b) data processing problems, associated with errors in the data pipeline and transformation processes; or (c) surprise discovery, associated with real previously unseen novel events, behaviors, or entities arising in your data stream.
  6. Do not covet thy data’s correlations: a random one-in-a-million fluctuation corresponds to roughly a five-sigma event. So, if you have 1 trillion data points (e.g., a Terabyte of data), then there may be one million such “random events” that will tempt any decision-maker into ascribing too much significance to this natural randomness.
  7. Validation is a virtue, but generalization is vital: a model may work well once, but not on the next batch of data. We must monitor for overfitting (fitting the natural variance in the data), underfitting (bias), data drift, and model drift. Over-specifying and over-engineering a model for a data-intensive implementation will likely not be applicable to previously unseen data or for new circumstances in which the model will be deployed. A lack of generalization is a big source of fragility and dilutes the business value of the effort.
  8. Honor thy data-intensive technology’s “easy buttons” that enable data-to-discovery (D2D), data-to-“informed decision” (D2ID), data-to-“next best action” (D2NBA), and data-to-value (D2V). These “easy buttons” are: Pattern Detection (D2D), Pattern Recognition (D2ID), Pattern Exploration (D2NBA), and Pattern Exploitation (D2V).
  9. Remember to Keep it Simple and Smart (the “KISS” principle). Create a library of composable, reusable building blocks and atomic business logic components for integration within various generative AI implementations: microservices, APIs, cloud-based functions-as-a-service (FaaS), and flexible user interfaces. (Suggestion: take a look at MACH architecture.)
  10. Keep it agile, with short design, develop, test, release, and feedback cycles: keep it lean, and build on incremental changes. Test early and often. Expect continuous improvement. Encourage and reward a Culture of Experimentation that learns from failure, such as “Test, or get fired!”
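The multiple-comparisons arithmetic behind rule 6 scales linearly with dataset size, which a quick simulation makes tangible; the 1-in-10,000 event probability and million-point sample below are scaled-down illustrative assumptions:

```python
import random

random.seed(42)

# Rule 6 in miniature: with enough samples, "rare" random events are guaranteed.
# Assume a fluctuation that occurs with probability 1-in-10,000 per data point.
p_rare = 1e-4
n_points = 1_000_000  # a tiny fraction of a trillion-point dataset

false_alarms = sum(1 for _ in range(n_points) if random.random() < p_rare)

# Expected count = n_points * p_rare = 100 purely random "events"
print(f"Random 'rare' events observed: {false_alarms} (expected ~{int(n_points * p_rare)})")
```

Scale the same rates up to a trillion points and the expected count of purely random “discoveries” climbs into the millions, which is why correlation-hunting without significance discipline is so seductive.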

Finally, I offer a very similar (shorter and slightly different) set of Business Strategies for Deploying Disruptive Data-Intensive, AI, and ChatGPT Implementations, from the article “The breakthrough that is ChatGPT: How much does it cost to build?”. Here is the list from that article’s “C-Suite’s Guide to Developing a Successful AI Chatbot”:

  1. Define the business requirements.
  2. Conduct market research.
  3. Choose the right development partner.
  4. Develop a minimum viable product (MVP).
  5. Test and refine the chatbot.
  6. Launch the chatbot.

Data Insights for Everyone — The Semantic Layer to the Rescue

What is a semantic layer? That’s a good question, but let’s first explain semantics. The way that I explained it to my data science students years ago was like this. In the early days of web search engines, those engines were primarily keyword search engines. If you knew the right keywords to search and if the content providers also used the same keywords on their website, then you could type the words into your favorite search engine and find the content you needed. So, I asked my students what results they would expect from such a search engine if I typed the following words into the search box: “How many cows are there in Texas?” My students were smart. They realized that the search results would probably not provide an answer to my question, but the results would simply list websites that included my words on the page or in the metadata tags: “Texas”, “Cows”, “How”, etc. Then, I explained to my students that a semantic-enabled search engine (with a semantic meta-layer, including ontologies and similar semantic tools) would be able to interpret my question’s meaning and then map that meaning to websites that can answer the question.

This was a good opening for my students to the wonderful world of semantics. I brought them deeper into the world by pointing out how much more effective and efficient the data professionals’ life would be if our data repositories had a similar semantic meta-layer. We would be able to go far beyond searching for correctly spelled column headings in databases or specific keywords in data documentation, to find the data we needed (assuming we even knew the correct labels, metatags, and keywords used by the dataset creators). We could search for data with common business terminology, regardless of the specific choice or spelling of the data descriptors in the dataset. Even more than that, we could easily start discovering and integrating, on-the-fly, data from totally different datasets that used different descriptors. For example, if I am searching for customer sales numbers, different datasets may label that “sales”, or “revenue”, or “customer_sales”, or “Cust_sales”, or any number of other such unique identifiers. What a nightmare that would be! But what a dream the semantic layer becomes!
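The descriptor-mapping idea from the sales example above can be sketched in a few lines; the column labels and the canonical business term below are illustrative assumptions, and production semantic layers use richer ontologies rather than a flat dictionary:

```python
# Map heterogeneous dataset descriptors to one canonical business term.
SEMANTIC_MAP = {
    "sales": "customer_sales",
    "revenue": "customer_sales",
    "customer_sales": "customer_sales",
    "cust_sales": "customer_sales",
}

def canonical_term(raw_label: str) -> str:
    """Map a dataset-specific column label to its shared business term."""
    key = raw_label.strip().lower()
    return SEMANTIC_MAP.get(key, key)  # fall back to the normalized label

# Datasets that disagree on spelling now resolve to the same business concept:
for label in ["Sales", "revenue", "Cust_sales"]:
    print(label, "->", canonical_term(label))
```

With such a mapping in place, a query phrased in common business terminology can be translated to whatever descriptors each underlying dataset happens to use.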

When I was teaching those students so many years ago, the semantic layer itself was just a dream. Now it is a reality. We can now achieve the benefits, efficiencies, and data superhero powers that we previously could only imagine. But wait! There’s more.

Perhaps the greatest achievement of the semantic layer is to provide different data professionals with easy access to the data needed for their specific roles and tasks. The semantic layer is the representation of data that helps different business end-users discover and access the right data efficiently, effectively, and effortlessly using common business terms. The data scientists need to find the right data as inputs for their models — they also need a place to write-back the outputs of their models to the data repository for other users to access. The BI (business intelligence) analysts need to find the right data for their visualization packages, business questions, and decision support tools — they also need the outputs from the data scientists’ models, such as forecasts, alerts, classifications, and more. The semantic layer achieves this by mapping heterogeneously labeled data into familiar business terms, providing a unified, consolidated view of data across the enterprise.

The semantic layer delivers data insights discovery and usability across the whole enterprise, with each business user empowered to use the terminology and tools that are specific to their role. How data are stored, labeled, and meta-tagged in the data cloud is no longer a bottleneck to discovery and access. The decision-makers and data science modelers can fluidly share inputs and outputs with one another, to inform their role-specific tasks and improve their effectiveness. The semantic layer takes the user-specific results out of being a “one-off” solution on that user’s laptop to becoming an enterprise analytics accelerant, enabling business answer discovery at the speed of business questions.

Insights discovery for everyone is achieved. The semantic layer becomes the arbiter (multi-lingual data translator) for insights discovery between and among all business users of data, within the tools that they are already using. The data science team may be focused on feature importance metrics, feature engineering, predictive modeling, model explainability, and model monitoring. The BI team may be focused on KPIs, forecasts, trends, and decision-support insights. The data science team needs to know and to use that data which the BI team considers to be most important. The BI team needs to know and to use which trends, patterns, segments, and anomalies are being found in those data by the data science team. Sharing and integrating such important data streams has never been such a dream.

The semantic layer bridges the gaps between the data cloud, the decision-makers, and the data science modelers. The key results from the data science modelers can be written back to the semantic layer, to be sent directly to consumers of those results in the executive suite and on the BI team. Data scientists can focus on their tools; the BI users and executives can focus on their tools; and the data engineers can focus on their tools. The enterprise data science, analytics, and BI functions have never been so enterprisey. (Is “enterprisey” a word? I don’t know, but I’m sure you get my semantic meaning.)

That’s empowering. That’s data democratization. That’s insights democratization. That’s data fluency/literacy-building across the enterprise. That’s enterprise-wide agile curiosity, question-asking, hypothesizing, testing/experimenting, and continuous learning. That’s data insights for everyone.

Are you ready to learn more about how you can bring these advantages to your organization? Be sure to watch the AtScale webinar “How to Bridge Data Science and Business Intelligence”, where I join a panel in a multi-industry discussion on how a semantic layer can help organizations make smarter data-driven decisions at scale. I will be one of several speakers, presenting on “Model Monitoring in the Enterprise — Filling the Gaps”, specifically focused on “Filling the Communication Gaps Between BI and Data Science Teams With a Semantic Data Layer.”

Register to attend and view the webinar at https://bit.ly/3ySVIiu.


Are You Content with Your Organization’s Content Strategy?

In this post, we will examine ways that your organization can separate useful content into separate categories that amplify your own staff’s performance. Before we start, I have a few questions for you.

What attributes of your organization’s strategies can you attribute to successful outcomes? How long do you deliberate before taking specific deliberate actions? Do you converse with your employees about decisions that might be the converse of what they would expect? Is a process modification that saves a minute in someone’s workday considered too minute for consideration? Do you present your employees with a present for their innovative ideas? Do you perfect your plans in anticipation of perfect outcomes? Or do you project foregone conclusions on a project before it is completed?

If you have good answers to these questions, that is awesome! I would not contest any of your answers since this is not a contest. In fact, this is actually something quite different. Before you combine all these questions in a heap and thresh them in a combine, and before you buffet me with a buffet of skeptical remarks, stick with me and let me explain. Do not close the door on me when I am so close to giving you an explanation.

What you have just experienced is a plethora of heteronyms. Heteronyms are words that are spelled identically but have different meanings when pronounced differently. If you include the title of this blog, you were just presented with 13 examples of heteronyms in the preceding paragraphs. Can you find them all?

Seriously now, what do these word games have to do with content strategy? I would say that they have a great deal to do with it. Specifically, in the modern era of massive data collections and exploding content repositories, we can no longer simply rely on keyword searches to be sufficient. In the case of a heteronym, a keyword search would return both uses of the word, even though their meanings are quite different. In “information retrieval” language, we would say that we have high RECALL, but low PRECISION. In other words, we can find most occurrences of the word (recall), but not all the results correspond to the meaning of our search (precision). That is no longer good enough when the volume is so high.
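The recall/precision distinction can be made concrete with a toy retrieval calculation (the heteronym query and document identifiers are hypothetical):

```python
def precision_recall(retrieved: set, relevant: set):
    """Compute precision and recall for one search result set."""
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical heteronym search for "bass": the engine returns fish pages AND
# music pages, but only the fish pages answer the user's question.
retrieved = {"fish-1", "fish-2", "music-1", "music-2", "music-3"}
relevant = {"fish-1", "fish-2"}

p, r = precision_recall(retrieved, relevant)
print(f"precision={p:.2f}, recall={r:.2f}")  # perfect recall, poor precision
```

The keyword engine found every relevant page (recall = 1.0) but buried them in irrelevant matches (precision = 0.4), which is exactly the failure mode that semantic enrichment addresses.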

The key to success is to start enhancing and augmenting content management systems (CMS) with additional features: semantic content and context. This is accomplished through tags, annotations, and metadata (TAM). TAM management, like content management, begins with business strategy.

Strategic content management focuses on business outcomes, business process improvement, efficiency (precision – i.e., “did I find only the content that I need without a lot of noise?”), and effectiveness (recall – i.e., “did I find all the content that I need?”). Just because people can request a needle in the haystack, it is not a good thing to deliver the whole haystack that contains that needle. Clearly, such a content delivery system is not good for business productivity. So, there must be a strategy regarding who, what, when, where, why, and how the organization’s content is to be indexed, stored, accessed, delivered, used, and documented. The content strategy should emulate a digital library strategy. Labeling, indexing, ease of discovery, and ease of access are essential if end-users are to find and benefit from the collection.

My favorite approach to TAM creation and to modern data management in general is AI and machine learning (ML). That is, use AI and machine learning techniques on digital content (databases, documents, images, videos, press releases, forms, web content, social network posts, etc.) to infer topics, trends, sentiment, context, content, named entity identification, numerical content extraction (including the units on those numbers), and negations. Do not forget the negations. A document that states “this form should not be used for XYZ” is exactly the opposite of a document that states “this form must be used for XYZ”. Similarly, a social media post that states “Yes. I believe that this product is good” is quite different from a post that states “Yeah, sure. I believe that this product is good. LOL.”
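A toy illustration of the negation point: even a crude keyword cue list separates the two form-instruction sentences above, though real NLU models are needed for anything subtle (this rule is an illustrative assumption and is blind to sarcasm like the “LOL” example):

```python
import re

# Standalone negation words; contractions like "shouldn't" are caught separately.
NEGATION_CUES = re.compile(r"\b(not|never|no)\b")

def flags_negation(text: str) -> bool:
    """Crude negation check: a contraction "n't" or a standalone negation word."""
    lowered = text.lower()
    return "n't" in lowered or bool(NEGATION_CUES.search(lowered))

print(flags_negation("this form should not be used for XYZ"))  # True
print(flags_negation("this form must be used for XYZ"))        # False
```

A retrieval system that ignores this flag would happily serve both documents as equivalent, even though they give opposite instructions.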

Contextual TAM enhances a CMS with knowledge-driven search and retrieval, not just keyword-driven. Contextual TAM includes semantic TAM, taxonomic indexing, and even usage-based tags (digital breadcrumbs of the users of specific pieces of content, including the key words and phrases that people used to describe the content in their own reports). Adding these to your organization’s content makes the CMS semantically searchable and usable. That’s far more laser-focused (high-precision) than keyword search.

One type of implementation of a content strategy that is specific to data collections is the data catalog. Data catalogs are very useful and important. They become even more useful and valuable if they include granular search capabilities. For example, the end-user may only need the piece of the dataset that has the content that their task requires, versus being delivered the full dataset. Tagging and annotating those subcomponents and subsets (i.e., granules) of the data collection for fast search, access, and retrieval is also important for efficient orchestration and delivery of the data that fuels AI, automation, and machine learning operations.

One way to describe this is “smart content” for intelligent digital business operations. Smart content includes labeled (tagged, annotated) metadata (TAM). These labels include content, context, uses, sources, and characterizations (patterns, features) associated with the whole content and with individual content granules. Labels can be learned through machine learning, or applied by human experts, or proposed by non-experts when those labels represent cognitive human-discovered patterns and features in the data. Labels can be learned and applied in existing CMS, in massive streaming data, and in sensor data (collected in devices at the “edge”).

Some specific tools and techniques that can be applied to CMS to generate smart content include these:

  • Natural language understanding and natural language generation
  • Topic modeling (including topic drift and topic emergence detection)
  • Sentiment detection (including emotion detection)
  • AI-generated and ML-inferred content and context
  • Entity identification and extraction
  • Numerical quantity extraction
  • Automated structured (searchable) database generation from textual (unstructured) document collections (for example: Textual ETL).

Consequently, smart content thrives at the convergence of AI and content. Labels are curated and stored with the content, thus enabling curation, cataloguing (indexing), search, delivery, orchestration, and use of content and data in AI applications, including knowledge-driven decision-making and autonomous operations. Techniques that both enable (contribute to) and benefit from smart content are content discovery, machine learning, knowledge graphs, semantic linked data, semantic data integration, knowledge discovery, and knowledge management. Smart content thus meets the needs for digital business operations and autonomous (AI and intelligent automation) activities, which must devour streams of content and data – not just any content, but smart content – the right (semantically identified) content delivered at the right time in the right context.

The four tactical steps in a smart content strategy include:

  1. Characterize and contextualize the patterns, events, and entities in the content collection with semantic (contextual) tags, annotation, and metadata (TAM).
  2. Collect, curate, and catalog (i.e., index) each TAM component to make it searchable, accessible, and reusable.
  3. Deliver the right content at the right time in the right context to the decision agent.
  4. Decide and act on the delivered insights and knowledge.
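Steps 1–3 above can be sketched as a tiny tag-and-index pipeline; the documents and tags are illustrative assumptions:

```python
from collections import defaultdict

# Step 1: content granules characterized with contextual tags (TAM)
documents = {
    "doc-1": {"text": "Q3 sales forecast", "tags": {"finance", "forecast"}},
    "doc-2": {"text": "HR onboarding form", "tags": {"hr", "form"}},
    "doc-3": {"text": "Revenue outlook memo", "tags": {"finance", "forecast"}},
}

# Step 2: collect and catalog (index) each TAM component for search
index = defaultdict(set)
for doc_id, doc in documents.items():
    for tag in doc["tags"]:
        index[tag].add(doc_id)

# Step 3: deliver only the granules matching the decision agent's context
print(sorted(index["finance"] & index["forecast"]))
```

Step 4 (decide and act) happens downstream, but it depends entirely on steps 1–3 delivering the right granules rather than the whole haystack.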

Remember, do not be content with your current content management strategy. Instead, discover and deliver the smart content that perfects your digital business outcomes. Smart content strategy can save end-users countless minutes in a typical workday, and that type of business process improvement certainly is not too minute for consideration.

Top 10 Data Innovation Trends During 2020

The year 2020 was remarkably different in many ways from previous years. In at least one way, it was not different, and that was in the continued development of innovations that are inspired by data. This steady march of data-driven innovation has been a consistent characteristic of each year for at least the past decade. These data-fueled innovations come in the form of new algorithms, new technologies, new applications, new concepts, and even some “old things made new again”.

I provide below my perspective on what was interesting, innovative, and influential in my watch list of the Top 10 data innovation trends during 2020.

1) Automated Narrative Text Generation tools became incredibly good in 2020, being able to create scary good “deep fake” articles. The GPT-3 (Generative Pretrained Transformer, 3rd generation text autocomplete) algorithm made headlines since it demonstrated that it can start with a very thin amount of input (a short topic description, or a question), from which it can then generate several paragraphs of narrative that are very hard (perhaps impossible) to distinguish from human-generated text. However, it is far from perfect, since it certainly does not have reasoning skills, and it also loses its “train of thought” after several paragraphs (e.g., by making contradictory statements at different places in the narrative, even though the statements are nicely formed sentences).

2) MLOps became the expected norm in machine learning and data science projects. MLOps takes the modeling, algorithms, and data wrangling out of the experimental “one off” phase and moves the best models into deployment and sustained operational phase. MLOps “done right” addresses sustainable model operations, explainability, trust, versioning, reproducibility, training updates, and governance (i.e., the monitoring of very important operational ML characteristics: data drift, concept drift, and model security).

3) Concept drift by COVID – as mentioned above, concept drift is being addressed in machine learning and data science projects by MLOps, but concept drift is so much bigger than MLOps. Specifically, it feels to many of us like a decade of business transformation was compressed into the one year 2020. How and why businesses make decisions, customers make decisions, and anybody else makes decisions became conceptually and contextually different in 2020. Customer purchase patterns, supply chain, inventory, and logistics represent just a few domains where we saw new and emergent behaviors, responses, and outcomes represented in our data and in our predictive models. The old models were not able to predict very well based on the previous year’s data since the previous year seemed like 100 years ago in “data years”. Another example was in new data-driven cybersecurity practices introduced by the COVID pandemic, including behavior biometrics (or biometric analytics), which were driven strongly by the global “work from home” transition, where many insecurities in networks, data-sharing, and collaboration / communication tools were exposed. Behavior biometrics may possibly become the essential cyber requirement for unique user identification, finally putting weak passwords out of commission. Data and network access controls have similar user-based permissions when working from home as when working behind the firewall at your place of business, but the security checks and usage tracking can be more verifiable and certified with biometric analytics. This is critical in our massively data-sharing world and enterprises.

4) AIOps increasingly became a focus in AI strategy conversations. While it is similar to MLOps, AIOps is less focused on the ML algorithms and more focused on automation and AI applications in the enterprise IT environment – i.e., focused on operationalizing AI, including data orchestration, the AI platform, AI outcomes monitoring, and cybersecurity requirements. AIOps appears in discussions related to ITIM (IT infrastructure monitoring), SIEM (security information and event management), APM (application performance monitoring), UEBA (user and entity behavior analytics), DevSecOps, Anomaly Detection, Root Cause Analysis, Alert Generation, and related enterprise IT applications.

5) The emergence of Edge-to-Cloud architectures clearly began pushing Industry 4.0 forward (with some folks now starting to envision what Industry 5.0 will look like). The Edge-to-Cloud architectures are responding to the growth of IoT sensors and devices everywhere, whose deployments are boosted by 5G capabilities that are now helping to significantly reduce data-to-action latency. In some cases, the analytics and intelligence must be computed and acted upon at the edge (Edge Computing, at the point of data collection), as in autonomous vehicles. In other cases, the analytics and insights may have more significant computation requirements and less strict latency requirements, thus allowing the data to be moved to larger computational resources in the cloud. The almost forgotten “orphan” in these architectures, Fog Computing (living between edge and cloud), is now moving to a more significant status in data and analytics architecture design.

6) Federated Machine Learning (FML) is another "orphan" concept (formerly called Distributed Data Mining a decade ago) that found new life in modeling requirements, algorithms, and applications in 2020. To some extent, the pandemic contributed to this because FML enforces data privacy by essentially removing data-sharing as a requirement for model-building across multiple datasets, multiple organizations, and multiple applications. FML model training is done incrementally and locally on each node's local dataset, with the meta-parameters of the local models then being shared with a centralized model-inference engine (which never sees any of the private data). The centralized ML engine then builds a global model, which is communicated back to the local nodes. Multiple iterations in parameter-updating and hyperparameter-tuning can occur between local nodes and the central inference engine, until satisfactory model performance is achieved. All through these training stages, data privacy is preserved, while allowing for the generation of globally useful, distributable, and accurate models.
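The training loop described above can be sketched in a few lines of Python. This is a toy illustration of the federated-averaging idea (a single model weight, plain gradient descent, and synthetic data, all invented for the example); production FML frameworks add secure aggregation, encryption, and handling of heterogeneous data:

```python
import random

def make_private_dataset(n, true_w=3.0):
    """Synthetic local data: y = true_w * x plus a little noise."""
    data = []
    for _ in range(n):
        x = random.uniform(-1, 1)
        data.append((x, true_w * x + random.gauss(0, 0.05)))
    return data

def local_train(data, global_w, lr=0.1, epochs=20):
    """One node refines the global weight by gradient descent on its
    private data only; only the updated weight leaves the node."""
    w = global_w
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w, len(data)

def federated_average(local_results):
    """The server combines parameters, weighted by local dataset size --
    it never sees any raw data, only the model parameters."""
    total = sum(n for _, n in local_results)
    return sum(w * n for w, n in local_results) / total

random.seed(0)
# Three organizations, each with a private dataset from the same process.
nodes = [make_private_dataset(100) for _ in range(3)]

global_w = 0.0
for _ in range(5):  # several rounds of local training + central averaging
    local_results = [local_train(data, global_w) for data in nodes]
    global_w = federated_average(local_results)

print(f"Global model weight after federated training: {global_w:.2f}")
```

After a few rounds the shared global weight converges close to the true value, even though no node ever shared a single raw data point.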

7) Deep learning (DL) may not be "the one algorithm to dominate all others" after all. Some research published earlier in 2020 found that traditional, less complex algorithms can be nearly as good as, or better than, deep learning on some tasks. This could be yet another demonstration of the "no free lunch theorem", which basically states that there is no single universal algorithm that is the best for all problems. Consequently, the results of the new DL research may not be so surprising, but they certainly serve as necessary reminders that sometimes simple is better than complex, and that the old saying is often still true: "perfect is the enemy of good enough."

8) RPA (Robotic Process Automation) and intelligent automation were not new in 2020, but the surge in their use and in the number of providers was remarkable. While RPA is more rule-based (informed by business process mining, to automate work tasks that have very little variation), intelligent automation is more data-driven, adaptable, and self-learning in real-time. RPA mimics human actions by repeating routine tasks based on a set of rules. Intelligent automation simulates human intelligence: it responds and adapts to emergent patterns in new data, and it is capable of learning to automate non-routine tasks. Keep an eye on the intelligent automation space for new and exciting developments to come in the near future around hyperautomation and enterprise intelligence, such as the emergence of learning business systems that adapt their processes based on signals in enterprise data across numerous business functions: finance, marketing, HR, customer service, production, operations, sales, and management.

9) The Rise of Data Literacy initiatives, imperatives, instructional programs, and institutional awareness in 2020 was one of the two most exciting things that I witnessed during the year. (The other of the two is next on my list.) I have said for nearly 20 years that data literacy must become a key component of education at all levels and an aptitude of nearly all employees in all organizations. The world is data, revolves around data, produces and consumes massive quantities of data, and drives innovative emerging technologies that are inspired by, informed by, and fueled by data: augmented reality (AR), virtual reality (VR), autonomous vehicles, computer vision, digital twins, drones, robotics, AI, IoT, hyperautomation, virtual assistants, conversational AI, chatbots, natural language understanding and generation (NLU, NLG), automatic language translation, 4D-printing, cyber resilience, and more. Data literacy is essential for the future of work, future innovation, work from home, and everyone who touches digital information. Studies have shown that organizations that are not adopting data literacy programs are not only falling behind their competition but may stay behind. Get on board with data literacy! Now!

10) Observability emerged as one of the hottest and (for me) most exciting developments of the year. Do not confuse observability with monitoring (specifically, with IT monitoring). The key difference is this: monitoring is what you do, and observability is why you do it. Observability is a business strategy: what you monitor, why you monitor it, what you intend to learn from it, how it will be used, and how it will contribute to business objectives and mission success. But the power, value, and imperative of observability do not stop there. Observability meets AI – it is part of the complete AIOps package: "keeping an eye on the AI."

Observability delivers actionable insights, context-enriched data sets, early warning alert generation, root cause visibility, active performance monitoring, predictive and prescriptive incident management, real-time operational deviation detection (6-Sigma never had it so good!), tight coupling of cyber-physical systems, digital twinning of almost anything in the enterprise, and more. And the goodness doesn't stop there. The emergence of standards, like OpenTelemetry, can unify all aspects of your enterprise observability strategy: process instrumentation, sensing, metrics specification, context generation, data collection, data export, and data analysis of business process performance and behavior monitoring in the cloud. This plethora of benefits is a real game-changer for open-source, self-service, intelligent, data-driven business process monitoring (BPM) and application performance monitoring (APM), feedback, and improvement.

As mentioned above, monitoring is "what you are doing", and observability is "why you are doing it." If your organization is not having "the talk" about observability, now is the time to start – to understand why and how to produce business value through observability into the multitude of data-rich digital business applications and processes all across the modern enterprise.
Don’t drown in those deep seas of data. Instead, develop an Observability Strategy to help your organization ride the waves of data, to help your business innovation and transformation practices move at the speed of data.

In summary, my top 10 data innovation trends from 2020 are:

  • GPT-3
  • MLOps
  • Concept Drift by COVID
  • AIOps
  • Edge-to-Cloud and Fog Computing
  • Federated Machine Learning
  • Deep Learning meets the “no free lunch theorem”
  • RPA and Intelligent Automation
  • Rise of Data Literacy
  • Observability

If I were to choose the hottest trend in 2020, it would not be a single item in this top 10 list. The hottest trend would be a hybrid (convergence) of several of these items. That hybrid would include: Observability, coupled with Edge and the ever-rising ubiquitous IoT (sensors on everything), boosted by 5G and cloud technologies, fueling ever-improving ML and DL algorithms, all of which are enabling "just-in-time" intelligence and intelligent automation (for data-driven decisions and action, at the point of data collection), deployed with a data-literate workforce, in a sustainable and trusted MLOps environment, where algorithms, data, and applications work harmoniously and are governed and secured by AIOps.

If we learned anything from the year 2020, it should be that trendy technologies do not comprise a menu of digital transformation solutions to choose from, but there really is only one combined solution, which is the hybrid convergence of data innovation technologies. From my perspective, that was the single most significant data innovation trend of the year 2020.

Analytics Insights and Careers at the Speed of Data

How to make smarter data-driven decisions at scale: http://bit.ly/3rS3ZQW

The determination of winners and losers in the data analytics space is a much more dynamic proposition than it ever has been. One CIO said it this way, “If CIOs invested in machine learning three years ago, they would have wasted their money. But if they wait another three years, they will never catch up.”  Well, that statement was made five years ago! A lot has changed in those five years, and so has the data landscape.

The dynamic changes of the business requirements and value propositions around data analytics have been increasingly intense in depth (in the number of applications in each business unit) and in breadth (in the enterprise-wide scope of applications in all business units in all sectors). But more significant has been the acceleration in the number of dynamic, real-time data sources and corresponding dynamic, real-time analytics applications.

We no longer should worry about “managing data at the speed of business,” but worry more about “managing business at the speed of data.”

One of the primary drivers for the phenomenal growth in dynamic real-time data analytics today and in the coming decade is the Internet of Things (IoT) and its sibling the Industrial IoT (IIoT). With its vast assortment of sensors and streams of data that yield digital insights in situ in almost any situation, the IoT / IIoT market has a projected market valuation of $1.5 trillion by 2030. The accompanying technology Edge Computing, through which those streaming digital insights are extracted and then served to end-users, has a projected valuation of $800 billion by 2028.

With dynamic real-time insights, this “Internet of Everything” can then become the “Internet of Intelligent Things”, or as I like to say, “The Internet used to be a thing. Now things are the Internet.” The vast scope of this digital transformation in dynamic business insights discovery from entities, events, and behaviors is on a scale that is almost incomprehensible. Traditional business analytics approaches (on laptops, in the cloud, or with static datasets) will not keep up with this growing tidal wave of dynamic data.

Another dimension to this story, of course, is the Future of Work discussion, including creation of new job titles and roles, and the demise of older job titles and roles. One group has declared, “IoT companies will dominate the 2020s: Prepare your resume!” This article quotes an older market projection (from 2019), which estimated “the global industrial IoT market could reach $14.2 trillion by 2030.”

In dynamic data-driven applications, automation of the essential processes (in this case, data triage, insights discovery, and analytics delivery) can give a power boost to ride that tidal wave of fast-moving data streams. One can prepare for and improve skill readiness for these new business and career opportunities in several ways:

  • Focus on the automation of business processes: e.g., artificial intelligence, robotics, robotic process automation, intelligent process automation, chatbots.
  • Focus on the technologies and engineering components: e.g., sensors, monitoring, cloud-to-edge, microservices, serverless, insights-as-a-service APIs, IFTTT (IF-This-Then-That) architectures.
  • Focus on the data science: e.g., machine learning, statistics, computer vision, natural language understanding, coding, forecasting, predictive analytics, prescriptive analytics, anomaly detection, emergent behavior discovery, model explainability, trust, ethics, model monitoring (for data drift and concept drift) in dynamic environments (MLOps, ModelOps, AIOps).
  • Focus on specific data types: e.g., time series, video, audio, images, streaming text (such as social media or online chat channels), network logs, supply chain tracking (e.g., RFID), inventory monitoring (SKU / UPC tracking).
  • Focus on the strategies that aim these tools, talents, and technologies at reaching business mission and goals: e.g., data strategy, analytics strategy, observability strategy (i.e., why and where are we deploying the data-streaming sensors, and what outcomes should they achieve?).

Insights discovery from ubiquitous data collection (via the tens of billions of connected devices that will be measuring, monitoring, and tracking nearly everything internally in our business environment and contextually in the broader market and global community) is ultimately about value creation and business outcomes. Embedding real-time dynamic analytics at the edge, at the point of data collection, or at the moment of need will dynamically (and positively) change the slope of your business or career trajectory. Dynamic sense-making, insights discovery, next-best-action response, and value creation are essential when data is being acquired at an enormous rate. Only then can one hope to realize the promised trillion-dollar market value of the Internet of Everything.

For more advice, check out this upcoming webinar panel discussion, sponsored by AtScale, with data and analytics leaders from Wayfair, Cardinal Health, bol.com, and Slickdeals: “How to make smarter data-driven decisions at scale.” Each panelist will share an overview of their data & analytics journey, and how they are building a self-service, data-driven culture that scales. Join us on Wednesday, March 31, 2021 (11:00am PT | 2:00pm ET). Save your spot here: http://bit.ly/3rS3ZQW. I hope that you find this event useful. And I hope to see you there!

Please follow me on LinkedIn and follow me on Twitter at @KirkDBorne.

RPA and IPA – Their Similarities are Different, but Their Rapid Growth Trajectories are the Same

When I was growing up, friends at school would occasionally ask me if my older brother and I were twins. We were not, though we looked twin-like! As I grew tired of answering that question, one day I decided to give a more thoughtful answer to the question (beyond a simple “No”). I replied: “Yes, we are twins. We were born 20 months apart!” My response caused the questioner to pause and think about what I said, and perhaps reframe their own thinking.

This story reminds me that two things may appear very much the same, but they may be found to be not so similar after deeper inspection and reflection. RPA and IPA are like that. Their similarities are different!

What are RPA and IPA? RPA is Robotic Process Automation, and IPA is Intelligent Process Automation. Sound similar? Yes, in fact, their differences are similar!

In the rest of this article, we will refer to IPA as intelligent automation (IA), which is simply short-hand for intelligent process automation.

Process automation is relatively clear – it refers to an automatic implementation of a process, specifically a business process in our case. One can automate a very complicated and time-consuming process, even for a one-time bespoke application – the ROI must be worth it, to justify doing this only once. Robotic Process Automation is for "more than once" automation. RPA then refers specifically to the robotic repetition of a business process. Repetition implies that the same steps are repeated many times – for example, claims processing, business form completion, invoice processing and submission, or more data-specific activities such as data extraction from documents (e.g., PDFs), data entry, data validation, and report preparation.

The benefits of RPA accrue from its robotic repetition of a well-defined process (specifically some sort of digital process with defined instructions, inputs, rules, and outputs), which is to be repeated without error, thus removing potential missteps in the process, or accidental omission of steps, or employee fatigue. The process must be simple, stable, repetitive, and routine, and it must be carried out the same way hundreds or thousands (or more) times. This robotic repetition of the process assures that the steps are replicated identically, correctly, and rapidly from one application to the next, thus yielding higher business productivity. Consequently, as organizations everywhere are undergoing significant digital transformation, we have been witnessing increases both in the use of RPA in organizations and in the number of RPA products in the market.
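The "simple, stable, repetitive, and routine" character of RPA can be illustrated with a toy sketch: the same fixed rules applied identically to every record, with no learning or adaptation involved. The field names and validation rules below are invented purely for the example:

```python
# A toy illustration of the RPA pattern: fixed, deterministic rules
# applied identically to every record. Field names and rules are invented.
RULES = {
    "invoice_id": lambda v: v.startswith("INV-"),
    "amount":     lambda v: v > 0,
    "currency":   lambda v: v in {"USD", "EUR", "GBP"},
}

def process_invoice(invoice):
    """Validate one invoice against the rule set; route failures for review."""
    errors = [field for field, rule in RULES.items()
              if field not in invoice or not rule(invoice[field])]
    return {"invoice": invoice,
            "status": "approved" if not errors else "review",
            "errors": errors}

batch = [
    {"invoice_id": "INV-0001", "amount": 250.0, "currency": "USD"},
    {"invoice_id": "0002",     "amount": 90.0,  "currency": "EUR"},  # bad id
    {"invoice_id": "INV-0003", "amount": -15.0, "currency": "JPY"},  # two errors
]

results = [process_invoice(inv) for inv in batch]
for r in results:
    print(r["invoice"]["invoice_id"], "->", r["status"], r["errors"])
```

A real RPA bot would pull these records from documents or business systems, but the pattern is the same: deterministic rules, repeated identically without variation or fatigue.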

So, what about Intelligent Automation? IA refers to the addition of “intelligence” to the RPA – transforming it into “smart RPA” or even “cognitive RPA”. This is accomplished by adding more data-driven, machine learning, and AI (artificial intelligence) components to the process discovery, process mining, and process learning stages. IA incorporates feedback, learning, improvement, and optimization in the automation loop. The market and business appetite for IA is growing rapidly, particularly as more organizations are seeking to add AI to their enterprise functions and to step up the value derived from their process automation activities.

The pivot from RPA to IA right now has spurred Automation Anywhere, Inc. (AAI) to conduct a survey of businesses and organizations in numerous sectors and industries, to assess the current and future (Now and Next) state of RPA in the enterprise. AAI’s recently published “Now and Next State of RPA” report presents detailed results of that survey.

The AAI report covers these industries: energy/utilities, financial/insurance, government, healthcare, industrial/manufacturing, life sciences, retail/consumer, services/consulting, technology, telecom, and transportation/airlines.

The first part of the report addresses the "Now" – the present-day impact of RPA and IA – how organizations are deploying process automation, what their priorities are, how much they are investing in it, and what benefits are being achieved. The "Next" part of the report probes organizations' forward-looking strategies and goals over the next one to two years. Of course, both the "now" and "next" sections are affected and informed by the COVID-19 pandemic.

Some top trends from the report include:

  • Cloud is becoming the platform of choice for RPA deployments. Cloud is top of mind for leaders everywhere, and that includes cloud migration and cloud security.
  • Organizations are thinking big about RPA, with millions of bots already implemented by AAI customers. Some AAI customers have individually deployed tens of thousands of bots.
  • The pandemic has rapidly increased demand for RPA, especially in customer-facing functions, such as call centers.
  • Most organizations (63%) are deploying or scaling their efforts, while 27% are still evaluating RPA solutions, and 10% have no plans at this time.
  • Spending on RPA is increasing, with an estimated doubling of the number of bots deployed in the next 12 months in most organizations that are already actively using RPA bots.
  • Productivity improvement is the single most important driver for adopting RPA, IA, and AI.
  • The average ROI from RPA/IA deployments is 250%.
  • Interest in AI is high and growing, specifically in the areas of smart analytics, customer-centricity, chatbots, and predictive modeling. Many organizations are seeing a strong alignment of their AI and RPA (hence, IA) projects.
  • The top barriers to adoption are (in order of significance): lack of expertise, insufficient budget, lack of resources, lack of use cases, limited ROI, organizational resistance, scalability concerns, and security concerns.
  • Top use cases in the back office include finance (general ledger data reconciliation), accounts payable (invoice processing and payment), and HR (new employee onboarding).
  • Top use cases in the front office include customer records management (including account updates), customer service (request handling and call center support), and sales order processing.
  • RPA education and training are on the rise (specifically online training). This reflects the corresponding increase in interest among many organizations in data literacy, analytics skills, AI literacy, and AI skills training. Education initiatives address the change management, employee reskilling, and worker upskilling that organizations are now prioritizing.
  • Organizations report an increased excitement for RPA/IA, an awareness of significant new opportunities for adoption, and a desire for more mature RPA – that is, intelligent automation!

There are many more details, including specific insights for different sectors, in the 21-page "Now and Next State of RPA" report from Automation Anywhere, Inc. You can download the full report here: https://bit.ly/3p0IZpJ. Download it for free and learn more about how the robotic twinning and repetition of business processes is the most intelligent step in accelerating digital transformation and boosting business productivity.

Among the report's chart highlights: productivity is the top driver of RPA and AI; open-ended survey questions gave insights into organizations' planned AI projects; and training initiatives are a high priority for increasing ROI from RPA and AI projects.

How We Teach The Leaders of Tomorrow To Be Curious, Ask Questions and Not Be Afraid To Fail Fast To Learn Fast

I recently enjoyed recording a podcast with Joe DosSantos (Chief Data Officer at Qlik). This was one in a series of #DataBrilliant podcasts by Qlik, which you can also access here (Apple Podcasts) and here (Spotify). I summarize below some of the topics that Joe and I discussed in the podcast. Be sure to listen to the full recording of our lively conversation, which covered Data Literacy, Data Strategy, Data Leadership, and more.

The Age of Hype Cycles

The data age has been marked by numerous "hype cycles." First, we heard how Big Data, Data Science, Machine Learning (ML), and Advanced Analytics would be the technologies to cure cancer, end world hunger, and solve the world's biggest challenges. Then came third-generation Artificial Intelligence (AI), Blockchain, and soon Quantum Computing, each seeking that honor.

From all this hope and hype, one constant has always been there: a focus on value creation from data. As a scientist, I have always recommended a scientific approach: State your problem first, be curious (ask questions), collect facts to address those questions (acquire data), investigate, analyze, ask more questions, include a sensible serving of skepticism, and (above all else) aim to fail fast in order to learn fast. As I discussed with Joe DosSantos when I spoke with him for the latest episode of Data Brilliant, you don’t need to be a data scientist to follow these principles. These apply to everyone, in all organizations and walks of life, in every sector.

One characteristic of science that is especially true in data science and implicit in ML is the concept of continuous learning and refining our understanding. We build models to test our understanding, but these models are not "one and done." They are part of a cycle of learning. In ML, the learning cycle is sometimes called backpropagation, where the errors (inaccurate predictions) of our models are fed back into adjusting the model's parameters (its weights) in a way that aims to improve the output accuracy. A more colloquial expression for this is: good judgment comes from experience, and experience comes from bad judgment.
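That feedback loop can be shown with a tiny numeric example: the model's prediction error is repeatedly fed back to adjust its weight, and each pass improves the fit. (A single weight and plain gradient descent, chosen for clarity rather than realism; the data points are invented.)

```python
# Invented data that roughly follows y = 2x.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]
w = 0.0    # initial guess for the model weight
lr = 0.02  # learning rate

for _ in range(200):
    # Forward pass: predict and measure the error.
    # Feedback pass: use the error gradient to adjust the weight.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(f"Learned weight: {w:.2f}")  # converges to 2.03, the least-squares slope
```

Each iteration is one turn of the learning cycle: predict, measure the error, feed it back, improve.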

Data Literacy For All

I know that for some, the term "data" and some of the other terminology I've mentioned can be scary. But they shouldn't be. We are all surrounded by – and creating – masses of data every single day. As Joe and I talked about, one of the first hurdles in data literacy is getting people to recognize that everything is data. What you see with your eyes? That's data. What you hear with your ears? Data. The words that come out of your mouth that other people hear? That's all data. Images, text, documents, audio, video and all the apps on your phone, all the things you search for on the internet? Yet again, that's data.

Every single day, everyone around the world is using data and the principles I mention above, many without realizing it. So, now we need to bring this value to our businesses.

How To Build A Successful Enterprise Data Strategy

In my chat with Joe, we talked about many data concepts in the context of enterprise digital transformation. As always, but especially during the race toward digital transformation that has been accelerated by the 2020 pandemic, a successful enterprise data strategy that leads to business value creation can benefit from first addressing these six key questions:

(1) What mission objective and outcomes are you aiming to achieve?

(2) What is the business problem, expressed in data terminology? Specifically, is it a detection problem (fraud or emergent behavior), a discovery problem (new customers or new opportunities), a prediction problem (what will happen) or an optimization problem (how to improve outcomes)?

(3) Do you have the talent (key people representing diverse perspectives), tools (data technologies) and techniques (AI and ML knowledge) to make it happen?

(4) What data do you have to fuel the algorithms, the training and the modeling processes?

(5) Is your organizational culture ready for this (for data-informed decisions; an experimentation mindset; continuous learning; fail fast to learn fast; with principled AI and data governance)?

(6) What operational technology environment do you have to deploy the implementation (cloud or on-premises platform)?

Data Leadership

As Joe and I discussed, your ultimate business goal is to build a data-fueled enterprise that delivers business value from data. Therefore, ask questions, be purposeful (goal-oriented and mission-focused), be reasonable in your expectations, and remain reasonably skeptical – because, as the famous statistician George Box once said, "all models are wrong, but some are useful."

Now, listen to the full podcast here.