Tag Archives: Data Infrastructure

An AI Data Platform for All Seasons

Pure Storage empowers enterprise AI with advanced data storage technologies and validated reference architectures for emerging generative AI use cases.

Summary

AI devours data. With award-winning AI-ready infrastructure, an AI data platform, and collaboration with NVIDIA, Pure Storage is delivering solutions and services that enable organizations to manage the high-performance data and compute requirements of enterprise AI.

AI Then and AI Now

They (some wise anonymous folks out there) say that there is a time and place for everything. They also say there is a season for every purpose. I believe that the time, place, and season for artificial intelligence (AI) data platforms have arrived. To see this, look no further than Pure Storage, whose core mission is to “empower innovators by simplifying how people consume and interact with data.”

In the past, it was sufficient to bring order to the randomness of enterprise data collection through technology resources (databases and storage devices) aimed primarily at organizing, storing, indexing, and managing enterprise information assets for single purposes or single business units. However, this data was still left largely unexploited, well short of its maximum potential and enterprise-wide business value.

Also in the past, it was sufficient for business automation to consist primarily of rigid, rule-based, non-adaptive robotic repetition of processes and fixed tasks, requiring very little (if any) new knowledge input (i.e., live data consumption) or real-time adaptation to changing business conditions.

And also in the past, it was sufficient for AI to be relegated to academic researchers or the R&D departments of big organizations, which mostly produced research reports or journal papers, and not much else.

Fast-forward to 2024 and we see a totally different landscape: massive data sets feeding dynamic cross-enterprise processes, increasing automation and dynamic adaptation of complex multi-step tasks, and ubiquitous value-producing applications of AI. In particular, in the past year, generative AI has played a major role in the explosive development and growth of these transformations within enterprises.

Pure Storage Meets the Demands of Enterprise AI 

To support, sustain, and assure the continued success and cost-effectiveness of these enormous data-fueled, AI-powered transformations in such a rapidly changing environment, Pure Storage has stepped up their delivery of an incredible array of award-winning AI-ready infrastructure (AIRI//S™) products and services. These are anchored by an AI data platform that provides the fundamental AI environment for enterprise data management (storage, access, orchestration, delivery), hyperscaled AI training, and AI inference on demand (on-prem, in data centers, at edge sites, and in micro edge devices).

One example of Pure Storage’s advantage in meeting AI’s data infrastructure requirements is its DirectFlash® Modules (DFMs), which have an estimated lifespan of 10 years and a super-fast flash storage capacity of 75 terabytes (TB) today, with a roadmap planned for capacities of 150TB, 300TB, and beyond. Another example is Pure Storage’s FlashBlade®, which was invented to help companies handle the rapidly growing volumes of unstructured data now in use, as required for training multi-modal AI models. One more example is Pure Storage’s non-disruptive upgrades (NDUs), a feature of Pure Storage’s architecture that permits upgrades and expansion of the data infrastructure with no impact on data availability or performance, and with no downtime or data migrations.

Pure Storage’s Announcements at GTC 2024

The preceding examples are industry-leading, and yet there is still more. At the NVIDIA GTC 2024 conference, Pure Storage announced a great deal more. Here are a few details on some of those announcements.

A Data Platform for AI

Data is the fuel for AI, because AI devours data—finding patterns in data that drive insights, decisions, and action. Ease of data orchestration (ingest, cleaning, transformation, discovery, access, exploration, delivery, training, inference, deployment) is essential for data-devouring AI products and services. A data platform for AI is key to innovation and long-term affordability, scalability, sustainability, and advancement of enterprise AI applications. Anything less than a complete data platform for AI is a deal-breaker for enterprise AI. Pure Storage provides the ideal data platform for AI: unified storage for structured and unstructured data, plus enterprise data services for Kubernetes, supporting the entire AI data pipeline, because storage matters!

At GTC 2024, Pure demonstrated the features of their data platform for AI, specifically highlighting these benefits and features of the platform: (a) Helps organizations accelerate model training and inference; (b) Improves operational efficiency for AI/IT infrastructure teams, as well as AI/ML developers and engineers; (c) Delivers cost and energy efficiency as an enterprise scales their AI operations; and (d) Provides an AI storage platform that delivers ultimate reliability and is built to handle all future AI storage needs. 

Optimizing GenAI Apps with RAG—Pure Storage + NVIDIA for the Win!

One of the most popular techniques associated with generative AI (GenAI) this past year has been retrieval-augmented generation (RAG). RAG is the essential link between two things: (a) the general large language models (LLMs) available in the market, and (b) a specific organization’s local knowledge base. In deep learning applications (including GenAI, LLMs, and computer vision), a data object (e.g., document, image, video, audio clip) is reduced (transformed) to a condensed vector representation using deep neural networks. The knowledge base then becomes the comprehensive collection of these condensed representations of the enterprise business data repositories, stored in vector format in a vector database—Vector DB being another major data technology development finding widespread adoption this past year. 
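To make that flow concrete, here is a minimal, illustrative sketch of the core RAG retrieval step in Python. This is not Pure Storage’s or NVIDIA’s implementation: the `embed()` function is a toy stand-in for a deep neural document encoder, and the in-memory NumPy array stands in for a real vector database.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy stand-in for a deep neural embedding model: hashes words into a
    fixed-size vector. A real RAG pipeline would use a trained document encoder."""
    v = np.zeros(dim)
    for word in text.lower().split():
        v[hash(word) % dim] += 1.0
    norm = np.linalg.norm(v)
    return v / norm if norm else v

# 1) Offline: condense each enterprise document into a vector and index it
#    (this index plays the role of the vector database).
documents = [
    "Q3 revenue grew 12% driven by subscription services.",
    "Employee onboarding requires security and compliance training.",
    "The support portal resets customer passwords automatically.",
]
index = np.stack([embed(doc) for doc in documents])

# 2) Online: embed the user's question and retrieve the most similar documents.
def retrieve(question: str, k: int = 2) -> list[str]:
    q = embed(question)
    sims = index @ q  # cosine similarity, since all vectors are unit-normalized
    return [documents[i] for i in np.argsort(sims)[::-1][:k]]

# 3) Augment the LLM prompt with the retrieved, organization-specific context.
def build_rag_prompt(question: str) -> str:
    context = "\n\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_rag_prompt("How did subscription services affect revenue?"))
```

Because the knowledge base is refreshed simply by re-embedding new documents into the index, the LLM itself never needs retraining, which is exactly the benefit described below.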

As a consequence of these activities, RAG provides the bespoke use case-specific context to an organization’s proprietary GenAI LLM applications. This contextualization of the GenAI LLM is not only enterprise-specific, local, and customized, but it is also proprietary—maintaining the privacy and security of the GenAI LLM application within the security firewalls and policies of that organization. Additionally, RAG ensures the use of an organization’s most recent data while eliminating the need for constant retraining of the LLMs. Pure Storage has worked with NVIDIA (GPU memory and GPU servers) to boost the speed, accuracy, and on-prem power of such enterprise GenAI LLM applications. Here are some specific documented results: 

(a) “NVIDIA GPUs are used for compute and Pure Storage FlashBlade//S provides all-flash enterprise storage for a large vector database and its associated raw data. In a specific case [presented at GTC], the raw data consisted of a large collection of public documents, typical of a public or private document repository used for RAG.” 

(b) “Document embedding, indexing, [and ingest] were completed 36% more quickly when using the Pure Storage FlashBlade//S with a native S3 interface than when using local SSDs that were inside each server, demonstrating that Pure Storage’s fast networked all-flash storage can help accelerate RAG document embedding.” 

Pure Storage’s RAG pipeline, in conjunction with NVIDIA GPUs and NVIDIA’s NeMo Retriever collection of GenAI microservices, ensures accuracy, currency, privacy, and relevance of proprietary enterprise LLMs. Time to insight and time to action in AI applications are faster and better with Pure Storage.

OVX Validated Reference Architecture for AI-ready Infrastructures

First question: What is OVX validation? OVX is NVIDIA’s standard validation paradigm for computing systems that combine high-performance GPU acceleration, graphics, and AI with fast, low-latency networking. These systems are used to design and power the complex 3D virtual worlds and digital twins that are transforming how businesses design, simulate, and optimize complex systems and processes. In this emerging realm of breathtaking technological achievements and innovations, Pure Storage has achieved OVX validation of their reference architecture for AI-ready infrastructures. At this stage, OVX validation applies directly to the increasing business demand for GenAI workloads (including RAG, LLMs, knowledge bases, and Vector DB), full-stack ready-to-run enterprise AI infrastructure, and local proprietary custom data + AI compute, storage, and networking solutions. Note: When you see “full-stack,” read “Pure Storage + NVIDIA working together seamlessly.”

Second question: What about technical debt and the cost of “lift and shift” to these new AI-ready architectures? For Pure Storage, OVX validation also certifies that Pure Storage’s AI-ready infrastructure will run on NVIDIA GPUs and on other vendors’ servers, which is a great saving on technical debt for organizations that operate diverse server farms. OVX validation complements Pure Storage’s certified reference architecture for NVIDIA DGX BasePOD, announced last year, as well as their FlashStack® for AI Cisco Validated Designs.

Since one of the only certainties about the future is its uncertainty, it is a great benefit that Pure Storage Evergreen//One™ provides storage-as-a-service (STaaS) guarantees and enables future-proof growth with non-disruptive upgrades. That means that Pure Storage owns the hardware (“the end user doesn’t pay for it”), while the end user buys a subscription to the storage with the same agility and flexibility as public cloud storage, and with all the security, proprietary protection, and performance of on-prem all-flash sustainable infrastructure. This is Pure Storage’s SLA-guaranteed, cloud-like STaaS!

More Pure Storage Announcements at GTC 2024 

Pure Storage’s RAG development (described earlier) is accelerating successful AI adoption across vertical industries. Pure Storage is accomplishing this by creating vertical-specific RAGs in collaboration with NVIDIA. First, “Pure Storage has created a financial services RAG solution to summarize and query massive data sets with higher accuracy than off-the-shelf LLMs. Financial services institutions can now gain faster insight using AI to create instant summaries and analysis from various financial documents and other sources.” Pure Storage will soon release additional RAGs for healthcare and the public sector.

Expanded investment in the AI partner ecosystem: Pure Storage is further investing in its AI partner ecosystem with NVIDIA, engaging in new partnerships with independent software vendors (ISVs). Some of these investments are aimed at optimizing GPU utilization through advanced orchestration and scheduling, and others enable machine learning teams to build, evaluate, and govern their model development lifecycle. Additionally, Pure Storage is working closely with numerous AI-focused resellers and service partners to further operationalize joint customer AI deployments.

Looking at AI Now and at What’s Next

As the award-winning leader in AI-ready (and future-ready) data infrastructure, Pure Storage is collaborating with NVIDIA to empower their global customers with a proven framework to manage the high-performance data and compute requirements that these enterprises need to drive successful AI deployments, both now and into the future. Every technical leader, line of business (LOB) leader, VP of Infrastructure for AI, VP of AI/Data Science, and CDO/CTO/CAIO can benefit right now from these technologies and services.

To put all of Pure Storage’s recent accomplishments, products, services, and solutions into a single statement, I would say that Pure Storage’s primary purpose (their North Star) is to guide and accelerate their customers’ adoption of AI through the Pure Storage platform for AI.

To learn more about all of this, make connections, learn new skills, and get ready for what’s next in this rapidly evolving season of AI, be sure to register and attend the Pure//Accelerate® 2024 live event June 18-21, 2024, at Resorts World Las Vegas. The event will have a special track on “Today’s and Tomorrow’s Applications of AI.” Don’t miss it!

Register Now for Pure//Accelerate 2024

Drive your data success at Pure//Accelerate® at Resorts World Las Vegas, June 18-21. This is the premier event to see the latest data storage innovations, hear from visionary thought leaders, discover the secret to transformation with data, make connections, learn new skills, and get ready for what’s next.

https://blog.purestorage.com/news-events/register-early-for-pure-accelerate-2024/

It’s Not Magic If It Is Producing Real Global Benefits and Business Value

When I was young, my dad told me about an incident that he experienced at work. He was a US Air Force officer. On that particular day, there was an unannounced (surprise) drill – a simulated national defense emergency. Though it was just a drill, it was still important to get it done right and efficiently. One of his responsibilities was to contact certain high-ranking officers and communicate with them about the situation (in this case, the simulated emergency). He told me that, in one case, he was able to make contact by phone with one of those top officers within 20 seconds of the start of the drill. If that guy had been in the office or in one of the major facilities, then those 20 seconds would not have been surprising. But it turns out that it was that guy’s day off and he was out playing golf. This was the early 1970’s – hence, no mobile phones, no cell towers, and no internet in your pocket.

How was it possible to find this guy in the middle of some golf course and have him on the phone within 20 seconds in that era? From my young person’s perspective, it was a miracle! Or maybe it was magic. As Sir Arthur C. Clarke said, “Any sufficiently advanced technology is indistinguishable from magic.” So, what enabled this “magic” in the early 1970’s? The answer: satellite phones! The high-ranking officer was required to have such a device with him at all times (either in his own possession or with an assistant nearby keeping the device available at a moment’s notice).

A connected world—with a digital divide

Now, fast-forward to the current digital world – for most of us today, the expectation seems to be that we all should have easy access to ubiquitous mobile phones, cell towers, high-speed broadband networks, and the internet at all times! However, though that may be the rule for most of us, it is not the norm for everyone! There are plenty of exceptions to the rule, especially in those areas of the world that fall in the gaps of today’s digital service providers: underdeveloped countries, remote regions of developed countries, regions where natural disasters or national emergencies (including national defense emergencies) have severed ties to those digital services, and massive public events that attract literally billions of people in person and online to one specific geolocation (which definitely cannot be handled by the standard placement and distribution of cell towers).

Those exceptions to the rule are not rare. In fact, they are quite common. It is imperative that something be done about this, to close those digital gaps, to bring the benefits of digital services to all, and to boost the global digital business value chain. If we cannot make these technologies available for everyone, we risk perpetuating a divide between the haves and have-nots.

Promoting connectivity on a global scale

There is a company that is addressing this “digital divide” and this data-intensive digital connectivity imperative on a global scale. SES is getting it done with major innovations and 21st century technological upgrades to critical satellite communications, going far beyond the traditional satellite technology of the past century.

But this isn’t my dad’s satellite phone service. The SES connectivity system supplies digital (video and data) connectivity worldwide to broadcasters, mobile network operators, fixed network operators, digital content providers, internet service providers, and organizations of all sorts (government agencies, businesses, and other institutions). SES satellites deliver high-speed, high-volume, broadband, and effectively ubiquitous digital access for these organizations over nearly the entire planet Earth.

When I say “nearly the entire planet,” I mean 96% of the population. Over seven and a half billion people around the world are now a fraction of a second from contact anywhere, anytime. I believe that we can all agree that a “fraction of a second” (e.g., 150 milliseconds of satellite latency over large areas) is much better than 20 seconds – in fact, over 100X better than the 1970’s counterparts!

Furthermore, when the situation demands it (such as a major event in a specific location that is attracting a billion+ digital viewers, or a localized natural disaster requiring massive global deployment of services to that spot), these satellites can be programmed to focus their beam and full capabilities on transmitting massive digital content in and out of that tiny area.

The fast, global, ubiquitous digital access that we are describing is nothing like the crackly satellite phone conversations of the past (have you seen this in the old movies?). We are talking about smooth and faultless streaming video, crisp and clean phone calls, information-intensive online meetings, error-free data-sharing, and nearly instantaneous social media access and interactions by vast numbers of people (think World Cup Finals, or a live concert by top music artists for a global cause). Let’s not forget the time-critical demands of digital business and digital government that require instant access to data-intensive, cloud-based data and data services! That’s a massive digital enterprise requirement for a massive number of organizations, not just an entertainment requirement for the masses.

An in-person visit to SES in Virginia

SES is headquartered in Luxembourg, with facilities around the world, including Manassas, Virginia, which I visited a few weeks ago to meet with Nihar Shah, Head of Strategy and Market Intelligence for SES. In addition to a fun, wide-ranging, fact-filled chat with Nihar, there was a lot more that I learned about SES in that short visit. It was a great pleasure for me (having spent nearly 20 years working at NASA) to see the satellite operations center, the “big box of electronics” that is placed on-site at major events (including sporting events, for my and your favorite sports), and the dedicated SES staff. I learned how SES has deployed and operates one of the world’s largest satellite constellations, including MEO (Medium Earth Orbit) satellites and GEO (Geosynchronous Earth Orbit) satellites – different orbits for different requirements – plus an incredible technology stack that manages the communications hand-offs between the MEO satellites as they pass rapidly over any given location. What did we say earlier about “any sufficiently advanced technology”? It certainly is not magic when it delivers real global benefits and generates significant business value.

To top off this experience, I was immediately impressed when I walked in the front door of the SES Global Operations Center, not only because of the welcoming staff, but also because I saw the company mission statement front and center as I entered the lobby. I am a huge fan of meaningful, believable, inspirational, and actionable mission statements. SES might easily have one of the best mission statements that I have seen – and that’s not only because they refer to their statement of purpose as their “North Star”, which suits me (as an astronomer) very well. The statement reads: “We do the extraordinary in space, to deliver amazing experiences everywhere on Earth.” (Note: see me and Nihar in the attached photo below, with the SES statement of purpose.)

I look forward to learning more about SES. You too can learn more about SES and their global content connectivity solutions at https://www.ses.com/.

Kirk and Nihar Shah at the SES Global Operations Center

My top learning moments at Splunk .conf23

I recently attended the Splunk .conf23 conference in Las Vegas. Well, the conference was in Vegas, while I was far away in my home office watching the live conference keynote sessions online. Despite the thousands of miles (and kilometers) of separation, I could feel the excitement in the room as numerous announcements were made, individuals were honored, customer success stories were presented, and new solutions and product features were revealed. I will summarize many of those here (specifically my major learning moments), though this report cannot provide a complete review of all that was said at .conf23, primarily because I attended only the two main keynote sessions, but also because the phenomenal number of remarkable things to hear and learn in those sessions exceeded my capacity to record them all in one succinct report.

When I reviewed highlights from last year’s Splunk .conf22 conference in my summary report at that time, I focused a lot on the Splunk Observability Cloud and its incredible suite of Observability and Monitoring products and services. This reflected my strong interest in observability at that time. My strong interest hasn’t diminished, and neither have Splunk’s developments and product releases in that space, as seen in observability’s prominent mention within many of Splunk’s announcements at this year’s .conf23 event. For a detailed report on the current state of observability this year, you can access and download “The State of Observability 2023” report from Splunk. Here are four specific metrics from the report, highlighting the potentially huge enterprise system benefits that come from implementing Splunk’s observability and monitoring products and services:

  1. Leaders who implement observability strategies are four times as likely to resolve unplanned downtime in just minutes, not hours or days.
  2. Leaders report one-third the number of outages per year, on average, compared to those organizations who do not implement observability and monitoring.
  3. Leaders are 7.9x as likely to say that their ROI on observability tools far exceeded expectations.
  4. 89% of leaders are completely confident in their ability to meet their application availability and performance requirements, versus just 43% of others.

Here are my top learnings from .conf23: 

  • Splunk announced a new strategic partnership with Microsoft Azure, thereby adding another major cloud provider to their other cloud provider partnerships, bringing Splunk products and services into more enterprises through the Azure Marketplace. This partnership also specifically extends hybrid cloud capabilities that will enhance organizations’ digital resilience, while enabling transformation, modernization, migration, and growth in all enterprise digital systems with confidence, trust, and security. 
  • Digital resilience was a major common theme across all of the Splunk announcements this week. As I heard someone say in the keynote session, “You had me at resilience!” By providing real-time data insights into all aspects of business and IT operations, Splunk’s comprehensive visibility and observability offerings enhance digital resilience across the full enterprise. Organizations are able to monitor integrity, quality drift, performance trends, real-time demand, SLA (service level agreement) compliance metrics, and anomalous behaviors (in devices, applications, and networks) to provide timely alerting, early warnings, and other confidence measures. From these data streams, real-time actionable insights can feed decision-making and risk mitigations at the moment of need. Such prescriptive capabilities can be more proactive, automated, and optimized, making digital resilience an objective fact for businesses, not just a business objective. I call that “digital resilience for the win!”
  • Several Splunk customer success stories were presented, with interesting details of their enterprise systems, the “back stories” that led them to Splunk, the transformations that have occurred since Splunk integration, and the metrics to back up the success stories. Customers presenting at .conf23 included FedEx, Carnival Corporation & plc, Inter IKEA, and VMware. Here are a few of the customer performance metrics presented (measuring performance changes following the Splunk integration into the customers’ enterprise systems): 3X Faster Response Time, 90% Faster Mean Time to Remediation, and 60X Faster Insights.
  • Splunk has brought greater integration and customer ease-of-use of their offerings through a Unified Security and Observability Platform. This unified operations center (Splunk Mission Control) amplifies the efficiency (time to problem resolution) and effectiveness (number of time-critical problems resolved) of ITOps and DevOps teams, requiring fewer manual steps in correlating data streams from multiple systems in order to determine the root cause of an incident. Further enhancing the user experience, the unified platform provides end-to-end enterprise systems visibility and federated search across those systems.
  • Splunk Mission Control (just mentioned above) – Splunk describes it best: “Splunk Mission Control brings together Splunk’s industry-leading security technologies that help customers take control of their detection, investigation and response processes. Splunk’s security offerings include security analytics (Splunk Enterprise Security), automation and orchestration (Splunk SOAR), and threat intelligence capabilities. In addition, Splunk Mission Control offers simplified security workflows with processes codified into response templates. With Splunk Mission Control, security teams can focus on mission-critical objectives, and adopt more proactive, nimble security operations.”
  • Model-Assisted Threat Hunts, also known as Splunk M-ATH, is Splunk’s brand name for machine learning-assisted threat hunting and mitigation. M-ATH is part of the PEAK (Prepare, Execute, and Act with Knowledge) Framework, which consists of three types of hunts: (1) Hypothesis-driven (i.e., testing for hypothesized threats, behaviors, and activities), (2) Baseline (i.e., searching for deviations from normal behaviors through EDA: Exploratory Data Analysis), and (3) M-ATH (i.e., automation of the first two types of hunts, using AI and machine learning). M-ATH includes ML-assisted adaptive thresholding and outlier handling, for improved alerts (i.e., faster alerting with fewer false positives and false negatives); a simple illustration of adaptive thresholding appears after this list.
  • “Don’t be a SOAR loser!” Okay, that’s what one of the Splunk speakers said at .conf23. By that, he was referring to being a winner with Splunk SOAR: Security Orchestration, Automation and Response. SOAR orchestrates, prioritizes, and automates security teams’ (SecOps) workflows and tasks, enabling more accurate, more complete, smarter, and faster response to security incidents. As Splunk says, “Automate so you can innovate.” Isn’t that always a business truth? If you can free your analyst teams to think outside the box, hypothesize, innovate, and test new methods and solutions, then that is the sure path to being a SAFE (Security Analytics For the Enterprise) winner: soar with SOAR! While SIEM (Security Information and Event Management) aims to manage the data flows, logging, audits, alerted events, and incident responses, SOAR automates these security activities (using machine learning and AI), monitors the data and events for anomalous behaviors, classifies (prioritizes) the events, and then orchestrates optimized security operations and incident responses (using playbooks).
  • Saving my best two .conf23 learning moments for last, first up is Splunk Edge Hub. This is a physical device, in the IoT (Internet of Things) family of sensors, that collects and streams data from the edge (i.e., from edge devices, cameras, streaming data sources, monitoring systems, and sensors of all types) into Splunk systems that go to work on those data: security operations, anomaly detection, event classification, trend detection, drift detection, behavior detection, and any other edge application that requires monitoring and observability, with an injection of machine learning and AI for intelligent data understanding, classification, prioritization, optimization, and automation. Since business thrives at the edge (through insights discovery and actionable analytics at the point—time and place—of data collection), an edge hub is just what a business needs to mitigate risk, ensure visibility, escalate incidents for review, optimize the operational response, and monitor the associated activities (causes and effects). 
  • Splunk AI Assistant – Boom! This is the brilliant and innovative introduction of an AI assistant into Splunk products, services, and user workflows. This includes the latest and best of AI — generative AI and natural language interfaces integrated within the Splunk platform. This product release most definitely enables and “catalyzes digital resilience in cybersecurity and observability.” This is not just a product release. It is a “way of life” and “a way of doing business” with Splunk products and services. AI is not just a tacked-on feature, but it is a fundamental characteristic and property of those products. Splunk AI increases productivity, efficiency, effectiveness, accuracy, completeness, reliability, and (yes!) resilience across all enterprise SecOps, ITOps, and AIOps functions, tasks, and workflows that are powered by Splunk. Generative AI enables the Splunk SecOps and ITOps tasks, workflows, processes, insights, alerts, and recommended actions to be domain-specific and customer-specific. It automatically detects anomalies and focuses attention where it’s needed most, for that business in that domain, while providing full control and transparency on which data and how data are used to train the AI, and how much control is assigned to the AI (by maintaining “human in the loop” functionality). With regard to the natural language features, Splunk AI Assistant leverages generative AI to provide an interactive chat experience and helps users create SPL (Splunk Processing Language) queries using natural language. This feature not only improves time-to-value, but it “helps make SPL more accessible, further democratizing an organization’s access to, and insights from, its data” – and that includes automated recommendations to the user for “next best action”, which is a great learning prompt for new Splunk users and SecOps beginners.
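Splunk does not publish M-ATH internals in the keynote material, so the following is only a minimal sketch of the general idea behind ML-assisted adaptive thresholding: the alert threshold tracks a rolling, outlier-resistant baseline of the metric instead of a fixed, hand-tuned limit, which is how adaptive alerting reduces false positives when a metric naturally drifts.

```python
import numpy as np

def adaptive_threshold_alerts(values, window: int = 50, k: float = 3.0):
    """Flag indices whose value deviates strongly from a rolling robust baseline.

    Baseline = rolling median; spread = rolling MAD (median absolute deviation),
    which is resistant to the very outliers we are trying to detect.
    """
    values = np.asarray(values, dtype=float)
    alerts = []
    for i in range(window, len(values)):
        recent = values[i - window:i]
        baseline = np.median(recent)
        mad = np.median(np.abs(recent - baseline)) or 1e-9
        # 1.4826 scales MAD to be comparable to a standard deviation.
        if abs(values[i] - baseline) > k * 1.4826 * mad:
            alerts.append(i)
    return alerts

# Example: a slowly drifting metric with one injected incident at index 300.
rng = np.random.default_rng(7)
metric = np.cumsum(rng.normal(0, 0.05, 400)) + rng.normal(0, 1.0, 400)
metric[300] += 12.0
print(adaptive_threshold_alerts(metric))  # expected to include index 300
```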


For a peek into my peak real-time experiences at .conf23, see my #splunkconf23 social thread on Twitter at https://bit.ly/3DjI5NU. Actually, go there and explore, because there is so much more to see there than I could cover in this one report.

Closing thoughts – AI (particularly generative AI) has been the hottest tech topic of the year, and Splunk .conf23 did not disappoint in their coverage of this topic. The agendas for some events are filled with generic descriptions that sing the praises of generative AI. This Splunk event .conf23 provided something far more beneficial and practical: they presented demonstrably valuable business applications of generative AI embedded in Splunk products, which deliver a convincing Splunk-specific productivity enhancer for new and existing users of Splunk products. When the tech hype train is moving as fast as it has been this year, it is hard for a business to quickly innovate, incorporate, and deliver substantially new features that use the new tech within their legacy products and services, but Splunk has done so, with top marks for those achievements.

Disclaimer: I was compensated as an independent freelance media influencer for my participation at the conference and for this article. The opinions expressed here are entirely my own and do not represent those of Splunk or of any Splunk partners. Any misrepresentations of the products and services mentioned in my statements are entirely my own responsibility. Nothing here should be construed as an offer to sell or as financial advice of any kind. My comments are entirely of a technical nature, focused on the technical capabilities of the items mentioned in the article.

Three Emerging Analytics Products Derived from Value-driven Data Innovation and Insights Discovery in the Enterprise

I recently saw an informal online survey that asked users which types of data (tabular, text, images, or “other”) are being used in their organization’s analytics applications. This was not a scientific or statistically robust survey, so the results are not necessarily reliable, but they are interesting and provocative. The results showed that (among those surveyed) approximately 90% of enterprise analytics applications are being built on tabular data. The ease with which such structured data can be stored, understood, indexed, searched, accessed, and incorporated into business models could explain this high percentage. A similarly high percentage of tabular data usage among data scientists was mentioned here.

If my explanation above is the correct interpretation of the high percentage, and if the statement refers to successfully deployed applications (i.e., analytics products, in contrast to non-deployed training experiments, demos, and internal validations of the applications), then maybe we would not be surprised if a new survey (not yet conducted) were to reveal that a similar percentage of value-producing enterprise data innovation and analytics/ML/AI applications (hereafter, “analytics products”) are based on on-premises (on-prem) data sources. Why? … because the same productivity benefits mentioned above for tabular data sources (fast and easy data access) would also apply in these cases (on-prem data sources). And no one could deny that these benefits would be substantial. What could be faster and easier than on-prem enterprise data sources?

Accompanying the massive growth in sensor data (from ubiquitous IoT devices, including location-based and time-based streaming data), some special analytics products have emerged that are growing in significance, especially in the context of innovation and insights discovery from on-prem enterprise data sources. These enterprise analytics products are related to traditional predictive and prescriptive analytics, but they may specifically require low-latency (on-prem) data delivery to support enterprise requirements for timely analytics product delivery. These three emergent analytics products are:

(a) Sentinel Analytics – focused on monitoring (“keeping an eye on”) multiple enterprise systems and business processes, as part of an observability strategy for time-critical business insights discovery and value creation from enterprise data sources. For example, sensors can monitor and “watch” systems and processes for emergent trends, patterns, anomalies, behaviors, and early warning signs that require interventions. Monitoring of data sources can include online web usage actions, streaming IT system patterns, system-generated log files, customer behaviors, environmental (ESG) factors, energy usage, supply chain, logistics, social and news trends, and social media sentiment. Observability represents the business strategy behind the monitoring activities. The strategy addresses the “what, when, where, why, and how” questions from business leaders concerning the placement of “sensors” that are used to collect the essential data that power the sentinel analytics product, in order to generate timely insights and thereby enable better data-informed “just in time” business decisions.

(b) Precursor Analytics – the use of AI and machine learning to identify, evaluate, and generate critical early-warning alerts in enterprise systems and business processes, using high-variety data sources to minimize false alarms (i.e., using a high-dimensional data feature space to disambiguate events that seem to be similar but are not). Precursor analytics is related to sentinel analytics. The latter is associated primarily with “watching” the data for interesting patterns, while precursor analytics is associated primarily with training the business systems to quickly identify those specific patterns and events that could be associated with high-risk events, thus requiring timely attention, intervention, and remediation. One could say that sentinel analytics is more like unsupervised machine learning, while precursor analytics is more like supervised machine learning (see the illustrative sketch following these three descriptions). That is not a totally clear separation and distinction, but it might help to clarify their different applications of data science. Data scientists work with business users to define and learn the rules by which precursor analytics models produce high-accuracy early warnings. For example, an exploration of historical data may reveal that an increase in customer satisfaction (or dissatisfaction) with one particular product is correlated with some other satisfaction (or dissatisfaction) metric downstream at a later date. Based on this learning, deploying a precursor analytics product to detect the initial trigger event early enables a timely response to the situation, which can produce a positive business outcome and prevent an otherwise certain negative outcome.

(c) Cognitive Analytics – focused on “surprise” discovery in diverse data streams across numerous enterprise systems and business processes, using machine learning and data science to emulate and automate the curiosity and cognitive abilities of humans – enabling the discovery of novel, interesting, unexpected, and potentially business-relevant signals across all enterprise data streams. These may not be high risk. They might actually be high-reward discoveries. For example, in one company, an employee noticed that it was the customer’s birthday during their interaction and offered a small gift to the customer at that moment—a gift that was pre-authorized by upper management because they understood that their employees are customer-facing and they anticipated that their employees would need to have the authority to take such customer-pleasing actions “in the moment”. The outcome was very positive indeed, as this customer reported the delightful experience on their social media account, thereby spreading positive sentiment about the business to a wide audience. Instead of relying on employees to catch all surprises in the data streams, the enterprise analytics applications can be trained to automatically watch for, identify, and act on these surprises. In the customer birthday example, the cognitive analytics product can be set up for automated detection and response, which can occur without the employee in the loop at all, such as in a customer’s online shopping experience or in a chat with the customer call center bot.
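The author does not prescribe specific algorithms, but the unsupervised-versus-supervised distinction drawn above can be grounded with a small sketch (assuming Python with scikit-learn and purely synthetic data): sentinel analytics is approximated by an unsupervised anomaly detector that simply watches operational metrics for unusual behavior, while precursor analytics is approximated by a supervised classifier trained on historical examples of windows that preceded known high-risk events.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic operational data: rows = time windows, columns = metrics
# (e.g., latency, error rate, order volume). Real deployments would stream
# these from on-prem enterprise data sources.
X = rng.normal(0, 1, size=(1000, 3))

# --- Sentinel analytics: unsupervised "watching" for unusual behavior ---------
sentinel = IsolationForest(contamination=0.01, random_state=0).fit(X)
watchlist = np.where(sentinel.predict(X) == -1)[0]   # -1 marks anomalous windows

# --- Precursor analytics: supervised early warning for a known risk pattern ---
# Historical labels: 1 if a high-risk event followed the window, else 0
# (a stand-in labeling rule here; in practice labels come from past incidents).
y = (X[:, 0] + 0.5 * X[:, 1] > 2.0).astype(int)
precursor = LogisticRegression().fit(X, y)
risk = precursor.predict_proba(X)[:, 1]              # probability of trouble ahead

print(f"Sentinel flagged {len(watchlist)} windows for review")
print(f"Precursor raised {int((risk > 0.9).sum())} high-confidence early warnings")
```

A cognitive analytics product would extend the same pipeline with novelty or “surprise” detection across many streams at once, surfacing unexpected but potentially high-reward signals rather than only known risks.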

These three analytics products are derived from business value-driven data innovation and insights discovery in the enterprise. Investigating and deploying them is a worthy strategic move for any organization that is swimming in a sea (or lake or ocean) of on-prem enterprise data sources.

In closing, let us look at some non-enterprise examples of these three types of analytics:

  • Sentinel – the sentinel on the guard station at a military post is charged with watching for incoming activity. They are assigned this duty just in case something occurs during the night or when everyone else is busy with other operational things. That “something” might be an enemy approaching or a wild bear in the forest. In either case, keeping an eye on the situation is critical for the success of the operation. Another example of a sentinel is a marked increase in the volatility of stock market prices, indicating that there may be a lot of FUD (fear, uncertainty, and doubt) in the market that could lead to wild swings or downturns. In fact, anytime that any streaming data monitoring metric shows higher than usual volatility, this may be an indicator that the monitored thing requires some attention, an investigation, and possibly an intervention.
  • Precursor – prior to large earthquakes, it has been found that increased levels of radon are detected in soil, in groundwater, and even in the air in people’s home basements. This precursor is presumed to be caused by radon being released from cavities within the Earth’s crust as the crust is strained prior to the sudden slippage (the earthquake). Earthquakes themselves can be precursors to serious events – specifically, a large earthquake detected at the bottom of the ocean can produce a massive tidal wave that can travel across the ocean and have drastic consequences on distant shores. In some cases, the precursor can occur sufficiently far in advance of the tidal wave’s predicted arrival at inhabited shores, thereby enabling early warnings to be broadcast. In both of these cases, the precursor (radon release or ocean-based earthquake) is not the biggest problem, though it may be seen as a sentinel of an ongoing event; rather, the precursor is an early warning sign of a potentially bigger catastrophe that’s coming (a major land-based earthquake or a tidal wave hitting major population centers along coastlines, respectively).
  • Cognitive – a cognitive person walking into an intense group meeting (perhaps a family or board meeting) can probably tell the mood of the room fairly quickly. The signals are there, though mostly contextual, thus probably missed by a cognitively impaired person. A cognitive person is curious about odd things that they see and hear—things or circumstances or behaviors that seem out of context, unusual, and surprising. The thing itself (or the data about the thing) may not be surprising (though it could be), but the context (the “metadata”, which is “other data about the primary data”) provides a signal that something needs attention here. Perhaps the simplest expression of being cognitive in this data-drenched world comes from a quote attributed to famous science writer Isaac Asimov: “The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’ (I found it!) but ‘That’s funny…‘.”

The cognitive enterprise versus the cognitively impaired enterprise – which of these would your organization prefer to be? Get moving now with sentinel, precursor, and cognitive analytics through data innovation and insights discovery with your on-prem enterprise data sources.

Read more about analytics innovation from on-prem enterprise data sources in this 3-part blog series:

  1. Solving the Data Daze – Analytics at the Speed of Business Questions
  2. The Data Space-Time Continuum for Analytics Innovation and Business Growth
  3. Delivering Low-Latency Analytics Products for Business Success

Low-Latency Data Delivery and Analytics Product Delivery for Business Innovation and Enterprise AI Readiness


The Data Space-Time Continuum for Analytics Innovation and Business Growth

We discussed in another article the key role of enterprise data infrastructure in enabling a culture of data democratization, data analytics at the speed of business questions, analytics innovation, and business value creation from those innovative data analytics solutions. Now, we drill down into some of the special characteristics of data and enterprise data infrastructure that ignite analytics innovation.

First, a little history – years ago, at the dawn of the big data age, there was frequent talk of the three V’s of big data (data’s three biggest challenges): volume, velocity, and variety. Though those discussions are now considered “ancient history” in the current AI-dominated era, the challenges have not vanished. In fact, they have grown in importance and impact.

While massive data volumes appear less frequently now in strategic discussions and are being tamed with excellent data infrastructure solutions from Pure Storage, the data velocity and data variety challenges remain in their own unique “sweet spot” of business data strategy conversations. We addressed the data velocity challenges and solutions in our previous article: “Solving the Data Daze – Analytics at the Speed of Business Questions”. We will now take a look at the data variety challenge, and then we will return to modern enterprise data infrastructure solutions for handling all big data challenges.

Okay, data variety—what is there about data variety that makes it such a big analytics challenge? This challenge often manifests itself when business executives ask a question like this: “What value and advantages will all that diversity in data sources, venues, platforms, modalities, and dimensions actually deliver for us, in order to outweigh the immense challenges that high data variety brings to our enterprise data team?”

Because nearly all organizations collect many types of data from many different sources for many business use cases, applications, apps, and development activities, nearly every organization is facing this dilemma.

[continue reading the full article here]

Solving the Data Daze – Analytics at the Speed of Business Questions

Data is more than just another digital asset of the modern enterprise. It is an essential asset. And data is now a fundamental feature of any successful organization. Beyond the early days of data collection, where data was acquired primarily to measure what had happened (descriptive) or why something was happening (diagnostic), data collection now drives predictive models (forecasting the future) and prescriptive models (optimizing for “a better future”). Business leaders need more than backward-looking reports, though those are still required for some stakeholders and regulators. Leaders now require forward-looking insights for competitive market advantage and advancement.

So, what happens when the data flows are not quarterly, or monthly, or even daily, but streaming in real-time? The business challenges then become manifold: talent and technologies now must be harnessed, choreographed, and synchronized to keep up with the data flows that carry and encode essential insights flowing through business processes at light speed. Insights discovery (powered by analytics, data science, and machine learning) drives next-best decisions, next-best actions, and business process automation.

In the early days of the current data analytics revolution, one would often hear business owners say that they need their data to move at the speed of business. Well, it soon became clear that the real problem was the reverse: how can we have our business move at the speed of our data? Fortunately, countless innovative products and services in the data analytics world have helped organizations in that regard, through an explosion in innovation around data analytics, data science, data storytelling, data-driven decision support, talent development, automation, and AI (including the technologies associated with machine learning, deep learning, generative AI, and ChatGPT).

[continue reading the full article here]

Top 9 Considerations for Enterprise AI

Artificial intelligence (AI) is top of mind for executives, business leaders, investors, and most workplace employees everywhere. The impacts are expected to be large, deep, and wide across the enterprise, to have both short-term and long-term effects, to have significant potential to be a force both for good and for bad, and to be a continuing concern for all conscientious workers. In confronting these winds of change, enterprise leaders are faced with many new questions, decisions, and requirements – including the big question: are these winds of change helping us to move our organization forward (tailwinds) or are they sources of friction in our organization (headwinds)?

The current AI atmosphere in enterprises reminds us of the internet’s first big entrance into enterprises nearly three decades ago. I’m not referring to the early days of email and Usenet newsgroups, but the tidal wave of Web and e-Commerce applications that burst onto the business scene in the mid-to-late 1990’s. While those technologies brought much value to the enterprise, they also brought an avalanche of IT security concerns into the C-suite, leading to more authoritative roles for the CIO and the CISO. The fraction of enterprise budgets assigned to these IT functions (especially cybersecurity) suddenly and dramatically increased. That had and continues to have a very big and long-lasting impact.

The Web/e-Commerce tidal wave also brought a lot of hype and FOMO, which ultimately led to the Internet bubble burst (the dot-com crash) in the early 2000’s. AI, particularly the new wave of generative AI applications, has the potential to repeat this story, potentially unleashing a wave of similar patterns in the enterprise. Are we heading for another round of hype / high hopes / exhilaration / FOMO / crash and burn with AI? I hope not.

I would like to believe that a sound, rational, well-justified, and strategic introduction of the new AI technologies (including ChatGPT and other generative AI applications) into enterprises can offer a better balance on the fast slopes of technological change (i.e., protecting enterprise leaders from getting out too far over their skis). In our earlier article, we discussed “AI Readiness is Not an Option.” In this article, we offer some considerations for enterprise AI to add to those strategic conversations. Specifically, we look at considerations from the perspective of the fuel for enterprise AI applications: the algorithms, the data, and the enterprise AI infrastructure. Here is my list:

[continue reading the full article here]