Generative AI is the biggest and hottest trend in AI (Artificial Intelligence) at the start of 2023. While generative AI has been around for several years, the arrival of ChatGPT (a conversational AI tool for all business occasions, built on large language models) has been like a brilliant torch brought into a dark room, illuminating many previously unseen opportunities.
Every business wants to get on board with ChatGPT, to implement it, operationalize it, and capitalize on it. It is important to realize that the usual “hype cycle” rules prevail in cases like this. First, don’t do something just because everyone else is doing it – there needs to be a valid business reason for your organization to be doing it, at the very least because you will need to explain it objectively to your stakeholders (employees, investors, clients). Second, doing something new (especially something “big” and disruptive) must align with your business objectives – otherwise, you may be steering your business into deep uncharted waters that you don’t have the resources and talent to navigate. Third, any commitment to a disruptive technology (including data-intensive and AI implementations) must start with a business strategy.
I suggest that the simplest business strategy starts with answering three basic questions: What? So what? Now what? That is: (1) What is it you want to do and where does it fit within the context of your organization? (2) Why should your organization be doing it and why should your people commit to it? (3) How do we get started, when, who will be involved, and what are the targeted benefits, results, outcomes, and consequences (including risks)? In short, you must be willing and able to answer the six WWWWWH questions (Who? What? When? Where? Why? and How?).
Another strategy perspective on technology-induced business disruption (including generative AI and ChatGPT deployments) is to consider the three F’s that affect (and can potentially derail) such projects. Those F’s are: Fragility, Friction, and FUD (Fear, Uncertainty, Doubt).
Fragility occurs when a built system is easily “broken” when some component is changed. These changes may include requirements drift, data drift, model drift, or concept drift. The first one (requirements drift) is a challenge in any development project (when the desired outcomes are changed, sometimes without notifying the development team), but the latter three are more apropos to data-intensive product development activities (which certainly describes AI projects). A system should be sufficiently agile and modular such that changes can be made with as little impact on the overall system design and operations as possible, thus keeping the project off the pathway to failure. Since enterprise ChatGPT implementations are built on large language models that are fine-tuned on (or augmented with) massive data sets from within your organization (mostly business documents, internal text repositories, and similar resources), attention must be given to the stability, accessibility, and reliability of those resources.
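To make “data drift” concrete, here is a minimal sketch (in Python, using scipy’s two-sample Kolmogorov–Smirnov test) of how a team might compare an incoming batch of a numeric feature against the baseline that a model was trained on. The synthetic data and the 0.01 alert threshold are illustrative assumptions, not part of any particular product.

```python
# Minimal data-drift check: compare a new batch of a numeric feature against the
# baseline distribution the model was trained on.
# The synthetic data and the 0.01 alert threshold are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(baseline: np.ndarray, new_batch: np.ndarray,
                p_threshold: float = 0.01) -> bool:
    """Return True if the new batch looks statistically different from the
    baseline (a possible sign of data drift)."""
    result = ks_2samp(baseline, new_batch)
    return result.pvalue < p_threshold

# Example: the baseline comes from one distribution, the new batch from a shifted one.
rng = np.random.default_rng(42)
baseline = rng.normal(loc=0.0, scale=1.0, size=10_000)   # training-time data
new_batch = rng.normal(loc=0.5, scale=1.0, size=2_000)   # drifted production data

if drift_alert(baseline, new_batch):
    print("Possible data drift detected: retrain the model or investigate the pipeline.")
```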
Friction occurs when there is resistance to change or to success somewhere in the project lifecycle or management chain. This can be overcome with small victories (MVPs: minimum viable products; or MLPs: minimum lovable products) and by instilling (i.e., encouraging and rewarding) a culture of experimentation across the organization. When people are encouraged to experiment, where small failures are acceptable (i.e., there can be objective assessments of failure, lessons learned, and subsequent improvements), then friction can be minimized, the fear of failure can be alleviated, and innovation can flourish. A business-disruptive ChatGPT implementation definitely fits into this category: focus first on the MVP or MLP.
FUD occurs when there is too much hype and “management speak” in the discussions. FUD can open a pathway to failure wherever there is: (a) Fear that the organization’s data-intensive, machine learning, AI, and ChatGPT activities are driven by FOMO (fear of missing out, sparked by concerns that your competitors are outpacing your business); (b) Uncertainty in what the AI / ChatGPT advocates are talking about (a “Data Literacy” or “AI Literacy” challenge); or (c) Doubt that there is real value in the disruptive technology activities (due to a lack of quick-win MVP or MLP examples).
I have developed a few rules to help drive quick wins and facilitate success in data-intensive and AI (e.g., Generative AI and ChatGPT) deployments. These rules are not necessarily “Rocket Science” (despite the name of this blog site), but they are common business sense for most business-disruptive technology implementations in enterprises. Most of these rules focus on the data, since data is ultimately the fuel, the input, the objective evidence, and the source of informative signals that are fed into all data science, analytics, machine learning, and AI models.
Here are my 10 rules (i.e., Business Strategies for Deploying Disruptive Data-Intensive, AI, and ChatGPT Implementations):
- Honor business value above all other goals.
- Begin with the end in mind: goal-oriented, mission-focused, and outcomes-driven, while being data-informed and technology-enabled.
- Think strategically, but act tactically: think big, start small, learn fast.
- Know thy data: understand what it is (formats, types, sampling, who, what, when, where, why), encourage the use of data across the enterprise, and enrich your datasets with searchable (semantic and content-based) metadata (labels, annotations, tags). The latter is essential for AI implementations (see the metadata enrichment sketch after this list).
- Love thy data: data are never perfect, but all the data may produce value, though not immediately. Clean it, annotate it, catalog it, and bring it into the data family (connect the dots and see what happens). For example, outliers are often dismissed as random fluctuations in data, but they may be signaling at least one of these three different types of discovery: (a) data quality problems, associated with errors in the data measurement and capture processes; (b) data processing problems, associated with errors in the data pipeline and transformation processes; or (c) surprise discovery, associated with real, previously unseen, novel events, behaviors, or entities arising in your data stream (a simple outlier triage sketch follows this list).
- Do not covet thy data’s correlations: apparently significant correlations can arise purely by chance when you make enough comparisons (the classic multiple-comparisons trap). A random six-sigma event is roughly one-in-a-million. So, if you have 1 trillion data points (e.g., a Terabyte of data), then there may be one million such “random events” that will tempt any decision-maker into ascribing too much significance to this natural randomness.
- Validation is a virtue, but generalization is vital: a model may work well once, but not on the next batch of data. We must monitor for overfitting (fitting the natural variance in the data), underfitting (bias), data drift, and model drift. An over-specified, over-engineered model for a data-intensive implementation will likely not be applicable to previously unseen data or to new circumstances in which the model is deployed. A lack of generalization is a big source of fragility and dilutes the business value of the effort (see the generalization check sketch after this list).
- Honor thy data-intensive technology’s “easy buttons” that enable data-to-discovery (D2D), data-to-“informed decision” (D2ID), data-to-“next best action” (D2NBA), and data-to-value (D2V). These “easy buttons” are: Pattern Detection (D2D), Pattern Recognition (D2ID), Pattern Exploration (D2NBA), and Pattern Exploitation (D2V).
- Remember to Keep it Simple and Smart (the “KISS” principle). Create a library of composable, reusable building blocks and atomic business logic components for integration within various generative AI implementations: microservices, APIs, cloud-based functions-as-a-service (FaaS), and flexible user interfaces (a minimal building-block sketch follows this list). (Suggestion: take a look at MACH architecture.)
- Keep it agile, with short design, develop, test, release, and feedback cycles: keep it lean, and build on incremental changes. Test early and often. Expect continuous improvement. Encourage and reward a Culture of Experimentation that learns from failure, such as “Test, or get fired!”
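A few of these rules lend themselves to short, illustrative sketches. For “Know thy data,” here is a minimal example of enriching a small document collection with searchable, content-based metadata; it uses scikit-learn’s TF-IDF vectorizer as a simple stand-in for richer semantic embeddings, and the catalog fields and tags are illustrative assumptions rather than a prescribed schema.

```python
# Minimal sketch: enrich documents with searchable, content-based metadata
# (TF-IDF vectors as a simple stand-in for semantic embeddings).
# The catalog fields and tags are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Quarterly sales report for the EMEA region",
    "Customer support chat transcripts, January export",
    "Data retention and privacy policy for internal documents",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(documents)   # content-based metadata

catalog = [
    {"doc_id": i, "text": doc, "tags": ["internal"]}
    for i, doc in enumerate(documents)
]

def search(query: str, top_k: int = 2):
    """Rank catalog entries by content similarity to the query."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_vectors).ravel()
    ranked = sorted(zip(scores, catalog), key=lambda pair: pair[0], reverse=True)
    return [(round(float(score), 3), entry["text"]) for score, entry in ranked[:top_k]]

print(search("privacy policy"))
```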
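For “Love thy data,” the sketch below flags numeric outliers for triage; the three follow-up interpretations mirror the (a)/(b)/(c) cases above, and the 4-sigma threshold is an illustrative assumption.

```python
# Minimal sketch: flag numeric outliers so they can be triaged as either
# (a) a data quality problem, (b) a data pipeline problem, or (c) a surprise discovery.
# The 4-sigma threshold is an illustrative assumption.
import numpy as np

def flag_outliers(values: np.ndarray, z_threshold: float = 4.0) -> np.ndarray:
    """Return indices of points more than z_threshold standard deviations from the mean."""
    z_scores = (values - values.mean()) / values.std()
    return np.where(np.abs(z_scores) > z_threshold)[0]

rng = np.random.default_rng(7)
readings = rng.normal(loc=100.0, scale=5.0, size=5_000)
readings[1234] = 250.0   # an injected anomaly, for demonstration only

for idx in flag_outliers(readings):
    # Each flagged point would be routed to a human or automated check that decides:
    # measurement error, pipeline error, or a genuinely novel event worth investigating.
    print(f"Outlier at index {idx}: value = {readings[idx]:.1f}")
```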
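For “Validation is a virtue, but generalization is vital,” the most basic guardrail is to hold out data the model never sees during training and compare training and held-out scores; a large gap suggests overfitting. The synthetic dataset, model choice, and 0.10 gap threshold below are illustrative assumptions.

```python
# Minimal sketch of a generalization check: a large gap between training accuracy
# and held-out accuracy is a warning sign of overfitting.
# The synthetic dataset, model choice, and 0.10 gap threshold are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

train_score = model.score(X_train, y_train)
test_score = model.score(X_test, y_test)
print(f"train accuracy = {train_score:.3f}, held-out accuracy = {test_score:.3f}")

if train_score - test_score > 0.10:
    print("Warning: large generalization gap; the model may be overfitting.")
```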
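And for the “KISS” rule, composable building blocks can be as simple as small, single-purpose functions behind a uniform text-in / text-out contract, each of which can later be exposed as a microservice or cloud function. This is a hypothetical illustration: the function names, prompt wording, and the `call_llm` stub stand in for whatever LLM service and business logic your organization actually uses.

```python
# Minimal sketch of composable, reusable building blocks for a generative-AI workflow.
# Each step is a small, single-purpose function with a uniform text-in / text-out
# contract, so it can be recombined or exposed as a microservice / FaaS handler.
# `call_llm` is a hypothetical stand-in for your organization's actual LLM client.

def call_llm(prompt: str) -> str:
    """Hypothetical stub for an LLM call (replace with your provider's client)."""
    return f"[LLM response to: {prompt[:60]}...]"

def redact_sensitive(text: str) -> str:
    """Atomic business-logic component: scrub sensitive markers before the LLM call."""
    return text.replace("ACME-CONFIDENTIAL", "[REDACTED]")

def summarize(text: str) -> str:
    """Reusable building block: a summarization prompt around the shared LLM call."""
    return call_llm(f"Summarize the following document in three bullet points:\n{text}")

def pipeline(document: str) -> str:
    """Compose small blocks into one workflow; each block can also be deployed alone."""
    return summarize(redact_sensitive(document))

print(pipeline("ACME-CONFIDENTIAL quarterly results: revenue grew 12% year over year."))
```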
Finally, I offer a similar (shorter and slightly different) set of Business Strategies for Deploying Disruptive Data-Intensive, AI, and ChatGPT Implementations, from the article “The breakthrough that is ChatGPT: How much does it cost to build?”. Here is the list from that article’s “C-Suite’s Guide to Developing a Successful AI Chatbot”:
- Define the business requirements.
- Conduct market research.
- Choose the right development partner.
- Develop a minimum viable product (MVP).
- Test and refine the chatbot.
- Launch the chatbot.