Not so fast: the truth is far more nuanced for the technology C-suite. Here’s why.

For arguably the first time in more than a generation, one topic features at Fortune 500 boardroom tables and kitchen tables alike, within private members’ clubs and music clubs, among business leaders, government officials, industry experts and even teachers: AI.

For CEOs this phenomenon is fuelling a race to understand, invest in and implement AI at great speed and scale. It is easy to see why: AI’s proponents see fledgling technologies that can augment human creativity and ingenuity, improving productivity, collaboration and cost containment, and even driving a better bottom line.

With such opportunity, however, comes a great deal of responsibility. Technology and data leaders, increasingly sitting in boardrooms and reporting directly to the CEO, must apply the necessary guardrails around these technologies in order to safely experiment, and scale where appropriate, while protecting an organisation’s IP, customers, employees, and even brand reputation.

Chief among those concerns is the realisation that most organisations do not have the data maturity—such as ownership, control, frameworks and governance—to support or get business value from this major investment. 

In other words, without data, there is no AI. 

To examine this assumption in more detail, HotTopics, in partnership with Tealium, welcomed leading CIOs and Chief Digital and Data Officers to Food for Thought: an invite-only, three-course lunch over which leaders debated ‘No Data, No AI’.

Together, they challenged each other on the evolving relationship between data and AI, what that means for foundational data strategies leaders are employing, and much, much more, as described in detail below. 

Many thanks to the following for their contribution to the debate:

Moderators
Peter Stojanovic, Editor, HotTopics
Doug Drinkwater, Director, Strategy & Editorial, HotTopics

Guests
Mike Anderson
Karen Ambrose
Peter Krishnan
Scott Rolph
Sabah Carter
Chris Gullick
Ian Cohen
Nuala Kennedy-Preston
Tim Lum
Andreas Galatoulas
Niraj Patel
Dan Kellett
Ari Cohen

  • Investigating the nuances of ‘No data, No AI’
  • Data literacy, talent
  • Foundational data
  • Governance 
  • Ethics and accountability
  • More thought, not more rules 


Investigating the nuances of ‘No data, No AI’

‘No data, no AI’, as it transpires, is a good conversation starter, if incorrect in practice. 

The volume of data held by organisations has ballooned over the last decade; analyst firm Gartner said in 2019 that 95 percent of the world’s data had been created in the previous two years. And yet the quantity of data is rarely the issue. Instead, technology executives are interested in its quality, transparency, reliability and integrity. In other words, can the data be trusted?

On the other hand, it is not straightforward to say, ‘no quality data, no AI’.  

For instance, some businesses, such as regulators, do not hold vast amounts of their own data; most of it comes from their markets or is externally generated. Much of this data is also unstructured, and requires a degree of analysis to interpret. In this case: no AI, no structured data.

One leader went a step further: “If I have structured data, why do I need AI?”

Another executive ventured that, given the promise of generative AI, they were now more interested in unstructured data: it holds inherent value once fed into a value chain built on advanced interpretive data strategies. Especially when used for experimentation or education purposes, they added.

A contrarian perspective also emerged. 

“Many companies do not need data to get AI. A generative AI model that has been trained on massive amounts of data and uses enormous amounts of compute [power]? That just becomes an API, which can be integrated into your processes.”
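
In practical terms, that is hard to argue with. As a minimal sketch of the “AI as an API” point (using one provider’s Python SDK purely as an illustration; the model name and prompts are placeholders, not anything discussed at the lunch), consuming a generative model can be a handful of lines:

```python
# Minimal sketch: a hosted generative model consumed as an API.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable;
# any hosted provider would look much the same.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You summarise customer feedback."},
        {"role": "user", "content": "Summarise: delivery was late but support was helpful."},
    ],
)
print(response.choices[0].message.content)
```

The training data and compute sit with the provider; the integration question becomes one of process and governance, not infrastructure.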

It becomes more difficult to introduce a pithy equation for data and AI. No data can still mean AI, if one has the resources. No quality data may still mean AI, as long as the structures are in place to elevate unstructured data in tandem with more accessible AI. No AI, just structured data, is still a viable model for now. In all, it was a freeing start to the debate; a leader can interpret the relationship between data and AI for themselves and their businesses, returning to that all-important, oft-forgotten question: what do I actually need to allow my business to do its business?

One quip received a good reception around the table. “I’m using [the promise of] AI to finally get the investment in my [foundational] data [resources] that I’ve been asking for, for years.” In other words, want AI? Invest in our data.

Perhaps a more suitable question might therefore be “No data strategy, No AI benefits?”

Data literacy, talent

Regardless of the relationship between data and AI and how it can be interpreted, truisms remain: all employees need to understand what the business does and how the right data activates that. 

What data does your company hold and where can it be found? Which teams need which data, in what format, in what context? Do you know the insights you’re expecting when interpreting high quality data? Do you understand the AI application and its value to that interpretation? Questions like these and more were bandied around the table, met with many nodding heads. In general, from the CEO to the graduate employees, the answer is most likely, ‘No.’ That is a lot for the CIO, CDO or CTO to shoulder.

This extends to all parts of the workforce. At a recent HotTopics AI event, the CIO of the UK Government’s Cabinet Office reflected on how excited Government ministers were about AI, “without necessarily understanding the foundations you need to put in place.”

As one Food for Thought guest put it: “What you find out quite quickly is how little some of your colleagues know about their own business. And it’s that knowledge of their own business that is vital.”

But the conversation was not wholly negative. AI is an opportunity at its heart, after all. 

Talent is emerging with the inherent skills to work more closely with some forms of AI, like generative AI. “I have been fortunate to find employees who have both the language ability and the coding ability to be really quite powerful talent [with generative AI]. They write APIs, they write interfaces, but they also have the language ability to really unleash an AI co-pilot’s potential. It’s a fantastic skill profile.”

That power can be extrapolated to a wider community. “Startup principles” can be employed in some quarters of the business to experiment with—and enjoy!—AI. It provides teams with confidence and agency, encouraging a mindset geared towards working with the technology rather than against it, battling the ‘don’t tell me what to do’ culture that so often stymies transformation. It may also reduce staff churn, another thorn in the side of continuous improvement: staff who are trusted to interpret AI in turn trust their employers more.

The question that couldn’t be answered? Who, within the C-suite, is responsible for AI. The CIO, again? A Chief Digital Officer? A chief of AI, or of data? As one Chief Data Officer put it: “Every single Chief Data Officer that I know has got a different remit, a different portfolio.”

Foundational data

These are not just esoteric discussions. How leaders interpret the relationship between data and AI has very real implications for the foundational data qualities businesses need. 

To power machine learning, AI and more advanced techniques like large language models, data needs to be available. Structured data needs to be consented and filtered; leaders across the table are training and running different models, demanding flexibility from these data layers and models, and pragmatism from the teams involved.

Data is being collected across the business, in real time. How readily available is that data (from device, cloud or edge) to whatever AI you plan to use it for? The consent question also came up, with leaders looking to reduce risk, update governance (see the next section) and, of course, build trust in the whole system—for no small reason was ‘Trust’ the World Economic Forum’s theme for 2024.
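
On the consent point specifically, the pattern described amounts to filtering at the data layer, before any record reaches a model. A minimal sketch, with an entirely hypothetical schema and consent flag:

```python
# Minimal sketch (hypothetical schema): consent filtering at the data layer,
# so only explicitly opted-in records ever reach an AI pipeline.
import pandas as pd

events = pd.DataFrame({
    "customer_id":  [101, 102, 103],
    "consented_ai": [True, False, True],   # hypothetical opt-in flag
    "page_views":   [14, 3, 22],
})

# Filter once, centrally, rather than inside each model pipeline.
training_ready = events[events["consented_ai"]].drop(columns=["consented_ai"])
print(training_ready)  # customer 102 never flows downstream
```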

In short, executives want more data availability, in real time, and they hope AI, or CEOs’ renewed interest in this space, will help them finally break through the silos within the business that cause bottlenecks and hamper data flows and transparency.

Pace was also mentioned. How can we be more experimental, spin up initiatives faster, whilst adhering to an evolving compliance landscape? At the very least it was important for those around the table to recognise they all face the same issues. 

Governance 

Our leaders moved on to how they are experimenting with governance models. Inclusivity and iteration were two terms mentioned, for example.

“We’re conducting a DPIA [data protection impact assessment] right at the inception point, alongside regular check-in points for the DPO [data protection officer],” said one executive, transparently explaining that it was an experiment to see whether maintaining the governance stream throughout the process works better for everyone. “It allows the DPO to ask the right questions at the right time, continuously.”

Not all leaders agreed with the premise. Isn’t this too much for the DPO? we heard. What happens with multiple projects running at once? Will governance break down as the DPO becomes too lenient to cope with the volume of work, or will they grind AI experimentation to a halt?

It is a complex puzzle, others agreed. Any project longer than six months fails to take into account the changeability of AI. Too little communication between the data scientists and engineers and the DPIA and adjacent talent, and you run the risk of an inefficient process; work for nothing, in effect. Too much, and frustrations mount: nobody likes being told what to do too many times, we were reminded.

Some leaders also believe the burden we are placing on data scientists is too great. 

“Just because they can write good code, why are they deciding the boundary of what’s right and wrong?” Why are they essentially in charge of front- to back-end development when integrating data, AI, and the structures required to protect us, too? “That role is massively over-optimistic.”

That perspective was thrown into sharp relief as the topic of bias in data emerged. 

To what degree is data biased, and, more controversially, are there instances where biased data is necessary? The answers may lie in a conversation from later in the lunch, as leaders considered the approaches taken to using data and AI: when using them to make predictions or qualifications, where and when does the concept of fairness come into play? Data-driven facts do not always sit comfortably with our interpretation, or expectation, of fairness. Some leaders are bringing non-technologists to the table to consider these tricky questions; relationships with legal, product, marketing and so on may help engineers solve problems in the data that have not been solved in the business.

In further commentary, we heard that bias and “hallucination” (when generative AI models confidently give incorrect information) tend to go hand in hand. Depending on how models have been trained, and with what data, anomalies will inevitably occur, rooted in poor-quality data or in bias inherent in either the reference data or the way questions are constructed. Solutions involve creating boundaries for your models, applying predefined filters and rigorously testing your assumptions; these should be key parts of anyone’s AI deployment roadmap.
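
What might those boundaries and predefined filters look like in practice? A deliberately simple, hypothetical sketch (the topic list, patterns and function are invented for illustration): a pre-release check on a model’s output, with the guardrails themselves unit-tested:

```python
# Minimal sketch (hypothetical names): boundary and filter checks applied to a
# generative model's answer before it is shown to a user.
import re

ALLOWED_TOPICS = ("pricing", "coverage", "claims")          # boundary: stay on-domain
BLOCKED_PATTERNS = [re.compile(r"\bguaranteed\b", re.I)]    # filter: banned claims

def passes_guardrails(answer: str, topic: str) -> bool:
    if topic not in ALLOWED_TOPICS:
        return False    # outside the model's approved boundary
    if any(p.search(answer) for p in BLOCKED_PATTERNS):
        return False    # predefined filter caught risky language
    return True

# Rigorously test your assumptions: the guardrails themselves need tests.
assert passes_guardrails("Coverage depends on your policy.", "coverage")
assert not passes_guardrails("The payout is guaranteed.", "claims")
```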

Ethics and accountability

Perhaps the largest portion of the lunch debate was dedicated, in some way, to the ethics of using AI, of data, and the structures and accountability therein. It is not surprising. With many across the industry still struggling to keep up with the technology, it is increasingly the remit of the technology leader to educate the business, too. To put it another way, marking their homework whilst also setting the exam questions.

When it comes to AI, can we? Should we? 

“My personal opinion is that private data is just as important to society as private property. It’s a fundamental right and I think, as AI gets ever more powerful, ensuring that we can keep our personal data and our corporate data for the good of the people [is vital],” one leader said.

“I think that the principle is right,” another responded, “and nobody in the room would disagree with you when you say it’s as important as private property. But the problem is, it didn’t start from that premise. The way private property was enshrined in law right from the beginning [no matter where you live] was always, typically: your home is your home.

“Private data is different. If you gave away your digital platform for free, how would you expect people to pay for it since they’re used to the frictionless experience?”

As we heard so often during the lunch, the genie is out of the bottle.

More thought, not more rules 

As the leaders dissected data and AI and their relationship, the practices behind sound data foundations, and the governance principles being enacted, one of the closing points seemed to encapsulate what was needed: to get the insights you want from the data you have and the AI you’re investing in, a transformation of thought is required.

Refresher courses for leaders on data ethics frameworks can be incredibly useful. Such frameworks do not tell you right from wrong; they encourage you to think about it critically. What are your data considerations, and do they serve the business as well as the customer or client? They make you consider your biases objectively: how will you account for them and compensate for them?

In short, it is a framework for giving your whole data architecture more thought, not more rules.

This was well illustrated by an example shared by one of the guests.

“It’s a classic example used in insurance. It was an advanced analytics model designed to work out how much to charge on motor insurance premiums. An experiment was done: the same data was entered for two quotes, except for one data point that was changed. The first premium came out at £500 a year and the second premium came out at £1500 a year.

“Anyone want to guess what the one data point they changed was? It was First Name. So the first one was John Smith for £500 and the second one was Mohammed Smith for £1500. Your first thought is, my God, it’s racist. Well, actually, no. It’s incompetence. 

“What has happened is that the data scientists or actuaries fed all of the claims data into the model, including the first name of the person, which was never going to be relevant. The model spat it out as an indicator (Mohammed is among the most common male first names in the UK), but it is proxy data. It’s the incompetence of not having thought through how inputting first-name data would impact the model, in correlation-versus-causation terms. It is this sort of simple mistake that causes a huge amount of problems.”
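
The remedy the speaker implies is mundane: exclude identifier fields with no plausible causal link to risk before the model ever sees them. A minimal sketch with invented data (pandas and scikit-learn), contrasting the leaky approach with an explicit feature allow-list:

```python
# Minimal sketch (invented data): keeping proxy variables such as first name
# out of a pricing model via an explicit feature allow-list.
import pandas as pd
from sklearn.linear_model import LinearRegression

claims = pd.DataFrame({
    "first_name":   ["John", "Mohammed", "Sarah", "Aisha"],  # identifier, never causal
    "vehicle_age":  [3, 3, 7, 7],
    "annual_miles": [8000, 8000, 12000, 12000],
    "premium":      [500, 1500, 650, 1700],                  # historical premiums, bias baked in
})

# The anti-pattern: one-hot encoding the name hands the model a proxy for ethnicity.
leaky_X = pd.get_dummies(claims[["first_name", "vehicle_age", "annual_miles"]])

# The fix: an explicit allow-list of features with a plausible causal link to risk.
ALLOWED_FEATURES = ["vehicle_age", "annual_miles"]
X, y = claims[ALLOWED_FEATURES], claims["premium"]

model = LinearRegression().fit(X, y)
print(model.predict(pd.DataFrame({"vehicle_age": [3], "annual_miles": [8000]})))
```

Identical risk profiles now yield identical premiums, whatever the name on the form.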

Post Author

Peter Stojanovic
Peter Stojanovic is the Editor of HotTopics and its award-winning industry event for C-suite leaders, The Studio. He is a highly experienced online journalist and moderator across business and technology settings, and interviewer of FTSE 100 & 250 and Fortune 500 business leaders.
