Each day, we hear more about how Artificial Intelligence (AI) is being used in business, often with pressure coming from the C-suite. Just as often, we hear about AI failures that result in a poor customer experience. So, how can you make sure your AI-powered projects work well? It starts with the data. For models to be effective, the right data is required (see our latest product, Tealium for AI). A thoughtful strategy to collect data that is consented, filtered, and activated in real time is critical to delighting your customers.

I sat down with two seasoned leaders to discuss AI and the steps you should be taking today.

AI is officially at the peak of its hype cycle, and a lot of businesses are justifiably jumping in based on its potential. However, every day seems to bring another scandal or embarrassment. What’s going on out there? What strategies can companies adopt to mitigate risks and ensure responsible and effective use of AI technologies?

AI is a game-changer. It is here to stay and is already showing how transformative it can be. The hype and clear value-add have made it a top priority for all CEOs around the world. 

As a result, companies are jumping in, experimenting, and trying to be early to market with their AI offerings. That is the biggest issue right now. AI as a technology is still evolving, and it is not yet ready to simply be put in front of customers as a brand channel.

While some things are working very well, like using Generative AI (GenAI) to help with content creation, coding, text-related deliverables, and content productivity in general, other things are not ready for prime time. These models still need to be trained and fine-tuned to represent a brand accurately. They often produce results that are not explainable, and there are ethical issues with bias and copyright, as well as security issues around locking these models down. Recent examples, such as Google’s Gemini producing historically inaccurate images and a Chevrolet dealership’s chatbot agreeing to sell a car for a dollar, show that these products need more work (and more controls).

On the flip side, using GenAI to work on specific tasks, like refining, translating, summarizing, or modifying existing content/code is working well. This can give the false sense that this technology is ready. Part of it is ready, but the larger part needs to be figured out. 

To force tech companies to prioritize these challenges, the EU has passed the EU AI Act, which requires that safety, ethics, responsibility, and transparency be taken into account as the technology evolves.

What’s the lesson that brands should be learning? Does this mean AI is too much of a challenge and you should wait? How crucial is data quality and consent in the success of AI-powered projects, and what steps can businesses take to ensure they have the right data foundation?

Waiting isn’t an option; AI is here to stay, and it’s an enormous opportunity for brands to embrace. The technology is ready, but what’s often not ready is the data. These AI models will consume whatever data they are given and produce output accordingly; the “garbage in, garbage out” rule still applies. The AI revolution happened so quickly that brands are still catching up on the fundamentals: consent, the collection of the data, protecting and transforming the data, data residency, and more. With AI top-of-mind from the board level down, every brand is being told to do more with it. So, what can brands do now to prepare for AI? Start with the data. You can use Tealium for AI to power AI initiatives with consented, filtered, and enriched data in real time.
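To make the “garbage in, garbage out” point concrete, here is a minimal sketch of what filtering data for consent before it reaches a model might look like. The `Event` record and its `consented` flag are hypothetical, not part of any specific product API:

```python
from dataclasses import dataclass

@dataclass
class Event:
    user_id: str
    payload: dict
    consented: bool  # hypothetical flag captured at collection time

def filter_for_ai(events: list[Event]) -> list[Event]:
    """Keep only events the user has consented to share with AI systems."""
    return [e for e in events if e.consented]

events = [
    Event("u1", {"page": "/pricing"}, consented=True),
    Event("u2", {"page": "/home"}, consented=False),
]

ai_ready = filter_for_ai(events)  # only the consented event survives
```

The point is not the code itself but the placement of the gate: consent is enforced before the data ever touches a model, not patched up afterward.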

Using the wrong data to power AI models can have resoundingly negative effects on a business. How should companies think about solving this? Where do you start?

AI is driven entirely by the data you feed into the model. That includes training and running the model and fine-tuning the results. The EU AI Act is looking to enforce controls over which data goes into AI models. The regulation seeks to force brands and tech companies to 1) explain exactly how the results were generated, 2) prove they have a legitimate reason to create them, and 3) show they have permission to use the data in their models.

The explainability of the model itself will come from the AI partner, but all the data fed into the model needs to be consented to, and there needs to be an audit trail proving how it was collected, along with the relevant data privacy signals.
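An audit trail like this can be as simple as a structured log entry recording when, where, and under which consent signals each piece of data was collected. The sketch below uses an illustrative schema of my own invention, not any regulatory or vendor-defined format:

```python
import json
from datetime import datetime, timezone

def audit_record(user_id: str, source: str, purpose: str,
                 consent_signals: dict) -> str:
    """Build a JSON audit-trail entry capturing how a piece of data
    was collected and the privacy signals attached to it."""
    entry = {
        "user_id": user_id,
        "source": source,                    # e.g. "web", "call_center"
        "purpose": purpose,                  # e.g. "ai_personalization"
        "consent_signals": consent_signals,  # e.g. {"gdpr_consent": True}
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(entry)

record = audit_record("u1", "web", "ai_personalization",
                      {"gdpr_consent": True, "ccpa_opt_out": False})
```

Keeping entries like this alongside the data itself is one way to answer the “prove how it was collected” question later, rather than reconstructing it after the fact.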

Diligence in auditability and use of consented data is already a core requirement under all data privacy regulations globally. With the oncoming tidal wave of AI, this requirement is magnified several times. 

I can imagine a future where companies will have to show exactly why someone was put into a specific audience and what data was used to generate the propensity score, along with proving consent for that use of data. On a positive note, while it seems onerous right now, it is pushing the idea of responsible and transparent data use, which is a great goal. 

Having a good data foundation is useful beyond AI. Why is collecting accurate, consented data a good exercise not only for AI, but for other areas like personalization? What areas would you highlight?

I’ve always said that real-time data collection is a strategy, not an afterthought. First, your data supply chain should be consistent, whether it feeds AI or simpler Customer Experience (CX) delivery like an email, a call-center interaction, or a conversation with a customer in the real world. Ensure consistency across your data supply chain and make sure that data is synced across your systems; then you can layer AI into those systems to personalize your CX. If you think of data as a refinement pipeline, every channel and use case benefits from this quality data (see our blog post, How To Capitalize on AI with CDP Use Cases). So, now is the time to think about your data collection strategy: make sure data is consented at every touchpoint and that everyone is working from the same consistent data. That solves the data-wrangling problem, the regulatory issues, and the biggest one of all: losing customer trust by delivering a poor experience.

To learn more about how businesses can fuel their AI initiatives with high-quality, consented, filtered data in real time, explore our Tealium for AI page.

Post Author

Natasha Lockwood
Natasha is Senior Integrated Marketing Manager at Tealium.
