In 2023, the global digital economy evolved markedly, driven by advances in AI, data governance, multilateral digital capacity building, regulatory sandboxes and a series of legislative developments.

In this first instalment of a two-part ‘Derisking Data’ blog series, we examine 5 key regulatory developments that will redefine the contours of AI and data usage for differentiated CX in 2024 and beyond:

  • APRA’s 100 Critical Risk Data Elements Pilot Study 

The Australian Prudential Regulation Authority (APRA) is the prudential regulator of the Australian financial services industry. Recognising the importance of mitigating data risk to a resilient banking infrastructure, APRA launched its 100 Critical Risk Data Elements (CRDE) Pilot Study. The release of the CRDE Pilot Study’s findings in late 2023 was timely, coming after a spate of high-profile data breaches that exposed cyber vulnerabilities. That context underscores the imperative of effective data governance throughout the data lifecycle to mitigate cyber risk, improve consumer welfare and enhance market efficiency.

The multiyear CRDE Pilot Study examined the data risk management practices of a select group of retail banks. Its key findings highlighted trends in the data management practices of APRA-regulated entities, as well as areas where data governance could be strengthened. For prudentially regulated entities, APRA identified enterprise-level data governance, a scalable technology roadmap and offering data as a product as the key best practice recommendations arising from the Pilot Study.

  • New Zealand Consumer Data Right 

Following in the footsteps of its Pacific neighbour Australia, New Zealand (NZ) is forging a path towards its own Consumer Data Right (CDR) regime. From a policy perspective, the NZ CDR shares the objectives of Australia’s regime: to improve consumer choice, value and welfare, with a view to increasing innovation and market competition. The Customer and Product Data Bill provides a framework for the NZ CDR that broadly mirrors Australia’s CDR regime, with deployment to commence in the retail banking industry. For organisations operating in both Australia and NZ, learnings from Australia’s CDR implementation can guide compliant participation in the NZ regime. Importantly, the NZ CDR will give consumers a right to data portability, and a fit-for-purpose technology infrastructure will be essential to fulfilling that right.

  • The Australian Government Interim Response to the Safe and Responsible AI in Australia Consultation

Australia is a signatory to the Organisation for Economic Co-operation and Development’s (OECD) AI Principles, which seek to advance innovative and trustworthy AI grounded in human rights and democratic values. More recently, the Australian Government’s Safe and Responsible AI in Australia discussion paper sought to identify gaps in the current AI governance framework, as well as mechanisms to promote the development and adoption of AI that advance Australia’s global competitiveness.

On 17 January 2024, the Australian Government released its interim response to the Safe and Responsible AI in Australia consultation, drawing several conclusions that will inform the future direction of AI regulation. The interim response finds that the existing legal framework is inadequate to address AI risks, that a risk-based and technology-neutral approach to AI regulation is needed, and that obligations should be mandated via legislation rather than left to voluntary commitments. Any prospective legislation is expected to regulate the use of AI in high-risk settings, with appropriate testing, transparency and accountability mechanisms likely to be imposed.

Importantly, the force-multiplier effect of AI technologies means they can yield either productive or harmful outcomes at speed and scale, so the consequences of AI adoption, whether beneficial or detrimental, are profound for businesses and consumers alike. Measures to regulate AI use will be imperative for minimising harm and protecting consumer welfare, for example with respect to risks arising from algorithmic bias. From a corporate responsibility perspective, the attendant reputational risk is too great to overlook. Prudent organisations will act now to create a robust AI governance framework by implementing cybersecurity protections for AI use, ensuring human oversight of AI systems and establishing a trusted data foundation to power productive and ethical AI adoption.

  • EU Data Act

The European Union (EU) continues to cement its leadership in innovative data regulation aimed at creating competitive, data-driven digital markets built on shared data value. The EU Data Act gives users of connected devices access to the data generated by their use of those devices, on terms comparable to those enjoyed by manufacturers and service providers. Amid the rise of the Internet of Things (IoT), the Act distinguishes between product data and related service data, focusing attention on the functionalities of data and the benefits of distinct data sets for the end user. Critically, the EU Data Act seeks to strengthen the bargaining power of consumers to create fair and efficient digital markets for all.

  • US Executive Order on Safe, Secure and Trustworthy Artificial Intelligence

The landmark US Executive Order on Safe, Secure and Trustworthy Artificial Intelligence (EO) addresses the broad ambit of issues arising from the proliferation of AI across all spheres of society. Key aspects for companies include reporting obligations in relation to large AI models and to large computing clusters that meet specified technical thresholds. Importantly, the EO’s directive to the US Department of Commerce’s National Institute of Standards and Technology (NIST) to develop best practice standards and reporting requirements will give companies guidance for the ethical and compliant deployment of AI tools and systems. From a privacy perspective, the directive to NIST will also require the creation of guidelines through which executive agencies can deploy differential privacy as a privacy-enhancing measure, including in relation to AI use.
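
To make the concept concrete, here is a minimal, illustrative sketch of differential privacy in Python (using only NumPy). The function name, dataset and epsilon values are hypothetical and are not drawn from the NIST guidelines; the point is simply that calibrated random noise is added to an aggregate query so that the published result reveals very little about any single individual.

```python
import numpy as np

def dp_count(values, predicate, epsilon=1.0):
    """Differentially private count using the Laplace mechanism.

    A counting query has sensitivity 1 (one person changes the count by
    at most 1), so Laplace noise with scale 1/epsilon bounds what the
    published figure reveals about any single individual.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: publish how many customers opted in to marketing without
# revealing whether any specific individual did.
opt_in_flags = [True, False, True, True, False, True]
print(dp_count(opt_in_flags, lambda opted_in: opted_in, epsilon=0.5))
```

A smaller epsilon adds more noise, giving a stronger privacy guarantee at the cost of accuracy.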

Derisking Data with a CDP

For organisations, navigating regulatory complexity to advance innovation-led growth requires derisking data to create shared value in an interconnected digital ecosystem. 

Tealium’s 2024 State of the CDP Report (Report) found that organisations with a customer data platform (CDP) achieve more value from AI technologies (80%) than those without a CDP (51%). The Report also highlighted that 91% of companies with a CDP believe the tool provides the foundation of data quality essential to enabling AI/ML projects. Moreover, the Report identified privacy compliance as the third most valuable CDP use case, with 94% of CDP users of 4+ years indicating preparedness to adapt to evolving privacy regulations. As compliance with privacy obligations can impede efficient AI adoption, a CDP is a prudent investment that will pay dividends in the long-term digital transformation of an organisation.

Building Better Data Practices

Derisking data begins with weaving better data practices into the fabric of an organisation. Organisations that do so will be best positioned to deploy AI technologies rapidly, with accelerated innovation and agility built upon privacy and security.

To implement better data practices with a CDP, organisations can take the following 3 key steps:

  1. Data governance – Centralised data governance and enterprise-level data practices enable a consistent data taxonomy that embeds data management within the organisation’s operating framework. In turn, this approach minimises data risk and maximises operational efficiency for data-driven commercial growth. Tealium’s industry-leading CDP can provide a trusted foundational data layer to power enterprise-wide data initiatives at scale.
  2. Data as a product – Unlocking the full value of data requires treating data as a strategic asset and ensuring it is fit for its intended purpose. In turn, organisations will be positioned to adhere to the privacy principle of purpose limitation by collecting and processing personal data only for specified, express and legitimate purposes, thereby reducing compliance and security risk (see the sketch after this list). With Tealium’s CDP, organisations can enhance data quality and insight generation in real time to enable data as a product, and empower their workforces to derive data value with minimal risk.
  3. Multiyear technology roadmap – Establishing a trusted data foundation built on scalable technology enables organisations to deliver long-term strategic data initiatives that can adapt to evolving business needs in a dynamic market. Tealium’s scalable CDP has been adopted by some of the world’s most innovative companies as the foundation of a robust and flexible data architecture that breaks down data silos and builds business resilience.
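
As a concrete illustration of the purpose limitation principle referenced in step 2, the short Python sketch below shows one simple way to enforce it at the point of data access. The field names, purposes and helper function are hypothetical and are not part of any Tealium API; they merely demonstrate the idea of tagging each attribute with the purposes for which it was collected and filtering records before they reach a downstream consumer.

```python
# Hypothetical illustration of purpose limitation: each attribute is tagged
# with the purposes for which it was collected, and a downstream consumer
# receives only the attributes permitted for its declared purpose.

ALLOWED_PURPOSES = {
    "email":       {"service_messages", "marketing"},
    "postcode":    {"analytics", "fraud_prevention"},
    "card_number": {"payment_processing"},
}

def filter_for_purpose(record: dict, purpose: str) -> dict:
    """Return only the fields whose collection purposes cover `purpose`."""
    return {
        field: value
        for field, value in record.items()
        if purpose in ALLOWED_PURPOSES.get(field, set())
    }

customer = {"email": "jane@example.com", "postcode": "2000", "card_number": "4111"}

print(filter_for_purpose(customer, "marketing"))           # {'email': 'jane@example.com'}
print(filter_for_purpose(customer, "payment_processing"))  # {'card_number': '4111'}
```

In a real deployment this kind of policy would be enforced centrally and consistently, rather than ad hoc in application code.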

Next week, part two of the ‘Derisking Data’ blog series will explore 5 additional regulatory developments set to reshape the use of AI and data in the future CX landscape.

Learn more about how a CDP can improve competitive differentiation in a dynamic regulatory environment by accessing our complimentary webinar: Tealium x Deloitte | The Future of Trust.

Post Author

Anna Koleth
Anna is the Head of Product & Content Marketing, APJ at Tealium

