A lack of privacy knowledge among teams is the number one obstacle enterprises face when building a resilient data architecture. There is a significant knowledge gap between the people creating the data architecture and those spearheading the privacy efforts, and in most cases neither the legal and privacy teams nor the engineering and architecture teams know how to close it. It can be challenging for people without a data privacy background to understand how data is manipulated, used, and orchestrated, which is where the gap comes from.
To overcome this challenge, companies need to start investing in privacy knowledge. Here are four ways to do that:
1. Require team members to get certified by the IAPP (International Association of Privacy Professionals), a resource for professionals who want to develop and advance their careers by helping their organizations successfully manage risk and protect their data. This is a vendor-driven approach that allows team members to certify themselves. The IAPP offers a variety of training courses and certifications that teach people what they need to know about privacy for their specific roles. For example, an engineer might learn how to code in a privacy-conscious way, or a manager might learn how to execute a project according to privacy requirements.
2. Adapt organizational structures by introducing hybrid roles that support privacy needs. Once a company has invested in training and teams understand privacy requirements, organizational structures may need to change to accommodate hybrid roles. These could include an engineer focused on privacy or a privacy expert focused on technology, rather than having engineering teams work purely in silos with other engineers.
Any team that processes or works with data, such as marketing, customer success, or sales, should have one of these hybrid roles to help spearhead privacy efforts across the company. The person in this role can support their team and share knowledge from a privacy perspective, helping catch non-compliant situations before they become a problem. To explain the impact these hybrid roles can have on an organization, I like to use this college example:
You have one person in a class who is good at writing, and one person who is good at presenting. If you only allow each of them to focus on what they are strong in, they will never improve. However, if you put them together, they can combine their strengths and help each other.
This same concept applies to privacy. Many companies have privacy experts who move compliance efforts forward, but without technology experts working alongside them, whatever data architecture the company builds will never work efficiently. This hybrid approach is more efficient and strategic than the outdated, siloed approach many teams still operate under today.
A bold prediction I have for the future: there will no longer be engineers or architects without a fundamental knowledge of privacy. Basic privacy knowledge already is, or soon will be, a requirement for certain roles at many companies. Because of this, teams will change completely as these expectations become mainstream and new hybrid roles are introduced.
3. Focus on collaboration and allow teams adequate time to work on projects together. Often, teams are pushed to deliver quickly and compliance is expected, but they are not given enough time to align and combine their knowledge before something is brought to market. This can result in a costly cleanup effort if the project is built in a non-compliant way. Allowing teams more time to collaborate and share knowledge at the beginning of a project is a significant, time-consuming investment, but the cleanup work for a non-compliant project will end up costing more.
Siloed teams cannot work together efficiently on a rushed timeline, so companies should plan extra time ahead of a project to discuss and evaluate compliance needs properly when building data architecture. In practice, this means that if a big project is scoped for six months, allow two to three months before it starts for the necessary teams to collaborate. Teams need time to dive deep into the regulatory space, develop a basic understanding of the privacy implications of a given project, and determine which requirements must be fulfilled to build a resilient platform.
4. Make compliance the first step in a project, not the last. Compliance should start at the idea stage, with the privacy team brought in at the outset of any new project. When a project kicks off, time and effort should be invested in a deep dive into privacy considerations and in understanding how to build a compliant data architecture. In many instances, teams build a solution, go through a privacy vetting process, realize it is not compliant, and are then surprised when privacy and legal block it. If companies flip that scenario and give privacy and legal enough time to understand the considerations at the beginning of a project, they may end up going to market faster.
At Tealium, we have adjusted our approach to start with compliance, and it has helped break down silos tremendously. When a product owner or engineering tech lead has an idea, someone from the privacy team in a hybrid role immediately goes into privacy discovery and partners with them to understand the idea, review their research, and flag possible implications. This approach works for us because before we even kick off an idea to a larger audience, we already know at a very early stage where possible risks might lie. It costs a little time and effort, but it reassures us that the approach is the right one, rather than relying on trial and error, which is not something that should ever be attempted with privacy.
Additionally, we acknowledge that an idea might be really good, but if technical feasibility is the only aspect being considered, a company will end up in a non-compliant state and have to retrospectively rework privacy into the development process. Simply asking whether something is compliant is the wrong approach; you should be asking whether it is the right thing to do. Just because something is compliant doesn't mean we actually want to pursue it; maybe we can go beyond that. When developing anything, we always ask ourselves if it is the right approach for our users.