There’s no disputing the impact of the growing usage of Artificial Intelligence throughout the enterprise. Organizations are becoming more analytically inclined, automation is rampant, and business users are empowered to accomplish more at a greater scale than they previously could.
Nonetheless, there’s another side to the pervasive deployment of cognitive computing technologies throughout the data ecosystem, particularly in terms of the mounting ease, accessibility, and utility of advanced analytics. The increasing demand for predictive insight—and the data required to facilitate it—has very real repercussions for data privacy and regulatory compliance that, if not properly addressed, can restrict organizations’ use of AI.
Many firms are attempting to balance the data demands for AI with what Privacera SVP of Marketing Piet Loubser termed the “let’s stay out of trouble side of things. Meaning, there’s compliance, regulatory things, and internal controls. As much as we think externally of regulations from on top, the majority of organizations have much more stringent things going on inside their four walls.”
Violations of any of the concerns described by Loubser result in expensive legal and regulatory penalties with the potential to undermine any value gained from advanced analytics. Organizations are struggling to grant governed, secure access to data at the speed of contemporary business without overburdening IT teams with an inordinate amount of requests in what’s become a self-service, data-driven world.
Addressing these issues calls for a new approach, one premised on AI’s necessity to the modern enterprise, that combines distributed data governance and access with the fortifications of traditional centralized methods. The delegated data governance framework balances these competing needs, availing organizations of the sundry sources necessary for meaningful AI while retaining unassailable control over access to them.
According to Privacera CEO Balaji Ganesan, when properly implemented with the correct framework, this model “automates data discovery and workflows with central reporting and auditing capabilities while data stewards, who are versed in regulations and business requirements, adapt centralized policies and processes for individual business lines.”
Flexible, Governed Access
The delegated data governance paradigm achieves two remarkable feats. It frees IT teams unfamiliar with the specifics of data and their business use cases from serving as a centralized governance gatekeeper for sources—a role that prolongs time to value in time-sensitive cognitive computing deployments. It also creates timely, decentralized access to data sources with a model in which data stewards, familiar with data and their business value, provision access as needed.
The first of these achievements resolves the time-honored issue with conventional central access approaches in which “you create this whole data lake and we can process massive amounts of data, billions of billions of records for a mass of people, and we say the only access is this little thin straw,” Loubser observed. “By delegating you’re opening this up and we now have 20 straws, one for every department, every group.”
As Loubser indicated, the delegated model hinges on local data stewards well versed in business goals granting data access to their specific business units. However, that access is still based on centralized data governance, data privacy, and regulatory compliance policies that are locally enforced for more time-sensitive access by those familiar with data: the stewards.
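The mechanics Ganesan and Loubser describe—central policies defined once, stewards granting access locally against them, and every decision landing in one shared audit trail—can be sketched in a few lines of Python. The class and field names below are hypothetical illustrations of the pattern, not Privacera’s actual API:

```python
# Hypothetical sketch of delegated data governance: central policy,
# local steward grants, and a shared audit log. Not a vendor API.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class CentralPolicy:
    """Defined once by central governance; enforced for every grant."""
    classification: str          # e.g. "PII"
    allowed_purposes: frozenset  # purposes under which access is permitted


@dataclass
class AccessGrant:
    user: str
    dataset: str
    purpose: str


class Steward:
    """A business-unit steward grants access locally, but each grant is
    checked against central policy and appended to a central audit log."""

    def __init__(self, unit, policies, audit_log):
        self.unit = unit
        self.policies = policies    # dataset name -> CentralPolicy
        self.audit_log = audit_log  # shared, append-only list

    def grant(self, user, dataset, purpose):
        policy = self.policies[dataset]
        approved = purpose in policy.allowed_purposes
        # Every decision, approved or not, is centrally auditable.
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "unit": self.unit,
            "user": user,
            "dataset": dataset,
            "purpose": purpose,
            "approved": approved,
        })
        if not approved:
            raise PermissionError(f"{purpose!r} not permitted for {dataset}")
        return AccessGrant(user, dataset, purpose)
```

In this sketch, each department gets its own `Steward` (Loubser’s “20 straws”), while the policy definitions and the audit log remain centralized—the “centralized pane of glass” through which governance personnel can still review access.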
Calculated Advantages for Citizen Data Science
The boons of this approach are manifold, assisting nearly every level of personnel involved with it. IT teams get the benefit of no longer being what Loubser termed “the organization of no.” They’re no longer in the data path of data scientists, business analysts, and end users—or in a position in which they protract time to access because of security concerns. However, as Loubser mentioned, IT still “does what it’s good at,” which is configuring any variety of cloud-based or on-premises tools and platforms for such users to process or analyze data.
The decentralized governance model benefits governance personnel by enabling them to still implement centralized policies. Moreover, Loubser mentioned that IT and other governance personnel can still monitor data access across distributed source systems in real time through “a centralized pane of glass.”
The ultimate triumph, of course, is reserved for the beneficiaries of AI—the end users and businesspeople gaining from the democratization of data science this paradigm produces. “We now find data scientists in business groups,” Loubser revealed. “Marketing could have a data scientist. Embedded in the organization you can have your own data engineers.” Plus, the citizen data scientist phenomenon shows few signs of abating, and is perhaps the ultimate expression of the democratization of this field.
“Some [vendors] are saying the days of data science being the domain of the high end PhD in statistics is over,” Loubser confirmed. “We want to empower the citizen data scientists and BI analysts. There are thousands of these people throughout organizations. Maybe there are 10 of the high end data scientists.”
The delegated governance model supports this trend of self-service analytics, citizen data scientists, and the burgeoning clamor for AI throughout business units in nearly every aspect of the enterprise. It does so by combining the best facets of the traditional centralized approach to data governance with decentralized capabilities in source systems. Consequently, AI curators and end users get timely access to data for insights that help them do their jobs better than they otherwise could.
Featured Image: NeedPix
Jelani Harper is an editorial consultant servicing the information technology market. He specializes in data-driven applications focused on semantic technologies, data governance, and analytics.