The foundations of trust are being tested across society. Technological developments in the public sector undoubtedly enhance communication, and knowledge now flows at a pace unimaginable ten years ago. On top of this, artificial intelligence is revolutionizing the way we communicate and do business.
A strong demand for prospective and retroactive openness arises when it becomes unpredictable how such systems make their judgments. Prospective transparency educates people in advance about data protection and about how the system handles their records: it outlines how the AI system generally makes choices. Retroactive transparency, in turn, may be regarded as a mechanism of accountability for citizens' rights.
However, are these advancements consistent with the fairness and transparency requirements under the GDPR?
O'Neill, in Trust, Trustworthiness and Accountability, argues that transparency on its own does not create trust: trust is well placed only in those who are demonstrably trustworthy, and it is intelligent accountability, rather than ever more disclosure, that supports such judgments.
Ethical Recognition of Transparency as a Relational Concept
A detailed examination of transparency in data protection legislation, especially the GDPR, brings out both the legal transparency requirements and their ethical underpinnings, revealing crucial relationships between transparency, informed consent, individual autonomy, and meaningful human agency over public records.
From this perspective, system developers should notify users of AI-based decisions and the underlying logic. The GDPR sets out how the data controller's information must be made accessible to the user.
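As a rough illustration of what making such information accessible could look like in practice, the sketch below models a minimal transparency notice attached to an automated decision. The field names are assumptions loosely inspired by GDPR Articles 13-15, not a compliance template.

```python
from dataclasses import dataclass, asdict

@dataclass
class TransparencyNotice:
    """Information a data controller might surface alongside an automated
    decision. Fields are illustrative, loosely modeled on GDPR Arts. 13-15."""
    controller: str           # identity of the data controller
    purpose: str              # why the personal data is processed
    logic_summary: str        # plain-language summary of the decision logic
    automated_decision: bool  # whether the outcome was fully automated (Art. 22)
    contact: str              # where the data subject can object or seek review

def render_notice(notice: TransparencyNotice) -> str:
    """Format the notice as plain text for display to the user."""
    return "\n".join(f"{key}: {value}" for key, value in asdict(notice).items())

notice = TransparencyNotice(
    controller="Example Ltd.",
    purpose="credit scoring",
    logic_summary="Score combines payment history and income; low scores trigger manual review.",
    automated_decision=True,
    contact="privacy@example.com",
)
print(render_notice(notice))
```

The point of the structure is that the "underlying logic" becomes a first-class field the system must fill in, rather than an afterthought buried in a privacy policy.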
Felzmann et al., in Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns, write:
“Reliance on democratic and legal forms of accountability which operate from outside of organizations themselves is particularly relevant to achieve effective accountability of organizations towards their service users, given their comparative lack in power. The state’s effectiveness in ensuring its citizens’ rights through means of regulation and legislation, such as the GDPR, and associated enforcement activities, grounds not just the state’s own trustworthiness but will also determine whether citizens can trust transparency expressions of service providers. In order for the GDPR transparency requirement to fulfill this trustworthiness function, greater clarity will need to be developed regarding what constitutes appropriate implementations of transparency.”
However, business and IT executives bear a responsibility that goes beyond complying with rules such as the GDPR or limiting data gathering: they must actively deliver data openness and trustworthy data.
Privacy and Trust - Maintaining the Balance Between the Two
The objective is to create a privacy law that protects people from the misuse of personal information in AI, without excessively constraining AI research or encroaching on data protection regulation in complex social and political areas.
The AI side of the privacy debate frequently raises the limitations and failures of AI systems: predictive policing, which can disproportionately affect minorities, or Amazon's failed experiment with a recruitment algorithm, which replicated the company's existing, disproportionately male workforce.
“Addressing the algorithm’s discrimination poses fundamental issues about the extent of privacy law. For automated decision making that is contrary to the interests of the individual concerned, the use of personal information on these characteristics, either openly, or – more often or less clearly – via proxies implies privacy interest in regulating the usage of information.”
The efficiency of transparency for AI is limited. For instance, even when an AI system is transparent, it is still up to users to comprehend how the advanced system works. Furthermore, disclosing algorithms, and even how they function, might leave data vulnerable to attackers who can manipulate the algorithm for unfair ends.
Under the GDPR, the controller and the processor must also take reasonable measures to reduce risks to user data. Risks to data confidentiality include the unauthorized access, disclosure, or transfer of personal data.
To secure their clients' private data, AI developers must emphasize data protection, since more data makes the analysis more granular. Transparency of AI should not entail data becoming available in public records, and AI operators can prevent this by implementing suitable protections.
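One such protection is pseudonymization: replacing direct identifiers with keyed hashes before analysis, so records can still be linked without exposing the raw values. A minimal sketch, with the caveat that the key handling shown is illustrative; in practice the secret would live in a managed secrets store, not in source code:

```python
import hashlib
import hmac

# Illustrative only: in a real system this key comes from a secrets manager.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The same input always maps to the same token, so records remain
    linkable for analysis, but the raw identifier is never stored.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"user_id": "alice@example.com", "score": 0.87}
safe_record = {**record, "user_id": pseudonymize(record["user_id"])}
```

Using a keyed hash rather than a plain hash matters: without the key, common identifiers such as email addresses could be recovered by simply hashing guesses and comparing.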
Pioneers of the AI Interplay - EU
In order to work on Ethical Guidelines for Trustworthy AI, the European Commission has set up a high-level expert group on artificial intelligence. Internationally, the United Nations established an AI-Global Governance Platform, as part of the Secretary-General's Strategy on New Technologies, to investigate the global policy issues of AI.
According to a whitepaper from Deloitte,
“For privacy issues, organizations must adhere to GDPR regulation, the European legal framework for how to handle personal data. But there is nothing similar for ethics in AI, and so far, it looks like there will only be high-level guidelines that leave a lot of room for interpretation. Therefore, there is a lot of responsibility for companies, organizations, and society to make sure we use AI ethically.”
Furthermore, the EU has set out plans to establish a central database for high-risk AI systems, after leaked documents listed “several AI implementations deemed as high risk, covering the use of AI in prioritizing dispatch of emergency first-response services, assigning people to educational and vocational training institutions, as well as several systems for crime detection and those used by judges.”
Failure to do so would incur administrative fines “up to €20m, or 4% of the offender’s total worldwide annual turnover for the preceding financial year.”
Once the EU’s plans are implemented, AI system suppliers would be required to provide an in-depth framework and information about the algorithms they use and the choices and assumptions those algorithms infer. This would enhance trustworthiness and reduce the privacy issues that creep in once such algorithms are embedded in enterprise applications.
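What such a supplier disclosure might contain can be sketched as a simple structured record. Every field below is an assumption drawn from the article's description (algorithms, choices, assumptions, test data), not the draft regulation's actual schema:

```python
import json

# Hypothetical disclosure record for a high-risk AI system. The system name
# and field set are illustrative, not taken from the EU draft regulation.
disclosure = {
    "system_name": "dispatch-prioritizer",
    "risk_class": "high",
    "algorithm": "gradient-boosted decision trees",
    "training_data": "historical emergency-call records (description only, no raw data)",
    "assumptions": [
        "call metadata is a reliable proxy for urgency",
        "historical dispatch labels are unbiased",
    ],
    "test_datasets": ["held-out sample of recent calls"],
}

# Serializing to JSON makes the record machine-readable for a central database.
print(json.dumps(disclosure, indent=2))
```

The value of registering even this much is that regulators and auditors can query what a system claims to do before examining whether it actually does it.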
This revolutionary step taken by the EU will function as a challenge for the world that still relies on AI systems that manipulate human behavior through indiscriminate surveillance. With these essential steps, it will be obligatory for AI systems to disclose information about the data models, algorithms, and test databases for verification.
Isn’t this a much-needed step that sets the bar high for the rest of the world to take a closer look at their government regulations?
Setting the Pace for the Future
The links between people, their data, and how that data is utilized, understood, and handled by companies and governments are at the heart of this shifting artificial intelligence environment. It is still early days, and only time will tell how quickly people take a closer look at the confidentiality of the applications using their data.
These organizations may be responsible for our purchases, for managing our online contacts, or for deciding how our information is utilized.
It would be interesting to look at what the future holds for the still controversial scope of AI.