Balancing AI and Security: A Jordan Imperative
Jan 27, 2025 - Last updated at Jan 27, 2025
In its drive to digitize government services, Jordan is moving expeditiously to the forefront of digital transformation in the Middle East, with initiatives like the Jordan Vision 2025 and the National ICT Strategy aimed at leveraging technology to drive economic growth and improve public services. However, the risks associated with data leakage in AI tools could undermine these efforts. If sensitive government data is exposed, it could stall progress in key sectors such as e-governance, taxation, healthcare, and education, where AI is increasingly being deployed.
A new report by Harmonic Security reveals alarming trends in data leakage through generative AI (GenAI) tools. The report, titled "From Payrolls to Patents: The Spectrum of Data Leaked into GenAI," highlights how sensitive information—ranging from customer data to proprietary code—is being inadvertently shared with AI platforms like ChatGPT, Copilot, and Gemini. For Jordanian government organizations leveraging AI tools, these findings underscore significant legal, security, and competitive risks.
For example, in e-governance, AI tools are used to streamline citizen services, such as processing applications for permits, licenses, and social benefits. If sensitive citizen data is leaked, it could lead to identity theft, fraud, and other forms of cybercrime, eroding public confidence in digital services. Similarly, in healthcare, where AI is used for patient data analysis and diagnostics, a data breach could compromise patient privacy and violate medical confidentiality laws.
According to the report, 8.5% of prompts entered into GenAI tools contain sensitive data. This includes customer data (45.77%), such as billing information and authentication credentials, employee data (26.83%), like payroll and personally identifiable information (PII), and even legal and financial data (14.88%), including mergers and acquisitions details. Alarmingly, 63.8% of ChatGPT users relied on the free tier, which often lacks robust security features and may use input data to train AI models.
For Jordanian government entities, which increasingly rely on AI for tasks like document summarization, translation, and data analysis, these findings are particularly concerning. The inadvertent exposure of sensitive data highlights the need for AI usage guidelines, or for regulation governing the implementation of AI tools, because such exposure could lead to breaches of confidentiality, regulatory violations, and loss of public trust. For instance, if sensitive citizen data or internal government communications are leaked, it could result in legal repercussions under Jordan’s data protection laws and damage the government’s credibility.
One of the key challenges in addressing data leakage is the lack of awareness among employees about the risks of sharing sensitive information with AI tools. The Harmonic report emphasizes the importance of user education as a critical component of AI governance. For Jordanian government organizations, this means implementing comprehensive training programs to educate employees on what constitutes sensitive data. Training should also cover how to use AI tools responsibly, including the importance of using enterprise-grade versions and avoiding free-tier tools that may lack adequate security controls.
Additionally, employees should be trained to identify and report attempts to manipulate them into sharing sensitive data with unauthorized parties. The rapid adoption of AI tools also necessitates a more robust regulatory framework. The government could consider mandating Data Protection Impact Assessments (DPIAs), requiring organizations to conduct them before deploying AI tools in order to identify and mitigate potential risks.
Another step is promoting transparency in AI usage: requiring organizations to disclose how they use AI tools and what measures are in place to protect data. Further, Jordanian authorities should collaborate with technology companies to develop secure AI solutions tailored to the needs of government organizations.
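To make the training point concrete, the kind of safeguard described above can be as simple as screening prompts for sensitive data before they ever reach a GenAI tool. The sketch below is purely illustrative and assumes a Python environment; the patterns, function names, and example prompt are hypothetical, and a real deployment would rely on a proper data-loss-prevention (DLP) product rather than hand-written rules.

```python
import re

# Illustrative patterns only; real DLP tooling covers far more categories.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the names of sensitive-data categories detected in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

# Hypothetical usage: block the prompt if anything is flagged.
prompt = "Summarise this payroll record for employee jdoe@example.gov.jo"
hits = flag_sensitive(prompt)
if hits:
    print("Blocked: prompt contains " + ", ".join(hits))
```

A check of this kind, run at a gateway between staff and external AI services, is one way an organization could enforce the "no sensitive data in free-tier tools" rule described above.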
The adoption of AI tools offers immense potential for Jordanian government organizations to enhance efficiency, improve service delivery, and drive innovation. However, as the Harmonic report highlights, this potential comes with significant risks. Data leakage through AI tools could lead to legal liabilities, security breaches, and reputational damage, undermining Jordan’s digital transformation efforts.
To address these challenges, Jordan must strike a balance between embracing AI and safeguarding sensitive data. This requires a multi-faceted approach that includes stronger regulatory frameworks, robust security measures, and comprehensive employee training. By taking proactive steps to mitigate the risks of data leakage, Jordan can continue to lead the region in digital innovation while protecting the privacy and security of its citizens and organizations.