Since February 2, 2025, the EU AI Act has required every organization developing or using AI systems to ensure its employees are "AI-literate": able to understand AI, recognize its risks, and act responsibly when using it. Companies that fail to comply face substantial fines, reputational damage, and loss of customer trust.
The question is: is your organization there yet?

Beyond compliance: emerging IT architecture risks
Responsible and trustworthy AI must be realized through action, not just principles and promises. In a recent open letter, JP Morgan’s Chief Information Security Officer highlighted the dangers of the ongoing AI solutions arms race. The inherent risks of AI—such as bias, hallucinations, complacency, and over-reliance—are already well known.
But a new, more insidious threat is emerging: the potential breakdown of IT architecture security as companies rush to adopt third-party AI SaaS solutions. These tools are expanding rapidly, evolving to solve niche but important problems across many sectors. The time savings offered by summarization and optimization tools make them highly appealing.
Yet this is exactly where caution is needed. Organizations must understand how these tools operate, what types of access and authorizations they require, and where data is being sent. Even trusted CRM, ERP, or ATS providers are integrating third-party tools into their services.
According to Patrick Opet of JP Morgan, highly integrated IT architecture can cause a “collapse [of] authentication (verifying identity) and authorization (granting permissions) into overly simplified interactions, effectively creating single-factor explicit trust between systems on the internet and private internal resources. This architectural regression undermines fundamental security principles that have proven durability.” Security incidents can then ripple through supply chains, making it essential to scrutinize SLAs and vendor relationships more closely.
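The distinction Opet draws can be made concrete with a small sketch. The example below is illustrative only (the token store, permission table, and function names are invented for this article, not drawn from any real system): it contrasts single-factor explicit trust, where one shared key unlocks everything, with separate authentication and authorization checks.

```python
# Illustrative sketch: single-factor trust vs. layered access control.
# TOKENS and PERMISSIONS are hypothetical stand-ins for an identity
# provider and a permission service.

# Authentication data: token -> identity
TOKENS = {"tok-vendor-123": "summarizer-saas"}

# Authorization data: identity -> set of resources it may touch
PERMISSIONS = {"summarizer-saas": {"documents:read"}}

def single_factor_access(api_key: str) -> bool:
    """Anti-pattern: one shared key grants blanket access to everything."""
    return api_key in TOKENS

def layered_access(token: str, resource: str) -> bool:
    """Authenticate (who is calling?) then authorize (may they touch this?)."""
    identity = TOKENS.get(token)  # authentication: verify identity
    if identity is None:
        return False
    # authorization: grant only explicitly listed permissions
    return resource in PERMISSIONS.get(identity, set())
```

Under the layered model, a vendor token that passes authentication can still be denied resources it was never granted, which is exactly the separation that collapses when integrations rely on a single shared credential.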
To mitigate these risks, organizations must embed AI literacy so that employees scrutinize AI tools before ushering them into the IT landscape.
What exactly is AI literacy?
AI literacy encompasses the skills, knowledge, and understanding required to work responsibly with AI systems. This includes:
- technical understanding: basic awareness of how AI systems function
- critical AI thinking: ability to question outputs and recognize limitations
- ethical awareness: understanding the social impacts and moral implications
- data consciousness: recognizing how data quality affects AI performance
- risk assessment: identifying potential harms from AI implementation
It is all about building a culture of responsible AI.
The level of literacy required depends on individual roles and the risk level of AI applications used. An employee using ChatGPT for ideation needs different literacy than someone implementing AI for customer-facing decisions.
Who must comply? (hint: likely your organization)
The AI Act mandates that all providers and deployers of AI systems ensure AI literacy among:
- full-time employees
- independent contractors working on your behalf
- anyone using AI within your organizational context
This requirement applies across all organizational levels and extends to third-party vendors utilizing AI on your behalf.
Article 4 of the AI Act specifically calls for providers and deployers of AI systems to ensure, as far as possible, that staff and users involved in operating these systems have adequate AI literacy, considering their background, training, and the context of use.
Yet it remains unclear how firms can operationalize this goal, with further guidance expected in August. It remains to be seen whether AI literacy will be enforceable or merely a recommendation to be adopted voluntarily. Regardless, proactive organizations are already taking steps to ensure compliance.
The hidden security risks of rapid AI adoption
Organizations that delay building AI literacy face multiple threats:
- regulatory penalties under the EU AI Act
- security vulnerabilities from improper AI integration
- reputational damage from AI misuse
- competitive disadvantage as AI-literate competitors move ahead
- lost opportunity as employees underutilize AI capabilities
A particular challenge is for service managers and IT experts to reach a shared level of AI literacy, so they can question SLAs effectively and negotiate with vendors confidently. Without proper AI literacy across your IT and service management teams, these risks may go undetected until it's too late.
Four critical steps to building AI literacy
Following recommendations from regulatory bodies like the Dutch Data Protection Authority (Autoriteit Persoonsgegevens), organizations should implement a structured approach:
1. Identify current AI usage
- inventory all AI systems in your organization
- map associated risks and opportunities
- establish baseline literacy through employee assessment
- document roles and responsibilities
2. Set clear objectives
- determine required literacy levels based on risk profiles
- establish role-specific knowledge requirements
- create measurable benchmarks for success
- prioritize high-risk applications
3. Implement training and awareness
- develop multi-tiered education programs
- foster interdisciplinary collaboration
- create specialized training for high-risk applications
- integrate AI literacy into onboarding processes
4. Evaluate and adapt
- regularly test literacy levels
- audit AI usage compliance
- adjust training based on emerging technologies
- document progress for regulatory evidence
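Steps 1 and 2 above lend themselves to a simple, structured inventory. As a minimal sketch (the fields and risk categories below are illustrative choices, not prescribed by the AI Act or any regulator), each AI system in use can be recorded with its vendor, risk level, data access, and owner, and the high-risk entries surfaced for prioritized training:

```python
from dataclasses import dataclass, field

# Hypothetical record for one AI system in the organizational inventory.
@dataclass
class AISystem:
    name: str
    vendor: str
    risk_level: str                      # e.g. "minimal", "limited", "high"
    data_accessed: list = field(default_factory=list)
    responsible_owner: str = ""          # documented role/team (step 1)

def high_priority(inventory: list) -> list:
    """Step 2: surface high-risk applications for targeted training."""
    return [s.name for s in inventory if s.risk_level == "high"]

# Example inventory with two entries.
inventory = [
    AISystem("chat-assistant", "VendorA", "limited", ["prompts"], "IT"),
    AISystem("cv-screening", "VendorB", "high", ["applicant data"], "HR"),
]
```

Even a lightweight registry like this doubles as regulatory evidence (step 4), since it documents what is in use, who owns it, and where training was prioritized.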
The critical role of organizational culture
Even with comprehensive training and policies in place, the success of responsible AI depends heavily on organizational culture. A workplace that encourages transparency, ethical reflection, and open dialogue is essential for sustaining responsible AI use.
At the core of an effective AI literacy strategy must be interdisciplinary collaboration. Making AI more understandable requires different types of expertise:
- developers should be trained not just in building systems but also in understanding their impact on society
- leaders need sufficient technical and data knowledge to make informed decisions
- end-users need practical understanding of AI limitations and proper usage
Moving beyond compliance to strategic advantage
Organizations that view AI literacy merely as a compliance exercise miss the bigger opportunity. True AI literacy empowers employees to:
- make better decisions about which AI tools to adopt
- identify innovative applications for AI within their domains
- catch potential issues before they become liabilities
- contribute to a continuously improving AI governance framework
How we can help
Don’t wait until regulators come knocking. Contact us to talk about making your organization ready and skilled to work with AI, and about securing its AI-powered future.
We have a proven track record of helping companies with their AI strategy, AI governance, and overall compliance when implementing responsible AI.