# AI Risk Assessment Framework
## Purpose
This chapter provides a structured, repeatable process for evaluating the risks of adopting AI tools in your organization. It is designed for IT managers, governance officers, and CTOs who need to make informed decisions rather than hype-driven ones.
## The 5-Domain Risk Model
AI adoption risks cluster into five domains. Each must be assessed independently before making a go/no-go decision.
### 1. Data Sovereignty & Privacy
**Core question:** Where does your data go, and who can access it?
| Risk Factor | Questions to Ask | Red Flags |
|---|---|---|
| Data residency | Where are the AI provider's servers? Which jurisdiction? | Data processed outside EU without adequacy decision |
| Data retention | Does the provider store your prompts/responses? For how long? | "We may use your data to improve our models" |
| Training opt-out | Can you opt out of your data being used for model training? | No clear opt-out mechanism or buried in ToS |
| PII exposure | Will employees paste personal data, customer info, or credentials into prompts? | No DLP (Data Loss Prevention) controls |
| Subprocessors | Does the provider use third parties to process your data? | Opaque subprocessor chain |
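The PII-exposure and DLP questions in the table above can be partially addressed with a pre-send filter that redacts obvious sensitive strings before a prompt leaves the organization. A minimal sketch follows; the regex patterns and the `redact` helper are illustrative only and are no substitute for a real DLP product:

```python
import re

# Illustrative patterns only -- a production DLP tool covers far more cases
# (national ID formats, phone numbers, internal hostnames, etc.).
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely PII and credentials with placeholders before the
    prompt is sent to an external AI provider."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

clean = redact("Contact alice@example.com with key sk-abcdefghijklmnop1234")
```

A filter like this is best enforced centrally (e.g. in an API gateway in front of the provider), so individual employees cannot bypass it.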
**FLOSS advantage:** Self-hosted open-source models (served with Ollama or vLLM, for example) keep prompts and responses inside your own infrastructure, removing third-party residency and retention risks. The trade-off is infrastructure cost and ongoing maintenance.
**GDPR checklist:**
- [ ] Data Processing Agreement (DPA) signed with provider
- [ ] Lawful basis for processing identified (consent, legitimate interest, contract)
- [ ] Data Protection Impact Assessment (DPIA) completed if high-risk processing
- [ ] Records of processing activities updated
- [ ] Data subject rights process defined (access, deletion, portability)
### 2. Output Reliability & Liability
**Core question:** Can you trust the output, and who is responsible when it's wrong?
| Risk Factor | Questions to Ask | Red Flags |
|---|---|---|
| Hallucination rate | How often does the model generate plausible but false information? | No benchmarks or transparency on error rates |
| Domain accuracy | Has the model been evaluated on your specific domain? | Generic model applied to specialized domain without validation |
| Decision scope | Are AI outputs advisory or automated? | AI making autonomous decisions without human review |
| Liability chain | If the AI gives wrong advice that causes damage, who is liable? | Provider's ToS disclaims all liability |
| Audit trail | Can you trace how a specific output was generated? | No logging, no explainability |
**EU AI Act implications:**
- High-risk AI systems require human oversight, transparency, and risk management
- General-purpose AI models must provide technical documentation
- Providers must report serious incidents
### 3. Security & Attack Surface
**Core question:** What new attack vectors does this tool introduce?
| Risk Factor | Questions to Ask | Red Flags |
|---|---|---|
| Prompt injection | Can external content manipulate the AI's behavior? | Tool-enabled AI reading untrusted content without sanitization |
| Supply chain | Are model weights, plugins, or dependencies verified? | Downloading models from unverified sources |
| Credential exposure | Does the AI tool need API keys, tokens, or access to internal systems? | Broad permissions, no principle of least privilege |
| Network exposure | Does the tool require internet access or open ports? | Cloud-only service for sensitive internal data |
| Insider threat | Can employees use the AI to exfiltrate data or bypass controls? | No monitoring of AI tool usage |
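For the prompt-injection row, even a naive keyword heuristic is useful as a first tripwire for untrusted content, provided you treat it as one layer among several (input isolation, output filtering, least-privilege tool access). The patterns below are illustrative assumptions, not a vetted detection list:

```python
import re

# Naive, illustrative heuristics -- real prompt-injection defenses combine
# input isolation, output filtering, and least-privilege tool access.
SUSPICIOUS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"reveal (your )?(system prompt|instructions)",
]

def flag_untrusted(content: str) -> bool:
    """Return True if externally sourced content looks like it is trying to
    steer the model rather than inform it."""
    lowered = content.lower()
    return any(re.search(pattern, lowered) for pattern in SUSPICIOUS)
```

A hit should quarantine the content for human review rather than silently drop it, so false positives stay visible.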
**Mitigation strategies:**
- [ ] Sandboxed execution environment for AI tools
- [ ] Network segmentation (AI tools on separate VLAN)
- [ ] Prompt injection testing before deployment
- [ ] Regular security audits of AI tool configurations
- [ ] Monitoring and alerting on unusual AI tool usage patterns
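The last checklist item, monitoring for unusual usage, can start as a simple per-user volume baseline before you invest in real anomaly detection. A sketch under that assumption (the spike factor is an arbitrary placeholder):

```python
from collections import defaultdict

class UsageMonitor:
    """Flag users whose AI-tool usage jumps far above their own baseline --
    a crude stand-in for real anomaly detection."""

    def __init__(self, spike_factor: float = 5.0):
        self.spike_factor = spike_factor  # arbitrary illustrative threshold
        self.daily_tokens = defaultdict(list)  # user -> history of daily totals

    def record_day(self, user: str, tokens: int) -> bool:
        """Record one day's token usage; return True if it looks anomalous
        compared to that user's own historical average."""
        history = self.daily_tokens[user]
        anomalous = bool(history) and \
            tokens > self.spike_factor * (sum(history) / len(history))
        history.append(tokens)
        return anomalous
```

A sudden spike can indicate bulk exfiltration through prompts, which is exactly the insider-threat red flag in the table above.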
### 4. Cost & Sustainability
**Core question:** What will this actually cost, and is it sustainable?
| Risk Factor | Questions to Ask | Red Flags |
|---|---|---|
| Token economics | What is the per-token cost? How many tokens per typical task? | No cost visibility or unpredictable billing |
| Scaling costs | How does cost grow with usage? Linear? Exponential? | "Unlimited" plans with hidden throttling |
| Lock-in | Can you switch providers without losing functionality? | Proprietary API with no standard alternative |
| Hidden costs | Infrastructure, training, maintenance, security overhead? | Only measuring API costs, ignoring total cost of ownership |
| Energy impact | What is the environmental footprint? | No transparency on energy consumption |
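The token-economics row reduces to back-of-envelope arithmetic once you know tokens per task and your provider's price. A sketch; the numbers in the example are placeholders, not real pricing:

```python
def monthly_cost(tasks_per_day: int, tokens_per_task: int,
                 price_per_1k_tokens: float, working_days: int = 22) -> float:
    """Back-of-envelope monthly API cost estimate. All inputs are
    placeholders -- substitute your provider's actual pricing."""
    total_tokens = tasks_per_day * tokens_per_task * working_days
    return total_tokens / 1000 * price_per_1k_tokens

# e.g. 200 tasks/day, ~3,000 tokens each, at a placeholder $0.01 per 1k tokens
estimate = monthly_cost(200, 3000, 0.01)
```

Running this estimate per team before adoption makes scaling behavior visible: if tasks per day grows 10x, so does the bill, which is the linear-growth question in the table above.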
**Cost governance framework:**
- [ ] Monthly cost ceiling defined and monitored
- [ ] Model tiering strategy (expensive models for complex tasks, cheap for routine)
- [ ] Local/self-hosted option evaluated for high-volume tasks
- [ ] Exit strategy documented (what happens if you leave this provider?)
- [ ] Total Cost of Ownership (TCO) calculated including staff time
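The first two checklist items, a monitored cost ceiling and a model-tiering strategy, can be combined in one small routing guard. The model names and per-token prices below are hypothetical placeholders:

```python
class BudgetGuard:
    """Enforce a monthly cost ceiling and route routine work to a cheap tier.
    Model names and prices are hypothetical placeholders."""

    TIERS = {
        "routine": ("small-local-model", 0.0005),   # $ per 1k tokens
        "complex": ("large-hosted-model", 0.0150),
    }

    def __init__(self, monthly_ceiling_usd: float):
        self.ceiling = monthly_ceiling_usd
        self.spent = 0.0

    def route(self, task_kind: str, est_tokens: int) -> str:
        """Pick a model tier for the task, refusing if the estimated cost
        would push monthly spend past the ceiling."""
        model, price = self.TIERS[task_kind]
        cost = est_tokens / 1000 * price
        if self.spent + cost > self.ceiling:
            raise RuntimeError("Monthly AI budget ceiling reached")
        self.spent += cost
        return model
```

Raising an error rather than silently downgrading keeps the ceiling a governance decision: someone has to consciously raise it.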
### 5. Organizational Readiness
**Core question:** Is your organization ready to use AI responsibly?
| Risk Factor | Questions to Ask | Red Flags |
|---|---|---|
| Skills gap | Do your teams know how to use AI tools effectively and safely? | No training plan |
| Change resistance | Will employees adopt, resist, or misuse the tools? | Mandated adoption without consultation |
| Process integration | How does AI fit into existing workflows? | AI bolted on without process redesign |
| Governance structure | Who decides what AI tools are approved? Who reviews outputs? | No governance body or process |
| Ethical alignment | Does the AI use align with your organization's values? | Using AI to replace rather than augment people |
## Quick Assessment Scorecard
Rate each domain 1-5 (1 = high risk/unready, 5 = low risk/ready):
| Domain | Score (1-5) | Notes |
|---|---|---|
| Data Sovereignty & Privacy | | |
| Output Reliability & Liability | | |
| Security & Attack Surface | | |
| Cost & Sustainability | | |
| Organizational Readiness | | |
| **Total** | /25 | |
**Interpretation:**
- 20-25: Green light — proceed with standard monitoring
- 15-19: Amber — proceed with specific mitigations for weak areas
- 10-14: Red — significant risks need addressing before adoption
- Below 10: Stop — fundamental issues must be resolved first
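The interpretation bands above translate directly into code; a small helper keeps scoring consistent across repeated assessments:

```python
def interpret(scores: dict[str, int]) -> tuple[int, str]:
    """Sum the five 1-5 domain scores and map the total onto the
    interpretation bands defined in the scorecard."""
    if len(scores) != 5 or not all(1 <= s <= 5 for s in scores.values()):
        raise ValueError("Expected five domain scores, each between 1 and 5")
    total = sum(scores.values())
    if total >= 20:
        verdict = "Green light: proceed with standard monitoring"
    elif total >= 15:
        verdict = "Amber: proceed with specific mitigations for weak areas"
    elif total >= 10:
        verdict = "Red: significant risks need addressing before adoption"
    else:
        verdict = "Stop: fundamental issues must be resolved first"
    return total, verdict
```

Since every domain scores at least 1, totals range from 5 to 25, so the "below 10" band covers totals of 5-9.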
## Next Steps
After completing the risk assessment:
- Document findings in the Vendor Evaluation Scorecard
- Draft or update your Acceptable Use Policy
- Define cost controls using the Cost Governance framework
- Present findings to stakeholders with a go/no-go recommendation