Skip to content
MVPeople Group Logo
MVPeopleGroup
AI & LLM Security Specialist
AI & LLM Security

AI & LLM Security Specialist Hire

Artificial intelligence is transforming organisations, but also introduces entirely new security risks. Adversarial attacks, prompt injection and data poisoning require specialists who understand both cybersecurity and machine learning. MVPeople Group delivers AI security professionals who secure your AI systems and LLM applications.

AI security: a new frontline in cybersecurity

The rapid adoption of AI and large language models opens an entirely new chapter in cybersecurity. Traditional security methods are insufficient for the unique threats that AI systems bring. Adversarial machine learning demonstrates that models can be misled with invisible perturbations, while prompt injection enables attackers to manipulate LLMs beyond their intended instructions.

Beyond technical vulnerabilities, governance challenges play an increasingly important role. The EU AI Act imposes obligations on organisations that develop or deploy AI systems, particularly for high-risk applications. Risk classification, transparency, bias monitoring and human oversight are becoming legal requirements. Organisations need specialists who combine these technical and regulatory aspects.

The OWASP LLM Top 10 identifies the most critical vulnerabilities in LLM applications: from prompt injection and insecure output handling to training data poisoning and model denial of service. The NIST AI Risk Management Framework provides a structured approach for identifying, assessing and mitigating AI risks. ISO 42001 lays the foundation for AI management systems.

MVPeople Group responds to this rapidly growing need. Our network includes AI security engineers who secure ML pipelines, LLM red teamers who test AI applications for vulnerabilities, AI governance specialists who help organisations comply with the EU AI Act and AI risk analysts who map and mitigate risks.

AI & LLM security profiles we deliver

AI Security Engineer

Secures AI systems and machine learning pipelines against adversarial attacks, data poisoning and model theft. Implements security controls for training data, model endpoints and inference infrastructure.

ML Security Researcher

Researches vulnerabilities in machine learning models and develops defence mechanisms. Conducts adversarial robustness testing and analyses risks of model inversion, membership inference and evasion attacks.

AI Governance Specialist

Develops and implements AI governance frameworks in accordance with the EU AI Act and ISO 42001. Advises on risk classification, transparency requirements, bias monitoring and responsible AI use within organisations.

LLM Red Teamer

Tests large language models for vulnerabilities such as prompt injection, jailbreaking and data leakage. Conducts red team assessments on AI applications and develops guardrails and output filtering mechanisms.

AI Risk Analyst

Analyses and assesses risks of AI systems in the areas of security, privacy, bias and reliability. Prepares risk assessments in accordance with the NIST AI Risk Management Framework and guides mitigation measures.

Certifications in our network

CISSPAI/ML SpecialisationGoogle Cloud MLAWS ML SpecialtyISO 42001

Frequently asked questions about AI & LLM Security

Why is AI security a separate discipline?

AI systems introduce fundamentally new attack vectors that are not covered by traditional cybersecurity. Adversarial attacks can mislead models with invisible perturbations, data poisoning can corrupt training data, and prompt injection can manipulate LLMs into performing unauthorised actions. These threats require specialists who deeply understand both cybersecurity and machine learning.

What is prompt injection and why is it dangerous?

Prompt injection is an attack technique where malicious instructions are inserted into the input of a large language model, aiming to make the model deviate from its original instructions. This can lead to data leakage, unauthorised actions or bypassing content filters. It is comparable to SQL injection, but for AI systems. Effective mitigation requires a combination of input validation, output filtering and architectural measures.

What does the EU AI Act mean for Dutch organisations?

The EU AI Act classifies AI systems based on risk: from minimal to unacceptable. High-risk AI systems must comply with strict requirements regarding risk management, data governance, transparency, human oversight and cybersecurity. Dutch organisations that deploy or develop AI must classify their systems and achieve compliance. Our AI governance specialists guide this process from classification to implementation.

Which frameworks are relevant for AI security?

The NIST AI Risk Management Framework (AI RMF) provides a structured approach for identifying and mitigating AI risks. The OWASP LLM Top 10 describes the ten most critical vulnerabilities in large language model applications. ISO 42001 is the international standard for AI management systems. Additionally, the EU AI Act sets legal requirements for high-risk AI systems. Our specialists combine these frameworks into a pragmatic security approach.

How does AI red teaming differ from traditional red teaming?

Traditional red teaming tests the security of networks, systems and applications. AI red teaming specifically focuses on breaking AI models: prompt injection attempts on LLMs, adversarial examples against classification models, data extraction from models and bypassing safety guardrails. This requires a unique combination of offensive security skills and deep knowledge of machine learning architectures.

How quickly can an AI security specialist start?

AI security is a rapidly growing but still relatively small field. Availability varies significantly by profile: AI governance specialists and AI risk analysts are more broadly available than specialised ML security researchers or LLM red teamers. We typically present suitable candidates within 5 to 15 working days. Contact us for an estimate based on your specific requirements.

Does MVPeople also deliver AI security assessments?

Through our MVProjects service line, we deliver specialists who conduct AI security assessments, LLM penetration tests and AI governance reviews. This includes adversarial robustness testing, prompt injection assessments, AI risk classification in accordance with the EU AI Act and setting up AI governance frameworks according to ISO 42001.

Need an AI security specialist?

From LLM red teamers to AI governance specialists: we deliver the professionals who secure your AI systems and ensure compliance.