VividRecruitment.

AI & Trust

How we use AI responsibly

AI is a powerful tool — but it's only as good as the principles guiding it. Here's how we approach AI at Vivid.

Human-in-the-loop

AI provides insights and recommendations. Humans make every hiring decision. No candidate is ever rejected by an algorithm alone.

Transparency

Every AI score and recommendation comes with an explanation. You can see what factors contributed and how they were weighted.
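As a minimal illustration of what "explainable" means here (the factor names, weights, and function below are hypothetical, not Vivid's actual model), a weighted score can carry a per-factor breakdown alongside the final number:

```python
from dataclasses import dataclass

# Illustrative sketch only: a weighted score that returns its own
# per-factor breakdown, so the result is traceable, not a black box.

@dataclass
class Factor:
    name: str
    weight: float   # relative importance of this criterion
    score: float    # 0.0-1.0 rating for this candidate

def explain_score(factors: list[Factor]) -> dict:
    total_weight = sum(f.weight for f in factors)
    # Each factor's contribution = (normalized weight) x (its score)
    contributions = {
        f.name: round(f.weight / total_weight * f.score, 3)
        for f in factors
    }
    return {
        "score": round(sum(contributions.values()), 3),
        "contributions": contributions,
    }

factors = [
    Factor("communication", weight=2.0, score=0.8),
    Factor("role_experience", weight=3.0, score=0.6),
    Factor("motivation", weight=1.0, score=0.9),
]
result = explain_score(factors)
```

The point of the breakdown is that a reviewer can see, for example, that role experience contributed more than motivation, rather than being handed an opaque total.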

Data protection

Candidate data is processed securely and in compliance with POPIA. We never sell or share personal data with third parties.

Bias awareness

We actively work to identify and mitigate bias in our AI models. Structured evaluation criteria reduce the impact of subjective human bias.

Capabilities

What AI does — and doesn't do

Clear boundaries help build trust. Here's exactly what our AI handles and where the line is drawn.

What AI does

  • Transcribe and analyze video interview responses
  • Parse and structure CV data into consistent formats
  • Score candidates against defined job criteria
  • Identify patterns in communication, confidence, and motivation
  • Rank and shortlist candidates based on objective measurements
  • Flag potential inconsistencies or concerns for human review
  • Generate readable summaries and comparison reports
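The ranking and flagging steps above can be sketched in a few lines (the function and thresholds are hypothetical, chosen to show the shape of the workflow, not Vivid's implementation). Note that low scores are flagged for human review rather than rejected:

```python
# Illustrative sketch: rank candidates by an objective score and flag
# borderline results for a human reviewer. Nothing here rejects anyone;
# the output is a shortlist plus review flags.

def shortlist(candidates, top_n=3, review_threshold=0.5):
    ranked = sorted(candidates, key=lambda c: c["score"], reverse=True)
    for c in ranked:
        # Low-confidence scores are surfaced, never auto-rejected.
        c["needs_review"] = c["score"] < review_threshold
    return ranked[:top_n]

candidates = [
    {"name": "A", "score": 0.72},
    {"name": "B", "score": 0.41},
    {"name": "C", "score": 0.88},
    {"name": "D", "score": 0.55},
]
top = shortlist(candidates)
```

The `needs_review` flag is the human-in-the-loop hook: anything below the threshold is queued for a person to examine before any decision is made.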

What AI does not do

  • Make hiring or rejection decisions
  • Access candidate data outside the Vivid platform
  • Use demographic information in scoring
  • Replace human judgment or intuition
  • Guarantee accuracy — all outputs should be verified
  • Train on your candidate data without consent
  • Operate without human oversight

Honesty

Limitations and bias

No AI system is perfect. We believe in being upfront about limitations.

AI can be wrong. Language models and scoring algorithms can misinterpret responses, miss context, or produce inconsistent results. Every AI output should be reviewed by a human before acting on it.

Bias can exist in training data. Despite our efforts to mitigate bias, AI models trained on historical data may reflect patterns that disadvantage certain groups. We continuously audit and improve our models.

Cultural context matters. Communication styles vary across cultures and languages. Our models are calibrated for South African English but may not fully account for all linguistic and cultural nuances.

AI is a tool, not a decision-maker. Vivid's products are designed to augment human decision-making, not replace it. The final hiring decision always rests with your team.

Ready to modernize recruiting?

Explore our products and see how video interviews and AI-powered CV intelligence can transform your hiring process.