RESPONSIBLE AI REPORT

OWEN NYABICHA

Case 1: Hiring Bot Screening Job Applicants

🔍 What’s happening

A company uses an AI-powered hiring system to screen job applicants. The AI analyzes CVs and work history to decide who moves forward in the hiring process. The goal is to save time and reduce human workload. However, the system consistently rejects more female applicants, especially those with career gaps.

⚠️ What’s problematic

The AI has learned bias from historical hiring data in which uninterrupted career paths were favored. This disadvantages women, caregivers, and others who take career breaks. The system also lacks transparency: applicants are never told why they were rejected, and the company itself may be unaware of the bias.

🛠️ One improvement idea

Audit and rebalance the training data so career gaps are evaluated fairly. Add human review for edge cases and provide clearer explanations of how AI decisions are made.
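One part of such an audit can be sketched in a few lines: comparing selection rates across groups and checking them against the common four-fifths (80%) rule of thumb for disparate impact. This is an illustrative example only; the group labels, numbers, and threshold are assumptions for the sketch, not details from the case.

```python
# Illustrative disparate-impact audit of AI screening outcomes.
# Group labels, toy counts, and the four-fifths (80%) threshold
# are assumptions for this sketch, not details from the case.

def selection_rates(records):
    """Pass rate of the AI screen for each group."""
    totals, passed = {}, {}
    for group, advanced in records:
        totals[group] = totals.get(group, 0) + 1
        passed[group] = passed.get(group, 0) + (1 if advanced else 0)
    return {g: passed[g] / totals[g] for g in totals}

def disparate_impact(records, reference_group):
    """Ratio of each group's selection rate to the reference group's.
    Ratios below 0.8 fail the four-fifths rule of thumb."""
    rates = selection_rates(records)
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

# Toy data: (group, advanced_by_AI)
applicants = ([("male", True)] * 60 + [("male", False)] * 40
              + [("female", True)] * 30 + [("female", False)] * 70)

ratios = disparate_impact(applicants, reference_group="male")
print(ratios)  # female rate 0.30 vs male 0.60 -> ratio 0.5, below 0.8
```

A ratio this far below 0.8 would be a signal to investigate the training data and features (such as how career gaps are weighted) before the system is trusted with further screening.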

Case 2: School Proctoring AI Flagging Students

🔍 What’s happening

A school uses AI-based proctoring software during online exams. The system tracks eye movement and facial behavior to detect cheating. Students who look away frequently are flagged for possible misconduct.

⚠️ What’s problematic

The AI assumes there is only one “normal” way to focus. Neurodivergent students are flagged more often, leading to unfair accusations and emotional stress. There are also privacy concerns due to constant monitoring without clear accountability.

🛠️ One improvement idea

Treat AI detections as signals rather than final decisions. Require human review for all flags and provide accommodations or opt-out options for neurodivergent students.
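The "signal, not verdict" idea above can be made concrete: every AI flag enters a queue with a pending status, and only a human reviewer can change that status. The class names, fields, and example flag below are assumptions for the sketch, not part of any real proctoring product.

```python
# Illustrative "flag as signal, not verdict" workflow for AI proctoring.
# All names, fields, and values here are assumptions for the sketch.

from dataclasses import dataclass, field

@dataclass
class Flag:
    student_id: str
    reason: str
    score: float                      # model confidence, 0.0 to 1.0
    status: str = "pending_review"    # the AI never auto-penalizes

@dataclass
class ReviewQueue:
    flags: list = field(default_factory=list)

    def add(self, flag: Flag) -> None:
        # Every AI detection goes to a human; the model alone decides nothing.
        self.flags.append(flag)

    def resolve(self, index: int, human_decision: str) -> Flag:
        # Only a human reviewer may change a flag's status.
        flag = self.flags[index]
        flag.status = human_decision
        return flag

queue = ReviewQueue()
queue.add(Flag("s-001", "frequent gaze shifts", score=0.72))
resolved = queue.resolve(0, "dismissed: accommodation on file")
print(resolved.status)  # dismissed: accommodation on file
```

The key design choice is the default status: a flag starts as "pending_review" and stays there until a person acts, so an accommodation on file (for example, for a neurodivergent student) can override the model's output.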

🧠 Final Verdict

These cases show how AI can unintentionally reinforce bias when fairness, transparency, and accountability are overlooked. Responsible AI requires thoughtful data design, human oversight, and respect for individual differences.

AI isn’t malicious — it learns from us. That’s why responsible design matters.