Alright, let’s talk about something that’s been nagging at me, something that keeps popping up in my Slack channels and late-night thought spirals: the slow creep of AI into our decision-making processes, specifically when it comes to hiring. Not the big, obvious stuff like resume parsing – we’ve all seen that – but the subtle, almost invisible ways algorithms are starting to shape who gets a shot, and what that means for our agency, both as individuals and as a society.
It’s March 21, 2026, and the job market is… weird. We’ve got more tools than ever to connect people, more data to analyze skills, and yet, I keep hearing stories that make me scratch my head. And it always seems to circle back to the ‘AI-powered’ systems that promise efficiency, fairness, and the mythical ‘best fit.’ But what if these systems, in their relentless pursuit of optimization, are actually eroding our ability to make truly human decisions, and in doing so, are fundamentally changing what we value in a colleague?
The Invisible Handshake: When Algorithms Decide Who You Meet
Remember when you’d send a resume, and maybe, just maybe, a human would actually skim it? Or when a hiring manager might take a chance on someone with a non-traditional background because they had a good feeling? Those days feel like ancient history sometimes. Now, before your resume even reaches a human eye, it’s often been through multiple layers of algorithmic scrutiny.
My friend Sarah, a brilliant graphic designer, was telling me about applying for a senior role at a well-known tech company. She’s got a portfolio that would make your jaw drop, years of experience, and a track record of innovation. But she couldn’t even get an initial interview. After some digging, she found out the company uses an AI system that prioritizes candidates whose past job titles and company names closely match a predefined list. Sarah had worked for a few smaller, niche agencies, and despite her stellar work, the algorithm simply didn’t “see” her experience as relevant enough.
This isn’t about being anti-AI. I use AI tools daily to help me research, organize, and even draft initial outlines for my articles. They’re fantastic for augmenting human intelligence. But when AI moves from augmentation to outright decision-making, especially in areas as critical as someone’s livelihood, we need to be incredibly careful. Because what looks like efficiency on a spreadsheet can be a profound injustice in real life.
The Problem with Proxy Metrics: What AI Really “Sees”
The core issue, as I see it, is that these hiring AIs don’t understand “potential” or “nuance.” They operate on proxy metrics. They’re trained on historical data, which inherently carries biases from the past. If a company historically hired people from a certain university or with a specific set of keywords on their resume, the AI learns to prioritize those attributes. It’s like teaching a child to only recognize apples by showing them red apples, and then being surprised when they don’t identify a green apple as an apple.
Let’s say a company wants to hire for “innovation” and “creativity.” How does an algorithm measure that? It can’t truly understand a portfolio piece’s impact or the spark in a candidate’s eyes during an interview. Instead, it looks for proxies: number of patents filed, keywords like “disruptive technology” or “design thinking” on a resume, degrees from top-tier design schools. These aren’t inherently bad metrics, but they are *limited*, and they can exclude genuinely talented people who don’t fit the mold.
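To make the proxy problem concrete, here’s a toy sketch of a purely proxy-driven scorer. The weights and signal names are invented for illustration; the point is that anything the proxies can’t see scores zero, no matter how strong it is.

```python
# A deliberately naive proxy scorer (hypothetical weights and features)
# illustrating how proxy metrics miss strong unconventional candidates.

PROXY_WEIGHTS = {
    "patents_filed": 2.0,    # proxy for "innovation"
    "top_tier_degree": 3.0,  # proxy for "pedigree"
    "buzzword_hits": 1.0,    # e.g. "disruptive", "design thinking"
}

def proxy_score(candidate: dict) -> float:
    """Score a candidate purely on proxy signals; unseen features count as 0."""
    return sum(weight * candidate.get(feature, 0)
               for feature, weight in PROXY_WEIGHTS.items())

# A conventional candidate ticks the proxy boxes...
conventional = {"patents_filed": 1, "top_tier_degree": 1, "buzzword_hits": 3}
# ...while a self-taught builder with a deep portfolio scores zero,
# because the portfolio itself is invisible to the proxies.
unconventional = {"portfolio_projects": 12, "open_source_commits": 400}

print(proxy_score(conventional))    # 8.0
print(proxy_score(unconventional))  # 0.0
```

The fix isn’t better weights; it’s recognizing that some of what you value never appears in the feature set at all.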
Think about it: Steve Wozniak probably wouldn’t have impressed an AI looking for traditional academic credentials in his early days. Nor would someone like Maya Angelou, whose “experience” defied easy categorization. We risk building a future where only those who perfectly align with past successes are deemed worthy of future opportunities.
Beyond Keywords: Reclaiming Human Agency in Hiring
So, what do we do about this? I’m not suggesting we throw out all AI tools. That’s unrealistic and, frankly, silly. But we need to be more intentional about where and how we deploy them, and critically, where we draw the line and insist on human judgment.
Practical Step 1: Audit Your Algorithms (Like, Really Audit Them)
If you’re a hiring manager or in HR, you need to understand exactly what your AI tools are doing. Don’t just trust the vendor’s marketing materials. Ask difficult questions:
- What data was this model trained on?
- What are the top 5 features or metrics it prioritizes?
- How are biases addressed or mitigated in the training data and the model itself?
- What is the false positive/false negative rate for your target candidate profile?
And then, crucially, you need to test it. Run a small experiment. Take a batch of resumes that were rejected by the AI but were historically strong hires for your company. Manually review them. See what the AI missed. You might be surprised.
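That experiment can start as a simple set comparison. Here’s a minimal sketch, with made-up candidate IDs standing in for data you would pull from your ATS or HR system:

```python
# Rough sketch of the "rejected but historically strong" check.
# IDs are hypothetical; in practice both sets come from your HR system.

ai_rejected = {"c101", "c102", "c105", "c108", "c110"}
# Resumes that closely resemble past strong hires (human-curated set):
strong_hire_profiles = {"c102", "c110", "c204"}

# Candidates the AI screened out despite matching proven hire profiles:
missed = ai_rejected & strong_hire_profiles
miss_rate = len(missed) / len(strong_hire_profiles)

print(f"Flagged for manual review: {sorted(missed)}")
print(f"Share of strong-hire-like profiles rejected: {miss_rate:.0%}")
```

If that miss rate is meaningfully above zero, the screen is filtering out exactly the people you historically wanted.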
Here’s a simplified Python example of how you might start to audit for keyword bias. Imagine your internal system uses a basic keyword matching algorithm to filter initial applications. You could write a script to analyze the keyword density of successful hires versus rejected candidates for specific roles.
```python
from collections import Counter
import re

def get_keywords(text):
    # A very basic keyword extractor for demonstration.
    # In reality, this would be much more sophisticated (NLP, embeddings, etc.).
    return re.findall(r'\b(?:python|java|aws|azure|leadership|creativity|agile|scrum)\b', text.lower())

# Sample data (in a real scenario, this would come from your HR system)
successful_hires_resumes = [
    "Experienced Python developer with strong AWS skills and leadership qualities.",
    "Java architect, led agile teams, deep knowledge of azure and scrum.",
    "Creative problem-solver, python expertise, good leadership experience."
]
rejected_candidates_resumes = [
    "Excellent C++ programmer, some experience with database design.",
    "Front-end developer, JavaScript, React, a little bit of python.",
    "Project manager, strong communication, but no direct tech keywords."
]

# Process resumes
successful_keywords = []
for resume in successful_hires_resumes:
    successful_keywords.extend(get_keywords(resume))

rejected_keywords = []
for resume in rejected_candidates_resumes:
    rejected_keywords.extend(get_keywords(resume))

# Analyze frequency
successful_counts = Counter(successful_keywords)
rejected_counts = Counter(rejected_keywords)

print("Keyword Frequencies for Successful Hires:")
print(successful_counts)
print("\nKeyword Frequencies for Rejected Candidates:")
print(rejected_counts)

# Identify keywords with significant discrepancies
print("\nKeywords more prevalent in successful hires:")
for keyword, count in successful_counts.items():
    if count > rejected_counts.get(keyword, 0) * 2:  # Arbitrary threshold for "significant"
        print(f"- {keyword}: {count} (Successful) vs {rejected_counts.get(keyword, 0)} (Rejected)")
```
This simple script highlights whether your successful hires disproportionately contain certain keywords and whether your rejected pile lacks them. It’s a rudimentary example, but it illustrates the idea of actively examining what the system prioritizes and whether that aligns with your actual hiring goals.
Practical Step 2: Redefine “Fit” and “Success” for the Algorithm
If you’re going to use AI for initial screening, you need to feed it truly solid and diverse data. This means actively curating your “successful candidate” profiles to include people who succeeded through unconventional paths. It means expanding the definition of “relevant experience” beyond direct job titles.
Instead of just feeding it resumes of past hires, feed it data points that reflect your company’s values: contributions to open-source projects, volunteer work demonstrating leadership, personal projects showcasing ingenuity, or even specific challenges overcome. This is harder to quantify, I know, but if we don’t try, we’re just perpetuating the status quo.
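One way to picture this is as an explicit, tunable weighting over both conventional and unconventional signals. The signal names and weights below are entirely hypothetical; the point is that unconventional evidence gets deliberate weight instead of being invisible to the screen.

```python
# Sketch: widening "fit" beyond resume keywords (hypothetical signals and weights).
# Each signal is assumed to be normalized to the 0..1 range upstream.

SIGNALS = {
    "title_match": 0.25,            # classic resume signal
    "open_source": 0.25,            # contributions to public projects
    "volunteer_leadership": 0.20,   # leadership demonstrated outside work
    "personal_projects": 0.20,      # ingenuity shown in side projects
    "challenges_overcome": 0.10,    # free-text, human-scored 0..1
}

def fit_score(candidate: dict) -> float:
    """Weighted blend of conventional and unconventional signals."""
    return sum(weight * candidate.get(signal, 0.0)
               for signal, weight in SIGNALS.items())

traditional = {"title_match": 1.0}
unconventional = {"open_source": 0.9, "personal_projects": 0.8,
                  "volunteer_leadership": 0.7}

print(round(fit_score(traditional), 3))     # 0.25
print(round(fit_score(unconventional), 3))  # 0.525
```

Under this (toy) weighting, the candidate with no matching job title outscores the one who only has the right title, which is exactly the kind of outcome a keyword-only screen can never produce.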
Practical Step 3: Mandate Human Review at Key Stages
This is perhaps the most critical. AI should serve as a filter, not a final arbiter. I advocate for mandating human review at specific points in the hiring pipeline. For example:
- The “Near Miss” Pile: Every AI system should have a mechanism to flag candidates who didn’t quite meet the primary criteria but scored highly on secondary, more qualitative metrics (e.g., strong portfolio, compelling cover letter, diverse experience). A human should review these.
- Diversity Check: Before extending offers, have a human panel review the demographic makeup of the final candidates. If it’s too homogenous, revisit the earlier stages and ask why. Was the AI inadvertently filtering out certain groups?
- The “Wildcard” Slot: Dedicate a percentage of your interview slots (even 5-10%) to candidates who don’t fit the algorithmic mold but were identified by a human reviewer as having high potential or a unique perspective.
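The diversity check in particular lends itself to a quick script: compare each group’s share of finalists against its share of the applicant pool. The group labels, counts, and the “half the pool share” threshold below are all invented for illustration; real thresholds should come from your legal and DEI teams.

```python
# Sketch of a pre-offer diversity check (hypothetical labels and counts):
# flag any group whose finalist share falls well below its applicant-pool share.

from collections import Counter

applicant_groups = Counter({"group_a": 120, "group_b": 60, "group_c": 20})
finalist_groups = Counter({"group_a": 9, "group_b": 1, "group_c": 0})

total_applicants = sum(applicant_groups.values())
total_finalists = sum(finalist_groups.values())

flagged = []
for group, applied in applicant_groups.items():
    pool_share = applied / total_applicants
    final_share = finalist_groups[group] / total_finalists
    if final_share < 0.5 * pool_share:  # arbitrary "half the pool share" threshold
        flagged.append(group)
    print(f"{group}: pool {pool_share:.0%}, finalists {final_share:.0%}")

if flagged:
    print("Revisit earlier pipeline stages for:", ", ".join(flagged))
```

A flagged group doesn’t prove the AI is biased, but it’s exactly the trigger for the “revisit the earlier stages and ask why” conversation.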
Here’s a conceptual example of how you might implement a “wildcard” review process using a simple database query, assuming your AI assigns a ‘score’ and a human reviewer can tag a candidate as ‘wildcard_potential’.
```sql
-- SQL query to identify candidates for human "wildcard" review.
-- Assumes your AI assigns a numeric 'ai_score' and a human can set a 'wildcard_tag'.
SELECT
    candidate_id,
    candidate_name,
    ai_score,
    human_notes
FROM
    applications
WHERE
    (ai_score BETWEEN 0.6 AND 0.75)  -- candidates who were "almost there" according to the AI
    OR wildcard_tag = TRUE           -- candidates specifically flagged by a human reviewer
ORDER BY
    ai_score DESC, candidate_name;
```
This query would pull candidates who were borderline according to the AI, giving a human reviewer a chance to re-evaluate, alongside any candidates a human specifically marked for a second look, regardless of their AI score. It’s about creating intentional friction points for human judgment to step in.
The Future of Agency in Hiring
The goal isn’t to stop progress. It’s to ensure that progress serves human flourishing, not just corporate efficiency metrics. When AI makes hiring decisions without sufficient human oversight, we risk creating a self-perpetuating cycle of conformity. We lose the serendipity, the gut feelings, the ability to take a chance on someone who might just redefine what “success” looks like for our organization.
Our agency in hiring isn’t just about the hiring manager’s ability to pick a candidate. It’s about the candidate’s agency to present their full self, to have their unique qualities seen, and to not be filtered out by a black box that doesn’t understand the messy, beautiful complexity of human potential.
Let’s use AI to make our processes smoother, to handle the grunt work, but let’s fiercely protect the human element in the decisions that truly matter. Because in the end, it’s not just about filling a role; it’s about building teams, fostering culture, and shaping the future of work itself.
Actionable Takeaways:
- Demand Transparency: Don’t blindly accept vendor claims. Understand the data and logic behind your hiring AI.
- Define Your Values: Explicitly articulate what “good fit” and “potential” mean for your organization beyond keywords, and try to incorporate those into your AI’s training or evaluation criteria.
- Implement Human Review Gates: Create mandatory points in your hiring pipeline where human judgment overrides or augments algorithmic decisions, especially for “near miss” candidates or diversity checks.
- Test and Iterate: Continuously audit your AI’s performance against real-world outcomes and adjust its parameters or your review processes accordingly.
- Champion the Wildcard: Actively seek out and advocate for candidates who don’t fit the mold but demonstrate exceptional potential or unique perspectives.
Originally published: March 21, 2026