AI Doesn't Have to Be the Enemy.
But Right Now? It Is.
The Truth About AI Bias? It's Personal.
I didn't start this work because I love technology. I started it because I saw what happens when technology doesn't love us back.
I've watched algorithms reject qualified candidates because their names "sound wrong." I've seen AI recruiting tools filter out women before a human ever sees their resume. I've documented how predictive policing software turns zip codes into verdicts.
And I got tired of watching while children's futures got deleted by code that nobody bothered to check.
So, I Built Something About It.
The Guardian Protocol isn't theory. It's not a framework you frame and hang on the wall. It's a practical, implementable system for finding bias in AI, fixing it, and making sure it doesn't come back.
I've used it to help organizations across industries discover the blind spots in their systems. The results? Uncomfortable conversations followed by meaningful change.
What I Actually Do
I help you see what you can't see. Your AI systems are making decisions right now. Some of those decisions are wrong. Some are harmful. Most are invisible until someone like me shows you where to look.
I help you fix what's broken. Audits without action are just expensive paperwork. I work with your teams to implement changes that actually matter.
I help you build better from the start. Prevention is cheaper than reputation repair. I train teams to design AI that doesn't need fixing later.
Why Me?
Because I don't do corporate theater. I don't soften hard truths to make stakeholders comfortable. I tell you what's actually happening in your systems, and then I help you do something about it.
How We Work Together
I don't believe in one-size-fits-all solutions. Your AI bias problem is as unique as your organization. But the work always starts with the same question: What's actually happening in your systems right now?
The Guardian Protocol Audit
What it is: A comprehensive examination of your AI systems to identify where bias lives, how it got there, and what it's costing you.
What you get:
- Complete bias assessment across your AI decision-making systems
- Documentation of specific instances where algorithms disadvantage people
- Risk analysis (legal, reputational, operational)
- Prioritized recommendations for immediate action
- Executive summary for board-level conversations
Who it's for: Organizations using AI for hiring, lending, customer service, product recommendations, or any decision that affects people's opportunities.
Timeline: 4-6 weeks depending on system complexity
What happens next: You'll know exactly where you stand. No corporate jargon. No softening of hard truths. Just clear data about what's working and what's hurting people.
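To make the idea of a bias assessment concrete: one common first-pass screen in audits like this is the "four-fifths rule," which compares selection rates between groups. This is an illustrative sketch only, with hypothetical numbers, not the Guardian Protocol itself and not client data.

```python
# Illustrative sketch: a minimal disparate-impact screen using the
# "four-fifths rule." Group names and counts below are hypothetical.

def selection_rate(selected, total):
    """Fraction of applicants from a group who were selected."""
    return selected / total

def disparate_impact_ratio(group_rate, reference_rate):
    """Ratio of a group's selection rate to the reference group's.
    Values below 0.8 are a common red flag (the four-fifths rule)."""
    return group_rate / reference_rate

# Hypothetical hiring-funnel numbers for two applicant groups.
rate_a = selection_rate(selected=90, total=300)   # reference group: 0.30
rate_b = selection_rate(selected=45, total=250)   # comparison group: 0.18

ratio = disparate_impact_ratio(rate_b, rate_a)
print(f"Disparate impact ratio: {ratio:.2f}")     # 0.18 / 0.30 = 0.60
if ratio < 0.8:
    print("Below the four-fifths threshold: flag for deeper review.")
```

A ratio this far below 0.8 wouldn't prove bias on its own, but it's exactly the kind of signal an audit documents and then investigates.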
Speaking & Workshops
What it is: The same energy that filled the room at Black Tech Fest, customized for your conference, company, or leadership team.
Topics include:
- "The Digital Revolution Against AI Oppression" (Keynote)
- "Breaking Code: How to Find Bias Before It Finds You" (Workshop)
- "AI Ethics Isn't Optional Anymore" (Executive Briefing)
- "Building Fair AI: A Technical Deep Dive" (Developer Training)
Who it's for: Anyone ready to stop talking about AI bias and start doing something about it.
What people say:
"Powerful. Eye-opening. Sharp and urgent."
"People were sitting on the edge of their seats. They left educated and empowered."
"This isn't just technical. It's existential."
Implementation & Training
What it is: After the audit, after the workshop, after everyone agrees something needs to change, this is where change actually happens.
What we do together:
- Train your teams to recognize bias in real time
- Implement Guardian Protocol frameworks into your development process
- Build accountability systems so bias doesn't creep back in
- Create documentation for compliance and governance
Who it's for: Organizations serious about fixing the problem, not just checking a box.
Timeline: 3-6 months depending on scope
What's different: I don't hand you a 200-page playbook and disappear. I work alongside your teams until the new way of working becomes the only way of working.
I help organizations find the bias hiding in their algorithms before it destroys someone's future.
Every day, AI makes decisions about who gets hired, who gets loans, who gets opportunities. And every day, those decisions carry the fingerprints of human prejudice, encoded in software.
I don't do polite conversations about "algorithmic fairness." I do the uncomfortable work of showing you exactly where your systems are failing people. Then I help you fix it.
Because talking about the problem isn't enough.
It's time to break some code.
Feedback from BTF 2025:
"She delivered a powerful session. The statistics behind AI and bias were eye-opening." - Black Tech Fest Attendee
"Her message was sharp and urgent: bias in AI isn't just technical, it's existential." - Black Tech Fest Attendee
"People were sitting on the edge of their seats, engaged, and left educated and empowered." - Dion McKenzie, Founder, Black Tech Fest