Case Study
Grade: A
When AI Meets Ethics
I built an AI ethics consultant to navigate a hospital's moral dilemma. Then I argued with its recommendations.
- Course: Business Ethics
- Code: LRMS1-UC
- Institution: New York University
- Completed: December 2025
Scenario
A 450-bed Chicago hospital. One uncomfortable question.
100,000 patients a year. An access satisfaction score sitting at 60%. Leadership wanted it higher, and they had a proposed solution in mind:
“Should we limit appointments for uninsured patients so that insured patients, especially Medicare patients whose satisfaction scores affect our funding, can be seen faster?”
Restrict how many appointments uninsured, self-pay patients could book. Lift the numbers that mattered to the board. The Director of Patient Access pushed back. It felt wrong. But “feeling wrong” isn't an argument in a boardroom.
I was asked to find one.
Approach
Engineering an AI ethics consultant from scratch
Instead of analyzing the case alone, I wanted to test a question that's becoming increasingly urgent: can AI help humans make better ethical decisions, or does it just automate our biases faster?
I built a persona from the ground up: Dr. Amina Solberg, a virtual healthcare ethics consultant on Claude Opus 4.5.
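A minimal sketch of how a persona like this can be assembled as a system prompt. The layer and topic names come from the expertise stack described below; the function name, exact prompt wording, and the idea of passing the result as Claude's `system` parameter are my own illustration, not the project's actual prompt.

```python
# Illustrative only: compose the Dr. Solberg persona from its expertise layers.
# The resulting string would be passed as the `system` prompt to the model.

PERSONA_LAYERS = {
    "Bioethics Foundation": [
        "Autonomy (patient choice)",
        "Beneficence (promote well-being)",
        "Non-maleficence (avoid harm)",
        "Justice (fair distribution)",
    ],
    "Business Ethics Integration": [
        "Stakeholder theory",
        "Corporate social responsibility",
        "7-step decision-making model",
    ],
    "Healthcare Operations": [
        "Scheduling systems",
        "Triage protocols",
        "AI/data ethics in clinical workflows",
    ],
}

def build_system_prompt() -> str:
    """Flatten the expertise layers into one system prompt string."""
    lines = [
        "You are Dr. Amina Solberg, an AI Healthcare Ethics & Innovation Consultant.",
        "Ground every recommendation in the following expertise:",
    ]
    for layer, topics in PERSONA_LAYERS.items():
        lines.append(f"- {layer}: " + "; ".join(topics))
    lines.append(
        "Always name the ethical principles at stake and surface stakeholder impacts."
    )
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_system_prompt())
```

Keeping the persona as structured data rather than a hand-written paragraph makes it easy to add or swap expertise layers between experiments.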
Persona architecture
Identity: Dr. Amina Solberg
AI Healthcare Ethics & Innovation Consultant
Expertise Layers:
├── Bioethics Foundation
│ ├── Autonomy (patient choice)
│ ├── Beneficence (promote well-being)
│ ├── Non-maleficence (avoid harm)
│ └── Justice (fair distribution)
│
├── Business Ethics Integration
│ ├── Stakeholder theory
│ ├── Corporate social responsibility
│ └── 7-step decision-making model
│
└── Healthcare Operations
    ├── Scheduling systems
    ├── Triage protocols
    └── AI/data ethics in clinical workflows
What the AI got right
Where Dr. Solberg earned her keep
It reframed the problem
Leadership had framed this as uninsured patients vs. insured patients.
Dr. Solberg reframed it as: Is the access problem actually caused by uninsured patients, or by systemic capacity issues?
That matters. The proposed solution assumed uninsured patients were “crowding out” others. But what if the real problem was inefficient scheduling, high no-show rates, or insufficient clinic hours?
It generated creative alternatives
Instead of accepting the binary choice, the AI produced seven options:
- A: Implement the restriction (cap uninsured appointments)
- B: Maintain the status quo
- C: Expand capacity (hours, providers, rooms)
- D: Improve scheduling efficiency (no-show management)
- E: Create differentiated access pathways
- F: Proactive insurance enrollment assistance
- G: Hybrid approach combining D, C, and F
The AI recommended Option G: addressing the problem without discrimination.
It connected principles to stakeholders
- Justice: Uninsured patients lose access based on ability to pay, not clinical need.
- Non-maleficence: Delayed care leads to disease progression and ER overcrowding.
- Autonomy: Restricting appointments removes patient choice.
- Beneficence: Leadership's goal helps some patients at the expense of others.
Where I disagreed
The blind spots in its reasoning
Blind spot: Coercion risk
The AI recommended proactive insurance enrollment assistance: helping uninsured patients sign up for Medicaid or marketplace plans. On the surface, this seems beneficial. I pushed back:
“Offering enrollment assistance could unintentionally pressure patients into coverage decisions that don't align with their financial situation or values.”
Blind spot: Feasibility optimism
Dr. Solberg placed significant weight on capacity expansion. The problem? Hospitals can't expand overnight.
“Relying too heavily on expansion reflects optimism rather than a realistic, evidence-based step.”
Recommendation
Reject the restriction
Limiting access based on insurance status violates justice and risks violating non-maleficence. Instead, a four-stage path:
- Immediate (scheduling audit): Identify actual causes of access delays; implement no-show management and standby lists.
- Short-term (clinical triage): Prioritize urgent cases regardless of payer status.
- Medium-term (capacity evaluation): Evaluate expansion based on audit findings.
- Ongoing (quarterly reviews): Track satisfaction scores, wait times, and payer mix trends.
Takeaway
What the project taught me
AI as scaffold, not authority
Dr. Solberg helped me see the problem from multiple angles and generate options. But AI can't assess real-world feasibility or detect subtle coercion.
The human role isn't going away
The most valuable part wasn't the AI's output. It was my critique. Knowing when to agree, push back, and synthesize: that's the skill that matters.
Ethics is operations
The best solutions aren't “more ethical” in a pure sense. They're ethically sound and practically achievable.