On his first day at North Harbor Insurance, Leo was 15, quiet, and convinced he would spend the summer renaming files and updating spreadsheets.
By week two, he was pairing with a senior engineer to build a tiny agent that reviewed claim-form validation rules before code reached production.
Not an AI chatbot demo. A real engineering workflow.
Why this mattered to our team
Insurance software is detail-heavy. A missing edge case in a claim flow can delay payouts, trigger compliance issues, or flood support queues.
Our team had solid tests, but we still lost time on repetitive checks:
- Are field requirements consistent between UI and API?
- Did this change break state-level rules?
- Are we using the same error language across products?
Leo helped us approach these checks in an agentic way: define goals, define constraints, then let tools execute repeatable steps with human review gates.
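The shape of that loop can be sketched in a few lines. This is a minimal illustration, not our actual tooling; every name here (`CheckResult`, `run_checks`, `review_gate`) is hypothetical. The point is the structure: bounded checks the agent may run, and a gate where a human must approve anything that fails.

```python
# A minimal sketch of a bounded agent loop with a human review gate.
# All names here are hypothetical, for illustration only.
from dataclasses import dataclass
from typing import Callable


@dataclass
class CheckResult:
    name: str
    passed: bool
    detail: str


def run_checks(checks: dict[str, Callable[[], CheckResult]]) -> list[CheckResult]:
    """Execute each bounded check; the agent never acts beyond these steps."""
    return [check() for check in checks.values()]


def review_gate(results: list[CheckResult],
                approve: Callable[[CheckResult], bool]) -> bool:
    """Every failing result must be explicitly approved by a human reviewer."""
    return all(r.passed or approve(r) for r in results)
```

Nothing merges unless `review_gate` returns true, which keeps the autonomous part of the loop strictly inside the checks we defined.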
What he learned first
We started with one principle: agentic engineering is not “let AI code everything.” It is “design reliable loops where software can do bounded work autonomously.”
His first assignment had three parts:
- Map the claim intake flow from form submission to policy validation.
- List failure modes we repeatedly saw in pull requests.
- Build a small script that compared validation rules between frontend schema and backend contracts.
The script was simple, but the outcome was immediate: reviewers no longer had to catch the same mismatch by hand.
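The comparison idea can be sketched as a set difference over required fields. The claim types and field names below are invented for illustration; the real schemas were considerably larger.

```python
# Sketch of the rule-comparison idea: diff the required fields of a
# frontend schema against a backend contract, per claim type.
# All claim types and field names here are invented examples.

frontend_schema = {
    "homeOwner": {"required": ["name", "address"]},
    "auto": {"required": ["name", "vin"]},
}

backend_contract = {
    "homeOwner": {"required": ["name", "address", "policyNumber"]},
    "auto": {"required": ["name", "vin"]},
}


def diff_required_fields(frontend: dict, backend: dict) -> dict:
    """Return, per claim type, the fields required on only one side."""
    drift = {}
    for claim_type in frontend.keys() & backend.keys():
        ui = set(frontend[claim_type]["required"])
        api = set(backend[claim_type]["required"])
        if ui != api:
            drift[claim_type] = {"ui_only": ui - api, "api_only": api - ui}
    return drift
```

Run against the sample data above, this flags `homeOwner` because the backend requires a field the UI no longer collects, which is exactly the class of mismatch reviewers kept finding by hand.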
The turning point
In week four, Leo proposed adding an “explain diff” step: when rules drifted, the agent produced plain-language output for reviewers.
Instead of this:
requiredFields.homeOwner changed from [name, address] to [name]
Reviewers got this:
“Address is no longer required for homeowner claims in the UI, but the API still requires it. This can block submission after a client-side pass.”
That single improvement reduced review time and helped non-engineering stakeholders understand risk faster.
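An "explain diff" step like this is mostly string templating over the structured drift record. The sketch below is one hypothetical way to render it; the sentence wording mirrors the example above, and the function name and signature are assumptions, not our actual code.

```python
# Hypothetical "explain diff" step: turn a structured drift record into
# plain-language sentences for reviewers. Wording and names are invented.
def explain_drift(claim_type: str, ui_only: set, api_only: set) -> list:
    """Render one claim type's rule drift as reviewer-friendly sentences."""
    messages = []
    for field in sorted(api_only):
        # UI dropped the requirement but the API kept it: submissions can
        # pass client-side validation and then be rejected server-side.
        messages.append(
            f"'{field}' is no longer required for {claim_type} claims in the UI, "
            f"but the API still requires it. This can block submission after a "
            f"client-side pass."
        )
    for field in sorted(ui_only):
        # The opposite direction: the UI demands data the API never checks.
        messages.append(
            f"'{field}' is required in the UI for {claim_type} claims, "
            f"but the API does not require it."
        )
    return messages
```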
How we kept it safe
Because this was insurance, we set hard boundaries:
- No autonomous production writes.
- No policy-decision logic generated without explicit approval.
- Every agent result linked to source files and tests.
- Human sign-off required before merge.
Leo learned a key lesson early: autonomy without traceability is just hidden risk.
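Traceability is easy to enforce mechanically if every agent finding carries its evidence. A minimal sketch, assuming a hypothetical `AgentFinding` record: a finding that cites no source file or test is rejected before it ever reaches a reviewer.

```python
# Hypothetical trace record: no agent finding without its evidence.
from dataclasses import dataclass


@dataclass
class AgentFinding:
    message: str
    source_files: list   # files the finding is derived from
    tests: list          # tests that exercise the affected behavior

    def is_traceable(self) -> bool:
        # Autonomy without traceability is hidden risk: a finding that
        # links to no source file or test must not reach a reviewer.
        return bool(self.source_files) and bool(self.tests)
```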
What we shipped by the end of the internship
By the end of summer, the intern project had become a team utility used in every claims-related pull request.
We saw:
- fewer validation regressions
- faster review cycles
- clearer audit trails for compliance conversations
And Leo left with something better than a portfolio bullet: he learned how to engineer systems where humans set direction and agents handle disciplined execution.
For a 15-year-old intern, that is a strong start.
For our insurance team, it was a reminder that good agentic software engineering is mostly about clarity, constraints, and feedback loops.