Work with Ben

Asynchronous Interview & Professional Overview

Professional Background

Tell me about your professional experience and career trajectory.

I have 10 years of software engineering experience across multiple domains. I started in web development and digital media production, then moved into full-stack development at startups, and most recently transitioned into QA automation engineering.

Currently, I work as a QA Automation Engineer II for a government software contractor, where I engineer and maintain automated regression test suites and lead AI-enhanced development practices for the team. I'm actively migrating a legacy VB.NET Selenium codebase into a modern C# adapter pattern that implements both Selenium and Playwright, while building the AI governance framework—version-controlled Rules, Profiles, and Prompts—that enables my team to ship higher-quality test code faster. Before that, I spent over two years at Menuat building digital menu solutions—developing hundreds of projects from design to deployment, handling everything from frontend development to client support and on-site installation.

My role has evolved beyond individual contribution into technical leadership for AI adoption. I've drafted and iterated on the shared AI configuration artifacts our team uses daily, built a custom documentation application for rapid codebase onboarding, and trained teammates on effective AI-assisted workflows. This breadth of experience—from design through testing, from individual scripts to team-wide standards—helps me build more reliable systems and anticipate issues before they compound.

What draws you to roles involving AI and human-computer interaction?

I've always been interested in how people interact with technology. My university research focused on "User Interface Considerations for Emerging Input Device Technologies"—exploring how new input methods change interface design. That curiosity has evolved into a focus on AI-enhanced interfaces.

I created Promptfolio.dev to showcase practical applications of prompt engineering, demonstrating how custom GPTs can be architected for specific use cases. My AI Lab experiments with conversational interfaces and explores how AI can enhance rather than replace human capabilities.

I'm drawn to this space because we're at an inflection point where the quality of AI integration directly impacts user outcomes. Building these interfaces well requires understanding both the technical possibilities and the human factors—which is exactly where my background positions me.

What type of role and environment would be the best fit?

I'm looking for a fully remote position where I can contribute to AI-first products or thoughtfully integrate AI into existing systems. I thrive in environments that value both technical depth and practical problem-solving—teams that care about why something works, not just that it works.

Ideally, I'd work with a team that documents decisions well and communicates asynchronously. I'm energized by companies tackling complex problems where my experience bridging development, QA, and AI integration can make a real impact.

Roles that interest me include: Senior QA Automation Engineer, AI/ML Application Developer, Full-Stack Engineer with AI focus, or Technical Lead positions where I can help teams build reliable AI-enhanced systems.

QA Automation & Testing

How do you approach building an effective automated testing strategy?

I follow the testing pyramid principle: comprehensive unit tests for business logic, integration tests for API contracts and component interactions, and strategic end-to-end tests for critical user journeys. The key is knowing what to test at each level.

In my current role, I maintain large regression suites and focus heavily on test reliability—proper wait strategies, isolated test data, and clear diagnostic output when tests fail. Flaky tests erode confidence, so I invest time upfront making tests deterministic. My weekly cadence includes regression triage, diagnosis, and resolution across the full automated suite, alongside ongoing script maintenance, optimization, and enhancement.
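The "proper wait strategies" point can be sketched in code. This is a minimal, hypothetical polling helper (not the actual suite's implementation), showing the idea of waiting on a condition instead of sleeping for a fixed interval, with a descriptive failure message when the condition never holds:

```typescript
// Hypothetical polling helper: wait for a condition instead of a fixed
// sleep, so the test proceeds as soon as the app is ready and fails
// with a descriptive message when it never is.
async function waitFor<T>(
  condition: () => Promise<T | null>,
  description: string,
  timeoutMs = 5000,
  intervalMs = 100,
): Promise<T> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const result = await condition();
    if (result !== null) return result; // condition met: return immediately
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Timed out after ${timeoutMs}ms waiting for: ${description}`);
}
```

Because the timeout message names what was being waited for, a failure log points straight at the unmet condition instead of a bare assertion error.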

I'm currently leading a significant migration project: converting a VB.NET Selenium test codebase into C# using an adapter pattern that implements both Selenium and Playwright drivers. This gives the team framework flexibility while preserving test logic investments. I also convert manual test paths into automated scripts, expanding coverage systematically rather than reactively.
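The adapter idea can be illustrated in miniature. The real migration is in C# and all names below are hypothetical; the point is only the shape: test logic depends on one shared interface, so Selenium and Playwright implementations can be swapped without touching the tests:

```typescript
// Illustrative adapter-pattern sketch (the real project is C#; these
// names are hypothetical). Tests depend only on IBrowserDriver.
interface IBrowserDriver {
  navigate(url: string): Promise<void>;
  click(selector: string): Promise<void>;
  readText(selector: string): Promise<string>;
}

// Each adapter wraps a vendor API behind the shared interface.
class PlaywrightAdapter implements IBrowserDriver {
  // A real implementation would hold a Playwright Page and delegate:
  // page.goto(url), page.click(selector), page.textContent(selector).
  async navigate(_url: string): Promise<void> {}
  async click(_selector: string): Promise<void> {}
  async readText(_selector: string): Promise<string> { return ""; }
}

// A test written against the interface runs on either driver.
async function loginSmokeTest(driver: IBrowserDriver): Promise<string> {
  await driver.navigate("https://example.test/login");
  await driver.click("#submit");
  return driver.readText("#status");
}
```

A SeleniumAdapter with the same interface would slot into `loginSmokeTest` unchanged, which is what preserves the existing test-logic investment during the framework transition.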

I use Page Object Models for maintainability and design test architecture that scales as the application grows. Tests should be treated as production code with the same attention to clarity and maintainability.
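A minimal page-object sketch, with hypothetical names, shows the maintainability payoff: selectors and page interactions live in one class, so a UI change touches one file instead of every test that uses the page:

```typescript
// Page Object Model sketch (names hypothetical). The driver is typed
// structurally so any automation backend with these two methods works.
class LoginPage {
  private readonly usernameField = "#username";
  private readonly passwordField = "#password";
  private readonly submitButton = "#submit";

  constructor(
    private driver: {
      type(selector: string, text: string): void;
      click(selector: string): void;
    },
  ) {}

  // Tests call this intent-level method and never see raw selectors.
  logIn(user: string, pass: string): void {
    this.driver.type(this.usernameField, user);
    this.driver.type(this.passwordField, pass);
    this.driver.click(this.submitButton);
  }
}
```

If the submit button's selector changes, only `LoginPage` is edited; every test calling `logIn` keeps working as-is.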

How are you leading AI-enhanced automation practices on your team?

I took the initiative to draft the Rules, Profiles, and Prompts that my QA Automation Engineering team now uses as version-controlled, first-class artifacts—iterated alongside the test code, not buried in a wiki. These configurations standardize how we interact with AI tools across the team, ensuring consistent output quality whether someone is writing new scripts, triaging regression failures, or reviewing generated code.

I built a custom documentation application that consolidates the auto-generated codebase documentation we produce, giving new team members a single interface for rapid onboarding to both the codebase and the AI tooling that supports it. This has significantly compressed the ramp-up time for teammates joining the project.

The practical impact is measurable: our weekly regression test triage, diagnosis, and resolution cycles are faster and more consistent. Script maintenance and enhancement work benefits from shared prompt patterns. The VB.NET to C# migration I'm leading uses these same AI-assisted practices to accelerate the conversion while maintaining quality standards. It's the kind of team-wide, AI-enhanced engineering uplift that enterprise organizations need right now.

How has your QA background influenced how you write and review code?

QA experience has made me think defensively about code. I naturally consider edge cases, error states, and failure modes while writing—not just the happy path. When I review code, I ask: "How will this fail? How will we know when it fails? How will we debug it?"

I write code with testability in mind: proper dependency injection, clear interfaces, and observable behavior. This isn't about writing more code—it's about writing code that's easier to verify and maintain.
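As a sketch of what "testability in mind" means in practice (all names here are invented for illustration), dependencies like the clock and the notification channel are injected rather than reached for globally, so tests can substitute deterministic fakes:

```typescript
// Hypothetical example of designing for testability via dependency
// injection: the clock and notifier are interfaces, so tests control
// time and observe side effects without real email or real waiting.
interface Clock { now(): Date; }
interface Notifier { send(message: string): void; }

class ReportScheduler {
  constructor(private clock: Clock, private notifier: Notifier) {}

  // Observable behavior: returns whether a report was sent this run.
  runDailyCheck(): boolean {
    const hour = this.clock.now().getUTCHours();
    if (hour === 6) {
      this.notifier.send("Daily report ready");
      return true;
    }
    return false;
  }
}
```

The return value makes the behavior directly observable, so a test asserts on it instead of inspecting internal state.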

I've also developed a habit of implementing comprehensive logging from the start. When something goes wrong in production, having the right diagnostic information available makes the difference between a quick fix and hours of investigation.
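One common way to make that diagnostic information usable, sketched here with invented names rather than any specific production setup, is structured logging: each entry carries machine-readable context so failures can be filtered and correlated instead of grepped out of free-form text:

```typescript
// Structured-logging sketch: emit one JSON object per event, with a
// timestamp, severity level, event name, and arbitrary context fields.
type LogLevel = "debug" | "info" | "warn" | "error";

function logEvent(
  level: LogLevel,
  event: string,
  context: Record<string, unknown> = {},
): string {
  const entry = {
    timestamp: new Date().toISOString(),
    level,
    event,
    ...context, // e.g. requestId, durationMs, userId
  };
  const line = JSON.stringify(entry);
  // A real system would write to a log sink; console is a stand-in here.
  if (level === "error") console.error(line);
  else console.log(line);
  return line;
}
```

A call like `logEvent("error", "db_timeout", { durationMs: 5000 })` then yields a line a log aggregator can query by event name or duration.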

Technical Approach

How do you integrate AI tools into your development workflow?

I operate across two complementary contexts. In enterprise work, I've taken the lead on implementing quality controls for AI-assisted development at scale: drafting and iterating on the Rules, Profiles, and Prompts my QA automation team shares as first-class artifacts, reviewed and version-controlled alongside the test code they support rather than buried in throwaway notes. I've also built a custom documentation application that consolidates auto-generated codebase documentation for rapid team onboarding.

At the team level, this means training teammates on effective AI-assisted workflows, establishing governance patterns for tools like Amazon Q Developer, and creating reusable prompt patterns that improve code quality while reducing wasted review cycles. It's the full package: not just using AI personally, but systematically leveling up the entire team's capability.

Outside work, I explore cutting-edge AI-powered CLI development tools. I pair extensively with Claude Code and Codex in long-form sessions, building production-grade projects, exploring agentic workflow interfaces, and pushing the boundaries of what's possible with human-in-the-loop AI development. This portfolio site, its testing infrastructure, and the AI Lab products are all built through these pairing sessions.

The key insight: AI integration isn't a feature you add—it's an engineering practice you build. The teams that win are the ones with shared standards, reproducible patterns, and leadership that understands both the technology and the human factors.

What's your approach to building maintainable full-stack applications?

I prioritize simplicity and clarity over cleverness. Clean separation of concerns, consistent patterns, and comprehensive documentation make codebases maintainable long-term. I've seen too many projects become unmaintainable because of over-engineering or inconsistent approaches.

My stack typically includes TypeScript for type safety across the full stack, React or Next.js for frontend, Node.js for backend, and Firebase or similar services for infrastructure. I choose tools based on project needs, not trends.

Accessibility and responsive design are built in from the start, not added later. Performance optimization focuses on what actually impacts users—proper caching, lazy loading, and critical path optimization. I measure before I optimize.

Tell me about your WebXR and immersive technology experience.

My WebXR projects explore how traditional interfaces translate to 3D space. The VR Menu Showcase and WebXR Playground demonstrate spatial navigation and interaction patterns using A-Frame and Three.js.

What fascinates me about immersive technologies isn't just the visual spectacle—it's rethinking fundamental interaction models. How do you design for comfort? How do gestures replace clicks? How do you maintain accessibility when the interface is spatial?

I approach these projects with progressive enhancement: core functionality works in 2D, then spatial features layer on top. This ensures broad accessibility while exploring what's possible in immersive contexts.
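That progressive-enhancement decision can be expressed as a small capability check. This is a simplified sketch, not the showcase's actual code; it leans on the real WebXR `isSessionSupported(mode)` API shape, abstracted behind a parameter so the fallback logic is testable:

```typescript
// Progressive-enhancement sketch: pick the UI tier from detected
// capabilities, defaulting to the 2D core experience whenever
// immersive support is absent or detection fails.
type UiTier = "2d" | "immersive-vr";

interface XrCapability {
  // Mirrors WebXR's XRSystem.isSessionSupported(mode) -> Promise<boolean>.
  isSessionSupported(mode: string): Promise<boolean>;
}

async function chooseUiTier(xr: XrCapability | undefined): Promise<UiTier> {
  if (!xr) return "2d"; // no WebXR at all: serve the core experience
  try {
    return (await xr.isSessionSupported("immersive-vr")) ? "immersive-vr" : "2d";
  } catch {
    return "2d"; // a detection failure must never break the baseline
  }
}
```

In a browser this would be called as `chooseUiTier(navigator.xr)`; the 2D path is the default at every branch, which is what keeps the experience broadly accessible.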

Collaboration & Communication

How do you work effectively with remote and distributed teams?

Effective remote work requires intentional communication. I default to async communication with clear, complete messages that don't require back-and-forth. Written documentation captures decisions so they're searchable and shareable.

I've worked across time zones and learned to plan work so no one is blocked waiting for someone else to wake up. Critical discussions get scheduled sync time; everything else flows through documented async channels.

I also prioritize visibility—regular updates on progress, proactive communication about blockers, and clear status on work in progress. Trust in remote teams comes from consistency and reliability.

How do you adapt communication across different roles and audiences?

I adjust technical depth based on audience. With product managers, I focus on user impact and business outcomes. With designers, I discuss feasibility and interaction possibilities. With QA, I emphasize testability and edge cases. With engineers, I dive into implementation details.

Visual aids help bridge gaps—architecture diagrams, flowcharts, and annotated screenshots communicate complex ideas more effectively than long written explanations. I create these proactively rather than waiting to be asked.

Most importantly, I listen actively and ask clarifying questions. Misunderstandings are expensive; taking time to confirm understanding upfront saves significant time downstream.

Let's Build Something Together

I'm actively seeking remote opportunities in AI-enhanced development, QA automation, or full-stack engineering. Let's discuss how I can contribute to your team.