A practical account of what worked, what broke, and what we observed while helping a client migrate a decade-old Laravel application to a greenfield system using an AI coding assistant.
Background
Every engineering team has one: a system that works, that everyone depends on, but that carries the weight of years of accumulated decisions, workarounds, and "we'll fix it later" comments. Our client's was an exam registration management system: a Laravel monolith with no migrations, no consistent architecture, and a codebase that had grown organically over many years.
Rather than a full manual rewrite, we ran an experiment: could we leverage an AI coding assistant to drive the bulk of the porting effort to a modern, well-structured Laravel application? This is the story of that experiment.
Preparation: Setting the AI Up for Success
The most important thing we did before writing a single line of new code was to invest in context. An AI assistant is only as good as the information it has access to, and we leaned on this preparatory work heavily throughout the project.
We decided to use GitHub Copilot with the Opus 4.6 model for most of the heavy lifting: planning and implementing large groups of files. For simple changes - a tweak to a single file, or a small, self-contained addition - we used the Sonnet 4.6 model.
Preparing the New Project
A fresh repository was created from a standard modern Laravel project with a React TypeScript SPA frontend. Laravel serves as an API for the React app - a substantial change from the old project, which was built on server-rendered Blade templates. The AI generated a second AGENTS.md for this project, capturing the conventions of the new architecture rather than the old one.
Documenting the Legacy System
The legacy codebase was placed in a .gitignored folder within the new repository. This gave the AI a clean, accessible source to read from during porting without it ever being committed or deployed.
The old system had no migrations: the database schema lived only in the running database and in the heads of the people who built it. We started by asking the AI to traverse all Eloquent models and relationships and generate a DDL SQL schema from them. This became the source of truth for the new database.
We also asked the AI to read through the entire old project and produce an AGENTS.md file - a structured document covering project layout, conventions, and how key things were built. This file became the primary context fed into every subsequent prompt, avoiding the need to re-explain the project each time; we later renamed it to PROJECT.md.
The Porting Plan
We asked the AI to produce a detailed, itemized porting plan written to a PORTING_PLAN.md file. This plan was then enriched with the new architecture's conventions - including code examples for DTOs, Repositories, Search classes, controllers, and OpenAPI documentation.
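To make the conventions concrete, the plan carried a small example for each layer. A hypothetical sketch of the DTO convention, shown here in TypeScript for illustration (the actual backend is Laravel/PHP, and all names are invented):

```typescript
// Hypothetical DTO convention from the porting plan: an immutable, typed
// carrier passed between layers, constructed from raw row data.
class CandidateDto {
  constructor(readonly id: number, readonly email: string) {}

  // Factory keeps construction logic in one place.
  static fromRow(row: { id: number; email: string }): CandidateDto {
    return new CandidateDto(row.id, row.email);
  }
}
```

Embedding examples like this in the plan meant every later prompt could point at a concrete shape instead of a prose description.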
Because the plan grew large and unwieldy, we had the AI split it into a porting/ directory with one file per concern, ordered numerically, with a README acting as an index:
- 01 – Database migrations
- 02 – Eloquent models
- 03 – Enums
- 04 – DTOs
- 05 – Search classes
- 06 – Repositories
- 07 – Services
- 08 – API schemas
- 09 – Controllers
- 10 – Routes
- 11 – Middleware
- 12 – Authentication
- 13 – Email system
- 14 – Database seeders
- 15 – Frontend pages and components
- 16 – Tests
Each file was a self-contained task list. Deprecated integrations were moved to a not-ported/ folder rather than deleted, in case they became relevant later.
The Porting Process
Database and Models
Database migrations were the most straightforward phase. The AI read the DDL schema and produced the two migrations we asked for: one for the main schema (all tables and indexes in a single file) and one for database views. Clean, predictable, no surprises.
Eloquent models were done in batches to maintain control over what was generated. After each batch, the output was reviewed for correct relationships and field definitions before moving on.
Enums: The First Stumble
Enums exposed an early lesson in prompt precision. The original instruction said to use string-backed enums where possible, falling back to numeric values only when a database column demanded it. When we later asked the AI to migrate those instructions from the porting plan into AGENTS.md, it rephrased the rule subtly - dropping the exception - and subsequently converted all enums to strings, including ones that needed numeric values.
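The rule as originally intended, sketched here in TypeScript (the project itself uses PHP backed enums, and these names are hypothetical):

```typescript
// String-backed where possible...
enum RegistrationStatus {
  Draft = "draft",
  Submitted = "submitted",
}

// ...falling back to numeric values only where the database column demands it.
enum ExamLevel {
  Basic = 1,
  Advanced = 2,
}
```

Dropping the second half of the rule is exactly the kind of paraphrase that looks harmless in AGENTS.md and then quietly breaks a schema constraint.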
The correction was quick, but it illustrated something important: the AI will follow what is written, not what you meant to write. Precision in AGENTS.md is not optional.
First Pivot: Grouping the Request Stack
The porting plan originally treated DTOs, Search classes, Repositories, Services, Controllers, and Routes as separate sequential steps. In practice, none of these can be implemented in isolation - they are all part of a single request lifecycle.
We pivoted. The AI was asked to enumerate all controllers from the legacy system, group them by complexity, then port each group as a complete vertical slice: route → controller → service → repository → DTO/search class.
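The shape of one such slice, sketched in TypeScript (the real stack is Laravel; every name here is hypothetical):

```typescript
// Hypothetical vertical slice: the route handler stays thin and delegates to
// a service, which owns the business rules and talks to a repository for data.
interface ExamDto { id: number; name: string }

class ExamRepository {
  private rows: ExamDto[] = [{ id: 1, name: "Spring session" }];
  findById(id: number): ExamDto | undefined {
    return this.rows.find(r => r.id === id);
  }
}

class ExamService {
  constructor(private repo: ExamRepository) {}
  getExam(id: number): ExamDto {
    const exam = this.repo.findById(id);
    if (!exam) throw new Error("404: exam not found");
    return exam; // business rules live here, never in the repository
  }
}

// "Controller": translate the incoming request into a service call and back.
function showExam(id: number, service = new ExamService(new ExamRepository())): ExamDto {
  return service.getExam(id);
}
```

Porting a slice end to end like this meant each group could be reviewed, and exercised through its route, as a working unit.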
The AI produced a well-structured grouping:
- Simple read-only lookups - no DTOs or services required, just repository and controller
- Admin management - CRUD with validation, user state, and business rules
- Core domain - complex entities with interdependencies, full stack required
- Intentionally skipped - legacy integrations and session-based flows replaced by JWT
The simple group ported cleanly and appeared in the OpenAPI docs on the first attempt. The more complex groups required back-and-forth, particularly around user state.
Second Pivot: Authentication Architecture
The old system used a custom OAuth server for user authentication. The new system was being built to integrate with a shared account portal - but that portal was not yet production-ready.
Rather than block progress or implement something temporary, the client made a deliberate architectural decision: reimplement the OAuth flow from the legacy system within the new project as a transitional authentication approach. This required changes across the backend and was the first point where frontend work became necessary as well.
Third Pivot: Accumulated Technical Debt
As the AI worked through the more complex controllers, services, and repositories, a pattern emerged. Functions were being named poorly. Architectural constraints that had been explicitly defined - thin controllers, no business logic in repositories - were starting to erode. The AI was picking up patterns from the old code and carrying them forward rather than enforcing the new conventions.
We stopped the porting work entirely, asked the AI to review everything it had created so far, and had it fix all violations in a single pass. The refactor was effective and brought the codebase back to the defined architecture. We also asked the AI to document the convention changes in AGENTS.md for future prompts.
The lesson: accumulated drift is easier to fix in one targeted pass than to chase across many individual corrections.
Middleware and Review
After the controller phase, the AI was asked to review what had been ported against the original plan and mark completed items. This prevented redundant work in later phases.
During this review, an access control problem surfaced: admin and candidate endpoints had been grouped together in some controllers, making it impossible to apply the right middleware. The AI split these into properly separated controller classes and updated the porting documentation accordingly.
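The underlying idea, as a minimal TypeScript sketch (names hypothetical; the real implementation is Laravel route middleware): once admin and candidate actions live in separate controllers, each route group can carry its own access check.

```typescript
// A handler produces a response for a caller with a given role.
type Handler = (role: "admin" | "candidate") => string;

// Middleware wraps a handler and rejects callers without the admin role.
const requireAdmin = (next: Handler): Handler => role =>
  role === "admin" ? next(role) : "403 Forbidden";

const listAllCandidates: Handler = () => "200 OK: candidate list";

// Admin-only route: the whole group gets the admin check applied once.
const adminRoute = requireAdmin(listAllCandidates);
```

Mixing admin and candidate endpoints in one controller made this kind of per-group wrapping impossible, which is why the split was needed.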
Email System
Email was not straightforward. The AI successfully ported the email templates but failed to wire up the actual email-sending calls across the application. This required several explicit correction cycles - identifying each call site, verifying the correct service method, and confirming the wiring was correct.
Porting the Frontend
The frontend was the most complex part of the port. The legacy system used server-rendered Blade views; the new system used React with a shared component library already built for a different client project.
Approach
The AI was first asked to analyze and document the React project structure before touching any code. That document then drove all subsequent porting decisions.
The original porting order - candidate pages, then admin pages, then shared components - was reversed after review. Shared components must exist before the pages that use them. The final order: shared components → admin pages → candidate pages.
Issues Encountered
Icon system - The AI carried over the legacy icon approach rather than adopting the new project's icon system, resulting in widespread missing icon errors across the UI. The fix was to create a proper icon plugin for the build tool that included the required icon set.
Global vs. per-page state - The AI created a global state store for a context selection used throughout the registration flow. While this mirrored the old system's approach, it introduced subtle bugs - selecting a context on one page would silently affect state on another. Refactoring to per-page state resolved it.
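A minimal TypeScript sketch of the bug class (the real code is React state management; names are hypothetical): with one global store, a selection made on page A silently changes what page B sees, while per-page instances remove the coupling.

```typescript
// Stand-in for a state container holding the current context selection.
class SelectionState {
  private selected: string | null = null;
  select(ctx: string): void { this.selected = ctx; }
  get(): string | null { return this.selected; }
}

// Per-page instances: each page owns its own selection.
const pageA = new SelectionState();
const pageB = new SelectionState();
pageA.select("spring-exam"); // pageB remains untouched
```

With a single shared `SelectionState` both pages would read and write the same value, which is exactly the silent cross-page interference we observed.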
Navigation permissions - The AI initially rendered the full navigation to all admin users. Access to navigation pages is controlled by a database table. It was corrected to load only the pages each user has been granted access to.
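The fix amounts to filtering the menu by the user's grants, which can be sketched as (hypothetical names; the grants come from a database pivot table in the real system):

```typescript
// Build the navigation from the pages granted to this user,
// never from the full list of admin pages.
function visibleNav(allPages: string[], granted: Set<string>): string[] {
  return allPages.filter(page => granted.has(page));
}
```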
Registration wizard bug - A data field required during candidate registration was not being passed through the JWT token in the new authentication flow (in the old system it came through OIDC). The AI's first fix was to make the column nullable - a workaround rather than a solution. After being redirected to the old code and the new JWT approach, it implemented the correct fix.
Additional issues found during final review - Incorrect layout in lookup results, a bug that caused lookups to fail when adding a note, a validation error blocking record edits, a field parsing bug producing incorrect report output, a wrong ID lookup causing 404s on valid records, and a missing UI for editing agreement pages.
Security Review
After porting was complete, we had the AI run a full API security audit. It reviewed all backend endpoints and returned a structured report.
Admin Page Access - Well Implemented
The admin page access system was ported correctly. Every admin route was gated by per-page middleware checking a database pivot table, and OR logic was correctly applied to routes shared across multiple page permissions. No issues found.
Candidate Data Isolation - Critical Gaps
Several high-severity IDOR vulnerabilities were found in candidate-facing routes. Routes parameterized by a user identifier had no ownership check - any authenticated user could pass any identifier. The frontend never exposed this because it read the value from the JWT, but the API itself was directly exploitable.
Additionally, several write endpoints were accessible to candidates when they should have been admin-only.
Remediation applied:
- Added ownership middleware that validates the route parameter against the authenticated user's identifier from the JWT
- Moved whitelist management endpoints behind admin authentication
- Moved candidate create/update endpoints behind admin authentication
- Added ownership validation to history and notes write endpoints
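The core of the ownership middleware, sketched in TypeScript (the real implementation is Laravel middleware; names here are hypothetical): the user identifier in the route must match the subject claim of the verified JWT, or the request is rejected before it reaches the controller.

```typescript
// Minimal shape of the verified token claims we rely on.
interface JwtClaims { sub: string }

// Reject any request whose route parameter names a different user
// than the one the token was issued to.
function assertOwnership(routeUserId: string, claims: JwtClaims): void {
  if (routeUserId !== claims.sub) {
    throw new Error("403: record does not belong to the authenticated user");
  }
}
```

The frontend always sent the token's own identifier, which is why the gap never surfaced in normal use; the check has to live server-side.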
SQL Exposure
One endpoint executes SQL stored in the database to retrieve bulk email recipient lists. The AI flagged this and recommended replacing it with parameterized repository methods. Given that the stored SQL is not user-modifiable via the API, is admin-only, and is SELECT-only, the client accepted the risk for this phase and noted it for future remediation. All other raw SQL in the repositories used parameterized bindings, with no user input ever reaching the SQL string. The stored SQL itself was removed from public endpoint responses and remains visible only internally in the code.
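The parameterized-binding pattern the audit confirmed elsewhere can be sketched as follows (TypeScript illustration; `Query` and the function name are hypothetical stand-ins for the repository layer):

```typescript
// User input travels only as a bound parameter, never spliced into the SQL.
type Query = { sql: string; bindings: unknown[] };

function findRecipientsQuery(email: string): Query {
  return { sql: "SELECT id FROM users WHERE email = ?", bindings: [email] };
}
```

Because the SQL string is a constant, even a hostile input like `'; DROP TABLE users;--` arrives at the driver as an inert bound value.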
Tests
After porting was complete, unit and integration tests were added. The AI reviewed the full backend and produced an ordered test implementation plan.
Unit tests were completed with minor issues, most resolved through targeted feedback. Integration tests required additional setup - database test containers and bootstrapping - which the AI implemented for the first suite, establishing a pattern for all subsequent ones.
Tests were implemented in parallel using agent sessions, which meaningfully reduced the time to completion. After all tests were in place, a subset began failing intermittently - flaky tests caused by test isolation problems. These required individual investigation to identify and fix.
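The flaky-test class we hit reduces to shared mutable state between tests; a minimal sketch (TypeScript; the real suites run against database test containers, and these names are invented):

```typescript
// Stand-in for shared database state that tests mutate.
let records: string[] = [];

// Per-test reset hook: without it, rows from a previous test leak in
// and the assertion below depends on run order.
function beforeEachReset(): void {
  records = [];
}

function testCreatesExactlyOneRecord(): boolean {
  beforeEachReset();
  records.push("row");
  return records.length === 1; // deterministic regardless of test order
}
```

Each flaky suite needed its own version of this diagnosis: find the state it shares, then reset or isolate it per test.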
Key Takeaways
Plan first, then split the work. AI does not perform well with large, unfocused context. A structured, numbered plan across multiple files lets you track progress and keep prompts targeted.
One thread per concern. Starting a new chat thread for each porting group keeps the context window small and the AI focused. Cross-contamination between unrelated tasks is a real problem.
Parallelize where you can. Copilot Agent Sessions allow multiple independent tasks to run simultaneously. Use this for any work without interdependencies; in our case it helped a lot with unit and integration tests, which were mostly independent of each other.
Review before accepting. When it lacks sufficient information, the AI will frequently produce a plausible-looking but architecturally wrong implementation. Reviewing output before accepting is part of the workflow, not an afterthought. Do occasional reviews of the whole implementation as well.
Keep AGENTS.md alive. Every time a new pattern, decision, or convention is established, update AGENTS.md. It is the memory that carries forward across all prompts. Without it, the AI reverts to its defaults.
Point at the old code. Placing the legacy codebase in a git-ignored folder and asking the AI to reference it directly - rather than describing what it should do - produced significantly better output. Showing beats telling.
Correct drift early. The AI will gradually absorb patterns from the code it reads and reflect those in its output rather than the conventions you defined. A periodic refactor pass is cheaper than chasing individual violations.
The experiment demonstrated that AI-assisted porting is viable for a project of this complexity, but it is not autonomous. It requires an engineer who understands the target architecture, can identify drift, knows when to pivot, and maintains the project context the AI depends on. The AI handled volume well. The judgment calls remained human.