PAWNPARSE
AI & CHESS SHEETS
When I first joined PawnParse, the product already had a powerful ML model, but the UI around it felt more like a debug tool than a product. My goal as a UX/UI designer was to turn that technical setup into a flow that a regular chess player or tournament organizer could actually enjoy using.
Background & Problem
PawnParse is a web application that converts photos of handwritten chess score sheets into structured PGN files. The product uses computer vision and machine learning to:
detect move entries on the sheet (bounding boxes)
recognize handwritten text inside each box
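The end product of this pipeline is a PGN (Portable Game Notation) file, the plain-text standard that chess platforms and analysis tools exchange. For context, a minimal PGN file looks like this (the players and moves here are illustrative):

```
[Event "Club Championship"]
[Site "?"]
[Date "2024.05.11"]
[Round "3"]
[White "Petrov, Ivan"]
[Black "Sydorenko, Olena"]
[Result "1-0"]

1. e4 e5 2. Nf3 Nc6 3. Bb5 a6 4. Ba4 Nf6 5. O-O 1-0
```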
Before PawnParse, players, coaches, and organizers spent a lot of time manually digitizing games. Even with the model in place, early concepts showed a risk: the interface made the process feel more complex than it really was.
During initial discussions with the founder, I saw an early idea for the UI: a three-panel layout showing the original sheet, AI predictions, and final PGN side by side. On paper it looked powerful, but we both had the same concern — for a first-time user this could feel like sitting in front of an airplane cockpit.
The core UX problem became clear: users needed a guided, understandable journey from photo to PGN, not a “control room” of all states at once.
My Role
UX/UI Designer (freelance, working directly with the founder)
I was responsible for turning the raw concept and ML capabilities into a clear, step-based user experience:
Auditing the initial three-panel idea and redesigning it into a guided, linear flow.
Defining the core user journey from upload to PGN export.
Designing key screens: upload, bounding box review, move correction, metadata editing, final export.
Adapting the UX to asynchronous backend processing (multiple AI phases).
Creating high-fidelity UI in Figma, including interaction states, error handling, and AI feedback patterns.
We worked in short feedback loops: I shared flows and screens with the founder, we discussed edge cases, and then refined the design based on both technical constraints and early user expectations.
UX Challenges & Insights
As we iterated on the concept, a few key UX challenges surfaced.
1. The initial layout overwhelmed users
The three-panel idea tried to show everything at once: scan, AI predictions, PGN output. It offered maximum visibility, but minimum guidance, and early conversations made that trade-off obvious.
Insight: We didn’t just need a layout; we needed a guided flow.
2. Asynchronous backend required clear states
The backend runs in multiple phases: Uploading the image → Detecting bounding boxes → Reading handwriting inside each box
From a UX perspective, this means that “loading…” is not enough. Users needed to know: what exactly is happening now, how much is left, what will happen next.
Insight: The interface had to visualize the process, not hide it behind a single spinner.
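To make that concrete, here is a minimal sketch of how the frontend could model those phases. The type and field names are my illustration, not PawnParse’s actual API:

```ts
// A hypothetical shape for the processing status the frontend polls.
// Phase and field names are illustrative, not PawnParse's real API.
type SheetStatus =
  | { phase: "uploading"; progress: number }          // 0..1 upload progress
  | { phase: "detecting" }                            // finding bounding boxes
  | { phase: "reading"; done: number; total: number } // boxes recognized so far
  | { phase: "ready" }
  | { phase: "error"; message: string };

// Each phase gets its own human-readable line, so the UI can answer
// "what exactly is happening now?" instead of showing a bare spinner.
function describeStatus(s: SheetStatus): string {
  switch (s.phase) {
    case "uploading": return `Uploading image (${Math.round(s.progress * 100)}%)`;
    case "detecting": return "Detecting move boxes on the sheet";
    case "reading":   return `Reading handwriting (${s.done} of ${s.total} boxes)`;
    case "ready":     return "Ready for review";
    case "error":     return `Something went wrong: ${s.message}`;
  }
}
```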
3. Trust in AI depended on visible confidence
The model could be very confident about some moves and quite unsure about others. If we didn’t surface this, users might trust wrong results or distrust everything. We didn’t want the AI to be a “black box”.
Insight: We needed a simple, visual way to show confidence, so users could focus their attention where it was really needed.
4. Error correction had to happen in context
Fixing mistakes had to be quick, local, and as close as possible to the original sheet. Jumping between screens or modes every time a user wanted to correct a move would quickly become frustrating.
Insight: Editing should feel like a natural continuation of the recognition step, not a separate, heavy task.
UX/UI Solution
Based on these insights, I redesigned PawnParse into a step-based flow that matches how users naturally think about the task: from uploading a sheet → controlling what AI reads → polishing results → exporting a clean game.
Upload & File Dashboard
We start with a simple file dashboard:
Users upload one or multiple score sheets.
Each file has a clear status (uploading, processing, ready for review), basic info, and direct actions like “Review” or “Share”.
This screen doesn’t try to explain the AI; it just answers one basic question: what state is each file in? It sets a calm, familiar starting point before we introduce any complexity.
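In data terms, the dashboard needs little more than a per-file status model. A minimal sketch, with names that are assumptions rather than PawnParse’s real data model:

```ts
// Illustrative model for one row of the file dashboard.
type FileStatus = "uploading" | "processing" | "ready-for-review";

interface SheetFile {
  id: string;
  name: string;       // e.g. "round3_board12.jpg"
  uploadedAt: Date;   // basic info shown on the row
  status: FileStatus; // drives the status badge
}

// Actions follow directly from the status, so a row never offers
// "Review" before the sheet is actually ready for it.
function actionsFor(file: SheetFile): string[] {
  return file.status === "ready-for-review" ? ["Review", "Share"] : [];
}
```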
Bounding Box Review & Correction
The next step focuses on what the AI will read:
All detected bounding boxes are shown on top of the original image.
Users can add missing boxes, remove wrong ones, and drag or resize boxes to better match the handwritten moves.
When we first discussed this step, we realized that many errors come not from the recognition itself, but from incorrect or noisy input regions. Giving users control over bounding boxes turns them into partners of the AI, not passive observers.
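A minimal sketch of what “user-editable boxes” means in data terms, assuming each box is a pixel rectangle over the scan (names are illustrative):

```ts
// Illustrative model of one editable bounding box over the scanned sheet.
interface MoveBox {
  id: string;
  x: number;      // top-left corner, in image pixels
  y: number;
  width: number;
  height: number;
}

// Dragging a corner handle resizes the rectangle. The corrected boxes,
// not the raw detections, are what the recognition step receives.
function resizeBox(box: MoveBox, dx: number, dy: number): MoveBox {
  return {
    ...box,
    width: Math.max(8, box.width + dx),   // enforce a minimum usable size
    height: Math.max(8, box.height + dy),
  };
}
```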
Data Reading & Progress Feedback
Once the boxes are confirmed, we trigger text recognition. Instead of a generic spinner, the UI shows a horizontal progress tracker that reflects real backend stages: detecting → reading → ready.
This may seem like a small detail, but it noticeably changes how users feel:
They no longer wonder “Is it stuck?”
They understand that the system is moving through clear, logical phases.
In a tool that relies on AI, this kind of transparency is crucial for building confidence.
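As a sketch, the tracker can be driven directly by the stage the backend reports, so the UI never claims more progress than actually happened. Stage names here are illustrative:

```ts
// Hypothetical wiring of the horizontal tracker to the backend stages.
const STAGES = ["detecting", "reading", "ready"] as const;
type Stage = (typeof STAGES)[number];

// Everything before the current stage renders as done, the current one as
// active, and later ones as upcoming, so users can see what comes next.
function trackerState(current: Stage) {
  const i = STAGES.indexOf(current);
  return STAGES.map((stage, idx) => ({
    stage,
    state: idx < i ? "done" : idx === i ? "active" : "upcoming",
  }));
}
```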
Move & Metadata Correction with Confidence Indicators
After recognition finishes, we move to correction in context:
All moves are listed in sequence as editable input fields.
Each move has a confidence indicator (e.g., a star rating or scaled visual marker).
Game metadata (players, date, event, location) is grouped in a dedicated editable section.
The design here solves two big problems at once:
Prioritization. Users don’t have to check everything equally. They can focus first on moves with low confidence.
Flow. Edits happen directly in the list, without jumping across screens or modes. This keeps the mental model simple: “scroll, scan, correct”.
This step is where users really feel that they are in control of the AI, not the other way around.
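One way to make this concrete is to map the model’s raw confidence score to a small set of visual tiers and sort the review queue so uncertain moves surface first. The thresholds below are illustrative assumptions, not PawnParse’s actual values:

```ts
// Illustrative mapping from a recognition confidence score (0..1) to the
// indicator shown next to each move. Thresholds are assumptions.
type Indicator = "high" | "medium" | "low";

function indicatorFor(score: number): Indicator {
  if (score >= 0.9) return "high";   // e.g. three stars, no highlight
  if (score >= 0.6) return "medium"; // e.g. two stars, subtle highlight
  return "low";                      // e.g. one star, strong "check me first" highlight
}

// Prioritization: surface the least confident moves at the top of the review list.
const reviewFirst = (moves: { ply: number; score: number }[]) =>
  [...moves].sort((a, b) => a.score - b.score);
```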
Game Review & PGN Export
The final step is about confidence in the output:
All moves are shown in a clean, table-like format consistent with PGN structure.
The original sheet remains accessible for quick comparison.
Users can download the PGN file or send the game directly to supported platforms (e.g., Lichess).
The goal was to make exporting feel like the natural conclusion of the process.
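For the Lichess path, the platform’s public game-import endpoint makes this a single request. The sketch below is my illustration of the idea, not PawnParse’s actual export code:

```ts
// Send a finished PGN to Lichess via its public import endpoint
// (POST https://lichess.org/api/import). Error handling is trimmed.
async function sendToLichess(pgn: string): Promise<string> {
  const res = await fetch("https://lichess.org/api/import", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({ pgn }),
  });
  if (!res.ok) throw new Error(`Lichess import failed: ${res.status}`);
  const game: { url: string } = await res.json();
  return game.url; // a link the user can open right away
}
```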
Outcome & Impact
Redesigning PawnParse into a step-based, transparent flow led to several key improvements:
30-40%
Faster onboarding for new users
Users no longer have to decode a three-panel interface. The guided flow and clear statuses reduce confusion and help users reach a first successful export much faster; based on founder feedback, onboarding time decreased by roughly 30–40%.
Increased trust in AI results
By making AI confidence visible and moves easily editable, the product shifts from a “black box” to a collaborative assistant that users feel in control of.
Alexander Lind, Founder
“Oleksii quickly restructured our complex initial concept into a clear, step-based flow. He helped bring structure and consistency to a wide range of screens and made collaboration smooth and productive.”
Higher completion rates for full games
Progress indicators, clear steps, and confidence-based prioritization reduce drop-offs mid-process.
A polished MVP ready for demos and investment
The new UX allowed the founder to onboard beta users and run more convincing demos for stakeholders and potential investors.