PROJECT OVERVIEW
HackDavis is one of the largest collegiate hackathons in California where over 950 students, creators, and leaders come together for 36 hours to create for social good.
Throughout its history, HackDavis's judging process has been problematic and inefficient. Whether using physical pen-and-paper or digital Google Forms, participants and judges expressed dissatisfaction with the chaotic experience each year.
Given this challenge, we were tasked with creating a digital product to streamline judging and improve coordination across a diverse range of users, ultimately enabling a smoother hackathon experience for all.
But this level of fairness is hard to enforce and maintain within the rigid time constraints of the hackathon.



What’s the background?
HackDavis ensures fair evaluations by emphasizing equal consideration and attention for every project.
Fairness is a top priority at HackDavis, with every project evaluated solely on its content and merit. Given the variety of project tracks, judges are also encouraged to ask clarifying questions to guide their understanding of each project.
The Emerging Problem
In reality, there is more than enough time to judge fairly; much of it is simply wasted on avoidable inefficiencies and errors, leaving participants confused as well.
Miscommunication about which teams still needed judging caused frequent delays. Judges occasionally asked irrelevant questions due to limited context or a lack of clear instructions, further slowing the process. The complexity of the process, combined with the absence of a clear and accessible judging guide, created unnecessary confusion. Together, these inefficiencies left participants waiting in uncertainty, without clear updates or timelines.
Therefore, we asked
How might we design an internal tool to streamline judging and improve coordination among judges, directors, and participants for a smoother hackathon experience?
01
UNDERSTAND
To Begin Researching
1 > MLH LITERATURE REVIEW
I took a look at the official Major League Hacking judging plan guide to stay aligned with established hackathon practices.
2 > EMPATHIZE WITH INTERVIEWS
I conducted 8 interviews with judges and hackers to uncover pain points in their past experiences and understand both perspectives.
Reviewing MLH Guidelines
WAIT... ARE WE EVEN UP TO STANDARDS?
Well, yes and no. While HackDavis has adhered to some aspects of the MLH judging criteria (specifically, the judging format), we discovered a significant gap: the participants' experience was largely overlooked in the process.
How should judges score?
"Our recommendation is always implementing a science-fair type of judging plan where the Hackers present their hackathon project."
Great! We’ve been implementing that method every year since starting back in 2016.
How should hackers stay engaged?
“To avoid confusion, the key is to make sure hackers know exactly where you are in the schedule for judging.”
Uh-oh. It looks like HackDavis hasn’t been able to address this issue for quite some time...
What was it like before?
THE OLD EXPERIENCE: GOOGLE FORMS
Next, I sought to better understand the past judging experience through 6 interviews with both judges and hackers. The most recent method of recording scores, used at last year's hackathon, was Google Forms. However, judges struggled to rank projects because the form's overly general structure made picking between options feel complex and overwhelming. I started by breaking down the limitations of this generalized approach and how it impacted the judging process.

Empathizing through Interviews
HACKDAVIS JUDGING PAIN POINTS
To gather more insight into specific pain points, we interviewed 6 previous judges and hackers. Using an affinity map, we were able to group key overlapping problems into 4 main categories that would later guide us in our ideation.
1. POOR JUDGE SUPPORT
Judges struggle to quickly identify which teams they need to evaluate, leading to delays and unnecessary confusion.
2. DIFFICULT SCORING
Generalized categories and the lack of an accessible standardized scoring rubric make team rankings confusing and overly complex.
3. JUDGING BIAS
Participants were occasionally asked questions unrelated to their chosen track, revealing gaps in judges’ understanding and inconsistencies in scoring.
4. MISCOMMUNICATION
Participants were left confused and unprepared due to a lack of communication on when judges would arrive.
Reflecting on Research
VISUALIZING THE JUDGING AND HACKING WORKFLOW
With our users' needs identified, we began brainstorming solutions to address their pain points. Creating an information architecture tree helped me visualize and organize the overall workflow, while user flows provided insight into exactly what judges and hackers experience when taking specific actions.
1/ INFORMATION ARCHITECTURE
A comprehensive look at our brainstorming...

02
IDENTIFY & SYNTHESIZE
2/ USER FLOW
Exploring how our users would interact with it...

UI Ideation
HYPOTHESIZING THE DESIGNS
While beginning my explorative ideation phase, I kept the importance of shifting perspectives in mind to ensure my designs aligned with the project's diverse goals. This next phase focused on exploring a wide variety of new approaches to creating an efficient and streamlined product experience for both judges and hackers. To land on the best option, my decisions were further evaluated using A/B testing feedback from 6 designers, leads, and prior judges.
CONCEPT #1: HACKER VIEW
First, let’s take a look at the different variations of the hacker view, which can be accessed after the initial landing page.

OPTION 1
Condensed
(Before)
OPTION 2
Detailed Sections
(After)
READABILITY
🔴 — Middle alignment and cluttered features make the content harder to read and navigate
🟢 — Clear alignment and distinct sections significantly improve readability and content organization
CLARITY
🟡 — Unclear labeling and outdated branding hinder clarity and reduce intuitive interactions
🟢 — Updated branding, clear titles, and actionable features like "Notify Judge" create a seamless experience
ENGAGEMENT
🟡 — Includes a progress indicator and interactive tips, but lacks depth and branding
🟢 — Adds a countdown timer, percent-based progress, and a slider for tips, providing a more interactive experience
CONCEPT #2: JUDGE HUB
Next, I explored different ways to organize and streamline the judging hub, including all the resources judges may need.

OPTION 1
Project Cards
(Before)
OPTION 2
Total Projects
(Before)
OPTION 3
Combination
(After)
Accessibility
🟢 — All key information is immediately visible on the first screen, including table numbers and project tags
🟡 — Total projects are clear, but viewing specific project details or table numbers requires extra clicks
🟢 — Combines the visibility of Hub A with the project count from Hub B, presenting key information upfront
Navigation
🟡 — Carousel navigation is simple but may feel tedious for users trying to get an overview of all projects at once
🟡 — Requires users to click through more screens to access detailed project information, slowing navigation
🟢 — Balances simplicity and functionality by including a project carousel with a "view all projects" button
Layout
🟢 — Clear and straightforward layout with visible table numbers and project details
🟡 — Clean and minimal design, but lacks immediate context for projects (e.g., table numbers or detailed project descriptions)
🟢 — Combines the clarity of Hub A with the cleaner layout of Hub B, ensuring a polished and balanced user experience
CONCEPT #3: PROJECT PAGE
Then, I explored various approaches for the project page, focusing on how to present key project details clearly and intuitively for judges.

OPTION 1
Side-by-side
(Before)
OPTION 2
Top-down
(After)
SCANNABILITY
🟡 — Requires more effort to scan sequentially, and the overall visual density can feel overwhelming
🟢 — Allows judges to scan projects more easily, focusing on one at a time to improve task flow. A/B tests overwhelmingly suggest that the top-down layout feels more intuitive for search and interaction
FEATURES
🔴 — Displays tags prominently, but lacks intuitive features for filtering or accessing additional tools like maps. Interaction options feel limited
🟡 — Includes a button linking to a map, improving usability for table navigation. Lacks filtering features like sequential ordering or tag-based sorting that could further enhance usability
LAYOUT
🟡 — Clear emphasis on project tags, but the layout can feel cluttered with two projects per row, making the project titles less prominent
🟢 — Prioritizes project names and accommodates longer titles, with clear categorization and sufficient spacing
CONCEPT #4: SCORING FORM
Finally, I explored different ways to design the scoring page, focusing on helping judges navigate rubrics easily and provide clear feedback.

OPTION 1
Track Filters
(Before)
OPTION 2
All Tracks
(After)
ACCESSIBILITY
🟡 — Tabs help organize specific tracks but require switching and scrolling, which can feel tedious and leaves sections prone to being overlooked
🟢 — All scoring rubrics are displayed on a single page, allowing judges to evaluate everything at once without additional navigation
FEEDBACK
🟡 — The comment section is present but lacks clear guidance on addressing anti-cheating, making its purpose less actionable
🟢 — The rebranded comment section explicitly emphasizes anti-cheating, guiding judges to provide more useful and actionable feedback
INTUITION
🟡 — Tabs create a consistent structure but can feel unintuitive, and the organization of rubrics may cause some judges to miss key sections
🟡 — The single-page layout ensures visibility, but its longer format might overwhelm some judges despite improved clarity
03
IDEATION
Polishing the UI
Establishing a Unified Design System
To ensure visual and functional consistency across the judging platform, I created a comprehensive design system that standardized mobile typography, color, and components. This system not only streamlined development and improved scalability but also reinforced HackDavis' branding, balancing bold and high-contrast colors with deep navy shades for clarity and hierarchy. By creating reusable UI components, the system facilitated developer handoff and maintained design consistency across all judging interfaces. With this foundation in place, I implemented these guidelines into my high-fidelity wireframes, bringing the judging app to life with an intuitive and polished experience.

04
EXECUTION
Here’s how the judging app works...
Introducing the
Hacker's View

Interact with tips that help you prepare for presentations and deliver your pitch confidently to judges
Access the hacker hub to track progress on judging and stay updated in real-time
No judge yet?
Let us handle it

Use the 'No Judge Report' feature to notify directors if your project hasn’t been judged at least once when the timer expires
A cooldown timer ensures fair use while making sure no project is overlooked
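The report-plus-cooldown mechanic above can be sketched roughly in code. Note this is a minimal illustration, not the actual HackDavis implementation: the class name `NoJudgeReporter`, the 15-minute cooldown, and the method names are all assumptions for the sake of the example.

```python
import time

# Assumed cooldown length; the real app's timing may differ.
COOLDOWN_SECONDS = 15 * 60


class NoJudgeReporter:
    """Lets a team notify directors that no judge has visited yet,
    rate-limited by a per-team cooldown so the feature isn't spammed."""

    def __init__(self, cooldown=COOLDOWN_SECONDS, clock=time.monotonic):
        self.cooldown = cooldown
        self.clock = clock        # injectable clock, useful for testing
        self.last_report = {}     # team_id -> timestamp of last report

    def can_report(self, team_id, judged_count):
        """A report is allowed only if the team is still unjudged
        and its cooldown window has elapsed."""
        if judged_count > 0:
            return False
        last = self.last_report.get(team_id)
        return last is None or (self.clock() - last) >= self.cooldown

    def report(self, team_id, judged_count):
        """Record a report; return True if it would be sent to directors."""
        if not self.can_report(team_id, judged_count):
            return False
        self.last_report[team_id] = self.clock()
        # In a real app, this is where directors would be notified.
        return True
```

The key design point mirrored here is that the cooldown is tracked per team, so one team repeatedly pressing the button cannot flood directors, while other unjudged teams remain free to report.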
Now introducing the
Judge's View

Log in securely to access your judging partner, a list of assigned projects, and map of table locations
The judging hub keeps everything you need for navigation and organization in one convenient place
Redesigned scoring made
simple

Score effortlessly with the judging rubric and layout map made readily accessible to guide your evaluations
Filter through your assigned projects and access the scoring form to provide detailed feedback
We learned that clearer scoring for judges and better communication for hackers enhance the efficiency and fairness of the judging process
REFLECTIONS,
REFLECTIONS.
As a sophomore with little project experience, I was both excited and intimidated to take on a two-month-long challenge with direct impact on a large-scale event, multiple stakeholders, and a tight deadline.
However, working on HackDavis taught me how to align research with real-world constraints to drive meaningful design decisions. Conducting A/B tests and user interviews deepened my understanding of how usability issues affect different groups of users, showing me how small refinements—like implementing a single-page layout to reduce navigation friction—could improve efficiency and cut down lost time. Collaborating cross-functionally with organizers pushed me to think beyond UI into system-wide problem-solving, while advocating for the ‘No Judge Report’ system required justifying design decisions with research-based evidence and ensuring ease of implementation for developers.
I'd like to extend a HUGE thank you to my team, the HackDavis directors, and our design lead for their continuous guidance and support along the way!

Shoutout to Team DE$IGN 2024! Froggies on top ^o^
AND THANK YOU FOR MAKING IT THIS FAR DOWN!