Screenshots:
- Assessment result page showing a pie chart of correct answers and their categories
- Generated score and feedback after the user submits an answer
- Multi-choice question type allowing the user to select multiple possible answers
- Short input field question type expecting a brief answer to the question
- Possible answer selected from the option list
- Long input field question type expecting an elaborate answer to the question
- Question hint revealed, giving the user an idea of the answer
- First single-choice question of the assessment
- Opening the Evaluate extension on the Gov.uk Renters' Rights Act page, showing the title, page summary, and model status
Inspiration
Evaluate is the first dynamic Chrome extension designed to leverage Chrome’s built-in AI for assessing user knowledge. The project was inspired by the quiz and self-assessment features commonly found on EduTech platforms such as Udemy and Pluralsight. We wanted to bring a similar interactive learning experience directly into the browser.
What it does
Evaluate intelligently summarizes the entire text on a webpage, generates tailored assessment questions, and provides instant feedback on user responses. This process helps users identify knowledge gaps and guides them toward deeper learning and comprehension.
Evaluate uses four question types (single-choice, multi-choice, long input field, and short input field) to keep the assessment engaging. Each question also includes a hint for users who are struggling to find the answer.
How we built it
The frontend of Evaluate was developed using React, Vite, and TailwindCSS. The extension handles Chrome AI edge cases, such as checking the availability of the built-in AI model, to ensure smooth operation across different Chrome browsers.
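The availability handling can be sketched roughly as below. The `Summarizer` global and its `availability()` method follow Chrome's built-in AI documentation; the helper and state names are illustrative, not our exact implementation:

```javascript
// Pure helper: map a built-in AI availability status to a UI state.
// (Status strings follow Chrome's built-in AI docs; exact values may
// differ between Chrome versions.)
function modelUiState(status) {
  switch (status) {
    case "available":
      return "ready";       // model is on-device and usable
    case "downloadable":
    case "downloading":
      return "downloading"; // show a progress indicator while it fetches
    default:
      return "unsupported"; // hide AI features and show a notice
  }
}

// In the extension, the status comes from the built-in AI API, guarded
// so the UI degrades gracefully on browsers without it.
async function checkModel() {
  if (typeof Summarizer === "undefined") return modelUiState("unavailable");
  return modelUiState(await Summarizer.availability());
}
```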
Under the hood, Evaluate integrates Chrome's built-in Summarizer and Prompt APIs:
- Summarization: When launched, the extension summarizes the page content to give users a quick overview.
- Question Generation: Using the Prompt API, Evaluate creates a set of questions based on the summarized content.
- Feedback and Scoring: After each user response, another prompt call evaluates the answer against the page content, providing a score, personalized feedback, and suggestions for improvement.
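The feedback-and-scoring step can be sketched as follows. The prompt wording, schema fields, and function names here are illustrative, not our shipped prompts; `session.prompt` with a `responseConstraint` option follows Chrome's Prompt API documentation:

```javascript
// Build a grading prompt from the page summary, the question, and the
// user's answer. (Illustrative wording only.)
function buildFeedbackPrompt(summary, question, answer) {
  return [
    "You are grading a reader's answer against the page content below.",
    `Page summary: ${summary}`,
    `Question: ${question}`,
    `User answer: ${answer}`,
    "Return a score from 0-100, short feedback, and one improvement tip.",
  ].join("\n");
}

// JSON schema passed as responseConstraint so the output stays parseable.
const feedbackSchema = {
  type: "object",
  properties: {
    score: { type: "integer", minimum: 0, maximum: 100 },
    feedback: { type: "string" },
    suggestion: { type: "string" },
  },
  required: ["score", "feedback", "suggestion"],
};

// In the extension, `session` is a Prompt API language-model session.
async function scoreAnswer(session, summary, question, answer) {
  const raw = await session.prompt(
    buildFeedbackPrompt(summary, question, answer),
    { responseConstraint: feedbackSchema }
  );
  return JSON.parse(raw);
}
```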
We configured the model temperature to produce responses that are both friendly and conversational, while maintaining output consistency through a well-defined JSON schema for all prompts.
Challenges we ran into
One major challenge was achieving output consistency from the AI model, since the extension’s interface relies entirely on model responses.
To address this, we introduced a structured JSON schema to define the desired output format and applied it via the responseConstraint in our prompt calls. This approach significantly improved reliability and data handling in the frontend.
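As an illustration, a simplified question schema (the real one is richer) can be paired with a cheap frontend guard, so a malformed model response never reaches the UI. The validator below is a sketch, not our exact code:

```javascript
// Simplified, illustrative schema for a generated question.
const questionSchema = {
  type: "object",
  properties: {
    type: {
      enum: ["single-choice", "multi-choice", "short-input", "long-input"],
    },
    question: { type: "string" },
    options: { type: "array", items: { type: "string" } },
    hint: { type: "string" },
  },
  required: ["type", "question", "hint"],
};

// Lightweight check the frontend runs on the parsed model output
// before rendering: choice questions must carry at least two options.
function isValidQuestion(q) {
  if (!q || typeof q !== "object") return false;
  const typeOk = questionSchema.properties.type.enum.includes(q.type);
  const fieldsOk = typeof q.question === "string" && typeof q.hint === "string";
  const optionsOk =
    !String(q.type).endsWith("-choice") ||
    (Array.isArray(q.options) && q.options.length >= 2);
  return typeOk && fieldsOk && optionsOk;
}
```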
Accomplishments that we're proud of
We successfully created a seamless user experience that covers the full workflow, from summarization to question generation and feedback, powered by Chrome AI. The “happy path” for users is now stable, interactive, and helpful for self-evaluation.
What's next for Evaluate
Extensive user testing is the immediate next step for Evaluate, to make it more than just a fun project and to solve real problems for people reading in the Chrome browser.
We will be evaluating hybrid prompting with Firebase for storing assessments and producing richer prompt responses. We also plan additional UX improvements, such as settings for a question timer, difficulty levels, and question count, and the ability to generate questions for specific sections within a page.