Time really flies, and I’m already at Week 12. This round, I focused on refining the QR scanner fixes implemented during the event, improving test coverage across modules, and experimenting with how the Gemini API responds to different error scenarios.
Polishing the QR Scanner Fix
Last week’s live event fix for the QR scanner worked perfectly, but before pushing it upstream, I spent some time cleaning up the code for better readability and maintainability.
I reorganized the decoding logic in asset_repo.dart and refined the UUID validation fallback in jwt_client.dart to make it more consistent with other parts of the system. These updates didn’t change functionality but ensured that the patch was production-ready and easier for others to review later.
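To make the write-up a bit more concrete, here's a rough sketch of the shape that UUID validation fallback takes. The names and the regex-based check are illustrative assumptions on my part, not the actual jwt_client.dart code:

```dart
// Illustrative sketch only; identifiers and the regex check are assumptions,
// not the real jwt_client.dart implementation.
final RegExp _uuidPattern = RegExp(
  r'^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-'
  r'[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$',
);

/// Returns the payload itself if it is already a well-formed UUID;
/// otherwise falls back to searching for a UUID-shaped substring
/// (e.g. when the QR code wraps the ID inside a URL or token).
String? extractUuid(String payload) {
  final candidate = payload.trim();
  if (_uuidPattern.hasMatch(candidate)) return candidate;

  final embedded = RegExp(
    r'[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-'
    r'[0-9a-fA-F]{4}-[0-9a-fA-F]{12}',
  ).firstMatch(candidate);
  return embedded?.group(0);
}
```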
The CI/CD Wake-Up Call
Right after I pushed the fix, several tests in the CI/CD pipeline failed. At first glance it looked like something major had broken, but the failures were simply because the tests were still written against the old UUID logic.
I had already written the updated tests locally but hadn’t pushed them together with the fix. That small oversight caused the entire suite to fail. Once I committed the updated tests, everything passed again.
While fixing the tests, I also took the chance to clean up the structure:
- Removed outdated helpers and co-located mocks inside the test files.
- Added more randomized test data for better realism.
- Wrote additional tests for functions like queueResult() and timestamp logic (a sketch of one such test follows after this list).
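Here is a minimal sketch of the style of test I mean. The queueResult() below is a hypothetical stand-in with an assumed signature (ordering results by timestamp), not the real implementation:

```dart
import 'dart:math';

import 'package:test/test.dart';

/// Hypothetical stand-in for the real queueResult(): assumed to return
/// the results ordered by timestamp. The real signature may differ.
List<DateTime> queueResult(List<DateTime> timestamps) =>
    [...timestamps]..sort();

void main() {
  final rng = Random();

  test('queueResult orders randomized timestamps chronologically', () {
    // Randomized data instead of a fixed fixture, for better realism.
    final timestamps = List.generate(
      20,
      (_) => DateTime(2025, 1, 1).add(Duration(minutes: rng.nextInt(100000))),
    );

    final queued = queueResult(timestamps);

    for (var i = 1; i < queued.length; i++) {
      expect(queued[i].isBefore(queued[i - 1]), isFalse);
    }
  });
}
```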
Lesson learned: always push dependent test changes together with the implementation.
Expanding Test Coverage
I continued strengthening unit tests across multiple modules. With every green test, the system feels a little more reliable.
Experimenting with API Error Handling
I also discussed my current Gemini API error-handling logic with Dr. Shawn. He advised me to experiment more deeply: to call the API in different scenarios, observe all possible responses (not just the text), and document what actually comes back when something goes wrong.
The goal is to distinguish whether a problem originates from our app or from Google’s side. So this week, I started running small experiments and logging responses. So far, I’ve noticed that rate-limit errors tend to include keywords like “quota” or “exceeded” rather than “429”, which will help refine the detection logic later on.
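As a working note, this is roughly how I'm sketching the detection so far. Only the "quota"/"exceeded" keywords come from my own logs; the app-side markers below are assumptions I still need to verify:

```dart
/// Rough classification of Gemini call failures. Only the "quota"/"exceeded"
/// keywords come from my logging so far; the app-side markers below are
/// assumptions I still need to confirm experimentally.
enum GeminiFailure { rateLimited, appSide, unknown }

GeminiFailure classifyGeminiError(Object error) {
  final message = error.toString().toLowerCase();

  // Rate-limit responses mention "quota"/"exceeded" rather than a literal "429".
  if (message.contains('quota') || message.contains('exceeded')) {
    return GeminiFailure.rateLimited;
  }

  // Likely our side: malformed request or bad credentials (assumed markers).
  if (message.contains('invalid argument') || message.contains('api key')) {
    return GeminiFailure.appSide;
  }

  return GeminiFailure.unknown;
}
```

I'll keep refining these keyword checks as I log more real responses, since that's what will ultimately drive the detection logic.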
Reflections & Insights
- A fix isn’t truly done until it’s clean and well-tested.
- CI/CD failures can be frustrating, but they catch what manual checks miss.
- Real understanding comes from experimenting, not assuming.
What’s Next
I’ll be working on the Gemini 429 (rate-limit) error-handling logic and investigating a bug where a performance entry on the Moderator page becomes unclickable after long-pressing it to view its video.