Yesterday I was introduced to the world of hackathons through Civica Digital's CodeIT18 event in Bristol. Teamed up with David Banks and Matthew Cantilion, we had a pretty well-rounded full-stack team, even if it was our first time working together. One of the prerequisites of the day was choosing a specific scenario, and we decided to challenge ourselves with a facial recognition task. This is how the day played out for us.
08:00: Introduction to the seven teams, set up and breakfast. Free food is always welcome!
08:30: We're off! The first port of call was to come up with a plan. We quickly mocked up some screen designs and a user workflow using Balsamiq. Once we were happy, we set up a GitHub repository, an Angular 5 frontend and a .NET Core 2 backend API. Perfect, since .NET Core is cross-platform, Matt and I can work on the backend from our Macs too.
09:15: The skeleton of the frontend and backend is up and running. Snappy! Matthew hasn't used Angular before so he's having to pick it up quickly, but the CLI is proving invaluable for its speed. David's hit the first snag of the day: the police database API is not liking basic authentication. Matt and I will continue with the basic manual search functionality required by the scenario, just in case we fail on the face detection section of our project.
10:00: David's fixed the authentication issue; who knew the API would treat the authentication header as case sensitive! He's also got the search functionality working against the police API, so that's one key piece of functionality complete, as well as a basic form of authentication for our own API. Accountability with this type of system is key! Matt and I, on the other hand, have sorted out a login page and a search form. It's going well so far.
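For reference, this is roughly how a Basic auth header gets built. The helper below is my own sketch (the function name is made up), using Node's `Buffer` for the Base64 step; note that while HTTP header names are case-insensitive per the spec, a strict server-side string comparison on the scheme token can still trip you up.

```typescript
// Sketch of building an HTTP Basic Authentication header.
// The "Basic" scheme token and the Base64 payload are sent verbatim,
// so a server doing an exact string match will reject e.g. "basic".
function buildBasicAuthHeader(username: string, password: string): string {
  // Credentials are "username:password", Base64-encoded (Node's Buffer here).
  const encoded = Buffer.from(`${username}:${password}`).toString("base64");
  return `Basic ${encoded}`;
}

console.log(buildBasicAuthHeader("user", "pass")); // "Basic dXNlcjpwYXNz"
```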
10:45: We now have json-server up and running and a few services pulling back mock data for our search results on the frontend. Matt's now continuing to develop the page that displays them. Image data is proving a challenge, as the police API doesn't seem to provide it. David's been working some magic and is now waiting to tie the front and back together. 11:30 has been agreed as the combining time.
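json-server turns a plain db.json file into a REST API, which is what let us develop the frontend against stable data before the real backend was wired in. The record shape and field names below are hypothetical, but the name filtering mirrors the kind of search we leaned on:

```typescript
// Hypothetical shape of a mock police record served by json-server from db.json.
interface PoliceRecord {
  id: number;
  name: string;
  offence: string;
  imageUrl?: string; // absent from the real police API, hence the mocking
}

// json-server can answer queries like GET /records?name_like=smith;
// this reproduces that substring-style filtering locally.
function searchRecords(records: PoliceRecord[], nameQuery: string): PoliceRecord[] {
  const q = nameQuery.toLowerCase();
  return records.filter(r => r.name.toLowerCase().includes(q));
}

const mock: PoliceRecord[] = [
  { id: 108, name: "Jane Smith", offence: "Fraud" },
  { id: 109, name: "John Doe", offence: "Burglary" },
];
console.log(searchRecords(mock, "smith").length); // 1
```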
11:30: Matt has finished the results page and is now ready to tie the frontend together with the back with David. I've also started on implementing the camera functionality within the frontend.
12:00: Lunchtime calls! We're now in a strong position: if the facial recognition fails, we still have the manual search process functional and tied into our backend. The real fun starts soon!
12:30: Facial identification time. We've not been able to get any photos back from the police API yet, so I've taken photos of a few willing people around the event; three images of each person should create a good model during the training process. Matt is now working on tidying the frontend into a user-friendly state, David is working through some police API and publishing issues, and I'm working on the facial recognition training process.
14:30: Facial identification is working! The Azure Face API is actually quite impressive, even if it did throw a few spanners in the works by not returning error messages during the training process. I also managed to attach the police API IDs of individuals to the face data, so all my willing victims from earlier have now been turned into criminals. Sorry! The frontend is looking good and is ready for the facial identification process to be wired in. David has been fighting some publishing issues, but nothing that should affect the end result.
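I'll cover the training process properly in a follow-up post, but in outline it is a sequence of REST calls: create a person group, create a person per suspect, add several face images to each person, then kick off training. The sketch below just builds that sequence as data rather than sending it; the endpoint paths are my reading of the Face API v1.0 docs (not a record of our exact calls), `ENDPOINT` is a placeholder, and stashing the police ID in `userData` is one way to map a recognised face back to a police record. The `{personId}` placeholder would be filled from the person-creation response.

```typescript
// Sketch of the Azure Face API (v1.0) person-group training sequence.
const ENDPOINT = "https://example.cognitiveservices.azure.com/face/v1.0"; // placeholder

interface FaceRequest {
  method: "PUT" | "POST";
  url: string;
  body?: unknown;
}

// Three face images per person, with the police API ID carried in userData
// so an identified face maps straight back to a police record.
function buildTrainingSequence(groupId: string, personName: string,
                               policeId: number, faceUrls: string[]): FaceRequest[] {
  return [
    { method: "PUT", url: `${ENDPOINT}/persongroups/${groupId}`,
      body: { name: groupId } },
    { method: "POST", url: `${ENDPOINT}/persongroups/${groupId}/persons`,
      body: { name: personName, userData: String(policeId) } },
    // {personId} comes back from the person-creation call above.
    ...faceUrls.map(url => ({
      method: "POST" as const,
      url: `${ENDPOINT}/persongroups/${groupId}/persons/{personId}/persistedFaces`,
      body: { url },
    })),
    { method: "POST", url: `${ENDPOINT}/persongroups/${groupId}/train` },
  ];
}

const seq = buildTrainingSequence("suspects", "Jane Smith", 108,
                                  ["1.jpg", "2.jpg", "3.jpg"]);
console.log(seq.length); // 6 requests: group, person, 3 faces, train
```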
16:00: The frontend is now passing an image back successfully, and the facial recognition is detecting faces as required and returning both the Azure GUID and our police API ID. David has been working on returning the individual's data from the police API and should shortly be ready to tie it into the template page we created earlier.
17:30: Matt has produced us a nice presentation for the end show. David and I have been battling Git merge issues and multiple communication issues. The final half an hour is going to be a rush to tie this all together.
18:00: Just made it: the functionality we set out to create has been produced! A few little tweaks to some JSON strings and some ID manipulation (the police API IDs start at 108, not 1) and we've officially hacked our way to a working solution. David has also snuck in Azure Application Insights at some point, so every API call is now being logged. Magic! 62 commits, and now all that's left is to present our work.
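The ID fix amounted to little more than an offset; assuming the police records are sequential from 108, something along these lines (names are mine) bridges our zero-based indexing and theirs:

```typescript
// Police API records start at ID 108 rather than 1; assuming sequential IDs,
// a constant offset maps between our zero-based index and their record IDs.
const POLICE_ID_OFFSET = 108;

const toPoliceId = (index: number): number => index + POLICE_ID_OFFSET;
const fromPoliceId = (policeId: number): number => policeId - POLICE_ID_OFFSET;

console.log(toPoliceId(0)); // 108
```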
19:15: Presentations completed! A few minor issues, but that's what happens when you only finish your solution five minutes before the end. There are a couple of strong contenders; it's going to be close.
19:45: Winners announced, Congratulations CrEYEm Labs! Now off for a well-earned pint! Cheers everyone and a big thank you to the organisers and everyone else who helped in the running of the day!
Looking back on the day, we didn't kill each other and our solution was fairly successful. With hindsight, we should have researched the provided API's responses earlier on, and an extra team member would have helped with polishing up the solution, but ultimately we're happy with the result of the day. Our code can be found here, and I'll produce another post on the Azure Face API training process in the near future.