Inspiration
- Stampix
- Previous Hackathons
- Awesome side-projects
What it does
Our platform, the Content Metadata Hub, collects metadata from a variety of sources spread across the Internet. Data from multiple APIs is aggregated through Logstash into Elasticsearch, and this Elasticsearch datastore is exposed to our React front-end through a NodeJS Express API.
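Before records from different APIs can be aggregated, they need a common shape. A minimal sketch of that normalization step, assuming hypothetical field names (`title`/`name`, `release_date`/`year`) rather than the real API schemas:

```javascript
// Hypothetical sketch: the field names are assumptions, not the real API schemas.
function normalizeRecord(raw, source) {
  return {
    // One source might use `title`, another `name` (assumed)
    title: raw.title || raw.name,
    // Keep only the release year so records from both sources line up
    year: raw.release_date ? Number(raw.release_date.slice(0, 4)) : raw.year,
    source, // remember which API the record came from
  };
}

// Example: two records for the same film, from two mock APIs
const a = normalizeRecord({ title: "Heat", release_date: "1995-12-15" }, "tmdb");
const b = normalizeRecord({ name: "Heat", year: 1995 }, "trakt");
```

Once every record carries the same fields, downstream steps (Logstash filters, Elasticsearch mappings) only have to deal with one schema.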
This gives our customers an overview of all the content available on the Internet. In the future, machine learning algorithms could be leveraged to recommend and predict which content to invest in, letting customers make Big Data-driven purchases that are better targeted to their client base.
How we built it
The NodeJS scraper collects data from The Movie Database and Trakt APIs and feeds it into Logstash, which combines the data from the different APIs and enables cross-referencing.
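The cross-referencing step can be sketched as a join on a key both sources share. This is a hedged sketch, not our exact pipeline: the `imdb_id` field name and record shapes are assumptions for illustration.

```javascript
// Join records from two APIs on a shared key (assumed here to be an IMDb id).
function crossReference(tmdbRecords, traktRecords) {
  // Index one source by the shared key for O(1) lookups
  const byImdbId = new Map(traktRecords.map((r) => [r.imdb_id, r]));
  return tmdbRecords.map((movie) => ({
    ...movie,
    // Attach the matching record from the other source, if any
    trakt: byImdbId.get(movie.imdb_id) || null,
  }));
}

const merged = crossReference(
  [{ imdb_id: "tt0113277", title: "Heat" }],
  [{ imdb_id: "tt0113277", watchers: 420 }]
);
```

In the real pipeline this kind of join happens inside Logstash filters rather than in application code, but the idea is the same: one enriched document per title.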
Logstash stores the data in Elasticsearch, and our data statistics are visible through Kibana as a back-end dashboard. A NodeJS+Express API then connects our front-end to the Elasticsearch datastore, adding an extra security layer and laying a foundation for authentication in the near future.
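The Express layer essentially translates front-end requests into Elasticsearch queries. A minimal sketch of such a query builder, assuming a hypothetical index name (`content`) and field (`title`); a real handler would pass this object to the Elasticsearch client's `search()` call inside an Express route:

```javascript
// Build an Elasticsearch search request for a free-text title search.
// Index and field names are assumptions for illustration.
function buildSearchQuery(term, { from = 0, size = 10 } = {}) {
  return {
    index: "content",
    from, // pagination offset
    size, // page size
    body: {
      query: { match: { title: term } },
    },
  };
}

const query = buildSearchQuery("heat", { size: 5 });
```

Keeping query construction in a pure function like this makes it easy to unit-test without a running cluster.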
Challenges we ran into:
- Cross-referencing the data from different sources.
- Getting a working prototype up and running in a limited time frame.
- Deciding on a profitable pricing strategy.
Accomplishments that we're proud of:
- Our working prototype.
- The fact that we’ve built quite a sizable stack in a limited time frame.
What we learned:
A lot of stuff:
- Efficient teamwork
- How to pitch
- How to do proper yoga
- How to build a house of cards
- …
What's next for Het Vaticaan
Winning all the Hackathons.