Gallery
- Login Screen
- Twitter Auth Screen
- Splash Screen (Unselected values)
- Splash Screen (Selected values)
- Help Page 1
- Help Page 2
- Help Page 3
- Help Page 4
- Feed Screen (Hands-Free)
- Feed Screen (Hands-Free and voice assistant enabled)
- Search Screen
- Drawer Menu Items
- Settings Screen
- Feed Screen - Translated Tweets
- Feed Screen (Hands-On mode)
Inspiration
Our lives are deeply dependent on technology: the average American spends a staggering 1,300 hours a year on social media. This addiction decreases productivity and increases screen time. Liber lets social media consumers focus on more productive day-to-day tasks while tuning into their favourite social media content in the background. Moreover, most social media is inaccessible to people with physical impairments, because it relies almost entirely on touch and sight. Liber breaks down these barriers to communication with its voice assistant, bringing in the sense of hearing. We hope Liber gives everyone an equal opportunity to connect with the rest of the world.
What it does
We built Liber with user productivity and accessibility in mind. Our Hands-Free feature provides a seamless experience for navigating with voice commands; imagine Liber as an audiobook for social media. The commands are as simple as saying 'play' and 'stop', and Liber supports seven voice commands that together navigate the whole app. We also implemented a convenient switch to Hands-On mode to fit everyone's preferences. Our second highlight is the translation feature, which opens social media to a wider audience through its vast language options: we currently support 67 languages.
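Liber's actual command handling lives in Dart and the Alan SDK; as a rough illustration of the idea, here is a minimal voice-command dispatcher in Python. Only 'play' and 'stop' are confirmed by the write-up above — the 'next' command, the function names, and the reply strings are assumptions for the sketch.

```python
# Minimal sketch of a Hands-Free style command dispatcher.
# The command set beyond 'play'/'stop' and all names are illustrative.

def make_dispatcher():
    state = {"playing": False, "index": 0}

    def play():
        state["playing"] = True
        return "reading tweet %d aloud" % state["index"]

    def stop():
        state["playing"] = False
        return "paused"

    def next_tweet():
        state["index"] += 1
        return "skipped to tweet %d" % state["index"]

    commands = {"play": play, "stop": stop, "next": next_tweet}

    def dispatch(utterance):
        # Normalize the recognized utterance and route it to a handler.
        handler = commands.get(utterance.strip().lower())
        return handler() if handler else "unrecognized command"

    return dispatch
```

A table-driven dispatcher like this keeps adding an eighth or ninth command to a one-line change.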
How we built it
We developed Liber using the Flutter framework, powered by the Dart platform. Firebase Authentication with Twitter login handles user sign-in. Cloud Firestore provides cloud capabilities and data sync, while a local SQLite database (via sqflite) stores temporary data. The Google Translate API translates the original text into the user's preferred language, and the flutter_tts library converts text to speech to read the on-screen tweets aloud. The Alan SDK lets users issue voice commands and use the voice assistant features. And last but not least, we used the Twitter API to fetch and query tweets and usernames.
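The tweet-fetching step above happens in Dart inside the app; as an illustration of what such a request looks like, here is a hedged Python sketch that only *builds* a Twitter API v2 recent-search request (endpoint and auth scheme are from Twitter's public API; the function name, parameters, and chosen fields are assumptions, and no network call is made).

```python
# Sketch: assemble a Twitter API v2 recent-search request.
# A real client would send this URL with the Bearer-token header.
import urllib.parse

SEARCH_URL = "https://api.twitter.com/2/tweets/search/recent"

def build_search_request(query, bearer_token, max_results=10):
    # Query parameters for the v2 recent-search endpoint.
    params = {
        "query": query,
        "max_results": str(max_results),
        "tweet.fields": "created_at,lang,author_id",
    }
    # Twitter API v2 app-only requests use OAuth 2.0 Bearer auth.
    headers = {"Authorization": f"Bearer {bearer_token}"}
    url = SEARCH_URL + "?" + urllib.parse.urlencode(params)
    return url, headers
```

Separating request construction from sending also makes the logic easy to unit-test without hitting the API.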
Challenges we ran into
The Flutter speech-to-text package does not listen continuously on Android due to privacy restrictions, which broke our Hands-Free feature: the user would have had to turn on the mic manually. To solve this, we migrated our speech-to-text logic to the Alan AI SDK, which provides continuous listening and lets the user say the wake word "Hey Alan" to turn on the mic in Hands-Free mode. We also tried to fetch data through the Twitter API using HTTP requests in Flutter Web, but the tweets couldn't be retrieved because of a CORS error. While developing on iOS, some of the external dependencies and packages were incompatible, so the iOS build failed. Finally, the speech recognizer also picks up the text-to-speech voice itself, so for the best experience, use earphones or a headset with the application.
Accomplishments that we're proud of
The Chirp development hackathon greatly improved our Flutter skills, since we had the opportunity to use several APIs and SDKs to develop our app. Through it, we learned how to successfully fetch and use data from the Twitter API v2. We also learned how to implement text-to-speech and voice recognition to build a voice-automated application. Some of the non-technical skills we improved are time management and collaboration.
What we learned
- We learned to develop cross-platform apps (iOS, Android, web, Windows, and macOS).
- We learned how to build an auto-scrolling feature for the feed screen.
- We learned to use the Postman tool to test Twitter API v2 calls.
- We learned networking concepts such as HTTP, status codes, and CRUD operations for REST APIs.
- We learned to build automated test scripts for our application.
- We grasped the Android app deployment process by successfully releasing the app on the Play Store.
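As a quick reference for the networking concepts mentioned above, the sketch below shows the conventional CRUD-to-HTTP-method mapping and a status-code classifier (the helper names are our own; the mappings themselves are standard REST/HTTP conventions).

```python
# Conventional mapping of CRUD operations to HTTP methods in a REST API.
CRUD_TO_HTTP = {
    "create": "POST",
    "read": "GET",
    "update": "PUT",    # PATCH is common for partial updates
    "delete": "DELETE",
}

def classify_status(code):
    # HTTP status codes are grouped by their leading digit.
    if 200 <= code < 300:
        return "success"
    if 300 <= code < 400:
        return "redirect"
    if 400 <= code < 500:
        return "client error"
    if 500 <= code < 600:
        return "server error"
    return "unknown"
```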
What's next for Liber
- Phase 1: Build web and iOS versions of the product, add more voice capabilities, acquire user feedback, and analyze usage.
- Phase 2: Monetization, and expanding the dev team.
- Phase 3: Implement Liber's own voice-command SDK, deploy to the cloud at large scale, and develop Liber's own API for developers.
Built With
- alanai
- dart
- firebase
- firestore
- flutter
- flutter-tts
- google-translate
- sqflite
- twitterapi-v2
