Clarification for the demo video: users take one of two roles, mentees and mentors. Very roughly, these are those who want help and those who want to help. Because these are roles, you choose one when you enter the application.

Inspiration

Two extreme paradigms have emerged from the rise in AI capability. One is "human only": dismissing the use of AI in doing work altogether. The other is "AI only": a student entering a prompt and copying the wisdom of ChatGPT wholesale. Like most extremes, both are foolish and wasteful. We believe the ideal paradigm lies in the middle, where AI is leveraged strategically as a "productivity multiplier", amplifying the productivity humans already bring. We explored this idea in the context of mental healthcare, which led to our product.

Going to a therapist is logistically hard: think of the money, time, and planning needed. Online communities of peer supporters have grown organically as an alternative that eases these hassles, with TalkLife as a notable example. However, supporters are often ill-equipped to deal with negative messages from support seekers. Furthermore, the one behind the screen is still a person who is not available 24/7, with the risk of being unreachable at the riskiest moments (e.g. at night).

What it does

We leverage current LLMs for three tasks:

  • Check mentors' messages and block sending when the content is not empathetic or helpful for the situation.
  • Suggest rewrites of the mentor's message to be more empathetic and helpful in the mentee's situation.
  • If a mentee needs help while their mentor is unavailable, a chatbot capable of holding empathetic dialogue takes over and alerts the mentor.
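As a rough sketch of the first two tasks, the gating logic might look like this. The scoring function, threshold, and suggestion text below are illustrative assumptions, not our production code; in the real system the score comes from the SageMaker-hosted model.

```python
# Illustrative sketch of the message-gating flow, assuming a classifier
# that scores empathy in [0, 1]. `score_empathy` is a toy stand-in for
# the call to the hosted model; the 0.5 threshold is hypothetical.
SUPPORTIVE_WORDS = {"hear", "sorry", "here", "understand", "feel"}

def score_empathy(message: str) -> float:
    """Toy stand-in scorer: fraction of supportive cue words present."""
    words = set(message.lower().split())
    return len(words & SUPPORTIVE_WORDS) / len(SUPPORTIVE_WORDS)

def gate_message(message: str, threshold: float = 0.5) -> dict:
    """Block the send and offer a rewrite hint when empathy is too low."""
    score = score_empathy(message)
    if score >= threshold:
        return {"allowed": True, "score": score}
    return {
        "allowed": False,
        "score": score,
        "suggestion": "Try acknowledging how the mentee feels first.",
    }
```

The real classifier replaces `score_empathy`; the surrounding gate-and-suggest shape stays the same.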

How we built it

We were grateful to have resources from AWS. The front end was built with React.js. The back end was handled by Node.js and Flask and hosted on AWS EC2. The models were trained and deployed as endpoints with AWS SageMaker, invoked serverlessly with AWS Lambda, and integrated into the back end via a REST API built with AWS API Gateway and AWS CloudFront. AWS S3 stored the training data and other files needed during development of the website.
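A minimal sketch of how the back end packages a request for the model endpoint. The payload shape and field names here are our illustrative assumptions; in production the serialized body is sent to the SageMaker endpoint via boto3's sagemaker-runtime `invoke_endpoint`, behind the API Gateway REST API.

```python
import json

# Hypothetical JSON payload format for the empathy-classifier endpoint.
# In production these bytes go out through sagemaker-runtime's
# invoke_endpoint(EndpointName=..., ContentType="application/json", Body=...).
def build_inference_payload(message: str, context: str) -> bytes:
    """Serialize a mentor message plus conversation context to JSON bytes."""
    return json.dumps(
        {"inputs": {"message": message, "context": context}}
    ).encode("utf-8")

def parse_inference_response(body: bytes) -> float:
    """Extract the empathy score from the endpoint's JSON response
    (assuming a {"score": ...} response shape)."""
    return float(json.loads(body)["score"])
```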

Challenges we ran into

AWS resources were boons, but they were also banes. The range of services was so vast that it took some time to get up to speed with the resources we were given. We also ran into some baffling errors while working with AWS features (e.g. JSON formatting for Lambda functions).

Accomplishments that we're proud of

  • Ideated a meaningful project.
  • Finished building and hosting the website.
  • Hosted a model from HuggingFace in a SageMaker endpoint.

What we learned

  • Business-driven ideation.
  • How awesome and versatile AWS is.
  • Drawing architecture diagrams in draw.io with the AWS style.

What's next for SeSame

There are many cool ideas we want to add to this MVP. Here is a sample:

  • Roll out support for speech-to-text conversation. The planned workflow: a voice attachment is saved to an S3 bucket, which triggers a Lambda function that calls a fine-tuned Amazon Transcribe model or a SageMaker-deployed model to transcribe the voice into a message. We see this as a move toward making the platform more accessible to visually impaired users.
  • Support multiple users. Right now the MVP serves just two users. We want to add authentication and the ability to create chat rooms.
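The planned S3-to-Lambda transcription step could be sketched as below. The handler shape follows AWS's standard S3 event format; `transcribe_audio` is a hypothetical stub standing in for the Amazon Transcribe (or SageMaker endpoint) call.

```python
import urllib.parse

def transcribe_audio(bucket: str, key: str) -> str:
    """Hypothetical stub: the real version would start an Amazon
    Transcribe job (or invoke a SageMaker endpoint) on s3://bucket/key."""
    return f"[transcript of s3://{bucket}/{key}]"

def lambda_handler(event, context=None):
    """Triggered by an S3 ObjectCreated event on the voice-upload bucket."""
    transcripts = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys in S3 events are URL-encoded (spaces arrive as '+').
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        transcripts.append(transcribe_audio(bucket, key))
    return {"statusCode": 200, "transcripts": transcripts}
```

The transcript would then be posted into the chat as a regular message and run through the same empathy checks as typed text.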
