Inspiration

This project is deeply personal for me. As a software engineer apprentice at Sony Interactive Entertainment, I often join professional meetings on Zoom and Microsoft Teams. However, I frequently leave my camera off—not out of preference, but because Zoom’s virtual backgrounds fail to represent me properly. My curly, textured hair is often distorted, cut off, or blends unnaturally, making me feel misrepresented and self-conscious on screen.

This isn't just my experience; many Black professionals and individuals with curly or afro-textured hair face the same issue. Existing segmentation models are primarily trained on straight and wavy hair, leaving our textures as an afterthought. AfroVision aims to fix this by improving hair segmentation for virtual backgrounds, ensuring people with all hair types are represented accurately and equitably.

What it does

AfroVision enhances virtual background rendering by improving the segmentation of curly, coily, and afro-textured hair. We use DeepLabV3 with a ResNet-101 backbone to segment hair while testing how different models handle diverse textures. Our system processes images, generates segmentation masks, and blends the subject with a virtual background.
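
Concretely, the final compositing step can be sketched as a soft alpha blend of the subject over the background, weighted by the segmentation mask (a minimal NumPy sketch; the function name is ours, not part of the codebase):

```python
import numpy as np

def composite_with_background(subject, background, mask):
    """Blend a subject onto a virtual background using a soft segmentation mask.

    subject, background: HxWx3 float arrays in [0, 1]
    mask: HxW float array in [0, 1], where 1 means "keep the person/hair"
    """
    alpha = mask[..., np.newaxis]  # add a channel axis so it broadcasts over RGB
    return alpha * subject + (1.0 - alpha) * background
```

Keeping the mask soft (values between 0 and 1) rather than hard-thresholded is what lets wispy curl edges blend instead of being cut off.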

How we built it

Data Collection: We sourced datasets focusing on curly, wavy, and afro-textured hair, scraping additional images to ensure diversity.

Segmentation Model: Used DeepLabV3 with a ResNet-101 backbone for semantic segmentation.

Image Processing: Preprocessed images, normalized data, and resized masks for improved accuracy.

Testing & Evaluation: Applied segmentation to real-world images to assess effectiveness on different hair textures.
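
For quantitative evaluation, per-image intersection-over-union (IoU) between a predicted mask and a ground-truth mask is a natural metric; a minimal sketch (the helper name is ours):

```python
import numpy as np

def mask_iou(pred, target):
    """Intersection-over-Union between two binary HxW masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return intersection / union if union else 1.0  # both empty: perfect match
```

Comparing IoU across hair-texture groups is one way to make the model's bias measurable rather than anecdotal.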

Challenges we ran into

Dataset Limitations: Many publicly available datasets lack representation of diverse hair textures. We had to scrape and manually verify images to supplement the dataset.

Google Drive Issues: Encountered technical difficulties accessing and managing data stored in Google Drive, slowing down our workflow.

Processing Errors: Debugging file path errors and refining image preprocessing took significant time.

Time Constraints: With a limited window, fine-tuning on afro-textured hair remains a next step beyond this hackathon.

Accomplishments that we're proud of

Successfully segmented multiple curly hair textures and demonstrated virtual background blending.

Created a working pipeline for future fine-tuning and improvement.

Identified key areas where segmentation models fall short for Black and curly-haired users, setting a foundation for addressing this bias.

What we learned

Importance of Representation in AI: Bias in training data leads to biased models. We saw firsthand how standard segmentation models struggle with diverse hair types.

Fine-Tuning is Essential: Pre-trained models work to an extent but must be customized to better capture curls and coils.

Efficient Debugging Matters: Managing large datasets and troubleshooting Google Drive path issues is a skill in itself.

What's next for AfroVision

Integrate with Zoom SDK: Implement AfroVision into Zoom's virtual background system to enable real-time segmentation improvements.

Fine-Tune on Afro Hair Data: We need to train DeepLabV3 further with more labeled images of afro-textured hair.

Expand Dataset & Annotation: Manually annotate and generate higher-quality segmentation masks for kinky, coily, and tightly curled hair.

Improve Real-Time Performance: Optimize processing speed for live applications like Zoom and Google Meet.

Open Source Contribution: Publish the dataset and model improvements to benefit other developers tackling bias in computer vision.
