Inspiration

Self-driving cars are transforming the automotive industry, not to mention empowering the future of sustainable transport. According to recent studies conducted by the NSF and MIT, up to 95% of all vehicles on the road could be fully autonomous by 2050. To prevent collisions between autonomous vehicles, it is beneficial to connect all vehicles over a shared network and exchange data when required. This process is called cooperative perception. But how do we identify edge cases in cooperative perception? And how can we integrate data from vehicles, sensors, and other nodes to benchmark smart systems and ensure robust user safety?

What it does

Our project, Twilight, is an interactive UI that lets users create custom environments and landscapes consisting of vehicles, obstacles, sensors, and non-drivable regions. Users can run simulations on these environments, both with and without the cooperative perception component, leading to a greater understanding of the network's overall behavior across various physical scenarios. By integrating data from nearby sensors and surrounding vehicles, we can also recover collision footage and evaluate the overall effectiveness of the system, including its performance under failure conditions.
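To make the cooperative perception idea concrete, here is a minimal sketch of the core data-sharing step: each vehicle unions its own detections with those shared by nearby observers. The `Detection` class and function names are illustrative assumptions, not Twilight's actual implementation.

```python
# Sketch of cooperative perception: merging detections from multiple
# observers (vehicles/sensors) into one shared view.
# Class and field names here are hypothetical, for illustration only.
from dataclasses import dataclass


@dataclass(frozen=True)
class Detection:
    obstacle_id: str
    x: float
    y: float


def fuse_views(local: set, shared: list) -> set:
    """Union a vehicle's own detections with those shared over the network."""
    fused = set(local)
    for view in shared:
        fused |= view
    return fused


# A vehicle that cannot see obstacle "B" on its own learns about it
# from a neighboring vehicle's shared view.
ego = {Detection("A", 1.0, 2.0)}
neighbor = {Detection("B", 5.0, 6.0)}
print(fuse_views(ego, [neighbor]))  # contains detections for both "A" and "B"
```

In the real system the fused view is what lets a simulation with cooperative perception enabled avoid collisions that occur without it.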

How we built it

The frontend is a fully integrated UI served via Flask. We constructed backend scripts and our main run file in Python, but most of our smart contracts, which connect to the Midnight Proof Network, are in TypeScript or JSON. We began by creating a simple framework for the frontend and then established different object classes for vehicles, sensors, barriers, etc. Once the layout was finalized, we implemented functionality on the backend side, such as handling user data and clicks; lastly, we created smart contracts that send and verify system data using the Midnight Contract API.
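The object-class-plus-Flask structure described above can be sketched as follows. The class fields, route name, and payload shape are assumptions for illustration, not our exact code.

```python
# Minimal sketch of the architecture: Python object classes for scene
# elements, behind a Flask route that the frontend posts to.
# Names and payload shapes are hypothetical.
from dataclasses import dataclass

from flask import Flask, jsonify, request


@dataclass
class Vehicle:
    x: int
    y: int


@dataclass
class Sensor:
    x: int
    y: int
    radius: int


app = Flask(__name__)
scene = {"vehicles": [], "sensors": []}


@app.route("/add_vehicle", methods=["POST"])
def add_vehicle():
    # The frontend sends a grid position when the user clicks a cell.
    data = request.get_json()
    scene["vehicles"].append(Vehicle(data["x"], data["y"]))
    return jsonify(count=len(scene["vehicles"]))
```

A click handler on the frontend would POST JSON like `{"x": 3, "y": 5}` to this route and redraw the grid from the response.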

Challenges we ran into

Our project is highly complex. One issue we encountered often during our hacking was translating commands from the backend to the frontend/UI: combining components was difficult due to mismatches in data types. We worked around this by creating a Python server that could send tuples of data to the frontend. Another issue was the difficulty of working with APIs and software tools that were entirely new to us. Since neither of us had used Midnight prior to the event, we spent a significant chunk of time reading the docs and familiarizing ourselves with the ledger, contract, and contract-cli formats.
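The tuple workaround works because JSON has no tuple type: Python's `json` module encodes tuples as JSON arrays, which JavaScript on the frontend reads as plain arrays. A small sketch (the payload shape is illustrative):

```python
# Sketch of the type-mismatch workaround: tuples of backend data are
# serialized to JSON arrays before being sent to the frontend.
import json


def serialize_positions(positions):
    # json.dumps encodes Python tuples as JSON arrays,
    # which the frontend's JavaScript receives as ordinary arrays.
    return json.dumps({"positions": positions})


payload = serialize_positions([(0.0, 1.5), (2.0, 3.0)])
print(payload)  # {"positions": [[0.0, 1.5], [2.0, 3.0]]}
```

On the way back in, `json.loads` yields lists rather than tuples, so any backend code that round-trips the data has to accept both.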

Accomplishments that we're proud of

  • Built a fully functional, visually appealing web-based UI/UX frontend
  • Enabled users to create and test network simulations on custom traffic scenarios
  • Devised a novel algorithm for a difficult problem in computational geometry
  • Recreated the process of cooperative perception and data sharing on a decentralized network
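Our specific algorithm isn't reproduced here, but to give a flavor of the kind of computational-geometry primitive that collision detection builds on, here is the classic orientation-based test for whether two segments properly cross:

```python
# Classic computational-geometry primitive used in collision checking:
# does segment a-b properly cross segment c-d?
# (Illustrative only; not the novel algorithm from the project.)


def orientation(p, q, r):
    """Sign of the cross product (q - p) x (r - p): +1 ccw, -1 cw, 0 collinear."""
    val = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (val > 0) - (val < 0)


def segments_intersect(a, b, c, d):
    """True if segment a-b properly crosses segment c-d (endpoints excluded)."""
    return (orientation(a, b, c) != orientation(a, b, d)
            and orientation(c, d, a) != orientation(c, d, b))


print(segments_intersect((0, 0), (4, 4), (0, 4), (4, 0)))  # True
print(segments_intersect((0, 0), (1, 1), (2, 2), (3, 3)))  # False
```

Applied to vehicle trajectories, a test like this flags path segments that cross, which is one building block for predicting and recording collisions.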

What we learned

Completing this project was an incredible learning experience. Mihika learned how to interface with Insomnia, Docker, and APIs to deploy smart contracts for decentralized networks. Vishal and David worked extensively with the backend, generating innovative algorithms for collision detection and cooperative perception. Julianne discovered how to debug frontend issues with the help of generative AI tools and learned about using Socket.IO to drive client-server operations.

What's next for Twilight

It may be getting dark, but we are burning the midnight oil on this one (literally). We plan to expand the user interface with even greater functionality, such as importing aerial images and converting them into a grid format. We also want to implement a feature that lets users see an aggregate view of a vehicle's vision, both before and after cooperative perception.
