(scroll for FlutterFlow, Perplexity, Rox, Tesla, and Agentic Workflows!)
Project Summary
Suppose Stock A goes up at t = 0 (where t is time). The idea is that there is a latency before a related Stock B, which may be positively or negatively correlated, reacts at t = 3. We use this tool to predict that correlation and buy at t = 2, before Stock B responds to the market force affecting Stock A.
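As a minimal sketch of this lead-lag idea (illustrative only; the function and numbers below are not our production code), one can scan lagged correlations between the two stocks' return series to find the delay at which Stock B best echoes Stock A:

```python
import numpy as np

def best_lag(returns_a: np.ndarray, returns_b: np.ndarray, max_lag: int = 10):
    """Find the lag (in ticks) at which B's returns correlate most with A's past returns."""
    best = (0, 0.0)
    for lag in range(1, max_lag + 1):
        # Correlate A at time t with B at time t + lag.
        corr = np.corrcoef(returns_a[:-lag], returns_b[lag:])[0, 1]
        if abs(corr) > abs(best[1]):
            best = (lag, corr)
    return best  # e.g. (3, 0.72) -> B echoes A three ticks later

# Illustrative synthetic data: B follows A with a 3-tick delay plus noise.
rng = np.random.default_rng(0)
a = rng.normal(size=500)
b = np.concatenate([rng.normal(size=3), a[:-3] * 0.8 + rng.normal(scale=0.5, size=497)])
print(best_lag(a, b))  # expect a lag near 3 with positive correlation
```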
More specifically, our team developed a semi-high-frequency trading system leveraging CUDA C++ accelerated computing, paired with an agent-network-based chain of reasoning that uses qualitative factors to explain the quantitative phenomena. Our agentic workflow lets us draw on market research and news analysis for a robust report.
NVIDIA Technology:
Our semi-high frequency trading system, powered by CUDA and chain of reasoning, comprises the following components:
CUDA-Accelerated ICA:
We implemented a custom CUDA-based Independent Component Analysis (ICA) module to extract statistical market forces from mixed signals. This approach rests on the concept that while independent source signals are non-Gaussian, their mixtures tend to exhibit a more normal distribution (a consequence of the central limit theorem). For further reading, refer to this quick guide.
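Our module runs as a custom CUDA C++ kernel; as a rough CPU-side illustration of the kind of FastICA-style update involved (the exact variant here is an assumption), the core iteration looks like:

```python
import numpy as np

def fast_ica(X: np.ndarray, n_components: int, n_iter: int = 200):
    """Toy FastICA: X is (signals x samples), assumed already centered and whitened."""
    n, _ = X.shape
    W = np.zeros((n_components, n))
    for i in range(n_components):
        w = np.random.default_rng(i).normal(size=n)
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            # Nonlinearity g = tanh pushes estimates away from Gaussianity.
            gx = np.tanh(w @ X)
            g_prime = 1.0 - gx ** 2
            w_new = (X * gx).mean(axis=1) - g_prime.mean() * w
            # Deflation: stay orthogonal to components already found.
            w_new -= W[:i].T @ (W[:i] @ w_new)
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1.0) < 1e-6
            w = w_new
            if converged:
                break
        W[i] = w
    return W @ X  # recovered independent "market force" signals
```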
Accelerated Monte Carlo Simulation:
Using CUDA acceleration, our system runs a Monte Carlo simulation that models variant Brownian motion (BM) and jump process (JP) inclusive sequences. The simulations are weighted based on a feedback cycle derived from the ICA process.
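As an illustrative single-threaded sketch of what the CUDA threads compute in parallel (assuming a Merton-style jump diffusion; all parameter names and values below are placeholders, not our production settings):

```python
import numpy as np

def simulate_paths(s0, mu, sigma, lam, jump_mu, jump_sigma,
                   T=1.0, steps=252, n_paths=10_000, seed=0):
    """Monte Carlo paths for a jump diffusion: geometric Brownian motion plus Poisson jumps."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    # Brownian increments for every path and step.
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, steps))
    # Poisson-distributed jump counts scaled by normally distributed jump sizes.
    jumps = rng.poisson(lam * dt, size=(n_paths, steps))
    jump_sizes = rng.normal(jump_mu, jump_sigma, size=(n_paths, steps)) * jumps
    log_increments = (mu - 0.5 * sigma**2) * dt + sigma * dW + jump_sizes
    return s0 * np.exp(np.cumsum(log_increments, axis=1))

paths = simulate_paths(s0=100.0, mu=0.05, sigma=0.2, lam=0.5, jump_mu=-0.02, jump_sigma=0.1)
print(paths.shape)  # (10000, 252): each row is one simulated price path
```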
High-Performance Processing:
By focusing on parallelization, the entire system completes its computations in just a few milliseconds.
Predictive Filtering with HMM:
A Hidden Markov Model (HMM) is employed to filter and identify the most reliable predictive paths. This ensures that trading decisions rest on a reward-based estimate of interval delays, which is used to set an optimal position lifetime.
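A minimal sketch of the decoding step, assuming a discrete-observation HMM with placeholder transition and emission matrices (the real model is trained on the simulation feedback, not hand-set like this):

```python
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    """Most likely hidden-state path for a discrete-observation HMM (log-space Viterbi)."""
    n_states = len(start_p)
    T = len(obs)
    log_v = np.log(start_p) + np.log(emit_p[:, obs[0]])
    back = np.zeros((T, n_states), dtype=int)
    for t in range(1, T):
        scores = log_v[:, None] + np.log(trans_p)  # scores[i, j]: end in j via i
        back[t] = scores.argmax(axis=0)
        log_v = scores.max(axis=0) + np.log(emit_p[:, obs[t]])
    # Backtrack from the best final state.
    path = [int(log_v.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Placeholder 2-regime model: states {0: calm, 1: volatile}, observations {0: small move, 1: large move}.
start = np.array([0.8, 0.2])
trans = np.array([[0.9, 0.1], [0.3, 0.7]])
emit = np.array([[0.85, 0.15], [0.2, 0.8]])
print(viterbi([0, 0, 1, 1, 0], start, trans, emit))
```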
Together, these components form an integrated system capable of high-speed, data-driven trading decisions.
FlutterFlow:
To effectively visualize the key components of our trading system (Best Case, Best Path, Last Price, Original Values, and Worst Case), which together comprise over 4.8 million computed data points, we used FlutterFlow to build a clean, interactive interface.
Why FlutterFlow? We chose FlutterFlow for its seamless Firebase integration, allowing us to easily fetch, store, and display real-time stock data without extra complexity. Additionally, FlutterFlow offers powerful graph rendering, ideal for our stock trend visualizations.
Visualization & Explanation Generation:
By integrating agentic web-scraping capabilities, we enhanced our explanations with real-time analytics and relevant news articles, providing a comprehensive, data-backed decision-making tool.
Perplexity:
We integrated Perplexity Sonar to enhance our decision-making process by fetching real-time financial news relevant to each stock. This allowed our analysis agent to incorporate market sentiment and recent events, providing contextual explanations for each trading decision. By combining quantitative predictions from our CUDA-accelerated models with qualitative insights from Perplexity, we created a more comprehensive and transparent trading system.
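For illustration, a Sonar request is a standard chat-completions call (the prompt wording and model tier below are assumptions, not our production values):

```python
import os
import requests

def fetch_stock_news(ticker: str) -> str:
    """Ask Perplexity Sonar for recent news that could move the given stock."""
    resp = requests.post(
        "https://api.perplexity.ai/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}"},
        json={
            "model": "sonar",
            "messages": [
                {"role": "system", "content": "You are a concise financial news analyst."},
                {"role": "user", "content": f"Summarize today's market-moving news for {ticker}."},
            ],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(fetch_stock_news("NVDA"))
```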
Rox & Agentic Workflow:
Our project aligns with Rox's focus on agentic workflows, leveraging LLMs for context fetching, prompt orchestration, and tool calling to complete a complex financial workflow. Our trading system integrates real-time market analysis, predictive modeling, and automated decision-making to streamline stock market insights. More details on our system diagram and architecture can be found in this document.
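A stripped-down sketch of the orchestration pattern (the tool names and hard-coded routing here are illustrative; the linked document shows the real architecture, where an LLM decides which tool to call next):

```python
from typing import Callable

# Illustrative tool registry: each workflow step invokes one of these by name.
TOOLS: dict[str, Callable[[str], str]] = {
    "fetch_news": lambda ticker: f"(news summary for {ticker})",
    "run_simulation": lambda ticker: f"(Monte Carlo forecast for {ticker})",
}

def run_workflow(ticker: str) -> str:
    """Fetch context, call tools in order, then assemble a report for the LLM to reason over."""
    context = [TOOLS["fetch_news"](ticker), TOOLS["run_simulation"](ticker)]
    # In the real system the routing is LLM-driven tool calling;
    # it is hard-coded here only to keep the sketch short.
    return "\n".join(context)

print(run_workflow("TSLA"))
```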

Tesla Challenge:
Analyzing video with VLMs is hard. Gemini and Qwen provide decent models that can process video, but in our testing we found them to be lackluster for Tesla's needs. Given the time constraints, we attempted to optimize our challenge approach through an orchestration of image chunking and prompting.
The idea: instead of processing a whole video, we split the mp4 into JPEG frames (the attached Tesla diagram visualizes this). In chunks of 3, we analyze subsets of frames from the video in parallel. This allows the VLM to see how the video changes within a short interval (for example, 3 frames may show the car moving forward or turning right).
If we cut the entire video into frames and run it all through a VLM in parallel, we can quickly gain a holistic analysis of the video contents and how it changes over time. We then combine all of this information and pass it into an LLM to reason on the information. Finally, it works with an output validation LLM to choose an option from {A, B, C, D, E}.
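A condensed sketch of that pipeline, using OpenCV for frame extraction; describe_chunk is a hypothetical stand-in for the actual VLM call, so its name and signature are assumptions:

```python
import concurrent.futures

import cv2  # pip install opencv-python

def extract_frames(video_path: str, every_n: int = 10) -> list:
    """Split an mp4 into a subsampled list of frames."""
    cap = cv2.VideoCapture(video_path)
    frames, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            frames.append(frame)
        i += 1
    cap.release()
    return frames

def describe_chunk(chunk) -> str:
    """Hypothetical VLM call: describe what changes across a chunk of 3 frames."""
    return f"(VLM description of {len(chunk)} frames)"  # replace with a real VLM request

def analyze_video(video_path: str) -> str:
    frames = extract_frames(video_path)
    # Chunks of 3 consecutive frames let the VLM see short-interval motion.
    chunks = [frames[i:i + 3] for i in range(0, len(frames) - 2, 3)]
    with concurrent.futures.ThreadPoolExecutor() as pool:
        descriptions = list(pool.map(describe_chunk, chunks))
    # A downstream LLM reasons over the combined descriptions, and a validator
    # LLM constrains the final answer to one of {A, B, C, D, E}.
    return "\n".join(descriptions)
```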



