[{"content":"I remember being a kid with an iPod filled with music. It held music I owned, which I could listen to without an internet connection or using data from my phone plan. I missed that feeling and decided to build my own mp3 player as an Android app. I wanted to get more familiar with Kotlin, Android app development, and build something that could help people own their own data. The app is free, open source, and there are zero advertisements. You can download it here and start building your own music library\u0026hellip;again.\nDirectory structure My plan was to use the Music directory on my phone to hold artists and their albums. Tapping on any song would load all songs in that directory into a playlist.\n/Music ├── The Beatles │ ├── Abbey Road │ │ ├── 01 - Come Together.mp3 │ │ ├── 02 - Something.mp3 │ │ ├── 03 - Maxwell\u0026#39;s Silver Hammer.mp3 │ │ └── ... │ ├── Revolver │ │ ├── 01 - Taxman.mp3 │ │ ├── 02 - Eleanor Rigby.mp3 │ │ ├── 03 - I\u0026#39;m Only Sleeping.mp3 │ │ └── ... ├── Daft Punk │ ├── Discovery │ │ ├── 01 - One More Time.mp3 │ │ ├── 02 - Aerodynamic.mp3 │ │ ├── 03 - Digital Love.mp3 │ │ └── ... └── Radiohead ├── OK Computer │ ├── 01 - Airbag.mp3 │ ├── 02 - Paranoid Android.mp3 │ ├── 03 - Subterranean Homesick Alien.mp3 │ └── … Learning Jetpack Compose \u0026amp; Android Development My Google Pixel has a Music directory which was unused, but I knew that somehow an app could play music from it. So over several weekends I went through Android’s Training courses to get a high-level overview of how to build apps with this library.\nExoPlayer I used ExoPlayer, the default implementation of Jetpack Media3’s Player interface, to play music. The great thing about this library is that it allows me to play music in the background using a MediaSessionService.\nSummary I built this MP3 Player app to bring back the simplicity of offline music listening—no ads, no subscriptions, no disappearing albums. 
It’s free and open source.\n🔗 GitHub Repo: https://github.com/Fallenstedt/mp3_player 📥 Download the APK: You can download the latest APK from the Releases page on GitHub and install it on your Android device.","href":"https://fallenstedt.com/blog/mp3_player/","kind":"page","lang":"en","lastmod":"2025-04-01T12:00:00-08:00","objectID":"d32ba570293dea0568629ccb41ccee7b","publishDate":"2025-04-01T12:00:00-08:00","section":"blog","tags":[],"title":"Mp3 Player","type":"blog"},{"content":"I used to rely on dependency injection and interfaces to mock my http clients. This worked, but was very burdensome. I learned about the httptest package which provides utilities for HTTP testing. I\u0026rsquo;ll walk through an example test I made in my weather CLI.\nThe code which makes a network request I ping NOAA for a weather forecast using the following code. I have a FetchForecast method which invokes fetch to perform the network request at a specific URL.\npackage weather type Weather struct { ForecastUrl string } func (w *Weather) FetchForecast() ([]Forecast, error) { var fr forecastResponse err := w.fetch(w.ForecastUrl, \u0026amp;fr) if err != nil { return nil, fmt.Errorf(\u0026#34;failed to fetch forecast, %w\u0026#34;, err) } return fr.Properties.Periods, nil } func (w *Weather) fetch(url string, unmarshal interface{}) error { resp, err := http.Get(url) if err != nil { return fmt.Errorf(\u0026#34;%w, %w\u0026#34;, ErrFetchForecast, err) } defer resp.Body.Close() if resp.StatusCode != http.StatusOK { return fmt.Errorf(\u0026#34;%w, got status %d\u0026#34;, ErrFetchForecast, resp.StatusCode) } body, err := io.ReadAll(resp.Body) if err != nil { return err } err = json.Unmarshal(body, unmarshal) if err != nil { return fmt.Errorf(\u0026#34;failed to unmarshal response, %w\u0026#34;, err) } return nil } Writing a test for this is easy. You can set up a mock server with httptest, and supply a URL to that server. 
Here is an example test where I supply a mock response from NOAA using a testing server. My code will make a GET request to this server, and receive the mocked response.\npackage weather_test import ( \u0026#34;net/http\u0026#34; \u0026#34;net/http/httptest\u0026#34; \u0026#34;testing\u0026#34; \u0026#34;github.com/Fallenstedt/weather/common/weather\u0026#34; ) func TestFetchForecast(t *testing.T) { server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { w.Write([]byte(`{ foo: \u0026#34;bar\u0026#34; // Mock JSON goes here }`)) })) // create an http test server which responds with some JSON defer server.Close() // close the server at the end of the test w := weather.Weather{ForecastUrl: server.URL} // supply a server url _, err := w.FetchForecast() // fetch data from the httptest server if err != nil { t.Errorf(\u0026#34;Found error: %v\u0026#34;, err) } //... add more test expectations here }","href":"https://fallenstedt.com/blog/mocking-http-requests-in-go/","kind":"page","lang":"en","lastmod":"2023-12-21T12:36:41-08:00","objectID":"d5a7dd025027620b2dbaa2f165b7e143","publishDate":"2023-12-21T12:36:41-08:00","section":"blog","tags":[],"title":"Mocking http requests with Go","type":"blog"},{"content":"I like chatgpt, but I don\u0026rsquo;t like it crawling my site. That\u0026rsquo;s why I told ChatGPT to frig off.\nThese two lines will make that happen:\nUser-agent: GPTBot Disallow: / You can check out an example on this site by visiting its robots.txt file.\nMore information about ChatGPT\u0026rsquo;s crawler can be found here","href":"https://fallenstedt.com/blog/frig-off-chatgpt/","kind":"page","lang":"en","lastmod":"2023-08-19T12:36:41-08:00","objectID":"6a4a652181a06cd2e33c7a24bd2356ce","publishDate":"2023-08-19T12:36:41-08:00","section":"blog","tags":[],"title":"Frig off chat gpt","type":"blog"},{"content":"How do you measure the performance and user experience of your website? 
Every change made can affect page loading speed, interactivity, and visual stability. Together, these contribute to your product’s performance, search engine ranking, and advertisement performance. With New Relic, you can monitor these web vitals and be notified of user experience regressions.\nWhich main web vitals should I monitor? According to Google, there are several main web vitals you should focus on monitoring to evaluate how well users are experiencing your application.\nLargest Contentful Paint — How fast your application loads as perceived by the user. First Contentful Paint — The time it takes for the first content element to be painted on the screen in the user\u0026rsquo;s browser. First Input Delay — How quickly your application responds to interactions such as clicking a button, tapping a link, or using a form field. Cumulative Layout Shift — The visual stability of your application. Together, these web vitals provide a holistic picture of the loading performance, interactivity, and visual stability of your application. Monitoring and improving these metrics can help your team create faster, visually stable, and highly interactive web applications.\nA reasonable threshold to measure is the 75th percentile of each metric. This ensures you\u0026rsquo;re hitting the recommended target for most of your users. We use a percentile to understand the spread and distribution of data. Assuming your web application produces large amounts of data, there can be outliers which might skew the interpretation of the overall data distribution.\nHow do I record these metrics with New Relic? Assuming you have New Relic’s browser monitoring enabled on your web application, you should have access to the PageViewTiming event. 
This event represents individual timing events during a page view lifecycle, and offers valuable insights into how real users experience the performance of their website.\nLargest contentful paint \u0026amp; first contentful paint You can target the largest contentful paint and first contentful paint of your web application with the following query. It will select the 75th percentile of each metric, faceted by the page URL. It will plot these values on a graph over the past month.\nFROM PageViewTiming SELECT percentile(largestContentfulPaint, 75), percentile(firstContentfulPaint, 75) WHERE appName LIKE 'Your App Name' FACET pageUrl TIMESERIES MAX SINCE 1 month ago\nThe FCP and LCP values together show the time from when users first see something on the screen to when the largest element on the page is finally rendered. This reflects how quickly users can access the main content of the page. A lower score here indicates a fast-loading page, leading to quicker engagement and reduced bounce rates from your site.\n🧠 Hint! If you want to know which element was used to calculate largest contentful paint, then facet by the elementId and its size: FROM PageViewTiming SELECT percentile(largestContentfulPaint, 75) WHERE elementId != '' AND appName LIKE 'Your App Name' FACET elementId, elementSize TIMESERIES MAX SINCE 1 month ago\nFirst input delay Measuring the delay between the user’s first interaction on the site and the response to that interaction can be accomplished with the following New Relic NRQL query:\nSELECT percentile(firstInputDelay, 75) as 'fid' FROM PageViewTiming WHERE timingName = 'firstInteraction' AND routePath LIKE '%condition-builder.create%' TIMESERIES FACET browserTransactionName, interactionType SINCE 1 month ago\nThe First Input Delay value represents how smoothly and quickly a page becomes interactive. 
A lower value means users can interact with the page without frustrating delays, resulting in a positive user experience.\nIt works like this: a timer starts when the first user interaction occurs, like a mouse click or key press. The browser then checks if any long-running tasks are happening on the main thread. Once the main thread can begin processing event handlers in response to that interaction, First Input Delay is calculated.\nCumulative Layout Shift Cumulative Layout Shift measures the visual stability of your webpage over its entire lifespan. Lower scores mean the page stays visually stable while it loads text, images, and other parts of the document, and higher scores indicate more visually unstable experiences.\nFROM PageViewTiming SELECT percentile(cumulativeLayoutShift, 75) WHERE appName LIKE 'Your app name' TIMESERIES SINCE 1 month ago\nCumulative layout shift represents all individual layout shifts that happen on the page. A layout shift occurs when elements change their start position. For example, when a new element is added to the DOM or an existing element changes size, and visible elements change their start position, a layout shift is recorded.\nHow do I get notified when these web vitals are not meeting a threshold? You can create New Relic alert conditions for your web application’s user experience. This guide is not focused on alert creation, so I will leave a link to the docs here. 
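As a starting point, an alert condition could evaluate a query along these lines. This is only a sketch: the app name is a placeholder, and the threshold you alert on depends on your own targets (Google suggests 2.5 seconds for LCP at the 75th percentile).

```
SELECT percentile(largestContentfulPaint, 75) FROM PageViewTiming WHERE appName = 'Your App Name'
```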
When a threshold is not met for any of the web vitals, you can diagnose the specific issue with your web page, make adjustments to the user experience, and continue monitoring your web application.","href":"https://fallenstedt.com/blog/web-vitals/","kind":"page","lang":"en","lastmod":"2023-08-06T12:36:41-08:00","objectID":"e07b5180b0c32919a23d9de7ecf736c6","publishDate":"2023-08-06T12:36:41-08:00","section":"blog","tags":[],"title":"Recording Web Vitals with New Relic","type":"blog"},{"content":"The Problem API Gateway allows you to associate deployments with stages, each stage representing a logical reference of your API. For example, my gardentour API project needs a dev and a prod stage to represent my environments.\nI manage infrastructure with Terraform, and I needed to achieve full isolation of my stages. It would be impractical to manage deploys of many API Gateway stages with a single Terraform environment like so:\nterraform/ ├─ main.tf ├─ modules/ │ ├─ api-gateway/ │ │ ├─ main.tf │ │ ├─ variables.tf │ │ ├─ outputs.tf │ ├─ api-gateway-stage/ │ │ ├─ main.tf │ │ ├─ variables.tf │ │ ├─ outputs.tf Assuming the main.tf for terraform used a single api-gateway and multiple api-gateway-stages, I would be locked to a single deployment for all of my stages. My dev stage would not be independent of my prod stage.\nThe Solution By creating separate Terraform environments for each stage, you can more easily manage and isolate changes to your API Gateway deployments. Your dev environment can be worked on independently of your test or prod environments. 
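For illustration, a dev environment could look up the shared API Gateway and attach its own stage roughly like this. This is a sketch with made-up names (the API name and the module's inputs are assumptions, not my actual configuration):

```hcl
# dev/main.tf (sketch) -- names are illustrative
data "aws_api_gateway_rest_api" "global" {
  name = "gardentour-api" # hypothetical name of the shared API Gateway
}

module "dev_stage" {
  source      = "../modules/api-gateway-stage"
  rest_api_id = data.aws_api_gateway_rest_api.global.id
  stage_name  = "dev"
}
```

Because the dev state only owns the stage, applying or destroying it never touches the API Gateway itself.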
Terraform environments can be isolated by placing environment-specific infrastructure into its own directory.\nThis approach has the major benefit of making it clear which environment is being worked on, and it kept me from messing up my entire project with an accidental deploy.\nterraform/ ├─ global/ │ ├─ api-gateway/ │ │ ├─ main.tf ├─ dev/ │ ├─ main.tf ├─ prod/ │ ├─ main.tf ├─ modules/ │ ├─ api-gateway/ │ │ ├─ main.tf │ │ ├─ variables.tf │ │ ├─ outputs.tf │ ├─ api-gateway-stage/ │ │ ├─ main.tf │ │ ├─ variables.tf There are three environments and one modules directory:\nglobal refers to infrastructure that is available across all environments. These can include my IAM roles, Route53 domains and hosted zones, or a global API Gateway instance.\ndev would reference the global API Gateway instance as a data source. This environment would add a dev stage to the API Gateway.\nprod would also reference the global API Gateway instance as a data source, and add a prod stage to it.\nmodules are Terraform modules that encapsulate the volatility of some piece of infrastructure. These modules are used across all environments.\nThis directory structure keeps the lifecycle of my API Gateway stages independent, while using a single API Gateway instance.","href":"https://fallenstedt.com/blog/terraform-api-gateway-stages/","kind":"page","lang":"en","lastmod":"2023-02-20T12:36:41-08:00","objectID":"2a549dfbd7670d204186f72b6784a393","publishDate":"2023-02-20T12:36:41-08:00","section":"blog","tags":[],"title":"Managing API Gateway Deployments with Terraform: Achieving Full Stage Isolation","type":"blog"},{"content":"I recently had a power outage which lasted 10 hours here at home. It was not pleasant sitting in the dark with just a flashlight. I knew I had to build a power box, a device which provides a variety of input leads from a battery. 
This article details how I designed and built a power box for simple applications such as powering lamps, charging phones and laptops, and powering my ham radio.\nDesign and Planning The materials I used for the power box were specific to my power needs. I optimized for reducing the time to construct the power box, at some extra cost. Additionally, I used existing tools I had around. What I discovered is that the total price of this power box was less than buying a pre-made one of equivalent power.\nBattery The needs for my battery were:\nMinimal EMI/RFI (Electromagnetic Interference/Radio-Frequency Interference) so I can operate a ham radio well. Can power lamps, charge phones, and laptops. Enough power stored for multiple days of continuous use without charging. The battery I got was a 12V 40Ah LiFePO4 battery, which supplies 480Wh. This deep cycle battery has a large capacity and a very high life cycle. Additionally, this battery chemistry is well designed for solar applications too!\nI got my battery from Bioenno. There are some great features with this battery, the first being it has minimal EMI/RFI. Additionally, the battery comes with Anderson power pole connectors. The larger connector is for supplying electricity, and the smaller connector is for charging. Anderson powerpoles are a great connector for electronic projects like this.\nSolar Panels The battery I chose determines how large my solar panels could be. The spec sheet for the battery says the maximum charging current is 6A.\nI opted for Bioenno’s 100 watt solar panels. These foldable panels supply a maximum of 5.56A, which is under my battery’s 6A limit. Additionally, these panels come with Anderson power pole connectors, making the system easy to assemble.\nSolar charge controller Knowing the panels I was using helped me decide which solar charge controller to get. I needed a solar charge controller to connect the solar panels to a battery. 
This controller should be able to handle at least 6A of current from my solar panels.\nI decided on a GV-10-Li-14.2V lithium solar charge controller from Genasun. This solar charge controller is made in the USA, can handle the power output from my solar panels, and can charge LiFePO4 batteries too.\nI got a slightly oversized solar charge controller because I knew I might build a larger battery system later, and I would like to reuse as many components as possible.\nPower box Here is the shortcut I took. Rather than buying individual components to build the box, I decided to get a pre-made box that had a lot of built-in components already. The cost of all the components, wires, connectors, and the box individually would be just about the same as buying a pre-built power box.\nThe Powerwerx megabox fulfilled my needs perfectly. It could fit my solar charge controller and battery inside well. Additionally, it has USB inputs for charging small devices and Anderson power pole DC outputs for powering an inverter or my ham radio.\nBuilding the Power Box The wiring for the power box is very easy. I had to create a couple of cable splices, but I used tools I had available for this.\nI tried to use 10 AWG (American Wire Gauge) for all the cables. This avoids any significant voltage drops, and allows me to handle up to 85A. You could get by with 12 AWG for this box, but I wanted to minimize any power loss due to heat.\nThe first step was rewiring one of the Anderson power pole outputs on the box to a DC input for my solar panel. This way I could connect the solar panels from outside of my box.\nI also mounted the solar charge controller to the side of the box. I used some nuts, bolts, and washers I had laying around. I also got some rubber washers from the plumbing section of Home Depot to ensure a watertight fit was made in this area.\nAnd that’s about it for the modifications I made to this box. 
There is some extra space at the top of the box for one more module, and one day I will put something there.\nI also have a small pure sine wave inverter so I can plug in small AC powered equipment or charge laptops.\nAnd that\u0026rsquo;s it! Overall, I am very pleased with it. I use this box regularly for charging devices when I am in the garage, powering my ham radio, providing light when I go camping, and having peace of mind on road trips. It\u0026rsquo;s a small tool added to my earthquake kit too. This project was a great way for me to learn more about solar, and has inspired me to build a larger 24v system for my shed.","href":"https://fallenstedt.com/blog/powerbox/","kind":"page","lang":"en","lastmod":"2023-01-19T12:36:41-08:00","objectID":"94b444e08a52e4a7f1d1490fa6019d8a","publishDate":"2023-01-19T12:36:41-08:00","section":"blog","tags":[],"title":"How I built a Powerbox","type":"blog"},{"content":"The Problem I have been building a side project with AWS Cognito and Terraform. I wanted a custom message lambda trigger to be invoked anytime the user signed up for my app, however I kept getting permission errors. 
This blog shows you the Terraform configuration you need to let Cognito invoke Lambda triggers.\nTerraform config AWS Cognito For AWS Cognito, I have a simple user pool that allows users to sign up with their email and a password, and a single user pool client to hold users for my development environment.\nresource \u0026#34;aws_cognito_user_pool\u0026#34; \u0026#34;garden_tour_user_pool\u0026#34; { name = \u0026#34;garden_tour_user_pool\u0026#34; username_attributes = [\u0026#34;email\u0026#34;] auto_verified_attributes = [\u0026#34;email\u0026#34;] password_policy { minimum_length = 6 temporary_password_validity_days = 2 } schema { attribute_data_type = \u0026#34;String\u0026#34; developer_only_attribute = false mutable = true name = \u0026#34;email\u0026#34; required = true string_attribute_constraints { min_length = 1 max_length = 256 } } lambda_config { custom_message = module.lambda.lambda_arn } } resource \u0026#34;aws_cognito_user_pool_client\u0026#34; \u0026#34;garden_tour_client_development\u0026#34; { name = \u0026#34;garden_tour_client_development\u0026#34; user_pool_id = aws_cognito_user_pool.garden_tour_user_pool.id generate_secret = false refresh_token_validity = 90 prevent_user_existence_errors = \u0026#34;ENABLED\u0026#34; explicit_auth_flows = [ \u0026#34;ALLOW_REFRESH_TOKEN_AUTH\u0026#34;, \u0026#34;ALLOW_USER_PASSWORD_AUTH\u0026#34;, ] } resource \u0026#34;aws_lambda_permission\u0026#34; \u0026#34;allow_cognito_invoke_trigger\u0026#34; { statement_id = \u0026#34;AllowExecutionFromCognito\u0026#34; action = \u0026#34;lambda:InvokeFunction\u0026#34; function_name = module.lambda.lambda_function_name principal = \u0026#34;cognito-idp.amazonaws.com\u0026#34; source_arn = aws_cognito_user_pool.garden_tour_user_pool.arn } The essential resource needed is the aws_lambda_permission. Without this resource, your user pool\u0026rsquo;s lambdas (found in lambda_config) will not be invoked. 
Instead, AWS Cognito will return a cryptic error message.\nI have configured my Cognito user pool resource with a lambda_config. This configuration object allows you to invoke a Lambda when a specific Cognito event occurs. You can find more information about these lambda triggers at the AWS docs.\nLambda The lambda trigger can also be managed by Terraform. You need to create a variety of AWS resources to deploy your AWS Lambda though. I recommend following along with Terraform\u0026rsquo;s guide Deploy Serverless Applications with AWS Lambda and API Gateway to fully understand what resources you are creating.\nIn a nutshell, you are creating the following:\nAn S3 bucket with aws_s3_bucket An archive file with your lambda code with archive_file An S3 object with aws_s3_object which references your archive_file An AWS Lambda function with aws_lambda_function which uses the file aws_s3_object inside your aws_s3_bucket. A CloudWatch log group with aws_cloudwatch_log_group so you can store log messages from your Lambda function for 30 days. An aws_iam_role which allows Lambda to access resources in your AWS account. And finally an aws_iam_role_policy_attachment so your Lambda function can write to CloudWatch logs. 
resource \u0026#34;random_pet\u0026#34; \u0026#34;s3_bucket\u0026#34; { prefix = var.s3_bucket_name length = 4 } resource \u0026#34;aws_s3_bucket\u0026#34; \u0026#34;s3_bucket\u0026#34; { bucket = random_pet.s3_bucket.id force_destroy = true } data \u0026#34;archive_file\u0026#34; \u0026#34;lambda\u0026#34; { type = \u0026#34;zip\u0026#34; source_dir = var.lambda_source_dir output_path = var.lambda_output_dir } resource \u0026#34;aws_s3_object\u0026#34; \u0026#34;s3_object\u0026#34; { bucket = aws_s3_bucket.s3_bucket.id key = var.s3_object_key source = data.archive_file.lambda.output_path etag = filemd5(data.archive_file.lambda.output_path) } resource \u0026#34;aws_lambda_function\u0026#34; \u0026#34;lambda\u0026#34; { function_name = var.lambda_function_name s3_bucket = aws_s3_bucket.s3_bucket.id s3_key = aws_s3_object.s3_object.key runtime = var.lambda_function_handler_runtime handler = var.lambda_function_handler_name source_code_hash = data.archive_file.lambda.output_base64sha256 role = aws_iam_role.lambda.arn timeout = var.lambda_function_timeout environment { variables = { COGNITO_USER_POOL_ID = var.lambda_environment_variables.cognito_user_pool_id COGNITO_CLIENT_ID = var.lambda_environment_variables.cognito_client_id TABLE_NAME = var.lambda_environment_variables.table_name DYNAMO_REGION = var.lambda_environment_variables.dynamo_region } } } resource \u0026#34;aws_cloudwatch_log_group\u0026#34; \u0026#34;lambda\u0026#34; { name = \u0026#34;/aws/lambda/${aws_lambda_function.lambda.function_name}\u0026#34; retention_in_days = 30 } resource \u0026#34;aws_iam_role\u0026#34; \u0026#34;lambda\u0026#34; { name = \u0026#34;serverless_lambda\u0026#34; assume_role_policy = jsonencode({ Version = \u0026#34;2012-10-17\u0026#34; Statement = [{ Action = \u0026#34;sts:AssumeRole\u0026#34; Effect = \u0026#34;Allow\u0026#34; Sid = \u0026#34;\u0026#34; Principal = { Service = \u0026#34;lambda.amazonaws.com\u0026#34; } } ] }) } resource 
\u0026#34;aws_iam_role_policy_attachment\u0026#34; \u0026#34;lambda_policy\u0026#34; { role = aws_iam_role.lambda.name policy_arn = \u0026#34;arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole\u0026#34; } Hope this helps those trying to manage their AWS Cognito Lambda triggers with Terraform. If you encounter issues, feel free to ping me on mastodon with questions.","href":"https://fallenstedt.com/blog/terraform-cognito/","kind":"page","lang":"en","lastmod":"2022-07-18T12:36:41-08:00","objectID":"d0b30bf66887a7de11e0cb92e75cfc34","publishDate":"2022-07-18T12:36:41-08:00","section":"blog","tags":[],"title":"Terraform Cognito With Lambda Triggers","type":"blog"},{"content":"Update This repo is read-only and is no longer worked on. Twitter was a wonderful platform, built by a wonderful team. With its recent changes, it\u0026rsquo;s time to re-evaluate what we invest in and the world we want to build. To the projects using this library, I\u0026rsquo;ll leave the repo read-only so you can migrate on your own time.\nBuilding with free APIs is a great way to teach yourself new skills in languages you like. I’ve always found APIs to be an underrated way to learn something new. Building with APIs brings challenges that force you to learn new parts of programming that video tutorials cannot.\nThe Twitter API’s filtered stream endpoint allows you to filter the real-time stream of public Tweets. You can tap into twitter discussions by filtering tweets for specific attributes. You can find the latest job postings, monitor weather events, or keep on top of trends.\nIn this article I will discuss how to create twitter rules and manage a stream with my open source library twitterstream. This library was built for my project findtechjobs so I could find the latest tech jobs posted on twitter.\nIf you want a complete code example to get started, head over to the examples on twitterstream\nWhere do I start? 
The first step is to create an app on Twitter Developers and obtain a set of consumer keys. Once you have an API key and an API secret key, you can generate an access token with twitterstream.\nGenerate an Access Token We can use twitterstream to generate an access token. This access token will be used to authenticate all network requests going forward. In the code below, we make a network request to twitter’s oauth2/token endpoint with the \u0026lsquo;Basic\u0026rsquo; HTTP Authentication Scheme. Then we create an instance of twitterstream with our access token.\ntok, err := twitterstream.NewTokenGenerator().SetApiKeyAndSecret(\u0026#34;YOUR_KEY\u0026#34;, \u0026#34;YOUR_SECRET_KEY\u0026#34;).RequestBearerToken() // Create an instance of twitter api api := twitterstream.NewTwitterStream(tok.AccessToken) Set up Streaming Rules Streaming rules make your stream deliver relevant information. The rules match a variety of twitter attributes such as message keywords, hashtags, and URLs. Creating great rules is fundamental to having a successful twitter stream. It’s important to continue refining your rules as you stream so you can harvest relevant information.\nLet’s create a stream for software engineer job postings with twitterstream. A valid job posting tweet should be:\nPosted in the English language Not a retweet Not a reply to another tweet Contain the word “hiring” And contain the words “software developer” or “software engineer” The twitterstream package makes building rules easy. We can use a NewRuleBuilder to create as many rules as the Twitter API allows for our consumer keys.\nrules := twitterstream.NewRuleBuilder(). AddRule(\u0026#34;lang:en -is:retweet -is:quote hiring (software developer OR software engineer)\u0026#34;, \u0026#34;hiring software role\u0026#34;). Build() res, err := api.Rules.Create(rules, false) The first part is using twitterstream to create a NewRuleBuilder.\nWe pass in two arguments when we add our rule with AddRule. 
The first is a long string with many operators. Successive operators with a space between them will result in boolean \u0026ldquo;AND\u0026rdquo; logic, meaning that Tweets will match only if both conditions are met. For example, cats dogs will match tweets that contain the words “cats” and “dogs”. The second argument for AddRule is the tag label. This is free-form text you can use to identify the rules that matched a specific Tweet in the streaming response. Tags can be the same across rules.\nLet’s focus on the first argument. Each operator does something unique:\nThe first is the single lang:en operator, which is a BCP 47 language identifier. This filters the stream for tweets posted in the English language. You can only use a single lang operator in a rule, and it must be used with a conjunction.\nThen we exclude retweets with -is:retweet. We use NOT logic (negation) by including a minus sign in front of our operator. The negation can be applied to words too. For example, cat #meme -grumpy will match tweets that contain the word cat and the hashtag #meme and do not include the word “grumpy”.\nWe also exclude quote tweets with -is:quote. Quote tweets are tweets with comments, and I’ve found this operator very useful. When I was building findtechjobs.io, I encountered a lot of people retweeting an article about automated hiring with their opinion. These quote tweets cluttered my dataset with unrelated job postings.\nI then narrow my stream to tweets that include the word hiring. People who tweet about jobs would say “My team is hiring…”, or “StartupCo is hiring…”.\nFinally, (software developer OR software engineer) is a grouping of operators combined with OR logic. Tweets will match if the tweet contains either of these phrases.\nAfter we build our rules, we create them with api.Rules.Create. If you want to delete your rules, you can use api.Rules.Delete with the ID of each rule you currently have. 
You can find your current rules with api.Rules.Get.\nYou can learn more about rule operators here. Additionally, the endpoint that creates the rules is documented here.\nSet the Unmarshal Hook We need to create our own struct for our tweets so we can unmarshal each tweet cleanly. Twitter’s Filtered Stream endpoint allows us to fetch additional information for each tweet (more on this later). To allow us to find this data easily, we need to create a struct that will represent our data model.\ntype StreamDataExample struct { Data struct { Text string `json:\u0026#34;text\u0026#34;` ID string `json:\u0026#34;id\u0026#34;` CreatedAt time.Time `json:\u0026#34;created_at\u0026#34;` AuthorID string `json:\u0026#34;author_id\u0026#34;` } `json:\u0026#34;data\u0026#34;` Includes struct { Users []struct { ID string `json:\u0026#34;id\u0026#34;` Name string `json:\u0026#34;name\u0026#34;` Username string `json:\u0026#34;username\u0026#34;` } `json:\u0026#34;users\u0026#34;` } `json:\u0026#34;includes\u0026#34;` MatchingRules []struct { ID string `json:\u0026#34;id\u0026#34;` Tag string `json:\u0026#34;tag\u0026#34;` } `json:\u0026#34;matching_rules\u0026#34;` } Every tweet that is streamed is returned as a []byte by default. We can turn our data into something usable by unmarshaling each tweet into the struct StreamDataExample. 
It’s important to set an unmarshal hook with SetUnmarshalHook so we can process []byte in a goroutine-safe way.\napi.SetUnmarshalHook(func(bytes []byte) (interface{}, error) { data := StreamDataExample{} err := json.Unmarshal(bytes, \u0026amp;data) if err != nil { fmt.Printf(\u0026#34;failed to unmarshal bytes: %v\u0026#34;, err) } return data, err }) If you are uncertain what your data model will look like, you can always create a string from the slice of bytes.\napi.SetUnmarshalHook(func(bytes []byte) (interface{}, error) { return string(bytes), nil }) Starting a Stream After creating our streaming rules and unmarshal hook, we are ready to start streaming tweets. By default, twitter returns a limited amount of information about each tweet when we stream. We can request additional information on each tweet with a stream expansion.\nstreamExpansions := twitterstream.NewStreamQueryParamsBuilder(). AddExpansion(\u0026#34;author_id\u0026#34;). AddTweetField(\u0026#34;created_at\u0026#34;). Build() // StartStream will start the stream err = api.StartStream(streamExpansions) We first create some stream expansions with a NewStreamQueryParamsBuilder. This builder will create query parameters to start our stream with. Here, we are adding two additional pieces of information to each tweet:\nAddExpansion(\u0026quot;author_id\u0026quot;) will request the author’s id for each tweet streamed. This is useful if you are keeping track of users who are tweeting. AddTweetField(\u0026quot;created_at\u0026quot;) will request the time the tweet was tweeted. This is useful if you need to sort tweets chronologically. You can learn more about the available stream expansions here. Then we start the stream with our expansions using api.StartStream. This method will start a long-running GET request to twitter’s streaming endpoint. The request is parsed incrementally throughout the duration of the network request. 
If you are interested in learning more about how to consume streaming data from twitter, then you should read their documentation, Consuming Streaming Data.\nConsuming the Stream Each tweet that is processed in our long-running GET request is sent to a Go channel. We range over this channel to process each tweet and check for errors from twitter. The stream stops when we invoke api.StopStream; we then skip the remaining part of the loop, return to the top, and wait for a close signal from the channel.\n// Start processing data from twitter after starting the stream for tweet := range api.GetMessages() { // Handle disconnections from twitter if tweet.Err != nil { fmt.Printf(\u0026#34;got error from twitter: %v\u0026#34;, tweet.Err) // Stop the stream and wait for the channel to close on the next iteration. api.StopStream() continue } result := tweet.Data.(StreamDataExample) // Here I am printing out the text. // You can send this off to a queue for processing. // Or do your processing here in the loop fmt.Println(result.Data.Text) } Twitter’s servers attempt to hold the stream connection indefinitely. The error from twitter is made available in the stream. Disconnections can occur for several reasons:\nA streaming server is restarted on the Twitter side. This is usually related to a code deploy and should be generally expected and designed around. Your account exceeded your daily/monthly quota of Tweets. You have too many active redundant connections. More disconnect reasons can be found here. Anticipating Disconnects from Twitter It’s important to maintain the connection to Twitter as long as possible because missing relevant information in your stream can create poor datasets. You should expect disconnections to occur and build reconnection logic to handle them.\nWe can build reconnection logic using twitterstream’s api and a defer statement. A full example of handling reconnects can be found here. 
Below is a snippet:\n// This will run forever func initiateStream() { fmt.Println(\u0026#34;Starting Stream\u0026#34;) // Start the stream // And return the library\u0026#39;s api api := fetchTweets() // When the loop below ends, restart the stream defer initiateStream() // Start processing data from twitter for tweet := range api.GetMessages() { if tweet.Err != nil { fmt.Printf(\u0026#34;got error from twitter: %v\u0026#34;, tweet.Err) api.StopStream() continue } result := tweet.Data.(StreamDataExample) fmt.Println(result.Data.Text) } fmt.Println(\u0026#34;Stopped Stream\u0026#34;) } After we have started the stream and before we start processing the tweets, we defer the function itself. This will handle reconnections to twitter whenever the messages channel closes.\nFinal Thoughts I hope you find this library useful in streaming tweets from twitter. Building this library was a challenge, and I learned how Go’s concurrency model works. If you liked this post, follow me on mastodon as I document my journey in the software world.","href":"https://fallenstedt.com/blog/twitter-stream/","kind":"page","lang":"en","lastmod":"2022-11-07T12:36:41-08:00","objectID":"05b2cd4b26219a7a9c3ba33f0731db77","publishDate":"2021-12-29T12:36:41-08:00","section":"blog","tags":[],"title":"Streaming Tweets With Go","type":"blog"},{"content":"Last year, I took the plunge and applied for a Frontend Engineering Position at Amazon. I knew little about the interview process and I had just one month to prepare for it. The result? I got an offer. In fact, the use of the interview techniques below granted me the opportunity to choose between two competing job offers. In this guide I will reveal the steps I took in order to become a competitive candidate.\nReview Your Accomplishments On the day of my Amazon interview I chatted with eight people over the period of five hours. 
Each interviewer was curious to know how my accomplishments have impacted companies I\u0026rsquo;ve worked for. To prepare, I spent hours combing through completed JIRA tickets, Slack conversations, and logs to measure the impact of my contributions.\nI used the interview prep grid from Cracking the Coding Interview, 6th Edition to organize information about my past accomplishments. This interview prep grid addresses common topics an interviewer would be curious about— such as leadership skills, conflict resolution, and experiences with failure. Start documenting your accomplishments with this Google Sheets template I made.\nCompleting the interview prep grid is fundamental to having a successful interview. In the grid, columns represent projects or jobs you\u0026rsquo;ve had and rows represent topics the interviewer will be curious about. Complete the grid to analyze the role you played in high-impact projects and you will discover talking points for your interview.\nI continue to fill out my grid, even now, because having a written record of my work experiences will simplify things if I ever need to prepare for another interview. It\u0026rsquo;s important to keep a record of both failures and accomplishments. I focus on describing my own accomplishments as opposed to my team\u0026rsquo;s accomplishments. To begin brainstorming your work experiences as a software engineer, ask yourself these questions:\nWhat projects have you led? What was challenging about them? Have you taken down production before? How did you help resolve the situation? What have you made simpler at work? What was the outcome? For reference, below is an example of a column from my interview prep grid that I used at my Amazon interview.\nCommon Questions Angular App Refactor Challenges * Refactored Angular app to reduce redundant network requests by 78%. 
Mistakes/Failures * Took down production with a single line of untested code being changed Enjoyed * Teaching the team how observables work and the pub/sub design pattern Leadership * Identified production outage via a 1-star review. Created lambda canary coverage to ensure we are alerted to the minute. Conflicts * Teaching the team how observables work and the pub/sub design pattern. What you would do differently * I would have started teaching the team how to use RxJS sooner Prepare for Behavioral Interview Questions A large part of the interview focuses on Amazon\u0026rsquo;s Leadership Principles. Take the time to understand the meaning of these leadership principles and how you relate to them. You\u0026rsquo;ve probably followed these leadership principles throughout your career without necessarily being conscious of them. Be prepared to share examples of how you have followed these principles. Make an effort to really study these principles and memorize how they have guided your work experiences.\nMy interview process felt quite lengthy. To prepare for several hours of discussion, I searched online for sample Amazon interview questions and made flashcards. Every day, I practiced answering the flashcard questions using examples from my interview prep grid. I wanted to make sure I could confidently answer any behavioral interview question that would come my way.\nLet\u0026rsquo;s walk through an example interview question and formulate a response using the Situation Action Result (SAR) method:\nTell me about a time you had to quickly adjust your work priorities to meet changing demands?\nYou can create a response to this type of question by using the interview prep grid to focus on key points that describe your specific work experiences. As you answer, provide evidence that justifies the decisions you have made. At the end of your response, expect the interviewer to dive deeper. 
I cannot stress enough how important it is to keep your response reasonably brief so that a discussion can follow. The content of your response is just as important as the way you respond.\nThe SAR method is a trustworthy interview technique used to create structured and succinct responses to behavioral questions. Using this method will help your interviewer easily follow your thought process. SAR stands for situation, action, result.\nSituation: Context and relevant details of your example situation Action: The steps you took to address the situation, including your decision-making process Result: The direct effects and measurable result of your efforts Here is how I answered the above question using the SAR method:\n(Situation) I was notified by our call center that many customers were complaining about our website being down. (Action) A quick test showed parts of our Angular components were loading poorly in production, but loading fine in development. I knew we deployed last week, so the issue must have been caused by a third-party library being injected at runtime. I inspected what network events occurred as I interacted with the DOM and discovered hotjar, a behavior analytics tool, as the culprit. (Result) I made the code change to remove the third-party library from our app, deployed to production, and got our app back online in about 10 minutes. I can go into more detail if you\u0026rsquo;d like.\nIn my interview response I was able to highlight these three Amazon leadership principles:\nCustomer Obsession: I addressed the situation immediately because we risked losing customer trust in our product. Ownership: Instead of saying, \u0026ldquo;That\u0026rsquo;s not my job,\u0026rdquo; I took ownership of the problem and quickly began working on a solution. Bias for Action: I knew that my fix would hinder our marketing department\u0026rsquo;s data collection efforts, but the damage this plugin was causing was greater than the value of analytics, and so I took action. 
I recommend making flashcards of sample behavioral interview questions and practicing daily using the SAR method and referencing your interview prep grid. Here are some sample Amazon interview questions to get you started:\nTell me about a time when you were faced with a problem that had a number of possible solutions. What was the problem and how did you determine the course of action? What was the outcome of that course of action? When did you take a risk, make a mistake, or fail? How did you respond and how did you grow from that experience? Describe a time you took the lead on a project. What did you do when you needed to motivate a group of individuals or promote collaboration on a particular project? How have you leveraged data to develop a strategy? Describe a time when you took on work outside of your comfort area. How did you identify what you needed to learn to be successful? How did you go about building expertise to meet your goal? Did you meet your goal? Confidently Solve Whiteboard Questions If you\u0026rsquo;re interviewing for an engineering role at Amazon, then you will have whiteboard questions. You may be thinking, \u0026ldquo;Ugh, whiteboard questions are awful.\u0026rdquo; But if you think whiteboard questions are gruesome, then you\u0026rsquo;re probably not approaching them with the right mindset.\nAmazon interviewers ask you to solve whiteboard questions because they want to evaluate your problem solving and communication skills. Interviewers want to see how you approach challenging problems that you\u0026rsquo;ve never encountered before. Don\u0026rsquo;t worry about being unable to solve whiteboard questions—the interviewers are more interested in seeing how you work towards the solution than in whether you actually arrive at the correct answer.\nI had three difficult whiteboard questions that covered algorithms, architecture, and designing part of a game. 
Each question was unfamiliar to me, despite having studied computer science algorithms in preparation. I used several techniques to answer these questions to the best of my ability.\nAsk questions to clarify the problem before you start The quality of your attempt at solving a tough question depends on how well you define the problem you are solving. By asking clarifying questions about the problem, you are laying the groundwork to help you discover acceptance criteria.\nIn one whiteboard question, I was tasked with creating a \u0026ldquo;move\u0026rdquo; function for a board game called gobblet. I spent 10 minutes asking my interviewer clarifying questions before I started solving the question. I wrote my thoughts on the whiteboard as I asked questions about expectations, rules for a valid move, and how a winner is determined. By asking clarifying questions I discovered that the problem I was solving for was not just \u0026ldquo;moving\u0026rdquo; pieces, but designing application state and having the game be replayable. Once my questions were answered, only then did I begin solving the problem.\nIf you do not gather information before you start solving a problem, you can miss a lot of valuable context that could improve the quality of your answer. Clarifying questions identify the crux of the problem and steer you away from premature conclusions.\nVoice your thought process so that the interviewer understands what you are thinking at all times. It is important to communicate your thought process when solving your whiteboard question, especially if you are stuck. Amazon whiteboard questions are meant to explore your critical thinking skills as you tackle an unfamiliar problem. 
When you communicate your thoughts with your interviewer, you are showing them how you plan to reach the solution even if you are uncertain how to solve it.\nOne of my Amazon interview questions was to find a value in a sorted and rotated array without using the Array object\u0026rsquo;s built-in methods. I told my interviewer that I\u0026rsquo;d never built a binary search algorithm before, but I had a general idea of how it worked. I explained that if I were to search for a name in a phone book, which I knew began with the letter \u0026ldquo;M\u0026rdquo;, it would be ridiculous if I started at the first page. Ideally, I\u0026rsquo;d start somewhere in the middle and see if I need to go forward or backward based on where I landed. To solve for the rotated array, I continued with the phone book analogy and discovered my first step would be finding the index where the rotation starts. Knowing where the rotation started would determine on which \u0026ldquo;side\u0026rdquo; of the array my binary search should be performed.\nVoicing your thoughts with your interviewer will help you discover a solution if you are unsure how to solve it. When you have constant dialogue with your interviewer, it helps them understand how you think and helps you explore solutions to unfamiliar problems.\nWhat\u0026rsquo;s next? I hope you feel inspired to do well at your next interview, even if it\u0026rsquo;s not at Amazon. My Amazon interview was the most challenging interview I\u0026rsquo;ve ever had, and I learned that careful review of my accomplishments, public speaking, and critical thinking are the foundation for any successful interview.\nIf you liked this post, follow me on mastodon as I document my journey in the software world. 
I frequently share my knowledge about Rust, Go and Typescript and enjoy meeting others who have the desire to learn.","href":"https://fallenstedt.com/blog/three-steps-job/","kind":"page","lang":"en","lastmod":"2021-07-12T12:36:41-08:00","objectID":"ef80b59aa44546fc49aeab8461731f1f","publishDate":"2021-07-12T12:36:41-08:00","section":"blog","tags":[],"title":"Three Steps I Took to Get a Job Offer From Amazon","type":"blog"},{"content":"At Streem we are on a mission to make the world\u0026rsquo;s expertise more accessible. We create guidance tools to steer the discussion and ensure accurate understanding the first time. One of the guidance tools we are developing for web is a 3d cursor that can be positioned in a remote video. To accomplish this, we need to process a lot of raw pixel data and AR data per frame.\nPositioning remote artifacts in AR involves a lot of computation between animation frames. It involves so much computation that it is simply too much to cover in one article. In this post, I will discuss how we used Rust to access raw pixel data from a video frame.\nIf you would rather jump straight to the code, then hop over here and give this repo a ⭐\nWhat is Web Assembly? WebAssembly (wasm) is a type of code that can be run in web browsers and mobile devices. Wasm was designed to be a compilation target for low-level languages like C, C++, and Rust. With wasm, web browsers and mobile devices can now run code written in multiple languages at near-native speeds by taking advantage of common hardware capabilities.\nWasm was introduced to all modern web browsers to help extend the capabilities of JavaScript. Since JavaScript has complete control over how WebAssembly code is downloaded, compiled and run, JavaScript developers can think of wasm as a feature for efficiently creating high-performance functions.\nIn this demo, we used WebAssembly to extract raw pixel data from a remote video feed. This guide will cover high level details about web assembly. 
It will not cover setting up a web assembly project. There are tools and tutorials to help you get started with your next web assembly project. If you are completely new to Rust, then you should watch Tensor Programming\u0026rsquo;s Intro to Rust playlist.\nHow do I process pixels from a remote video feed? To process raw pixel data for every frame of a video, we used a video track from a MediaStream object, which was then used to create an HtmlVideoElement. The video element can then be used as a source for a canvas to draw an image with. With the image drawn onto a canvas at 60fps, we have access to the raw underlying pixel data with CanvasRenderingContext2D.getImageData().\nBelow is a high level diagram demonstrating how you can put individual video frames onto a canvas element. With the video frame drawn onto a canvas element, you will have access to raw pixel data.\nOnce we knew how to access raw pixel data from a frame, we brought in Rust and wasm. We wanted the interface between JavaScript and Rust to be simple, so we had our RenderingEngine be responsible for two things:\nRegistering target canvases for our processed video frame to render onto Processing every frame from a video feed Registering Target Canvases A target canvas is where our processed video frames would render.\nAfter dynamically loading our wasm, we can invoke add_target_canvas to register a rendering destination for our RenderingEngine:\nconst renderingEngine = new wasm.RenderingEngine(); renderingEngine.add_target_canvas(canvas); The RenderingEngine is a struct which contains three private fields:\ncanvas the buffer canvas to parse LightShow data on render_targets A vector of canvas elements to render the final frames onto cancel A signal to stop rendering frames onto a canvas pub struct RenderingEngine { canvas: Rc\u0026lt;RenderingEngineCanvas\u0026gt;, render_targets: Rc\u0026lt;RefCell\u0026lt;Vec\u0026lt;RenderingEngineCanvas\u0026gt;\u0026gt;\u0026gt;, cancel: 
Rc\u0026lt;RefCell\u0026lt;bool\u0026gt;\u0026gt;, } Each of these fields is wrapped in Rust\u0026rsquo;s Reference Counter (Rc). Rcs enable shared ownership of data. An Rc is used when we need several references to an immutable value at the same time. Rc pointers are distinct from Rust\u0026rsquo;s usual references in that, while they are allocated on the heap, cloning an Rc pointer does not cause a new heap allocation. Instead, a counter inside the Rc is incremented. We will see how this is used with our animation loop. This is needed because we can\u0026rsquo;t use lifetimes with wasm_bindgen. See this issue.\nInside our Rc is a RefCell, which provides us a way to mutate data when there are immutable references to that data. We will need to add many render_targets and mutate our cancel flag as our application is used at runtime. In a nutshell, a RefCell lets you get \u0026amp;mut references to your contents. When we use Rc\u0026lt;RefCell\u0026lt;T\u0026gt;\u0026gt;, we are saying we have shared, mutable ownership of data in our application.\nIn Rust, add_target_canvas is a public method exposed with wasm_bindgen. It\u0026rsquo;s important to note this method uses \u0026amp;mut self. 
This reference type allows you to modify self without taking ownership of it.\n#[derive(Debug)] struct RenderingEngineCanvas { element: HtmlCanvasElement, context_2d: CanvasRenderingContext2d, } #[wasm_bindgen] #[derive(Debug)] pub struct RenderingEngine { canvas: Rc\u0026lt;RenderingEngineCanvas\u0026gt;, render_targets: Rc\u0026lt;RefCell\u0026lt;Vec\u0026lt;RenderingEngineCanvas\u0026gt;\u0026gt;\u0026gt;, cancel: Rc\u0026lt;RefCell\u0026lt;bool\u0026gt;\u0026gt;, } #[wasm_bindgen] impl RenderingEngine { #[wasm_bindgen(constructor)] pub fn new() -\u0026gt; RenderingEngine { let canvas = Rc::new(RenderingEngine::create_buffer_canvas()); let render_targets = Rc::new(RefCell::new(Vec::new())); let cancel = Rc::new(RefCell::new(false)); RenderingEngine { canvas, render_targets, cancel, } } #[wasm_bindgen(method)] pub fn add_target_canvas(\u0026amp;mut self, canvas: HtmlCanvasElement) { // Obtain 2D context from canvas let context = canvas .get_context(\u0026#34;2d\u0026#34;) .unwrap() .unwrap() .dyn_into::\u0026lt;CanvasRenderingContext2d\u0026gt;() .expect(\u0026#34;failed to obtain 2d rendering context for target \u0026lt;canvas\u0026gt;\u0026#34;); // Create a struct let container = RenderingEngineCanvas { element: canvas, context_2d: context, }; // Update instance of rendering engine let mut render_targets = self.render_targets.borrow_mut(); render_targets.push(container); } } Processing every frame from a video feed Processing every frame from a video feed is more involved. I will omit a lot of finer details; however, you can explore the GitHub repo for a complete code example.\nFrom JavaScript, we can invoke our animation loop with a start method. 
Its only argument is a MediaStream object, which is obtained by requesting the user\u0026rsquo;s media:\nconst renderingEngine = new wasm.RenderingEngine(); renderingEngine.add_target_canvas(canvas); const userMedia = await navigator.mediaDevices.getUserMedia(someConstraints); renderingEngine.start(userMedia); In Rust, we create an HTMLVideoElement and start our animation loop. With start_animation_loop, we clone the values we will be using in our animation loop.\nvideo is needed so we can obtain its dimensions and frames. canvas is our buffer canvas so we can process our pixel data cancel is a signal we can use to trigger a stop to our animation loop render_targets are all the target canvases on the JS side that we need to render our final image onto. There are also two new constants, f and g. We want to call requestAnimationFrame every frame until our video ends. After the video source ends we want all our resources cleaned up. We will use f to store the closure we want to execute on each frame, and g to kick it off for us.\nThe closure we create is stored on g for the first frame. 
We call borrow_mut to get a mutable reference to the value inside RefCell::new(None).\nWe learned a lot about this from this PR at rustwasm and how to capture an environment within an anonymous function.\n#[wasm_bindgen(method)] pub fn start(\u0026amp;self, media_stream: \u0026amp;MediaStream) { let video = RenderingEngine::create_video_element(media_stream); self.start_animation_loop(\u0026amp;video); } fn start_animation_loop(\u0026amp;self, video: \u0026amp;Rc\u0026lt;HtmlVideoElement\u0026gt;) { let video = video.clone(); let canvas = self.canvas.clone(); let cancel = self.cancel.clone(); let render_targets = self.render_targets.clone(); let f = Rc::new(RefCell::new(None)); let g = f.clone(); *g.borrow_mut() = Some(Closure::wrap(Box::new(move || { // clean up f when cancel is set to true if *cancel.borrow() == true { let _ = f.borrow_mut().take(); return; } // continuously animate with the value of f. RenderingEngine::request_animation_frame(f.borrow().as_ref().unwrap()); }) as Box\u0026lt;dyn FnMut()\u0026gt;)); // start the animation loop here for 1 frame, drop g. RenderingEngine::request_animation_frame(g.borrow().as_ref().unwrap()); } // Note this method call, which uses `as_ref()` to get a `JsValue` // from our `Closure` which is then converted to a `\u0026amp;Function` // using the `JsCast::unchecked_ref` function. fn request_animation_frame(n: \u0026amp;Closure\u0026lt;dyn FnMut()\u0026gt;) { RenderingEngine::get_window() .request_animation_frame(n.as_ref().unchecked_ref()) .expect(\u0026#34;should register `requestAnimationFrame` OK\u0026#34;); } With a function wrapped in a Closure for JavaScript to execute, we can process our video frames\u0026rsquo; pixel data. 
I have simplified the code example below; however, you can find the original code here.\n// inside our animation loop // obtain video dimensions let video_dimensions = Dimensions { width: video.video_width() as f64, height: video.video_height() as f64, }; // draw frame onto buffer canvas // perform any pixel manipulation you need on this canvas canvas.element.set_width(video_dimensions.width as u32); canvas.element.set_height(video_dimensions.height as u32); canvas.context_2d.draw_image_with_html_video_element(\u0026amp;video, 0.0, 0.0).expect(\u0026#34;failed to draw video frame to \u0026lt;canvas\u0026gt; element\u0026#34;); // render resulting image onto target canvas for target in render_targets.borrow().iter() { // Use scrollWidth/scrollHeight so we fill the canvas element. let target_dimensions = Dimensions { width: target.element.scroll_width() as f64, height: target.element.scroll_height() as f64, }; let scaled_dimensions = RenderingEngine::get_scaled_video_size( \u0026amp;video_dimensions, \u0026amp;target_dimensions, ); let offset = Dimensions { width: (target_dimensions.width - scaled_dimensions.width) / 2.0, height: (target_dimensions.height - scaled_dimensions.height) / 2.0, }; // Ensure the target canvas has a set width/height, otherwise rendering breaks. 
target.element.set_width(target_dimensions.width as u32); target.element.set_height(target_dimensions.height as u32); target.context_2d.draw_image_with_html_canvas_element_and_dw_and_dh( \u0026amp;canvas.element, offset.width, offset.height, scaled_dimensions.width, scaled_dimensions.height, ).expect(\u0026#34;failed to draw buffer \u0026lt;canvas\u0026gt; to target \u0026lt;canvas\u0026gt;\u0026#34;); } If you liked this example and want to learn more about Rust, WebAssembly, and TypeScript, then find me on mastodon","href":"https://fallenstedt.com/blog/processing-pixels/","kind":"page","lang":"en","lastmod":"2021-05-10T12:36:41-08:00","objectID":"ea1de9a16fec2dd502c2d09e403b7785","publishDate":"2021-05-10T12:36:41-08:00","section":"blog","tags":[],"title":"Using Rust and WebAssembly to Process Pixels from a Video Feed","type":"blog"}]