<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>BRIAN R. ISETT</title>
	<atom:link href="https://brianisett.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://brianisett.com</link>
	<description>Scientist &#38; Writer</description>
	<lastBuildDate>Wed, 13 Mar 2024 11:06:14 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.5</generator>
	<item>
		<title>ML Series Part I: DeepLabCut for machine vision and scaling production on a GPU cluster</title>
		<link>https://brianisett.com/2021/04/29/ml-series-part-1-using-machine-vision-deeplabcut-to-extract-data-and-scale-production-on-a-gpu-cluster/</link>
					<comments>https://brianisett.com/2021/04/29/ml-series-part-1-using-machine-vision-deeplabcut-to-extract-data-and-scale-production-on-a-gpu-cluster/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Thu, 29 Apr 2021 21:14:19 +0000</pubDate>
				<category><![CDATA[Data Science]]></category>
		<category><![CDATA[Neuroscience]]></category>
		<category><![CDATA[Science]]></category>
		<category><![CDATA[behavior]]></category>
		<category><![CDATA[classification]]></category>
		<category><![CDATA[DeepLabCut]]></category>
		<category><![CDATA[Machine vision]]></category>
		<category><![CDATA[ML]]></category>
		<category><![CDATA[supervised]]></category>
		<guid isPermaLink="false">http://brianisett.com/?p=420</guid>

					<description><![CDATA[This post has a protocol for putting DeepLabCut into GPU server production on GitHub. 1. Using DeepLabCut to perform markerless body tracking in mice DeepLabCut is an open-source machine vision package developed in Professor Mackenzie Mathis&#8217;s lab at EPFL and Harvard that harnesses two elements of machine vision in one tool. First, it uses convolutional neural networks (CNNs) to learn how to identify different parts of an object using image patches drawn from a single video frame. Next, it uses that identification to find all matching parts in a new video frame and extract their 2D coordinates. So for example, after manually marking a few dozen frames with coordinates for a mouse&#8217;s nose, tail tip, left and right hind paws, and so on, DeepLabCut will quickly begin to identify the correct location of these parts in any new video run through the algorithm (tens of thousands of frames). This becomes incredibly powerful in basic neuroscience research, where tracked animal movements and behaviors can be linked to activity in the nervous system and can help better define elements of disease. I am a researcher studying the neural circuit basis of Parkinsonism in mouse models of the disease, so accurately tracking mouse motor behavior is critical. 2. Training a good tracking model on a local computer with GPU To train the model, you first need some representative example data. This requires thinking about all the major sources of variability you might have: different backgrounds? Different lighting? Different objects that might obscure the image? Different subjects? In my case, I had all of these! In addition, I had video from cameras at two different resolutions, in both color and B&#38;W. So I made sure to include example frames of each of these conditions to start with. The experience of labeling frames is a bit mind-numbing but can be fun. 
(Those immediately interested can see what the experience is like by labeling some animal datasets hosted online). DLC comes with some good Jupyter Notebook examples (e.g. Demo_labeledexample_Openfield.ipynb) that show how to extract example frames from one or more videos, and then use the included GUI to label the mouse body parts in each frame. The process of hand labeling forces you to appreciate the possible difficulties the algorithm might also encounter in the data. In my case, we have a lot of data collected on low-resolution cameras, and it could be surprisingly difficult to identify all of the mouse body parts accurately using the same criteria for each frame when the mouse was in the far corner, facing away. Higher-resolution images were much easier to mark correctly, especially if the mouse was close to the camera and not facing away. DeepLabCut is designed for this process to be iterative: you select some frames, train the model, check for frames likely to have mistakes (those that deviate greatly from nearby frames), put markers on the mistaken frames, train again, and so on. The process is very similar for adding brand-new conditions to an existing model, or tracking new body parts that were not previously labeled. The strength of the iterations is the flexibility they add. Side note on adding body parts: make sure that the column order doesn&#8217;t change in your training set! That happened to me when I added a new body part to the model and it took a bit of time to realize this was the issue. The downside of the iterations is that you might need quite a few to get something that works reasonably well across really diverse conditions. I ended up needing about 10 iterations, adding around 50 frames each time, to get usable data (495 frames total). But some of those iterations were also to add new body parts or new environments. Overall the tracking got pretty consistent. 
It&#8217;s not perfect (it still needs some pre-processing, covered in Part 2 of this series), but this was a good enough start for my purposes, and it took less than a day of my time to get the tracking to this point. In this case, each iteration trained for about 1 million cycles overnight on a single GeForce RTX 2070 GPU. 3. Scaling processing to a GPU cluster for production Now that I had a reasonable model for tracking body coordinates, I had a lot of videos to process. Processing these data on a single GPU takes close to real time, so a 30-minute video takes approximately 30 minutes to process. I have 1,000+ hours of video from more than 1,000 experiments, which means one pass of the processing would take 42 days. Instead, I did it in 2 days. Here&#8217;s how! Carnegie Mellon University has a close relationship with the Pittsburgh Supercomputing Center (PSC), and I was able to get a starter grant funded for about 1500 hours of compute time on their GPU nodes. That was just about all I needed to get through my backlog of videos and test out a production workflow for scaling up processing using DeepLabCut. The full protocol is available here: https://github.com/KidElectric/dlc_protocol but in brief: 1. I realized that an easy approach would be to put DeepLabCut into a container. In essence, a container is like an entire computer in software form: all the files are in place and all it needs is a place to run. In this case, many places to run (in parallel). The PSC GPU nodes were set up to use Singularity images, which are similar to Docker containers. So I found an existing Docker image, made a few changes, and found the method for importing it to the supercomputer as a Singularity image. 2. I wrote a protocol for uploading all of my videos to the supercomputer using ssh, and uploaded the trained DLC model as well. 3. Three scripts were necessary to make the process work in parallel. 
The trick was to use the SLURM job array call on the supercomputer&#8211;this method automatically sends your Singularity image out to as many nodes as become available and starts processing new data. So for that I needed: 1) a SLURM job script to initialize the Singularity images and pass the job ID, 2) a Linux .sh script to start Python in the image and pass the job ID along, and 3) a Python script to run inside the Singularity image to translate the SLURM job ID into a video file to analyze and send it to DeepLabCut for processing. It also became useful to write code that checked that all videos were processed correctly. 4. Conclusion This method allowed me to analyze MANY videos simultaneously. Nodes become available whenever other jobs finish; at certain points I remember seeing a dozen jobs running simultaneously, and sometimes many dozens! Altogether it took about 2 days to analyze the 1,000-hour backlog. That&#8217;s roughly a 20x improvement over running on a single-GPU computer. Would it be possible to analyze 1000 hours of video on a single GPU PC? Certainly. However, waiting 42 days is a big risk to take when you might only then find out that something didn&#8217;t work correctly. And it becomes worse still if you realize the model needs to improve under a new condition. Lastly, if a lab ever wanted to scale up towards a full pipeline, it would only take a few more lines of code: check the lab server for new videos, send them to the supercomputer! Now that I&#8217;ve typed that out, maybe I&#8217;ll do that this afternoon ;-). Next up, now that we have some tracking coordinates, what do we do with them? Part 2: Pre-processing the data. Part 3: Classifying the data. Part 4: Verification and final tweaks!]]></description>
										<content:encoded><![CDATA[
<div class="wp-block-cover has-background-dim" style="min-height:160px;aspect-ratio:unset;"><video class="wp-block-cover__video-background intrinsic-ignore" autoplay muted loop playsinline src="http://brianisett.com/wp-content/uploads/2020/07/rear_detect_rear_short.mp4" data-object-fit="cover"></video><div class="wp-block-cover__inner-container is-layout-flow wp-block-cover-is-layout-flow">
<p class="has-text-align-center">Extracting mouse behavior at scale</p>
</div></div>



<p>This post has a protocol for putting DeepLabCut into GPU server production on <a href="https://github.com/KidElectric/dlc_protocol">GitHub</a>.</p>



<h2 class="wp-block-heading">1. Using DeepLabCut to perform markerless body tracking in mice</h2>



<p id="dlc_explained"><a href="http://www.mackenziemathislab.org/deeplabcut">DeepLabCut</a> is an open-source machine vision package developed in Professor Mackenzie Mathis&#8217;s lab at EPFL and Harvard that harnesses two elements of machine vision in one tool. First, it uses convolutional neural networks (CNNs) to learn how to identify different parts of an object using image patches drawn from a single video frame. Next, it uses that identification to find all matching parts in a new video frame and extract their 2D coordinates. So for example, after manually marking a few dozen frames with coordinates for a mouse&#8217;s nose, tail tip, left and right hind paws, and so on, DeepLabCut will quickly begin to identify the correct location of these parts in any new video run through the algorithm (tens of thousands of frames). This becomes incredibly powerful in basic neuroscience research, where tracked animal movements and behaviors can be linked to activity in the nervous system and can help better define elements of disease. I am a researcher studying the neural circuit basis of Parkinsonism in mouse models of the disease, so accurately tracking mouse motor behavior is critical.<br></p>



<h2 class="wp-block-heading">2. Training a good tracking model on a local computer with GPU</h2>



<p>To train the model, you first need some representative example data. This requires thinking about all the major sources of variability you might have: different backgrounds? Different lighting? Different objects that might obscure the image? Different subjects? In my case, I had all of these! In addition, I had video from cameras at two different resolutions, in both color and B&amp;W. So I made sure to include example frames of each of these conditions to start with.</p>



<p>The experience of labeling frames is a bit mind-numbing but can be fun. (Those immediately interested can see what the experience is like by labeling some animal datasets <a href="https://contrib.deeplabcut.org/">hosted online</a>). DLC comes with some good Jupyter Notebook examples (e.g. Demo_labeledexample_Openfield.ipynb) that show how to extract example frames from one or more videos, and then use the included GUI to label the mouse body parts in each frame.</p>
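<p>In outline, that notebook workflow maps onto a handful of DeepLabCut calls. Here is a minimal sketch, not the exact notebook code: the project name, experimenter, and video paths are placeholders, and it assumes the standard DeepLabCut 2.x Python API.</p>

```python
def build_training_set(video_paths):
    """Sketch of the standard DeepLabCut labeling workflow.

    Project name, experimenter, and paths are placeholders; requires
    DeepLabCut to be installed, so the import lives inside the function.
    """
    import deeplabcut

    # Create a project; returns the path to its config.yaml
    config = deeplabcut.create_new_project(
        "mouse-tracking", "experimenter", video_paths, copy_videos=False
    )
    # Pull representative frames (k-means favors visually diverse ones)
    deeplabcut.extract_frames(config, mode="automatic", algo="kmeans")
    # Opens the labeling GUI to mark nose, paws, tail base, etc.
    deeplabcut.label_frames(config)
    # Package the labeled frames into a training dataset, then train
    deeplabcut.create_training_dataset(config)
    deeplabcut.train_network(config)
    return config
```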



<p>The process of hand labeling forces you to appreciate the possible difficulties the algorithm might also encounter in the data. In my case, we have a lot of data collected on low-resolution cameras, and it could be surprisingly difficult to identify all of the mouse body parts accurately using the same criteria for each frame when the mouse was in the far corner, facing away.</p>



<div class="wp-block-image"><figure class="aligncenter size-large is-resized"><img fetchpriority="high" decoding="async" src="http://brianisett.com/wp-content/uploads/2021/04/img057173_cropped.png" alt="" class="wp-image-476" width="490" height="480"/><figcaption>Is this even a mouse? A difficult frame to classify all body parts because the video is low resolution, the mouse is facing away, far from the camera, and part of body is obscured by a lever device.</figcaption></figure></div>



<p>Higher-resolution images were much easier to mark correctly, especially if the mouse was close to the camera and not facing away.</p>



<div class="wp-block-image"><figure class="aligncenter size-large is-resized"><img decoding="async" src="http://brianisett.com/wp-content/uploads/2021/04/img134829_cropped.png" alt="" class="wp-image-475" width="485" height="442"/><figcaption>Can mark all body parts clearly here.</figcaption></figure></div>



<p>DeepLabCut is designed for this process to be iterative: you select some frames, train the model, check for frames likely to have mistakes (those that deviate greatly from nearby frames), put markers on the mistaken frames, train again, and so on. The process is very similar for adding brand-new conditions to an existing model, or tracking new body parts that were not previously labeled. The strength of the iterations is the flexibility they add. Side note on adding body parts: make sure that the column order doesn&#8217;t change in your training set! That happened to me when I added a new body part to the model and it took a bit of time to realize this was the issue.</p>
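<p>Each refinement pass follows the same loop. Roughly, and again as a hedged sketch assuming DeepLabCut&#8217;s standard outlier-refinement functions:</p>

```python
def refine_once(config, videos):
    """One pass of the label-train-refine loop (sketch).

    `config` is the project's config.yaml path; DeepLabCut must be
    installed, so the import lives inside the function.
    """
    import deeplabcut

    # Flag frames where the predicted coordinates deviate from neighbors
    deeplabcut.extract_outlier_frames(config, videos)
    # Opens the GUI to correct the machine-generated labels
    deeplabcut.refine_labels(config)
    # Fold the corrected frames back into the dataset and retrain
    deeplabcut.merge_datasets(config)
    deeplabcut.create_training_dataset(config)
    deeplabcut.train_network(config)
```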



<p>The downside of the iterations is that you might need quite a few to get something that works reasonably well across really diverse conditions. I ended up needing about 10 iterations, adding around 50 frames each time, to get usable data (495 frames total). But some of those iterations were also to add new body parts or new environments.</p>



<figure class="wp-block-video"><video controls src="http://brianisett.com/wp-content/uploads/2021/04/iter10_example.mp4"></video><figcaption>Top down and side camera tracking. Short video clip with coordinate tracking of nose tip, eye locations, paws and tail base.</figcaption></figure>



<p>Overall the tracking got pretty consistent. It&#8217;s not perfect (it still needs some pre-processing, covered in Part 2 of this series), but this was a good enough start for my purposes, and it took less than a day of my time to get the tracking to this point. In this case, each iteration trained for about 1 million cycles overnight on a single GeForce RTX 2070 GPU.</p>



<h2 class="wp-block-heading">3. Scaling processing to a GPU cluster for production</h2>



<p>Now that I had a reasonable model for tracking body coordinates, I had a lot of videos to process. Processing these data on a single GPU takes close to real time, so a 30-minute video takes approximately 30 minutes to process. I have 1,000+ hours of video from more than 1,000 experiments, which means one pass of the processing would take 42 days. Instead, I did it in 2 days. Here&#8217;s how!</p>
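<p>The back-of-the-envelope math behind those numbers, as a quick sanity check:</p>

```python
hours_of_video = 1000    # size of the backlog
processing_rate = 1.0    # hours of compute per hour of video (~real time)

# Continuous single-GPU processing, 24 hours a day
days_single_gpu = hours_of_video * processing_rate / 24
print(f"{days_single_gpu:.1f} days on a single GPU")  # 41.7 days
```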



<p>Carnegie Mellon University has a close relationship with the Pittsburgh Supercomputing Center (PSC), and I was able to get a starter grant funded for about 1500 hours of compute time on their GPU nodes. That was just about all I needed to get through my backlog of videos and test out a production workflow for scaling up processing using DeepLabCut.</p>



<p>The full protocol is available here: <a href="https://github.com/KidElectric/dlc_protocol">https://github.com/KidElectric/dlc_protocol</a>, but in brief:</p>



<ol class="wp-block-list"><li>I realized that an easy approach would be to put DeepLabCut into a container. In essence, a container is like an entire computer in software form: all the files are in place and all it needs is a place to run. In this case, many places to run (in parallel). The PSC GPU nodes were set up to use Singularity images, which are similar to Docker containers. So I found an existing Docker image, made a few changes, and found the method for importing it to the supercomputer as a Singularity image.</li></ol>



<pre class="wp-block-code"><code>module load singularity
singularity build dlc_217.simg docker://kidelectric/deeplabcut:ver2_1_7
#Then, for example, you can run a Linux shell from inside that image:
singularity shell dlc_217.simg</code></pre>



<p>2. I wrote a protocol for uploading all of my videos to the supercomputer using ssh, and uploaded the trained DLC model as well.</p>



<p>3. Three scripts were necessary to make the process work in parallel. The trick was to use the SLURM job array call on the supercomputer&#8211;this method automatically sends your Singularity image out to as many nodes as become available and starts processing new data. So for that I needed: 1) a SLURM job script to initialize the Singularity images and pass the job ID, 2) a Linux .sh script to start Python in the image and pass the job ID along, and 3) a Python script to run inside the Singularity image to translate the SLURM job ID into a video file to analyze and send it to DeepLabCut for processing. It also became useful to write code that checked that all videos were processed correctly.</p>
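<p>The heart of that third script is just translating the array index into exactly one video, deterministically. A minimal sketch of the Python side (the directory and config paths are placeholders, and the final call assumes DeepLabCut&#8217;s standard analyze_videos API):</p>

```python
import sys
from pathlib import Path

def video_for_task(task_id, video_dir):
    """Map a SLURM array index to exactly one video file.

    Sorting makes the index-to-video mapping identical on every node.
    """
    videos = sorted(Path(video_dir).glob("*.mp4"))
    if task_id >= len(videos):
        sys.exit(f"task {task_id}: only {len(videos)} videos found")
    return videos[task_id]

def analyze_one(task_id_str):
    # Runs on a compute node after e.g. `sbatch --array=0-999 dlc.job`
    # hands each task its index in $SLURM_ARRAY_TASK_ID.
    import deeplabcut  # only present inside the Singularity image

    video = video_for_task(int(task_id_str), "/path/to/videos")  # placeholder
    deeplabcut.analyze_videos("/path/to/config.yaml", [str(video)])  # placeholder
```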



<h2 class="wp-block-heading">4. Conclusion</h2>



<p>This method allowed me to analyze MANY videos simultaneously. Nodes become available whenever other jobs finish; at certain points I remember seeing a dozen jobs running simultaneously, and sometimes many dozens! Altogether it took about 2 days to analyze the 1,000-hour backlog. That&#8217;s roughly a 20x improvement over running on a single-GPU computer.</p>



<p>Would it be possible to analyze 1000 hours of video on a single GPU PC? Certainly. However, waiting 42 days is a big risk to take when you might only then find out that something didn&#8217;t work correctly. And it becomes worse still if you realize the model needs to improve under a new condition. Lastly, if a lab ever wanted to scale up towards a full pipeline, it would only take a few more lines of code: check the lab server for new videos, send them to the supercomputer! Now that I&#8217;ve typed that out, maybe I&#8217;ll do that this afternoon ;-).</p>
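<p>For what it&#8217;s worth, that &#8220;check for new videos&#8221; step really is only a few lines. A sketch, where the file layout, log format, and scp destination are all assumptions rather than the actual lab setup:</p>

```python
import subprocess
from pathlib import Path

def find_new_videos(video_dir, sent_log):
    """Return videos not yet shipped to the cluster.

    `sent_log` is a plain-text file with one already-sent filename per line.
    """
    log = Path(sent_log)
    sent = set(log.read_text().split()) if log.exists() else set()
    return [v for v in sorted(Path(video_dir).glob("*.mp4"))
            if v.name not in sent]

def ship_to_cluster(videos, remote="user@cluster:/scratch/videos/"):
    # `remote` is a placeholder destination for the supercomputer
    for video in videos:
        subprocess.run(["scp", str(video), remote], check=True)
```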



<p>Next up, now that we have some tracking coordinates, what do we do with them? Part 2: Pre-processing the data. Part 3: Classifying the data. Part 4: Verification and final tweaks!</p>



]]></content:encoded>
					
					<wfw:commentRss>https://brianisett.com/2021/04/29/ml-series-part-1-using-machine-vision-deeplabcut-to-extract-data-and-scale-production-on-a-gpu-cluster/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure url="http://brianisett.com/wp-content/uploads/2020/07/rear_detect_rear_short.mp4" length="473730" type="video/mp4" />
<enclosure url="http://brianisett.com/wp-content/uploads/2021/04/iter10_example.mp4" length="2302987" type="video/mp4" />

			</item>
		<item>
		<title>DIY Q&#038;A: Grid Poems Vol. 1</title>
		<link>https://brianisett.com/2017/08/25/grid-poems-vol-1-qa/</link>
					<comments>https://brianisett.com/2017/08/25/grid-poems-vol-1-qa/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Fri, 25 Aug 2017 19:11:37 +0000</pubDate>
				<category><![CDATA[News]]></category>
		<category><![CDATA[diy]]></category>
		<category><![CDATA[grid poems]]></category>
		<category><![CDATA[poetry]]></category>
		<category><![CDATA[Writing]]></category>
		<guid isPermaLink="false">http://brianisett.com/?p=376</guid>

					<description><![CDATA[Designer / artist John Soat (co-founder of Point in Passing) and I (Brian Isett) recently completed a collection of 45 illustrated poems: Grid Poems Vol. I. Everything from the paper type, cover cloth, and ink selection to the design and content of the book was entirely up to us. Collaborating on a DIY project at this scale required a lot of decision-making and new learning that we wanted to share. So we caught up with ourselves with a little self-reflection-style Q &#38; A. BE = Body Electric, BI = Brian Isett, JS = John Soat. BE: How did this project start? BI: In 2013, I was taking a neural networks class for my PhD and encountered the phenomenon of &#8220;multistable perception,&#8221; which I found to be incredibly cool. Some information in the world is so convincing, and yet so ambiguous, that we will actually notice our perception spontaneously switching between two very different interpretations [see example below]. There is something &#8220;true&#8221; constrained by the alternating perceptions, but that truth is also inaccessible. This idea really resonated with me. I was in a writing group at the time and I started writing poems in the shape of a 3 x 3 grid of lines that could be read both left-to-right and top-to-bottom. Thus, given sufficient grammatical ambiguity, the same text could pivot to allow two perspectives to emerge from the same words. BE: Basically &#8220;The dress&#8221; of gold vs. blue fame. BI: Exactly. It&#8217;s obviously blue [laughs]. And also obviously gold. Does the cylinder turn clockwise or counterclockwise? Ambiguity allows your perception to spontaneously switch how the movements are interpreted. BE: What did this form allow you to do, poetically? BI: All experience is open to multiple interpretations, but the grid poems allowed me to experiment with that property explicitly. It allowed me to express and examine the repercussions of interlocking, pivoting subjectivity. 
For example, these pivots broaden the range of possible identities and experiences each of us can have. But at the same time, the same pivots expose us to miscommunication, disappointment and disillusionment. This trade-off became a common theme throughout the book. BE: When did you two first decide to collaborate on this project? JS: Brian and I had collaborated years ago on a hybrid poetry and design blog called Selective Synthesis. As most blogs do, it fizzled out in time, but we had a blast doing it and always knew that we shared a collaborative energy and complementary skill set. Last July, we both attended a wedding in the English countryside and commiserated about how we needed to find a path for making work together again. Brian shared that he had been working on a series called “Grid Poems” for quite some time. The moment I heard his concept for this form of poetry, I was captivated. As a designer and Josef Müller-Brockmann proselytizer, the grid is sacred. I was immediately inspired by the potential of how this could work in tandem with a visual system. BI: All true! I sometimes forget about Selective Synthesis but that was definitely the first design/poetry collaboration we had. I had been working on these grid poems for a while and I knew that the form would really benefit from design considerations that were beyond what I could do myself. John was a natural choice to approach. BE: You have very different expertise and live in different states&#8211;was it difficult to collaborate? JS: Working from NY and CA actually proved to be much easier than I expected. When the only thing you are sharing is text docs and pdfs, it’s easy to review and iterate. Throughout the process we provided constant critique, but ultimately because we are working in different mediums, we allowed the other to express themselves how they saw fit. Lengthy emails became second nature to the process. 
I’m pretty sure I’ve emailed Brian more in the last year than anyone, ever. BI: Like John said, it proved to be pretty easy. One thing I worried about was whether John would have time to dive deep into the poetry. Not only did he do that, he gave in-depth feedback on every single poem that I found incredibly useful. It ended up being much more collaborative and productive than I expected, to great effect. BE: Why did you decide to design and publish this book yourselves? BI: We wanted complete creative control over the project in order to bring it into the world exactly how we imagined. I think the design of the book was a huge part. Typical publishers don’t like taking risks and use in-house designers. At the same time, there is a lot of marketing and distribution overhead in any successful project&#8211;a publisher is a huge asset for these parts of the equation. JS: Immediacy. We honestly didn’t even seek out a publisher. From the beginning, we had a pretty clear vision for how we wanted the book to be executed, and did not want to wait on anyone else for approval. BI: We’ve discussed approaching a publisher if the project seems to warrant a second edition. BE: Had you ever designed a book before? JS: Never. Quite a bit of trial and error through the process. Constantly printing out versions on the home inkjet. However, I’m incredibly inspired to do it again. BI: I actually designed one other book previously: a self-help book that my dad wanted to put out there (“Think Right, Feel Right”). That book was a success, so I think that experience showed me the DIY approach could work. It was also good because technical aspects like a book&#8217;s &#8220;gutter margin&#8221; were already on my radar. This central margin of the book changes size depending on the type of paper you use and how many pages the book requires, making it a common source of DIY error. 
BE: What were some book design considerations you had to make that weren&#8217;t on your radar before this project? JS: Initially, the printing on internal pages was multi-colored, but because of cost restrictions, we limited the book to a single-color black. Ultimately, I think this limitation provided a stronger structure and viewpoint for the entire piece. This distilled everything down to be judged on the purity of its form. BI: The cover cloth and foil-ink selection were new to me. [The ‘foil’ is the title and design stamped into the cloth of a hardcover book.] John created a ton of cover mock-ups but then it turned out we were pretty constrained to using a subset of cloths that could hold the right amount of ink. We also couldn’t justify the cost of a full mock-up book being made so we didn’t know for sure what it would look like until they shipped us all 150. But I was so happy with how it turned out! We relied heavily on John’s design intuition and Conveyor Edition’s patience and expertise. BE: John, how did you use the poetic form and content to inspire your visual work in the book? JS: The gridded form of the poetry determined everything. This defined the book’s grid, as well as the exact structure each illustration needed to live on—becoming probably the most painstaking and annoying set of restrictions I’ve ever imposed on myself during a design project. Early in the process, I was “editor” for the individual poems. This allowed me to really dig into the content and the many layers each poem possesses. With that insight, I then began illustrating a complementary visual component. Having the poem side-by-side with the visual allows the context of the poems to drive much of the perception. Each illustration became a meditation on restraint and creating the most reductive, but expressive element possible. 
This was not an attempt at extreme minimalism, but rather, reducing a concept to a few simple shapes allows for an array of interpretations, much in the same way the poetry does. BE: Will there be a Grid Poems Vol. II? JS: I really hope so! The intent going into the project was to build a system which could evolve and grow. At this point, we are looking to distribute and publish Grid Poems Vol. I on a larger scale. If this volume is well received, I feel it would definitely inspire a desire to continue making these books. We are infinitely better prepared now to create a second volume. BI: Such an ambitious title, right? If we do, I could imagine it being a complete re-imagination: using a new set of constraints, or switching roles so that I write poetry in response to John’s work… I think there are a lot of cool possibilities. Like John said, if this project is successful, I think we would both be excited to brainstorm what a second project would look like. BE: Thanks for reading about the project and thanks to John for making this Q &#38; A with me! -BE.]]></description>
										<content:encoded><![CDATA[<p>Designer / artist <a href="http://work.jsoat.us/">John Soat</a> (co-founder of <a href="http://www.pointinpassing.com/">Point in Passing</a>) and I (Brian Isett) recently completed a collection of 45 illustrated poems: <a href="https://a.co/d/45pTeGl">Grid Poems Vol. I</a>. Everything from the paper type, cover cloth, and ink selection to the design and content of the book was entirely up to us. Collaborating on a DIY project at this scale required a lot of decision-making and new learning that we wanted to share. So we caught up with ourselves with a little self-reflection-style Q &amp; A. BE = Body Electric, BI = Brian Isett, JS = John Soat.</p>
<p><b><i><em><strong>BE: </strong></em>How did this project start?</i></b></p>
<p><span style="font-weight: 400;"><strong>BI:</strong> In 2013, I was taking a neural networks class for my PhD and encountered the phenomenon of &#8220;multistable perception,&#8221; which I found to be incredibly cool. Some information in the world is so convincing, and yet so ambiguous, that we will actually notice our perception spontaneously switching between two very different interpretations [see example below]. There is something &#8220;true&#8221; constrained by the alternating perceptions, but that truth is also inaccessible. This idea really resonated with me. I was in a writing group at the time and I started writing poems in the shape of a 3 x 3 grid of lines that could be read both left-to-right and top-to-bottom. Thus, given sufficient grammatical ambiguity</span><span style="font-weight: 400;">, the same text could pivot to allow two perspectives to emerge from the same words.</span></p>
<p><em><strong>BE: Basically &#8220;<a href="https://en.wikipedia.org/wiki/The_dress">The dress</a>&#8221; of gold vs. blue fame.</strong></em></p>
<p><strong>BI: </strong>Exactly. It&#8217;s obviously blue [laughs]. And also obviously gold.</p>
<p><figure class="wp-block-embed wp-block-embed-youtube is-type-video is-provider-youtube epyt-figure"><div class="wp-block-embed__wrapper"><iframe  id="_ytid_65965"  width="960" height="720"  data-origwidth="960" data-origheight="720" src="https://www.youtube.com/embed/DLBkwig3M2U?enablejsapi=1&autoplay=1&cc_load_policy=0&cc_lang_pref=&iv_load_policy=1&loop=0&modestbranding=0&rel=0&fs=1&playsinline=0&autohide=2&theme=dark&color=red&controls=1&" class="__youtube_prefs__  no-lazyload" title="YouTube player"  allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen data-no-lazy="1" data-skipgform_ajax_framebjll=""></iframe></div></figure></p>
<h6>Does the cylinder turn clockwise or counterclockwise? Ambiguity allows your perception to spontaneously switch how the movements are interpreted.</h6>
<p><b><i><em><strong>BE: </strong></em>What did this form allow you to do, poetically?</i></b></p>
<p><span style="font-weight: 400;"><strong>BI:</strong> All experience is open to multiple interpretations, but the grid poems allowed me to experiment with that property explicitly. It allowed me to express and examine the repercussions of interlocking, pivoting subjectivity. For example, these pivots broaden the range of possible identities and experiences each of us can have. But at the same time, the same pivots expose us to miscommunication, disappointment and disillusionment.  This trade-off became a common theme throughout the book.</span></p>
<p><a href="http://brianisett.com/wp-content/uploads/2017/08/SPREAD_GridPoems_Still_v_001.jpg"><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-381" src="http://brianisett.com/wp-content/uploads/2017/08/SPREAD_GridPoems_Still_v_001.jpg" alt="" width="1440" height="962" srcset="https://brianisett.com/wp-content/uploads/2017/08/SPREAD_GridPoems_Still_v_001.jpg 1440w, https://brianisett.com/wp-content/uploads/2017/08/SPREAD_GridPoems_Still_v_001-300x200.jpg 300w, https://brianisett.com/wp-content/uploads/2017/08/SPREAD_GridPoems_Still_v_001-768x513.jpg 768w, https://brianisett.com/wp-content/uploads/2017/08/SPREAD_GridPoems_Still_v_001-1024x684.jpg 1024w, https://brianisett.com/wp-content/uploads/2017/08/SPREAD_GridPoems_Still_v_001-120x80.jpg 120w, https://brianisett.com/wp-content/uploads/2017/08/SPREAD_GridPoems_Still_v_001-480x320.jpg 480w" sizes="auto, (max-width: 1440px) 100vw, 1440px" /></a></p>
<p><b><i><em><strong>BE: </strong></em>When did you two first decide to collaborate on this project?</i></b></p>
<p><figure id="attachment_393" aria-describedby="caption-attachment-393" style="width: 305px" class="wp-caption alignright"><a href="http://brianisett.com/wp-content/uploads/2017/08/JMB-TT8.jpg"><img loading="lazy" decoding="async" class="wp-image-393" src="http://brianisett.com/wp-content/uploads/2017/08/JMB-TT8.jpg" alt="" width="305" height="426" srcset="https://brianisett.com/wp-content/uploads/2017/08/JMB-TT8.jpg 756w, https://brianisett.com/wp-content/uploads/2017/08/JMB-TT8-215x300.jpg 215w, https://brianisett.com/wp-content/uploads/2017/08/JMB-TT8-733x1024.jpg 733w" sizes="auto, (max-width: 305px) 100vw, 305px" /></a><figcaption id="caption-attachment-393" class="wp-caption-text">Work by influential designer, Josef Müller-Brockmann</figcaption></figure></p>
<p><span style="font-weight: 400;"><strong>JS</strong>: Brian and I collaborated years ago on a hybrid poetry and design blog called Selective Synthesis. As most blogs do, it fizzled out in time, but we had a blast doing it and always knew that we shared a collaborative energy and complementary skill set. </span></p>
<p><span style="font-weight: 400;">Last July, we both attended a wedding in the English countryside and commiserated about how we needed to find a path for making work together again. Brian shared that he had been working on a series called “Grid Poems” for quite some time. The moment I heard his concept for this form of poetry, I was captivated. As a designer and Josef Müller-Brockmann proselytizer, I hold the grid sacred. I was immediately inspired by the potential of how this could work in tandem with a visual system.</span></p>
<p><span style="font-weight: 400;"><strong>BI</strong>: All true! I sometimes forget about Selective Synthesis but that was definitely the first design/poetry collaboration we had. I had been working on these grid poems for a while and I knew that the form would really benefit from design considerations that were beyond what I could do myself. John was a natural choice of people to approach.</span></p>
<p><b><i><em><strong>BE: You have very different expertise and live in different states&#8211;</strong></em>was it difficult to collaborate?</i></b></p>
<p><span style="font-weight: 400;"><strong>JS: </strong>Working from NY and CA actually proved to be much easier than I expected. When the only thing you are sharing is text docs and pdfs, it’s easy to review and iterate. Throughout the process we provided constant critique, but ultimately because we are working in different mediums, we allowed the other to express themselves how they saw fit. Lengthy emails became second nature to the process. I’m pretty sure I’ve emailed Brian more in the last year than anyone, ever. </span></p>
<p><span style="font-weight: 400;"><strong>BI: </strong>Like John said, it proved to be pretty easy. One thing I worried about was whether John would have time to dive deep into the poetry. Not only did he do that, he gave in-depth feedback on every single poem, which I found incredibly useful. It ended up being much more collaborative and productive than I expected, to great effect. </span></p>
<p><a href="http://brianisett.com/wp-content/uploads/2017/08/SPREAD_GridPoems_Still_v_004.jpg"><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-384" src="http://brianisett.com/wp-content/uploads/2017/08/SPREAD_GridPoems_Still_v_004.jpg" alt="" width="1440" height="962" srcset="https://brianisett.com/wp-content/uploads/2017/08/SPREAD_GridPoems_Still_v_004.jpg 1440w, https://brianisett.com/wp-content/uploads/2017/08/SPREAD_GridPoems_Still_v_004-300x200.jpg 300w, https://brianisett.com/wp-content/uploads/2017/08/SPREAD_GridPoems_Still_v_004-768x513.jpg 768w, https://brianisett.com/wp-content/uploads/2017/08/SPREAD_GridPoems_Still_v_004-1024x684.jpg 1024w, https://brianisett.com/wp-content/uploads/2017/08/SPREAD_GridPoems_Still_v_004-120x80.jpg 120w, https://brianisett.com/wp-content/uploads/2017/08/SPREAD_GridPoems_Still_v_004-480x320.jpg 480w" sizes="auto, (max-width: 1440px) 100vw, 1440px" /></a></p>
<p><b><i><em><strong>BE: </strong></em>Why did you decide to design and publish this book yourselves?</i></b></p>
<p><span style="font-weight: 400;"><strong>BI: </strong>We wanted complete creative control over the project in order to bring it into the world exactly how we imagined. I think the design of the book was a huge part. Typical publishers don’t like taking risks and use in-house designers. At the same time, there is a lot of marketing and distribution overhead in any successful project&#8211;a publisher is a huge asset for these parts of the equation.</span></p>
<p><span style="font-weight: 400;"><strong>JS: </strong>Immediacy. We honestly didn’t even seek out a publisher. From the beginning, we had a pretty clear vision for how we wanted the book to be executed, and did not want to wait on anyone else for approval.</span></p>
<p><span style="font-weight: 400;"><strong>BI: </strong>We’ve discussed approaching a publisher if the project seems to warrant a second edition.</span></p>
<p><b><i><em><strong>BE: </strong></em>Had you ever designed a book before?</i></b></p>
<p><span style="font-weight: 400;"><strong>JS: </strong>Never. Quite a bit of trial and error through the process. Constantly printing out versions on the home inkjet. However, I’m incredibly inspired to do it again.</span></p>
<p><span style="font-weight: 400;"><strong>BI: </strong>I actually designed one other book previously: a self-help book that my dad wanted to put out there (“Think Right, Feel Right”). That book was a success, so I think that experience showed me the DIY approach could work. It was also good because technical aspects like a book&#8217;s &#8220;gutter margin&#8221; were already on my radar. This central margin of the book changes size depending on the type of paper you use and how many pages the book requires, making it a common source of DIY error.</span></p>
<p><b><i>BE: What were some book design considerations you had to make that weren&#8217;t on your radar before this project?</i></b></p>
<p><figure id="attachment_379" aria-describedby="caption-attachment-379" style="width: 165px" class="wp-caption alignright"><a href="http://brianisett.com/wp-content/uploads/2017/08/020517_COVER_OPTIONS_v_002.png"><img loading="lazy" decoding="async" class="wp-image-379" src="http://brianisett.com/wp-content/uploads/2017/08/020517_COVER_OPTIONS_v_002.png" alt="" width="165" height="394" /></a><figcaption id="caption-attachment-379" class="wp-caption-text"><strong>Early cover drafts.</strong></figcaption></figure></p>
<p><strong>JS: </strong>Initially, the printing on internal pages was multi-colored, but because of cost restrictions, we limited <span style="font-size: 1.0625rem;">the book to a single color: black. Ultimately, I think this limitation provided a stronger structure and viewpoint for the entire piece. It distilled everything down to be judged on the purity of its form.</span></p>
<p><span style="font-weight: 400;"><strong>BI: </strong>The cover cloth and foil-ink selection were new to me. [The ‘foil’ is the title and design stamped into the cloth of a hardcover book.] John created a ton of cover mock-ups but then it turned out we were pretty constrained to using a subset of cloths that could hold the right amount of ink. We also couldn’t justify the cost of a full mock-up book being made so we didn’t know for sure what it would look like until they shipped us all 150. But I was so happy with how it turned out! </span><span style="font-weight: 400;">We relied heavily on John’s design intuition and Conveyor Edition’s patience and expertise.</span></p>
<p><b><i>BE: John, how did you use the poetic form and content to inspire your visual work in the book?</i></b></p>
<p><span style="font-weight: 400;"><strong>JS: </strong>The gridded form of the poetry determined everything. This defined the book’s grid, as well as the exact structure each illustration needed to live on—becoming probably the most painstaking and annoying set of restrictions I’ve ever imposed on myself during a design project. </span><span style="font-size: 1.0625rem;">Early in the process, I was “editor” for the individual poems. This allowed me to really dig into the content and the many layers each poem possesses. With that insight, I then began illustrating a complementary visual component. Having the poem side-by-side with the visual allows the context of the poems to drive much of the perception. Each illustration became a meditation on restraint: creating the most reductive, but expressive, element possible. This was not an attempt at extreme minimalism; rather, reducing a concept to a few simple shapes allows for an array of interpretations, much in the same way the poetry does.</span></p>
<p><a href="http://brianisett.com/wp-content/uploads/2017/08/SPREAD_GridPoems_Still_v_010.jpg"><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-386" src="http://brianisett.com/wp-content/uploads/2017/08/SPREAD_GridPoems_Still_v_010.jpg" alt="" width="1440" height="962" srcset="https://brianisett.com/wp-content/uploads/2017/08/SPREAD_GridPoems_Still_v_010.jpg 1440w, https://brianisett.com/wp-content/uploads/2017/08/SPREAD_GridPoems_Still_v_010-300x200.jpg 300w, https://brianisett.com/wp-content/uploads/2017/08/SPREAD_GridPoems_Still_v_010-768x513.jpg 768w, https://brianisett.com/wp-content/uploads/2017/08/SPREAD_GridPoems_Still_v_010-1024x684.jpg 1024w, https://brianisett.com/wp-content/uploads/2017/08/SPREAD_GridPoems_Still_v_010-120x80.jpg 120w, https://brianisett.com/wp-content/uploads/2017/08/SPREAD_GridPoems_Still_v_010-480x320.jpg 480w" sizes="auto, (max-width: 1440px) 100vw, 1440px" /></a></p>
<p><b><i>BE: Will there be a Grid Poems Vol. II?</i></b></p>
<p><span style="font-weight: 400;"><strong>JS: </strong>I really hope so! The intent going into the project was to build a system which could evolve and grow. At this point, we are looking to distribute and publish Grid Poems Vol. I on a larger scale. If this volume is well received, I think it would definitely inspire us to continue making these books. We are far better prepared now to create a second volume. </span></p>
<p><span style="font-weight: 400;"><strong>BI: </strong>Such an ambitious title, right? If we do, I could imagine it being a complete re-imagination: using a new set of constraints, or switching roles so that I write poetry in response to John’s work… I think there are a lot of cool possibilities. Like John said, if this project is successful, I think we would both be excited to brainstorm what a second project would look like.</span></p>
<p><strong><em>BE: </em></strong>Thanks for reading about the project and thanks to John for making this Q &amp; A with me! -BE.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://brianisett.com/2017/08/25/grid-poems-vol-1-qa/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Grid Poems Vol. I</title>
		<link>https://brianisett.com/2017/06/23/grid-poems-vol-i/</link>
					<comments>https://brianisett.com/2017/06/23/grid-poems-vol-i/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Fri, 23 Jun 2017 21:32:09 +0000</pubDate>
				<category><![CDATA[Writing]]></category>
		<category><![CDATA[poetry]]></category>
		<guid isPermaLink="false">http://brianisett.com/?p=362</guid>

					<description><![CDATA[I am very excited to share the recent publication of Grid Poems Vol. I, my first book of poetry! &#160;Half shifting-snapshot poetry, half art object, this book is the result of a beautiful collaboration with designer / artist John Soat. Find out more in our highlight in It&#8217;s Nice That.]]></description>
										<content:encoded><![CDATA[
<p>I am very excited to share the recent publication of <a href="https://a.co/d/2t8Ay5v">Grid Poems Vol. I</a>, my first book of poetry! &nbsp;Half shifting-snapshot poetry, half art object, this book is the result of a beautiful collaboration with designer / artist <a href="http://work.jsoat.us/">John Soat</a>. Find out more in our highlight in <a href="https://www.itsnicethat.com/articles/brian-isett-john-soat-grid-poems-graphic-design-051017">It&#8217;s Nice That</a>.</p>


<div class="wp-block-image">
<figure class="aligncenter size-large"><img loading="lazy" decoding="async" width="1024" height="683" src="http://brianisett.com/wp-content/uploads/2017/06/TEST_HORIZONTAL-1024x683.jpg" alt="" class="wp-image-364" srcset="https://brianisett.com/wp-content/uploads/2017/06/TEST_HORIZONTAL-1024x683.jpg 1024w, https://brianisett.com/wp-content/uploads/2017/06/TEST_HORIZONTAL-300x200.jpg 300w, https://brianisett.com/wp-content/uploads/2017/06/TEST_HORIZONTAL-768x512.jpg 768w, https://brianisett.com/wp-content/uploads/2017/06/TEST_HORIZONTAL-120x80.jpg 120w, https://brianisett.com/wp-content/uploads/2017/06/TEST_HORIZONTAL-480x320.jpg 480w, https://brianisett.com/wp-content/uploads/2017/06/TEST_HORIZONTAL.jpg 1440w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure></div>]]></content:encoded>
					
					<wfw:commentRss>https://brianisett.com/2017/06/23/grid-poems-vol-i/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>DIY rodent running disk: using a rotary encoder for position signals</title>
		<link>https://brianisett.com/2017/06/22/diy-running-disk/</link>
					<comments>https://brianisett.com/2017/06/22/diy-running-disk/#comments</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Thu, 22 Jun 2017 00:08:57 +0000</pubDate>
				<category><![CDATA[Hardware]]></category>
		<category><![CDATA[Neuroscience]]></category>
		<category><![CDATA[Science]]></category>
		<guid isPermaLink="false">http://brianisett.com/?p=348</guid>

					<description><![CDATA[During my thesis I built a 1D virtual track that responded to mouse locomotion in order to naturalistically deliver tactile shapes and textures in a controlled environment. The heart of this technical challenge was building the right running disk. This post covers how to make a low-weight running disk with high-speed, high-resolution positional readout, with discussion of rotary encoders, quadrature, coding interrupts, and real-time closed-loop environments along the way. If you end up using the design described below, please consider citing the paper which led to its creation: Isett, B.R., Feasel, S.H., Lane, M.A., and Feldman, D.E. (2018). Slip-Based Coding of Local Shape and Texture in Mouse S1. Neuron 97, 418–433.e5.]]></description>
										<content:encoded><![CDATA[<div id="top"></div>
<p>During my thesis I built a 1D virtual track that responded to mouse locomotion in order to naturalistically deliver tactile shapes and textures in a controlled environment:</p>
<p><div style="width: 960px;" class="wp-video"><!--[if lt IE 9]><script>document.createElement('video');</script><![endif]-->
<video class="wp-video-shortcode" id="video-348-1" width="960" height="540" preload="metadata" controls="controls"><source type="video/mp4" src="http://brianisett.com/wp-content/uploads/2016/03/slow_mo_whisker_shapes.mp4?_=1" /><a href="http://brianisett.com/wp-content/uploads/2016/03/slow_mo_whisker_shapes.mp4">http://brianisett.com/wp-content/uploads/2016/03/slow_mo_whisker_shapes.mp4</a></video></div></p>
<p>&nbsp;</p>
<p>The heart of this technical challenge was building the right running disk. In this post we will discuss how to make a low-weight running disk with high-speed, high-resolution positional readout. Along the way I discuss rotary encoders, quadrature, coding interrupts, and real-time responsive environments like the stimulus wall responding to mouse running shown above.</p>
<p>Edit: And if you end up using the design described below, please consider citing the paper which led to its creation!</p>
<div>Isett, B.R., Feasel, S.H., Lane, M.A., and Feldman, D.E. (2018). Slip-Based Coding of Local Shape and Texture in Mouse S1. Neuron 97, 418–433.e5.</div>
<div></div>
<ol>
<li><a href="#intro">Introduction: running rodents</a></li>
<li><a href="#disk-build">Constructing a running disk</a></li>
<li><a href="#wiring">Wiring the encoder</a></li>
<li><a href="#quad">What is quadrature?</a></li>
<li><a href="#decode-quad">Decoding quadrature on an Arduino Uno</a></li>
<li><a href="#nsf">Need for speed: closed-loop control</a></li>
</ol>
<h1>1. Introduction: running rodents</h1>
<p>When a rodent explores its environment, it increases sensory information acquisition, and this facilitates processing in primary cortical areas like visual cortex (Niell &amp; Stryker 2010).  In the case of the whisker system, it is well known that whisking frequency is highly correlated with running speed (Fig. 1).</p>
<p><figure id="attachment_254" aria-describedby="caption-attachment-254" style="width: 300px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" class="wp-image-254 size-medium" src="http://brianisett.com/wp-content/uploads/2016/08/Sofroniew-et-al.-2014-Run-speed-vs-whisking-frequency-300x176.png" alt="Run faster, whisker faster (From: Sofroniew et al. 2014)." width="300" height="176" srcset="https://brianisett.com/wp-content/uploads/2016/08/Sofroniew-et-al.-2014-Run-speed-vs-whisking-frequency-300x176.png 300w, https://brianisett.com/wp-content/uploads/2016/08/Sofroniew-et-al.-2014-Run-speed-vs-whisking-frequency.png 302w" sizes="auto, (max-width: 300px) 100vw, 300px" /><figcaption id="caption-attachment-254" class="wp-caption-text">Fig. 1. Whisking frequency increases with running speed (From: Sofroniew et al. 2014).</figcaption></figure></p>
<p>In other words, rodents increase the acquisition rate of tactile information as they increase their running speed. This might be a byproduct of increased breathing rate, or it may be a way of making sure a lot of surrounding areas are still sampled by the whiskers at high running speeds. In either case: it&#8217;s pretty cool! Since so many behaviors are synchronized with running, it is often useful to allow rodents to run during experiments.  And it&#8217;s even better if this running information can be used to gather position and velocity estimates for analysis and even &#8220;closed-loop&#8221; systems (more on that later).</p>
<p>Many scientists have implemented strategies for keeping track of rodent running speed and position, including the infamous(e) &#8220;Tank Ball&#8221; named after Professor David Tank at Princeton (a version of which is featured in <a href="http://www.wired.com/2009/10/mouse-virtual-reality/">this</a> glorious Wired article). A Tank Ball consists of a Styrofoam ball floating on air (think <a href="https://www.youtube.com/watch?v=bCRjPFhlSYk">ping-pong ball trick</a> for scientists) with a <em>computer</em> <em>optical</em> mouse on one or two sides for reading ball movements.</p>
<p><figure style="width: 312px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" src="https://www.wired.com/images_blogs/wiredscience/2009/10/vr_mouse_setup.jpg" alt="vr_mouse_setup" width="312" height="334" /><figcaption class="wp-caption-text">Fig. 2. Tank Ball.</figcaption></figure></p>
<p>While this works great for visual stimuli (as shown above), we had concerns that the blowing air could interfere with the normal movements of mouse whiskers. Whiskers are very fine hairs, and stray air currents could introduce unpredictable motion. Thus we settled on a simpler design: a 1D running wheel made of a 6&#8243; plastic disk mounted on a rotary encoder.</p>
<h1>2. Constructing a running disk</h1>
<p><a href="#top">Back to top</a></p>
<p><figure id="attachment_333" aria-describedby="caption-attachment-333" style="width: 576px" class="wp-caption aligncenter"><a href="http://brianisett.com/wp-content/uploads/2017/06/DiskH5_encoder..png"><img loading="lazy" decoding="async" class="wp-image-333 size-full" src="http://brianisett.com/wp-content/uploads/2017/06/DiskH5_encoder..png" alt="" width="576" height="285" /></a><figcaption id="caption-attachment-333" class="wp-caption-text">Fig 3. 1D running with a flat disk and a rotary encoder (H5-1000-IE-S, US Digital)</figcaption></figure></p>
<p>The disk can easily be laser-cut from an online company like Ponoko, or you can do what I did and make it yourself using a <a href="https://www.dremel.com/en_US/products/-/show-product/tools/678-01-circle-cutter-straight-edge-guide">Dremel circle-cutter </a>and a drill press (be sure you are trained in using these tools before any such effort!). However, you aren&#8217;t done yet: acrylic is too slippery for mouse feet, so you will need to add a nice texture. I found attaching a fine polypropylene mesh (<a href="https://www.mcmaster.com/#plastic-mesh-screens/=1861zhv">McMaster Carr</a>) via epoxy at the edges provided a good solution. As for attaching the disk and encoder, we ordered encoders with precision ground 1/4&#8243; shafts, so I found a matching 1/4&#8243; hub and used a small set-screw for mounting to prevent imbalance (i.e. avoid a large machine screw hanging off the side). The hub I originally used isn&#8217;t available, but <a href="https://www.sparkfun.com/products/12488">this hub</a> (Sparkfun) should work. Other good resources for these items are Pololu and ServoCity. The wheel up to this point is shown in Fig. 4.</p>
<p><figure id="attachment_335" aria-describedby="caption-attachment-335" style="width: 400px" class="wp-caption aligncenter"><a href="http://brianisett.com/wp-content/uploads/2017/06/Overhead_2.png"><img loading="lazy" decoding="async" class="wp-image-335 size-full" src="http://brianisett.com/wp-content/uploads/2017/06/Overhead_2.png" alt="" width="400" height="405" srcset="https://brianisett.com/wp-content/uploads/2017/06/Overhead_2.png 400w, https://brianisett.com/wp-content/uploads/2017/06/Overhead_2-296x300.png 296w" sizes="auto, (max-width: 400px) 100vw, 400px" /></a><figcaption id="caption-attachment-335" class="wp-caption-text">Fig. 4. Wheel with mesh attached to rotary encoder under lickport (LP) and head-holder (HH). Ruler for scale.</figcaption></figure></p>
<p>I later epoxied a thin 1/4&#8243; plastic lip around the edge of the wheel to further aid traction (Fig. 5).</p>
<p><figure id="attachment_337" aria-describedby="caption-attachment-337" style="width: 525px" class="wp-caption aligncenter"><a href="http://brianisett.com/wp-content/uploads/2017/06/thin_IR_protract_4mm_cropped_sharpened.png"><img loading="lazy" decoding="async" class="wp-image-337" src="http://brianisett.com/wp-content/uploads/2017/06/thin_IR_protract_4mm_cropped_sharpened.png" alt="" width="525" height="275" /></a><figcaption id="caption-attachment-337" class="wp-caption-text">Fig. 5. 1/4&#8243; lip made of shim plastic (held by mouse in picture). As shown, it prevents mouse feet from going off the edge of the wheel.</figcaption></figure></p>
<p>But constructing the wheel is really only half the battle; the next section describes how we interpret information coming from a running disk / 1D track.</p>
<h1>3. Wiring the encoder</h1>
<p><a href="#top">Back to top</a><br />
The first thing is to wire up the encoder as per the datasheet. This will depend on whether your encoder is single-ended (one ground) or has differential output (two pins for each output, which can be converted to single-ended). I used a single-ended encoder with 1000 pulses per rotation, with A and B outputs for quadrature (H5-1000-IE-S, US Digital).</p>
<p><figure id="attachment_338" aria-describedby="caption-attachment-338" style="width: 659px" class="wp-caption aligncenter"><a href="http://brianisett.com/wp-content/uploads/2017/06/encoder-wiring.png"><img loading="lazy" decoding="async" class="wp-image-338" src="http://brianisett.com/wp-content/uploads/2017/06/encoder-wiring.png" alt="" width="659" height="224" /></a><figcaption id="caption-attachment-338" class="wp-caption-text">Fig. 6. Rotary encoder connected to a DSUB4-type connector.</figcaption></figure></p>
<p>In this configuration, we do not need the indexing pin (which can give an absolute reference pulse, once per rotation). If we want to know when the wheel moves forwards vs. backwards, we need to interpret the quadrature. One possibility is to use a dedicated chip like the LS7184 to do this, but I found it was simpler to implement a real-time quadrature decoder on an Arduino, which I will describe below.</p>
<h1>4. What is quadrature?</h1>
<p><a href="#top">Back to top</a><br />
Let&#8217;s first take a second to describe what signals a rotary encoder provides!</p>
<p><figure style="width: 650px" class="wp-caption aligncenter"><a href="https://www.pjrc.com/teensy/td_libs_Encoder.html"><img loading="lazy" decoding="async" src="https://www.pjrc.com/teensy/td_libs_Encoder_pos2.png" alt="" width="650" height="242" /></a><figcaption class="wp-caption-text">Fig. 7. Optical encoder creating quadrature clock. (Click to see source page of image).</figcaption></figure></p>
<p>Inside an optical encoder, you will find a very finely-marked transparent piece of plastic (symbolized by the blue disk in Fig. 7). By reading light occlusion from two neighboring sensors, different phases of light/dark will occur depending on whether the wheel moves clockwise or counter-clockwise (imagine starting from the arrow and reading left or right in Fig. 7). Thus, whenever either pin changes state, a known increment has occurred. To reveal the direction of this increment, we can check the state of the pin that did not change. I phrased this in a particular way to make it clearer how we might write code, but to be clear: there are 4 possible states with particular transition paths, and these can be used to create a state machine. If this all doesn&#8217;t quite make sense yet, draw out the possibilities and how to get to each state for yourself (if you are thorough, it should look something like <a href="http://forums.ni.com/legacyfs/online/176247_state%20machine.png">this</a>). Also, there are many good tutorials, including <a href="http://playground.arduino.cc/Main/RotaryEncoders">this one</a>, which I adapted for the decoder described below.</p>
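<p>To make the state machine concrete, here is a minimal host-side sketch of the 4-state transition logic as a 16-entry lookup table (plain C++, separate from the Arduino interrupt code below; the function name and sign convention are illustrative assumptions):</p>

```cpp
// Quadrature decoding as a lookup table. The index packs the previous (A,B)
// pin state and the current (A,B) pin state into 4 bits. Valid single-step
// transitions yield +1 or -1; "no change" and illegal double-steps yield 0.
int quadStep(int prevA, int prevB, int currA, int currB) {
    static const int table[16] = {
         0, +1, -1,  0,   // previous state 00 -> current 00, 01, 10, 11
        -1,  0,  0, +1,   // previous state 01
        +1,  0,  0, -1,   // previous state 10
         0, -1, +1,  0 }; // previous state 11
    int idx = (prevA << 3) | (prevB << 2) | (currA << 1) | currB;
    return table[idx];
}
```

<p>Walking the forward cycle 00 &#8594; 01 &#8594; 11 &#8594; 10 &#8594; 00 accumulates +4 counts (one per transition), and the reverse walk gives -4, which is the source of the 4x resolution gain over a single channel.</p>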
<h1>5. Decoding quadrature on an Arduino Uno</h1>
<p><a href="#top">Back to top</a><br />
There are many ways to do this, but the one I liked most was to use 2 interrupt pins: one each for Channels A and B. An interrupt pin momentarily leaves whatever code is currently being processed and calls a designated interrupt function. As I alluded to above, one way to implement this decoder would be: any time Channel A (B) changes state, call an interrupt and check the state of Channel B (A). Depending on the state of Channel B (A), increase or decrease a counter (or set a direction flag) as appropriate. The parenthetical letters show that we do this whenever either Channel A or B changes state.</p>
<p>To set up the interrupts on an Arduino Uno (there are only certain pins that operate this way), we do something like:</p>
<pre>#define chA  2      // Channel A on digital pin 2 (interrupt 0 on UNO)
#define chB  3      // Channel B on digital pin 3 (interrupt 1 on UNO)
volatile long clicks = 0; // Variable used to store wheel movements. Volatile is important
                          // for variables that can change in an interrupt.
void setup() {
  //This can be done here or at a relevant point in void loop().
  attachInterrupt(digitalPinToInterrupt(chA), quadA, CHANGE); // CHANGE = whenever chA goes HIGH or LOW, call function quadA
  attachInterrupt(digitalPinToInterrupt(chB), quadB, CHANGE); // And for chB
}
</pre>
<p>An illustrative example of possible quadA and quadB functions would be something like this:</p>
<pre>void quadA() {
  // If pin 2 (int 0) changes state, this interrupt is called.
  if (digitalRead(chA) == HIGH) { // If this pin (DIO2) is HIGH
    // Check state of DIO3:
    if (digitalRead(chB) == HIGH) { // If DIO3 is HIGH
      clicks++; // Wheel is turning backwards/clockwise (mouse running forwards)
    }
    else {
      clicks--; // Wheel is turning forwards/counter-clockwise (mouse "running" backwards)
    }
  }
  else { // If DIO2 is LOW
    // Check state of DIO3:
    if (digitalRead(chB) == HIGH) { // If DIO3 is HIGH
      clicks--; // Wheel is turning forwards
    }
    else {
      clicks++; // Wheel is turning backwards
    }
  }
}

void quadB() {
  // If pin 3 (int 1) changes state, this interrupt is called.
  if (digitalRead(chB) == HIGH) { // If this pin (DIO3) is HIGH
    // Check state of DIO2:
    if (digitalRead(chA) == HIGH) { // If DIO2 is HIGH
      clicks--; // Wheel is turning forwards/counter-clockwise (mouse "running" backwards)
    }
    else {
      clicks++; // Wheel is turning backwards/clockwise (mouse running forwards)
    }
  }
  else { // If DIO3 is LOW
    // Check state of DIO2:
    if (digitalRead(chA) == HIGH) { // If DIO2 is HIGH
      clicks++; // Wheel is turning backwards
    }
    else {
      clicks--; // Wheel is turning forwards
    }
  }
}
</pre>
<p>Two things become clear from this example: 1) Notice that the state (chA==HIGH, chB==HIGH) has opposite meaning depending on which pin changed. Thus, quadrature is largely about capturing transitions, not states themselves. The quadA() and quadB() functions are exactly inverted, as they capture complementary sets of transitions. 2) Quadrature gets 4x higher step resolution than the listed &#8220;Clicks Per Rotation&#8221; of the encoder (which is the pulse resolution of a single A or B channel by itself). In this case, the encoder is 1000 CPR, and I used quadrature to read 4000 CPR. Thus we can convert clicks into a real distance: each click corresponds to 2*π*r / 4000, where r = the mean radius of the mouse&#8217;s position on the running disk.</p>
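<p>As a quick sanity check on that conversion, here is a small sketch (plain C++; the function name, default click count, and example radius are illustrative, not part of the original rig):</p>

```cpp
// Distance travelled per click = circumference / clicks-per-rotation.
// With quadrature on a 1000 CPR encoder, clicksPerRev = 4000.
// r_cm is the mean radius of the mouse's position on the disk (measured).
double clicksToCm(long clicks, double r_cm, long clicksPerRev = 4000) {
    const double PI = 3.14159265358979323846;
    return clicks * (2.0 * PI * r_cm) / clicksPerRev;
}
```

<p>One full rotation (4000 clicks) at r = 5 cm gives about 31.4 cm of travel; dividing a change in clicks by the elapsed time gives running speed in the same units.</p>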
<h1>6. Need for speed: closed-loop control</h1>
<p><a href="#top">Back to top</a><br />
To prevent interrupts from disrupting your other Arduino code excessively, the interrupt functions need to be very fast. With higher encoder resolution, this becomes quite relevant, as higher CPR means more interrupts to handle. There are several ways to make interrupts faster:</p>
<p>1) Do little more than increment a counter / set pin states in the interrupt, and address the implications of these state changes outside of the interrupts<br />
2) Use the digitalWrite2() package available <a href="https://www.codeproject.com/Articles/732646/Fast-digital-I-O-for-Arduino">here </a>for any pin state manipulations within the interrupt.<br />
3) For an extra boost, consider checking pin states with hardware-specific register reads instead of the native digitalRead( ) function. The downside is that the code no longer ports between different Arduino types, so this is generally not recommended. But for my purposes, it ended up being necessary.</p>
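<p>To illustrate point 3: on the UNO (ATmega328P), digital pins 0&#8211;7 map to bits 0&#8211;7 of the PIND input register, so a pin read becomes a single mask test instead of a function call. The sketch below simulates PIND as an ordinary byte so the bit logic can be checked off-board; on real hardware you would read the register itself (variable and function names here are illustrative):</p>

```cpp
#include <cstdint>

// Stand-in for the ATmega328P PIND input register (a memory-mapped byte
// on real hardware; a plain global here so the masking can be tested).
uint8_t PIND_sim = 0;
const int BIT_PIN2 = 2;  // digital pin 2 -> bit 2 of PIND
const int BIT_PIN3 = 3;  // digital pin 3 -> bit 3 of PIND

// Equivalent of digitalRead(2) == HIGH / digitalRead(3) == HIGH,
// but compiled down to a single AND + compare:
bool readChA() { return (PIND_sim & (1 << BIT_PIN2)) != 0; }
bool readChB() { return (PIND_sim & (1 << BIT_PIN3)) != 0; }
```

<p>This is exactly the idiom used in the interrupt code below, where <code>(PIND &amp; (1 &lt;&lt; PIND2))</code> tests Channel A directly.</p>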
<p>In addition to being fast, I wanted to use the same interrupts to optionally yoke the movement of a stepper motor to the incoming quadrature pulses, thus creating a closed-loop interface where mice ran in a 1D virtual track. The combination of these goals leads to a slightly more complex quadA() and quadB() function:</p>
<pre>void quadA() {
  // If pin 2 (int 0) changes state, this interrupt is called.
  if ((PIND &amp; (1 &lt;&lt; PIND2)) == 0) { // If this pin (DIO2) is LOW (direct port read of DIO2 on UNO)
    // Check state of DIO3:
    if ((PIND &amp; (1 &lt;&lt; PIND3)) == 0) { // If DIO3 is LOW
      clicks++; // Wheel is turning backwards
      // Set stepper motor direction pin state:
      digitalWrite2(stimWheelDir, HIGH); // or faster: PORTD |= (1 &lt;&lt; PIND5);
    }
    else {
      clicks--; // Wheel is turning forwards
      // Set stepper motor direction pin state:
      digitalWrite2(stimWheelDir, LOW); // or faster: PORTD &amp;= ~(1 &lt;&lt; PIND5);
    }
  }
  else { // If DIO2 is HIGH
    // Check state of DIO3:
    if ((PIND &amp; (1 &lt;&lt; PIND3)) == 0) { // If DIO3 is LOW
      clicks--; // Wheel is turning forwards
      // Set stepper motor direction pin state:
      digitalWrite2(stimWheelDir, LOW); // or faster: PORTD &amp;= ~(1 &lt;&lt; PIND5);
    }
    else {
      clicks++; // Wheel is turning backwards
      // Set stepper motor direction pin state:
      digitalWrite2(stimWheelDir, HIGH); // or faster: PORTD |= (1 &lt;&lt; PIND5);
    }
  }
  if (syncMove) {
    // Pulse the stepper motor step pin once per click:
    digitalWrite2(stimWheelStp, HIGH); // or faster: PORTD |= (1 &lt;&lt; PIND4);
    delayMicroseconds(stepPulseDur);
    digitalWrite2(stimWheelStp, LOW); // or faster: PORTD &amp;= ~(1 &lt;&lt; PIND4);
  }
}
void quadB() {
  // If pin 3 (int 1) changes state, this interrupt is called.
  if ((PIND &amp; (1 &lt;&lt; PIND3)) == 0) { // If this pin (DIO3) is LOW (direct port read of DIO3 on UNO)
    // Check state of DIO2:
    if ((PIND &amp; (1 &lt;&lt; PIND2)) == 0) { // If DIO2 is LOW
      clicks--; // Wheel is turning forwards
      // Set stepper motor direction pin state (pin 5):
      digitalWrite2(stimWheelDir, LOW); // or faster: PORTD &amp;= ~(1 &lt;&lt; PIND5);
    }
    else {
      clicks++; // Wheel is turning backwards
      // Set stepper motor direction pin state:
      digitalWrite2(stimWheelDir, HIGH); // or faster: PORTD |= (1 &lt;&lt; PIND5);
    }
  }
  else { // If DIO3 is HIGH
    if ((PIND &amp; (1 &lt;&lt; PIND2)) == 0) { // If DIO2 is LOW
      clicks++; // Wheel is turning backwards
      // Set stepper motor direction pin state:
      digitalWrite2(stimWheelDir, HIGH); // or faster: PORTD |= (1 &lt;&lt; PIND5);
    }
    else {
      clicks--; // Wheel is turning forwards
      // Set stepper motor direction pin state:
      digitalWrite2(stimWheelDir, LOW); // or faster: PORTD &amp;= ~(1 &lt;&lt; PIND5);
    }
  }
  if (syncMove) {
    // Pulse the stepper motor step pin once per click:
    digitalWrite2(stimWheelStp, HIGH); // or faster: PORTD |= (1 &lt;&lt; PIND4);
    delayMicroseconds(stepPulseDur);
    digitalWrite2(stimWheelStp, LOW); // or faster: PORTD &amp;= ~(1 &lt;&lt; PIND4);
  }
}
</pre>
<p>Which leads to closed-loop behavior like this:</p>
<p><div style="width: 960px;" class="wp-video"><video class="wp-video-shortcode" id="video-348-2" width="960" height="540" preload="metadata" controls="controls"><source type="video/mp4" src="http://brianisett.com/wp-content/uploads/2016/03/slow_mo_whisker_shapes.mp4?_=2" /><a href="http://brianisett.com/wp-content/uploads/2016/03/slow_mo_whisker_shapes.mp4">http://brianisett.com/wp-content/uploads/2016/03/slow_mo_whisker_shapes.mp4</a></video></div></p>
<p>That&#8217;s about it for running disks! In the future this post will be paired with stepper motor stimulus control&#8230;</p>
]]></content:encoded>
					
					<wfw:commentRss>https://brianisett.com/2017/06/22/diy-running-disk/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		<enclosure url="http://brianisett.com/wp-content/uploads/2016/03/slow_mo_whisker_shapes.mp4" length="10457587" type="video/mp4" />

			</item>
		<item>
		<title>Poems published in Shot Glass Journal</title>
		<link>https://brianisett.com/2017/06/19/shot-glass-journal-1/</link>
					<comments>https://brianisett.com/2017/06/19/shot-glass-journal-1/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Mon, 19 Jun 2017 20:03:36 +0000</pubDate>
				<category><![CDATA[Writing]]></category>
		<category><![CDATA[poetry]]></category>
		<guid isPermaLink="false">http://brianisett.com/?p=324</guid>

					<description><![CDATA[This happened a while ago but it feels right to publish it here! I had 3 poems and a glossary item (&#8220;grid poem&#8221;) accepted into Shot Glass Journal. This journal only publishes short poetry and it is a perfect fit for the work I submitted. Check it out here !]]></description>
										<content:encoded><![CDATA[
<p>This happened a while ago but it feels right to publish it here! I had 3 poems and a glossary item (&#8220;grid poem&#8221;) accepted into Shot Glass Journal. This journal only publishes short poetry and it is a perfect fit for the work I submitted. Check it out <a href="http://www.musepiepress.com/shotglass/issue19/brian_isett1.html"> here </a> !</p>
]]></content:encoded>
					
					<wfw:commentRss>https://brianisett.com/2017/06/19/shot-glass-journal-1/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Pixel sequencer for playing a .jpg like a sheet of music</title>
		<link>https://brianisett.com/2017/01/01/pixel-sequencer-for-playing-a-jpg-like-a-sheet-of-music/</link>
					<comments>https://brianisett.com/2017/01/01/pixel-sequencer-for-playing-a-jpg-like-a-sheet-of-music/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Sun, 01 Jan 2017 02:42:02 +0000</pubDate>
				<category><![CDATA[Music]]></category>
		<guid isPermaLink="false">http://brianisett.com/?p=286</guid>

					<description><![CDATA[]]></description>
