<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet href="/feed.rss.xml" type="text/xsl" media="screen"?>
<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:media="http://search.yahoo.com/mrss/" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Breandan Considine</title>
    <description></description>
    <link>https://speakerdeck.com/breandan</link>
    <atom:link rel="self" type="application/rss+xml" href="https://speakerdeck.com/breandan.rss"/>
    <lastBuildDate>Wed, 26 Apr 2017 04:43:53 -0400</lastBuildDate>
    <item>
      <title>DeepTest: Automated Testing of Deep-Neural-Network-driven Autonomous Cars</title>
      <description>Presentation of a paper by Tian et al. (2017) at the Mila Robotics RG: https://arxiv.org/abs/1708.08559

When: Monday, 10/08/18 at 2.00pm
Where: PAA 3195

Synthesizing images through deformation or generation has a long history in computer vision and adversarial machine learning. More broadly, this technique is also known as data augmentation [1] [2], feature augmentation [3], or domain randomization [4] (in the non-adversarial setting). Often used to augment a dataset to increase data diversity, to probe a model for hidden biases, or to reduce sensitivity to noise, this family of techniques seeks to generate realistic but synthetic inputs based on our understanding of the data-generating process and, in some cases, the model architecture. Such inputs can be used to discover hidden failure modes and improve generalization through retraining.

In this work, Tian et al. present a grey-box testing technique to evaluate deep neural networks by applying realistic image deformations to maximize “neuron coverage” and promote output diversity. This technique is used to discover images that would induce unsafe control outputs in a self-driving car, i.e., dangerous steering angles. In this talk, we will introduce the concept of “neuron coverage” and its usefulness as a proxy for capturing output diversity. We will explore the notion of metamorphic testing and related white-box techniques for evaluating neural networks, and we will learn how fake training data can help avoid rapid unplanned deceleration of an autonomous vehicle.
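To make the coverage criterion concrete, here is a rough sketch in Python with NumPy. The scaling convention (min-max per layer) follows the general recipe popularized by this line of work, but the function itself, its name, and the 0.25 threshold are illustrative choices, not the authors' code:

```python
import numpy as np

def neuron_coverage(layer_activations, threshold=0.25):
    """Fraction of neurons whose min-max-scaled activation exceeds a
    threshold on at least one test input. Each element of layer_activations
    is a (num_inputs, num_neurons) array of activations for one layer."""
    covered, total = 0, 0
    for acts in layer_activations:
        lo, hi = acts.min(), acts.max()
        scaled = (acts - lo) / (hi - lo + 1e-12)      # scale layer to [0, 1]
        covered += int(np.sum(scaled.max(axis=0) > threshold))
        total += acts.shape[1]
    return covered / total
```

A coverage-guided tester would then search for image deformations that raise this number while keeping the inputs realistic.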

[1] Understanding data augmentation for classification: When to warp? https://arxiv.org/abs/1609.08764
[2] The Effectiveness of Data Augmentation in Image Classification using Deep Learning https://arxiv.org/abs/1712.04621
[3] Dataset augmentation in feature space https://arxiv.org/abs/1702.05538
[4] Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World https://arxiv.org/abs/1703.06907</description>
      <media:content url="https://files.speakerdeck.com/presentations/da9b43d151354635a80b70c4322ea8c1/preview_slide_0.jpg?26409518" type="image/jpeg" medium="image"/>
      <content:encoded>Presentation of a paper by Tian et al. (2017) at the Mila Robotics RG: https://arxiv.org/abs/1708.08559

When: Monday, 10/08/18 at 2.00pm
Where: PAA 3195

Synthesizing images through deformation or generation has a long history in computer vision and adversarial machine learning. More broadly, this technique is also known as data augmentation [1] [2], feature augmentation [3], or domain randomization [4] (in the non-adversarial setting). Often used to augment a dataset to increase data diversity, to probe a model for hidden biases, or to reduce sensitivity to noise, this family of techniques seeks to generate realistic but synthetic inputs based on our understanding of the data-generating process and, in some cases, the model architecture. Such inputs can be used to discover hidden failure modes and improve generalization through retraining.

In this work, Tian et al. present a grey-box testing technique to evaluate deep neural networks by applying realistic image deformations to maximize “neuron coverage” and promote output diversity. This technique is used to discover images that would induce unsafe control outputs in a self-driving car, i.e., dangerous steering angles. In this talk, we will introduce the concept of “neuron coverage” and its usefulness as a proxy for capturing output diversity. We will explore the notion of metamorphic testing and related white-box techniques for evaluating neural networks, and we will learn how fake training data can help avoid rapid unplanned deceleration of an autonomous vehicle.

[1] Understanding data augmentation for classification: When to warp? https://arxiv.org/abs/1609.08764
[2] The Effectiveness of Data Augmentation in Image Classification using Deep Learning https://arxiv.org/abs/1712.04621
[3] Dataset augmentation in feature space https://arxiv.org/abs/1702.05538
[4] Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World https://arxiv.org/abs/1703.06907</content:encoded>
      <pubDate>Tue, 18 Jul 2023 00:00:00 -0400</pubDate>
      <link>https://speakerdeck.com/breandan/deeptest-automated-testing-of-deep-neural-network-driven-autonomous-cars</link>
      <guid>https://speakerdeck.com/breandan/deeptest-automated-testing-of-deep-neural-network-driven-autonomous-cars</guid>
    </item>
    <item>
      <title>PRM-RL: Long-range Robotic Navigation Tasks by Combining Reinforcement Learning and Sampling-based Planning</title>
      <description>Probabilistic roadmaps (PRMs) have a long and productive history in robotic motion planning. First conceived in 1996, they operate by sampling a set of points in configuration space and connecting these points using a simple line-of-sight algorithm. While PRM-based methods can construct efficient map representations, they share similar limitations with other sampling-based planners: PRMs do not consider external constraints such as path feasibility and can suffer from unmodeled dynamics, sensor noise and non-stationary environments.
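The basic construction described above (sample configurations, connect nearby pairs by line of sight) can be sketched in a few lines of Python. This is an illustrative toy in the unit square, not the authors' implementation; the function name and the interpolation-based collision check are deliberately crude stand-ins:

```python
import math
import random

def build_prm(num_samples, radius, is_free, seed=0):
    """Minimal PRM: sample collision-free points in the unit square, then
    connect nearby pairs whose straight-line segment stays collision-free
    (line of sight approximated by checking points along the segment)."""
    rng = random.Random(seed)
    nodes = []
    while len(nodes) < num_samples:
        p = (rng.random(), rng.random())
        if is_free(p):
            nodes.append(p)
    edges = []
    for i, a in enumerate(nodes):
        for j in range(i + 1, len(nodes)):
            b = nodes[j]
            if math.dist(a, b) > radius:
                continue                      # only connect nearby samples
            segment = (((1 - t) * a[0] + t * b[0], (1 - t) * a[1] + t * b[1])
                       for t in (k / 10 for k in range(11)))
            if all(is_free(q) for q in segment):
                edges.append((i, j))
    return nodes, edges
```

For example, build_prm(100, 0.2, lambda p: not (0.45 < p[0] < 0.55)) builds a roadmap around a vertical wall. PRM-RL's key change is to replace the geometric line-of-sight test with rollouts of a learned dynamics policy.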
Correspondingly, RL algorithms such as DDPG and CAFVI suggest promising new alternatives for learning policies over long time horizons by decomposing the learning task into a set of goals and subgoals. These algorithms can be robust to sensor noise and motion stochasticity, and are resilient to (moderate) changes in the environment, but they require efficient state representations and can often suffer from poor local minima. By combining PRMs and RL techniques, the authors present a compelling case for learning robot dynamics separately from the environment, a technique that is shown to scale to environments up to 63 million times larger than in simulation.
Fig. 4. PRM-RL: a prosperous handshake between RL and classical robotics.
Specifically, the authors decouple the dynamics and noise estimation from the environment itself. First they learn the dynamics in a small training environment, then use that model to inform the local graph connectivity within the target environment. Instead of adding edges along all collision-free paths, they only draw edges which can be successfully navigated by the dynamics model in a high percentage of simulations. This process generates a roadmap that is more robust to noise and motion error, and simultaneously less prone to the poor local minima exhibited by naive HRL planners, ensuring continuous progress towards the goal state.
In this talk we will explore how to construct a dynamically feasible roadmap using RL, how to train a dynamics model using policy gradients and value function approximation, and finally how to query the PRM to produce practical reference trajectories. No prior understanding of motion planning, HRL or robotics is assumed or required.

When: Monday, 05/11/18 at 2.00pm
Where: PAA 3195</description>
      <media:content url="https://files.speakerdeck.com/presentations/97d75998ca6542199ea57f792cd40b14/preview_slide_0.jpg?26409499" type="image/jpeg" medium="image"/>
      <content:encoded>Probabilistic roadmaps (PRMs) have a long and productive history in robotic motion planning. First conceived in 1996, they operate by sampling a set of points in configuration space and connecting these points using a simple line-of-sight algorithm. While PRM-based methods can construct efficient map representations, they share similar limitations with other sampling-based planners: PRMs do not consider external constraints such as path feasibility and can suffer from unmodeled dynamics, sensor noise and non-stationary environments.
Correspondingly, RL algorithms such as DDPG and CAFVI suggest promising new alternatives for learning policies over long time horizons by decomposing the learning task into a set of goals and subgoals. These algorithms can be robust to sensor noise and motion stochasticity, and are resilient to (moderate) changes in the environment, but they require efficient state representations and can often suffer from poor local minima. By combining PRMs and RL techniques, the authors present a compelling case for learning robot dynamics separately from the environment, a technique that is shown to scale to environments up to 63 million times larger than in simulation.
Fig. 4. PRM-RL: a prosperous handshake between RL and classical robotics.
Specifically, the authors decouple the dynamics and noise estimation from the environment itself. First they learn the dynamics in a small training environment, then use that model to inform the local graph connectivity within the target environment. Instead of adding edges along all collision-free paths, they only draw edges which can be successfully navigated by the dynamics model in a high percentage of simulations. This process generates a roadmap that is more robust to noise and motion error, and simultaneously less prone to the poor local minima exhibited by naive HRL planners, ensuring continuous progress towards the goal state.
In this talk we will explore how to construct a dynamically feasible roadmap using RL, how to train a dynamics model using policy gradients and value function approximation, and finally how to query the PRM to produce practical reference trajectories. No prior understanding of motion planning, HRL or robotics is assumed or required.

When: Monday, 05/11/18 at 2.00pm
Where: PAA 3195</content:encoded>
      <pubDate>Tue, 18 Jul 2023 00:00:00 -0400</pubDate>
      <link>https://speakerdeck.com/breandan/prm-rl-long-range-robotic-navigation-tasks-by-combining-reinforcement-learning-and-sampling-based-planning</link>
      <guid>https://speakerdeck.com/breandan/prm-rl-long-range-robotic-navigation-tasks-by-combining-reinforcement-learning-and-sampling-based-planning</guid>
    </item>
    <item>
      <title>Deep, Skinny Neural Networks are not Universal Approximators</title>
      <description>Proofs in learning theory typically draw on results from information theory or statistical learning to shed light on generalization or to provide convergence guarantees. While these are powerful tools, some properties of neural networks can be characterized with simpler methods. In Deep, Skinny Neural Networks are not Universal Approximators, Jesse Johnson proves, using standard set-theoretic topology, that feedforward nets whose maximum layer width does not exceed the input dimension cannot approximate functions with a level set containing a bounded path component. Join us next Tuesday at 10:30am in the Mila Auditorium to discuss this paper!

Presented on March 19th, 2019.

Slides for today's proof are here: https://slides.com/breandan/skinny-nns
ICLR/OpenReview discussion: https://openreview.net/forum?id=ryGgSsAcFQ
The author has a great blog on topology which I particularly enjoyed: https://ldtopology.wordpress.com/</description>
      <media:content url="https://files.speakerdeck.com/presentations/1c44e4e7cf734ae59dda6415e7c72ec5/preview_slide_0.jpg?26409478" type="image/jpeg" medium="image"/>
      <content:encoded>Proofs in learning theory typically draw on results from information theory or statistical learning to shed light on generalization or to provide convergence guarantees. While these are powerful tools, some properties of neural networks can be characterized with simpler methods. In Deep, Skinny Neural Networks are not Universal Approximators, Jesse Johnson proves, using standard set-theoretic topology, that feedforward nets whose maximum layer width does not exceed the input dimension cannot approximate functions with a level set containing a bounded path component. Join us next Tuesday at 10:30am in the Mila Auditorium to discuss this paper!

Presented on March 19th, 2019.

Slides for today's proof are here: https://slides.com/breandan/skinny-nns
ICLR/OpenReview discussion: https://openreview.net/forum?id=ryGgSsAcFQ
The author has a great blog on topology which I particularly enjoyed: https://ldtopology.wordpress.com/</content:encoded>
      <pubDate>Tue, 18 Jul 2023 00:00:00 -0400</pubDate>
      <link>https://speakerdeck.com/breandan/deep-skinny-neural-networks-are-not-universal-approximators</link>
      <guid>https://speakerdeck.com/breandan/deep-skinny-neural-networks-are-not-universal-approximators</guid>
    </item>
    <item>
      <title>Intrinsic social motivation via causal influence in multi-agent RL</title>
      <description>Presentation of a paper by Jaques et al. (2018) at the Mila Learning Agents RG: https://arxiv.org/abs/1810.08647

An important feature of human decision-making is our ability to predict each other's behavior. Through social interaction, we learn a model of others’ internal states, which helps us anticipate future actions, plan and collaborate. Recent deep learning models have been compared to idiot savants: capable of performing highly specialized tasks but lacking what social psychology calls a “theory of mind”. In this research, Jaques et al. study the conditions under which a theory of mind emerges in multi-agent RL and discover an interesting connection to causal inference.

The authors began by exploring a novel reward structure based on “social influence”, observing that a rudimentary form of communication emerged between agents. Then, by providing an explicit communication channel, they observed that agents could achieve better collective outcomes. Finally, using tools from causal inference, they endowed each agent with a model of other agents (MOA) network, allowing them to predict others’ actions without direct access to the counterpart’s reward function. In doing so, agents exhibited intrinsic motivation and the researchers were able to remove the external reward mechanism altogether.

In this talk, we will discuss a few important ideas from Causal Inference, such as counterfactual reasoning, the MOA framework and the use of mutual information as a mechanism for designing social rewards. No prior background in causal modeling is required or expected.</description>
      <media:content url="https://files.speakerdeck.com/presentations/6cbae12328c84693993a60256363103d/preview_slide_0.jpg?26409446" type="image/jpeg" medium="image"/>
      <content:encoded>Presentation of a paper by Jaques et al. (2018) at the Mila Learning Agents RG: https://arxiv.org/abs/1810.08647

An important feature of human decision-making is our ability to predict each other's behavior. Through social interaction, we learn a model of others’ internal states, which helps us anticipate future actions, plan and collaborate. Recent deep learning models have been compared to idiot savants: capable of performing highly specialized tasks but lacking what social psychology calls a “theory of mind”. In this research, Jaques et al. study the conditions under which a theory of mind emerges in multi-agent RL and discover an interesting connection to causal inference.

The authors began by exploring a novel reward structure based on “social influence”, observing that a rudimentary form of communication emerged between agents. Then, by providing an explicit communication channel, they observed that agents could achieve better collective outcomes. Finally, using tools from causal inference, they endowed each agent with a model of other agents (MOA) network, allowing them to predict others’ actions without direct access to the counterpart’s reward function. In doing so, agents exhibited intrinsic motivation and the researchers were able to remove the external reward mechanism altogether.

In this talk, we will discuss a few important ideas from Causal Inference, such as counterfactual reasoning, the MOA framework and the use of mutual information as a mechanism for designing social rewards. No prior background in causal modeling is required or expected.</content:encoded>
      <pubDate>Tue, 18 Jul 2023 00:00:00 -0400</pubDate>
      <link>https://speakerdeck.com/breandan/intrinsic-social-motivation-via-causal-influence-in-multi-agent-rl</link>
      <guid>https://speakerdeck.com/breandan/intrinsic-social-motivation-via-causal-influence-in-multi-agent-rl</guid>
    </item>
    <item>
      <title>Idiolect: A Reconfigurable Voice Coding Assistant</title>
      <description>Idiolect is an open source IDE plugin for voice coding and a novel approach to building bots that allows users to define custom commands on the fly. Unlike traditional chatbots, it does not pretend to be an omniscient virtual assistant, but rather a reconfigurable voice programming system that empowers users to create their own commands and actions dynamically, without rebuilding or restarting the application. We offer an experience report describing the tool itself, illustrate some example use cases, and reflect on several lessons learned during the tool’s development.</description>
      <media:content url="https://files.speakerdeck.com/presentations/1960fe6eada5494e8c36cd5074ea5795/preview_slide_0.jpg?25765527" type="image/jpeg" medium="image"/>
      <content:encoded>Idiolect is an open source IDE plugin for voice coding and a novel approach to building bots that allows users to define custom commands on the fly. Unlike traditional chatbots, it does not pretend to be an omniscient virtual assistant, but rather a reconfigurable voice programming system that empowers users to create their own commands and actions dynamically, without rebuilding or restarting the application. We offer an experience report describing the tool itself, illustrate some example use cases, and reflect on several lessons learned during the tool’s development.</content:encoded>
      <pubDate>Wed, 24 May 2023 00:00:00 -0400</pubDate>
      <link>https://speakerdeck.com/breandan/idiolect-a-reconfigurable-voice-coding-assisant</link>
      <guid>https://speakerdeck.com/breandan/idiolect-a-reconfigurable-voice-coding-assisant</guid>
    </item>
    <item>
      <title>Interactive Programming with Automated Reasoning</title>
      <description>Research overview.</description>
      <media:content url="https://files.speakerdeck.com/presentations/bcad3d91160e4b12b06ed13d8d14058e/preview_slide_0.jpg?25095329" type="image/jpeg" medium="image"/>
      <content:encoded>Research overview.</content:encoded>
      <pubDate>Sat, 01 Apr 2023 00:00:00 -0400</pubDate>
      <link>https://speakerdeck.com/breandan/interactive-programming-with-automated-reasoning</link>
      <guid>https://speakerdeck.com/breandan/interactive-programming-with-automated-reasoning</guid>
    </item>
    <item>
      <title>Learning Structural Edits via Incremental Tree Transformations</title>
      <description></description>
      <media:content url="https://files.speakerdeck.com/presentations/7777e21be5584eccacc2b774e58d938b/preview_slide_0.jpg?19357506" type="image/jpeg" medium="image"/>
      <content:encoded></content:encoded>
      <pubDate>Tue, 28 Sep 2021 00:00:00 -0400</pubDate>
      <link>https://speakerdeck.com/breandan/learning-structural-edits-via-incremental-tree-transformations</link>
      <guid>https://speakerdeck.com/breandan/learning-structural-edits-via-incremental-tree-transformations</guid>
    </item>
    <item>
      <title>Thinking Like Transformers</title>
      <description>Presentation of a paper by Weiss et al. (2021) at the ML4Code RG on July 26, 2021.

https://arxiv.org/abs/2106.06981
https://ml4code-mtl.github.io/</description>
      <media:content url="https://files.speakerdeck.com/presentations/5aa15376dd1e4533b4c9c667c501383a/preview_slide_0.jpg?19357470" type="image/jpeg" medium="image"/>
      <content:encoded>Presentation of a paper by Weiss et al. (2021) at the ML4Code RG on July 26, 2021.

https://arxiv.org/abs/2106.06981
https://ml4code-mtl.github.io/</content:encoded>
      <pubDate>Mon, 26 Jul 2021 00:00:00 -0400</pubDate>
      <link>https://speakerdeck.com/breandan/thinking-like-transformers</link>
      <guid>https://speakerdeck.com/breandan/thinking-like-transformers</guid>
    </item>
    <item>
      <title>Discriminative Embeddings of Latent Variable Models for Structured Data</title>
      <description></description>
      <media:content url="https://files.speakerdeck.com/presentations/fe38c19e92e646d3b437ac1bd5933010/preview_slide_0.jpg?19357602" type="image/jpeg" medium="image"/>
      <content:encoded></content:encoded>
      <pubDate>Thu, 12 Mar 2020 00:00:00 -0400</pubDate>
      <link>https://speakerdeck.com/breandan/discriminative-embeddings-of-latent-variable-models-for-structured-data</link>
      <guid>https://speakerdeck.com/breandan/discriminative-embeddings-of-latent-variable-models-for-structured-data</guid>
    </item>
    <item>
      <title>Derivatives. Important Concept. Simple to grasp in Kotlin.</title>
      <description>Differentiation tells us how to make a small change to the inputs of a function, so as to produce the largest change in output. At first, this idea may not seem very important for software engineers, but differential equations can be found at the heart of every other engineering discipline and nearly every major contribution to the physical sciences in the last three centuries. As digital computers begin to interface with the physical world, derivatives will begin to play an increasingly significant role in computing.
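The core trick can indeed be sketched in a few lines. Here is a minimal forward-mode automatic differentiation example using dual numbers, written in Python for brevity rather than the talk's Kotlin, and simplified to addition and multiplication only:

```python
class Dual:
    """Dual number a + b*eps with eps^2 = 0. Propagating (value, derivative)
    pairs through arithmetic yields forward-mode automatic differentiation."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def _lift(self, other):
        # promote plain numbers to constants (zero derivative)
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        o = self._lift(other)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__

    def __mul__(self, other):
        o = self._lift(other)  # product rule for the derivative part
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def derivative(f, x):
    """Seed the dual part with 1 at x and read the derivative back out."""
    return f(Dual(x, 1.0)).der
```

For instance, derivative(lambda x: x * x * x, 2.0) returns 12.0, the exact value of 3x² at x = 2, with no symbolic manipulation or finite differencing.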

Contrary to popular belief, derivatives are surprisingly simple to understand and compute. In this talk, we will see how to implement automatic differentiation in Kotlin using functional programming, and explore some applications in physical simulation, machine learning and automated testing. No prior experience or mathematical background is assumed or required.</description>
      <media:content url="https://files.speakerdeck.com/presentations/7a0778c261a84a72908c03f7a1e30d46/preview_slide_0.jpg?16311328" type="image/jpeg" medium="image"/>
      <content:encoded>Differentiation tells us how to make a small change to the inputs of a function, so as to produce the largest change in output. At first, this idea may not seem very important for software engineers, but differential equations can be found at the heart of every other engineering discipline and nearly every major contribution to the physical sciences in the last three centuries. As digital computers begin to interface with the physical world, derivatives will begin to play an increasingly significant role in computing.

Contrary to popular belief, derivatives are surprisingly simple to understand and compute. In this talk, we will see how to implement automatic differentiation in Kotlin using functional programming, and explore some applications in physical simulation, machine learning and automated testing. No prior experience or mathematical background is assumed or required.</content:encoded>
      <pubDate>Sat, 07 Dec 2019 00:00:00 -0500</pubDate>
      <link>https://speakerdeck.com/breandan/derivatives-important-concept-simple-to-grasp-in-kotlin</link>
      <guid>https://speakerdeck.com/breandan/derivatives-important-concept-simple-to-grasp-in-kotlin</guid>
    </item>
    <item>
      <title>The intertwined quest for understanding biological intelligence  and creating artificial intelligence</title>
      <description>In “The intertwined quest for understanding biological intelligence and creating artificial intelligence,” Surya Ganguli maps out his vision for a new research program that seeks to “unify the disciplines of neuroscience, psychology, cognitive science and AI.” He points to a handful of clues at the confluence of these disciplines and hints at untapped sources of insight waiting to be discovered by the observant explorer. Next Monday, at the Cognitively Informed Reading Group, we will survey some places where past treasure was found, such as temporal difference learning, wake-sleep, variational methods, memory networks and world models. We will then visit two outposts on the frontiers of computational neuroscience and social psychology where some strange new patterns are emerging... Join us on _Monday, February 11th at 11:30am in A.14_ to take part in this quest.

Required Reading

"The intertwined quest for understanding biological intelligence and creating artificial intelligence" (Ganguli, 2018):
https://hai.stanford.edu/news/the_intertwined_quest_for_understanding_biological_intelligence_and_creating_artificial_intelligence/

Suggested Reading

Assessing the Scalability of Biologically-Motivated Deep Learning Algorithms and Architectures (Bartunov et al., 2018): https://arxiv.org/pdf/1807.04587.pdf

Machine Theory of Mind (Rabinowitz et al., 2018): https://arxiv.org/pdf/1802.07740.pdf</description>
      <media:content url="https://files.speakerdeck.com/presentations/4b2e30893dfd43908d29d7bf8254d2c4/preview_slide_0.jpg?11801704" type="image/jpeg" medium="image"/>
      <content:encoded>In “The intertwined quest for understanding biological intelligence and creating artificial intelligence,” Surya Ganguli maps out his vision for a new research program that seeks to “unify the disciplines of neuroscience, psychology, cognitive science and AI.” He points to a handful of clues at the confluence of these disciplines and hints at untapped sources of insight waiting to be discovered by the observant explorer. Next Monday, at the Cognitively Informed Reading Group, we will survey some places where past treasure was found, such as temporal difference learning, wake-sleep, variational methods, memory networks and world models. We will then visit two outposts on the frontiers of computational neuroscience and social psychology where some strange new patterns are emerging... Join us on _Monday, February 11th at 11:30am in A.14_ to take part in this quest.

Required Reading

"The intertwined quest for understanding biological intelligence and creating artificial intelligence" (Ganguli, 2018):
https://hai.stanford.edu/news/the_intertwined_quest_for_understanding_biological_intelligence_and_creating_artificial_intelligence/

Suggested Reading

Assessing the Scalability of Biologically-Motivated Deep Learning Algorithms and Architectures (Bartunov et al., 2018): https://arxiv.org/pdf/1807.04587.pdf

Machine Theory of Mind (Rabinowitz et al., 2018): https://arxiv.org/pdf/1802.07740.pdf</content:encoded>
      <pubDate>Mon, 11 Feb 2019 00:00:00 -0500</pubDate>
      <link>https://speakerdeck.com/breandan/the-intertwined-quest-for-understanding-biological-intelligence-and-creating-artificial-intelligence</link>
      <guid>https://speakerdeck.com/breandan/the-intertwined-quest-for-understanding-biological-intelligence-and-creating-artificial-intelligence</guid>
    </item>
    <item>
      <title>Kotlin𝛁: Differentiable Functional Programming with Algebraic Data Types</title>
      <description>Kotlin is a multi-platform programming language with compiler support for JVM, JS and native targets. The language emphasizes static typing, null-safety and interoperability with Java and JavaScript. In this work, we present an algebraically grounded implementation of forward and reverse mode automatic differentiation written in pure Kotlin and a property-based test suite for soundness checking. Our approach enables users to target multiple platforms through a single codebase and receive compile-time static analysis. A working prototype is provided at: https://github.com/breandan/kotlingrad</description>
      <media:content url="https://files.speakerdeck.com/presentations/eed37ba4781b47fd8aeaaa8078ca5642/preview_slide_0.jpg?11801977" type="image/jpeg" medium="image"/>
      <content:encoded>Kotlin is a multi-platform programming language with compiler support for JVM, JS and native targets. The language emphasizes static typing, null-safety and interoperability with Java and JavaScript. In this work, we present an algebraically grounded implementation of forward and reverse mode automatic differentiation written in pure Kotlin and a property-based test suite for soundness checking. Our approach enables users to target multiple platforms through a single codebase and receive compile-time static analysis. A working prototype is provided at: https://github.com/breandan/kotlingrad</content:encoded>
      <pubDate>Tue, 15 Jan 2019 00:00:00 -0500</pubDate>
      <link>https://speakerdeck.com/breandan/kotlin-differentiable-functional-programming-with-algebraic-data-types</link>
      <guid>https://speakerdeck.com/breandan/kotlin-differentiable-functional-programming-with-algebraic-data-types</guid>
    </item>
    <item>
      <title>IROS Poster: Software Infrastructure for Autonomous Robotics</title>
      <description></description>
      <media:content url="https://files.speakerdeck.com/presentations/ec990a7eea804835a596d5c571ea1ae8/preview_slide_0.jpg?10916719" type="image/jpeg" medium="image"/>
      <content:encoded></content:encoded>
      <pubDate>Sat, 06 Oct 2018 00:00:00 -0400</pubDate>
      <link>https://speakerdeck.com/breandan/iros-poster-software-infrastructure-for-autonomous-robotics</link>
      <guid>https://speakerdeck.com/breandan/iros-poster-software-infrastructure-for-autonomous-robotics</guid>
    </item>
    <item>
      <title>Duckietown: Software Infrastructure for Autonomous Robotics</title>
      <description>Duckietown is an international research and education platform for autonomous vehicles, incorporating a physical and a virtual town populated with miniature autonomous cars. In this tutorial, we’ll take a grand tour of Duckietown and explore how machine learning, simulators and container technology are transforming robotics development. Attendees will learn how to train a self-driving vehicle using the Duckietown Gym environment and a little help from ROS.</description>
      <media:content url="https://files.speakerdeck.com/presentations/67a2b7eef55946d9a753c4fa65d6f3e6/preview_slide_0.jpg?10878185" type="image/jpeg" medium="image"/>
      <content:encoded>Duckietown is an international research and education platform for autonomous vehicles, incorporating a physical and a virtual town populated with miniature autonomous cars. In this tutorial, we’ll take a grand tour of Duckietown and explore how machine learning, simulators and container technology are transforming robotics development. Attendees will learn how to train a self-driving vehicle using the Duckietown Gym environment and a little help from ROS.</content:encoded>
      <pubDate>Mon, 01 Oct 2018 00:00:00 -0400</pubDate>
      <link>https://speakerdeck.com/breandan/duckietown-software-infrastructure-for-autonomous-robotics</link>
      <guid>https://speakerdeck.com/breandan/duckietown-software-infrastructure-for-autonomous-robotics</guid>
    </item>
    <item>
      <title>ROS + Docker</title>
      <description></description>
      <media:content url="https://files.speakerdeck.com/presentations/c3383c1862184c9fa38dde7698b4e6a3/preview_slide_0.jpg?13919391" type="image/jpeg" medium="image"/>
      <content:encoded></content:encoded>
      <pubDate>Wed, 18 Apr 2018 00:00:00 -0400</pubDate>
      <link>https://speakerdeck.com/breandan/ros-plus-docker</link>
      <guid>https://speakerdeck.com/breandan/ros-plus-docker</guid>
    </item>
    <item>
      <title>Deep Learning on Java [JavaDay Tokyo 2017]</title>
      <description>Machine learning has made remarkable progress in a wide range of real-world applications, including computer vision, speech recognition, and natural language processing. In Java, new Spark-based libraries such as Deeplearning4j (DL4J) make it possible to apply these techniques to large data sets.

In this session, you will learn the fundamental building blocks of machine learning, including gradient descent, backpropagation, and model training and evaluation. We will give an overview of deep learning, including how to build supervised learning models. No prior experience with machine learning is required. We hope this session will offer hints on how to develop custom models that uncover new insights and unknown patterns in big data.</description>
      <media:content url="https://files.speakerdeck.com/presentations/73c90234a7dc49348e53f7d10d23fa51/preview_slide_0.jpg?7977391" type="image/jpeg" medium="image"/>
      <content:encoded>Machine learning has made remarkable progress in a wide range of real-world applications, including computer vision, speech recognition, and natural language processing. In Java, new Spark-based libraries such as Deeplearning4j (DL4J) make it possible to apply these techniques to large data sets.

In this session, you will learn the fundamental building blocks of machine learning, including gradient descent, backpropagation, and model training and evaluation. We will give an overview of deep learning, including how to build supervised learning models. No prior experience with machine learning is required. We hope this session will offer hints on how to develop custom models that uncover new insights and unknown patterns in big data.</content:encoded>
      <pubDate>Tue, 16 May 2017 00:00:00 -0400</pubDate>
      <link>https://speakerdeck.com/breandan/deep-learning-on-java-javaday-tokyo-2017</link>
      <guid>https://speakerdeck.com/breandan/deep-learning-on-java-javaday-tokyo-2017</guid>
    </item>
    <item>
      <title>Deep Learning for Data Scientists</title>
      <description>Neural networks have seen renewed interest from data scientists and machine learning researchers for their ability to accurately classify high-dimensional data, including images, sounds and text. In this session we will discuss the fundamental algorithms behind neural networks, such as back-propagation and gradient descent. We will develop an intuition for how to train a deep neural network using large data sets. We will then use the algorithms we have developed to train a simple handwritten digit recognizer, and illustrate how to generalize the same technique to larger images using convolutional neural networks. In the second and final part of this presentation, we will show you how to apply the same algorithms using Keras and TensorFlow, Python libraries for deep learning on large datasets. Attendees will learn how to implement a simple neural network, monitor its training progress and test its accuracy over time. Prior experience with Python and some basic algebra is a prerequisite.</description>
      <media:content url="https://files.speakerdeck.com/presentations/53e8317a0e444547b6287b15785300f4/preview_slide_0.jpg?7898792" type="image/jpeg" medium="image"/>
      <content:encoded>Neural networks have seen renewed interest from data scientists and machine learning researchers for their ability to accurately classify high-dimensional data, including images, sounds and text. In this session we will discuss the fundamental algorithms behind neural networks, such as back-propagation and gradient descent. We will develop an intuition for how to train a deep neural network using large data sets. We will then use the algorithms we have developed to train a simple handwritten digit recognizer, and illustrate how to generalize the same technique to larger images using convolutional neural networks. In the second and final part of this presentation, we will show you how to apply the same algorithms using Keras and TensorFlow, Python libraries for deep learning on large datasets. Attendees will learn how to implement a simple neural network, monitor its training progress and test its accuracy over time. Prior experience with Python and some basic algebra is a prerequisite.</content:encoded>
      <pubDate>Fri, 28 Apr 2017 00:00:00 -0400</pubDate>
      <link>https://speakerdeck.com/breandan/deep-learning-for-data-scientists</link>
      <guid>https://speakerdeck.com/breandan/deep-learning-for-data-scientists</guid>
    </item>
    <item>
      <title>Building Developer Tools with Kotlin and Gradle</title>
      <description>Kotlin is a new JVM language that offers increased null- and type-safety, support for building custom DSLs, and many functional idioms from Scala and Groovy. But one of Kotlin's main advantages is its tooling support. With Gradle, automating complex builds is now possible in Kotlin, letting you write tools and build logic in the same language. In this session, I will demonstrate how I rebuilt and maintain AceJump, a popular plugin for the IntelliJ Platform, using Kotlin and Gradle. We will discuss strategies for converting your existing Java code to Kotlin, how to configure Gradle to use Kotlin build scripts, and a Gradle plugin for building the IDE. This session will also cover the IntelliJ Platform SDK, a set of APIs that lets you build smarter developer tools on top of the same platform that powers IntelliJ IDEA and its cousins. Attendees should have some basic familiarity with either Kotlin or Gradle in order to benefit from this session.</description>
      <media:content url="https://files.speakerdeck.com/presentations/1366ff3273144b6ba3718ac140cbffbc/preview_slide_0.jpg?7883246" type="image/jpeg" medium="image"/>
      <content:encoded>Kotlin is a new JVM language that offers increased null- and type-safety, support for building custom DSLs, and many functional idioms from Scala and Groovy. But one of Kotlin's main advantages is its tooling support. With Gradle, automating complex builds is now possible in Kotlin, letting you write tools and build logic in the same language. In this session, I will demonstrate how I rebuilt and maintain AceJump, a popular plugin for the IntelliJ Platform, using Kotlin and Gradle. We will discuss strategies for converting your existing Java code to Kotlin, how to configure Gradle to use Kotlin build scripts, and a Gradle plugin for building the IDE. This session will also cover the IntelliJ Platform SDK, a set of APIs that lets you build smarter developer tools on top of the same platform that powers IntelliJ IDEA and its cousins. Attendees should have some basic familiarity with either Kotlin or Gradle in order to benefit from this session.</content:encoded>
      <pubDate>Wed, 26 Apr 2017 00:00:00 -0400</pubDate>
      <link>https://speakerdeck.com/breandan/building-developer-tools-with-kotlin-and-gradle</link>
      <guid>https://speakerdeck.com/breandan/building-developer-tools-with-kotlin-and-gradle</guid>
    </item>
  </channel>
</rss>
