<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Giorgio Boa</title>
    <description>The latest articles on DEV Community by Giorgio Boa (@gioboa).</description>
    <link>https://dev.to/gioboa</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F892161%2Ff7bb8d77-6568-4576-b4e5-715d424afabd.jpeg</url>
      <title>DEV Community: Giorgio Boa</title>
      <link>https://dev.to/gioboa</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/gioboa"/>
    <language>en</language>
    <item>
      <title>Reason, Act, Remember: Advanced AI with Microsoft Agent Framework</title>
      <dc:creator>Giorgio Boa</dc:creator>
      <pubDate>Thu, 19 Feb 2026 09:15:01 +0000</pubDate>
      <link>https://dev.to/gioboa/reason-act-remember-advanced-ai-with-microsoft-agent-framework-1p80</link>
      <guid>https://dev.to/gioboa/reason-act-remember-advanced-ai-with-microsoft-agent-framework-1p80</guid>
      <description>&lt;p&gt;Artificial Intelligence has moved beyond simple chatbots and into the realm of intelligent agents. These agents are not just passive responders; they are capable of reasoning, acting, and remembering. The &lt;a href="https://learn.microsoft.com/en-us/agent-framework/get-started/" rel="noopener noreferrer"&gt;Microsoft Agent Framework&lt;/a&gt; provides a comprehensive set of tools and libraries that empower developers to build these sophisticated AI systems.&lt;/p&gt;

&lt;p&gt;The first step in this journey is installing the dependency.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;agent-framework &lt;span class="nt"&gt;--pre&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Now that we have the right package, let's start by understanding what an agent actually is. In the simplest terms, an agent is a piece of software that uses a Large Language Model (LLM) to understand and generate human language. &lt;/p&gt;

&lt;p&gt;The framework makes it incredibly easy to bring such an entity to life. With just a few lines of code, a developer can connect to Azure's powerful AI infrastructure. You can give your agent a name and a set of instructions. For instance, you might tell it to be a friendly assistant or a technical expert. The framework handles all the complex communication with the AI model, allowing you to focus on what you want the agent to do.&lt;/p&gt;

&lt;p&gt;You can interact with this agent in two ways: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;ask a question and wait for the complete answer, which is great for short queries&lt;/li&gt;
&lt;li&gt;use streaming for longer responses, where the agent "types" the answer out in real time, making the experience feel much more interactive and responsive, just like a human typing a message.&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;agent_framework.azure&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;AzureOpenAIResponsesClient&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;azure.identity&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;AzureCliCredential&lt;/span&gt;

&lt;span class="c1"&gt;# Create an agent
&lt;/span&gt;&lt;span class="n"&gt;credential&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;AzureCliCredential&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;AzureOpenAIResponsesClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;project_endpoint&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;AZURE_AI_PROJECT_ENDPOINT&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;deployment_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;environ&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
    &lt;span class="n"&gt;credential&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;credential&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;as_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;HelloAgent&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;instructions&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are a friendly assistant. Keep your answers brief.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Run agent (non-streaming)
&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What is the capital of France?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Agent: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Run agent (streaming)
&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Agent (streaming): &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;end&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;flush&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;chunk&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Tell me a one-sentence fun fact.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;stream&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;chunk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;chunk&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;end&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;""&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;flush&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Expanding Capabilities with Tools
&lt;/h2&gt;

&lt;p&gt;However, an AI that can only talk is limited. The real magic happens when you give your agent the ability to interact with the world. This is where the concept of "Tools" comes into play. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Imagine you want your agent to know the current weather. A standard language model only knows what it was trained on and cannot know if it is raining outside right now. You can define standard Python functions and attach them to your agent. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You could write a simple function that checks a weather API. When you ask the agent about the weather, it analyses your request and intelligently decides to call the function you provided. It then takes the result from that function and uses it to formulate a natural language response.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;typing&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Annotated&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;agent_framework&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;tool&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pydantic&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Field&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;random&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;randint&lt;/span&gt;

&lt;span class="nd"&gt;@tool&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;approval_mode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;never_require&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_weather&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;location&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;Annotated&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nc"&gt;Field&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;description&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;The location to get the weather for.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)],&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Get the weather for a given location.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;conditions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sunny&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;cloudy&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rainy&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;stormy&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;The weather in &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;location&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; is &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;conditions&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nf"&gt;randint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;)]&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; with a high of &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;randint&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;30&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;°C.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

&lt;span class="c1"&gt;# Create agent with tools
&lt;/span&gt;&lt;span class="n"&gt;agent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;as_agent&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;WeatherAgent&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;instructions&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;You are a helpful weather agent. Use the get_weather tool to answer questions.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;tools&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;get_weather&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;--&lt;/span&gt; &lt;span class="n"&gt;here&lt;/span&gt; &lt;span class="n"&gt;you&lt;/span&gt; &lt;span class="n"&gt;have&lt;/span&gt; &lt;span class="n"&gt;to&lt;/span&gt; &lt;span class="n"&gt;define&lt;/span&gt; &lt;span class="n"&gt;the&lt;/span&gt; &lt;span class="n"&gt;tool&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;s the weather like in Seattle?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Agent: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  Multi-Turn Interactions with Sessions
&lt;/h2&gt;

&lt;p&gt;If you tell an agent your name at the start of a chat, you expect it to remember that name five minutes later. The framework solves this through "Sessions". &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A session acts as a container for the conversation history. When you talk to the agent within a session, the framework automatically keeps track of what has been said. &lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This allows for multi-turn conversations where the agent maintains context, creating an experience that feels personal and coherent rather than robotic.&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Create a session to maintain conversation history
&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create_session&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="c1"&gt;# First turn
&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;My name is Alice and I love hiking.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Agent: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Second turn — the agent should remember the user's name and hobby
&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;What do you remember about me?&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;session&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Agent: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;




&lt;p&gt;The Microsoft Agent Framework provides a comprehensive toolkit for building intelligent AI agents that reason, act, and remember. Key features include core agent interaction via LLMs, dynamic tool integration for real-world tasks, and session-based conversation persistence. With these building blocks, creating an AI agent becomes both accessible and powerful.&lt;/p&gt;



&lt;p&gt;You can &lt;a href="https://github.com/gioboa" rel="noopener noreferrer"&gt;follow me on GitHub&lt;/a&gt;, where I'm creating cool projects.&lt;/p&gt;

&lt;p&gt;I hope you enjoyed this article, until next time 👋&lt;/p&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div class="ltag__user ltag__user__id__892161"&gt;
    &lt;a href="/gioboa" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F892161%2Ff7bb8d77-6568-4576-b4e5-715d424afabd.jpeg" alt="gioboa image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/gioboa"&gt;Giorgio Boa&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/gioboa"&gt;Giorgio Boa is a full stack developer and the front-end ecosystem is his passion. He is also international public speaker, active in open source ecosystem, he loves learn and studies new things.&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;





</description>
      <category>microsoft</category>
      <category>ai</category>
      <category>python</category>
      <category>programming</category>
    </item>
    <item>
      <title>Copilot: Secret Tip to Troubleshooting Your GitHub Actions</title>
      <dc:creator>Giorgio Boa</dc:creator>
      <pubDate>Thu, 12 Feb 2026 08:16:04 +0000</pubDate>
      <link>https://dev.to/gioboa/copilot-secret-tip-to-troubleshooting-your-github-actions-30cd</link>
      <guid>https://dev.to/gioboa/copilot-secret-tip-to-troubleshooting-your-github-actions-30cd</guid>
      <description>&lt;p&gt;GitHub Actions have become an indispensable tool for automating software development workflows, enabling continuous integration and delivery directly within your repositories. However, even the most meticulously crafted pipelines can encounter issues, leading to frustrating debugging sessions. This is where GitHub Copilot emerges as a powerful assistant, offering intelligent insights and streamlining the troubleshooting process for your GitHub Actions workflows.&lt;/p&gt;

&lt;p&gt;Traditionally, pinpointing the root cause of a failed GitHub Actions run involves sifting through extensive logs, meticulously checking syntax, and understanding the intricate interplay of various actions and conditions. This can be time-consuming, especially for intricate workflows or those managed by multiple contributors. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Copilot aims to transform this experience, acting as an intelligent partner that can quickly analyse failures and suggest solutions.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Here is an example:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1n0g9s7dwm9fmpivslbn.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1n0g9s7dwm9fmpivslbn.gif" alt="Copilot-helper"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of Copilot's most compelling features for GitHub Actions troubleshooting is its ability to provide immediate explanations for failed checks. You no longer need to manually navigate through layers of logs to understand what went wrong. With a simple click on the "Explain error" option next to a failed check in the merge box or on the workflow run summary page, Copilot springs into action. It opens a chat window, where it will analyze the context of the failure and offer actionable instructions to resolve the issue. This direct and contextual guidance significantly reduces the time and effort required to diagnose problems.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This kind of intelligent assistance can be invaluable for understanding the flow of your workflow and identifying logic errors more efficiently.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;p&gt;It acts as an intelligent accelerator, helping you navigate complex scenarios and quickly pinpoint solutions. By leveraging Copilot as a helper in your GitHub Actions pipelines you can spend more time building and less time debugging.&lt;/p&gt;




&lt;p&gt;You can &lt;a href="https://github.com/gioboa" rel="noopener noreferrer"&gt;follow me on GitHub&lt;/a&gt;, where I'm creating cool projects.&lt;/p&gt;

&lt;p&gt;I hope you enjoyed this article, until next time 👋&lt;/p&gt;





</description>
      <category>microsoft</category>
      <category>github</category>
      <category>githubcopilot</category>
      <category>ai</category>
    </item>
    <item>
      <title>Hooks are here: Now you can intercept and direct the path of the Gemini CLI</title>
      <dc:creator>Giorgio Boa</dc:creator>
      <pubDate>Thu, 29 Jan 2026 09:11:47 +0000</pubDate>
      <link>https://dev.to/gioboa/hooks-are-here-now-you-can-intercept-and-direct-the-path-of-the-gemini-cli-oeh</link>
      <guid>https://dev.to/gioboa/hooks-are-here-now-you-can-intercept-and-direct-the-path-of-the-gemini-cli-oeh</guid>
      <description>&lt;p&gt;The &lt;a href="https://geminicli.com/" rel="noopener noreferrer"&gt;Gemini Command Line Interface&lt;/a&gt; (CLI) is a powerful tool, and with the introduction of its "hooks" feature, users can now significantly extend and customise its behaviour without ever touching the core source code. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Hooks are essentially scripts or programs that the Gemini CLI executes at specific, predefined points within its agentic loop, offering unparalleled control and flexibility.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Use Cases
&lt;/h2&gt;

&lt;p&gt;Imagine being able to inject crucial context before your AI model even sees a request, validate potentially risky actions before they execute, or even enforce company-wide compliance policies directly within your development workflow. This is precisely the power that Gemini CLI hooks bring to the table. By running synchronously as part of the agent loop, hooks ensure that the CLI waits for their completion before proceeding, guaranteeing that your custom logic is always applied.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started
&lt;/h2&gt;

&lt;p&gt;Hooks communicate with the CLI via strict JSON requirements over &lt;code&gt;stdin&lt;/code&gt; and &lt;code&gt;stdout&lt;/code&gt; – the "Golden Rule" dictates that your script must only print the final JSON object to &lt;code&gt;stdout&lt;/code&gt;, reserving &lt;code&gt;stderr&lt;/code&gt; for all debugging and logging. Exit codes play a crucial role, with &lt;code&gt;0&lt;/code&gt; signifying success and allowing &lt;code&gt;stdout&lt;/code&gt; to be parsed, while &lt;code&gt;2&lt;/code&gt; indicates a "System Block," aborting the target action and using &lt;code&gt;stderr&lt;/code&gt; as the rejection reason. Hooks can also be selectively triggered using "matchers," which are regular expressions for tool events and exact strings for lifecycle events.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real Case Scenario
&lt;/h2&gt;

&lt;p&gt;Consider a scenario where you want to prevent the &lt;code&gt;delete_file&lt;/code&gt; tool from being executed if a specific environment variable isn't set. You could configure a hook with a matcher for &lt;code&gt;delete_file&lt;/code&gt;. This hook script would check for the environment variable. If it's missing, the script would print a JSON object like &lt;code&gt;{"decision": "deny", "reason": "Environment variable 'ALLOW_DELETIONS' not set."}&lt;/code&gt; to &lt;code&gt;stdout&lt;/code&gt; and exit with code &lt;code&gt;0&lt;/code&gt;. The Gemini CLI would then parse this and block the delete operation.&lt;/p&gt;
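
&lt;p&gt;A minimal sketch of such a hook script might look like this (the exact shape of the event payload on &lt;code&gt;stdin&lt;/code&gt; and the decision schema should be verified against the official hooks documentation):&lt;/p&gt;

```python
import json
import os
import sys


def decide(env: dict) -> dict:
    """Return the hook's JSON decision for a delete_file call (sketch)."""
    if env.get("ALLOW_DELETIONS") != "true":
        return {
            "decision": "deny",
            "reason": "Environment variable 'ALLOW_DELETIONS' not set.",
        }
    return {"decision": "allow"}


if __name__ == "__main__":
    _event = json.load(sys.stdin)  # tool-call payload sent by the Gemini CLI
    # Golden Rule: logging goes to stderr, only the JSON object goes to stdout.
    print("checking ALLOW_DELETIONS", file=sys.stderr)
    json.dump(decide(os.environ), sys.stdout)
    sys.exit(0)  # exit 0 so the CLI parses stdout and honours the decision
```

&lt;p&gt;Keeping the decision logic in a plain function makes the script easy to test without the CLI in the loop.&lt;/p&gt;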

&lt;h2&gt;
  
  
  Configuration
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://geminicli.com/docs/hooks/" rel="noopener noreferrer"&gt;Here is&lt;/a&gt; the official announcement with all the possibile configurations.&lt;/p&gt;

&lt;p&gt;The configuration is handled in &lt;code&gt;settings.json&lt;/code&gt;, with a clear hierarchy of precedence from project to system settings. This allows for granular control, from project-specific security checks to global compliance policies. The configuration schema defines fields like &lt;code&gt;type&lt;/code&gt; (currently only "command" is supported), the &lt;code&gt;command&lt;/code&gt; to execute, a friendly &lt;code&gt;name&lt;/code&gt;, and an optional &lt;code&gt;timeout&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;e.g.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;{
  "hooks": {
    "BeforeTool": [
      {
        "matcher": "write_file|replace",
        "hooks": [
          {
            "name": "security-check",
            "type": "command",
            "command": "$GEMINI_PROJECT_DIR/.gemini/hooks/security.sh",
            "timeout": 5000
          }
        ]
      }
    ]
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;blockquote&gt;
&lt;p&gt;Hooks execute arbitrary code with your user privileges. Therefore, caution is advised, especially with project-level hooks in untrusted projects.&lt;/p&gt;
&lt;/blockquote&gt;



&lt;p&gt;Mastering these hooks means gaining control over your AI interactions, ensuring that the Gemini CLI operates exactly as you intend, securely and efficiently, across all your projects.&lt;/p&gt;



&lt;p&gt;You can &lt;a href="https://github.com/gioboa" rel="noopener noreferrer"&gt;follow me on GitHub&lt;/a&gt;, where I'm creating cool projects.&lt;/p&gt;

&lt;p&gt;I hope you enjoyed this article, until next time 👋&lt;/p&gt;






</description>
      <category>ai</category>
      <category>gemini</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>Senior Developer's Secret: AI-Driven Iteration with VSCode, Cline &amp; Playwright</title>
      <dc:creator>Giorgio Boa</dc:creator>
      <pubDate>Mon, 26 Jan 2026 14:23:09 +0000</pubDate>
      <link>https://dev.to/gioboa/senior-developers-secret-ai-driven-iteration-with-vscode-cline-playwright-38ee</link>
      <guid>https://dev.to/gioboa/senior-developers-secret-ai-driven-iteration-with-vscode-cline-playwright-38ee</guid>
      <description>&lt;p&gt;The integration of &lt;a href="https://code.visualstudio.com/" rel="noopener noreferrer"&gt;VSCode&lt;/a&gt;, &lt;a href="https://cline.bot/" rel="noopener noreferrer"&gt;Cline&lt;/a&gt;, and the &lt;a href="https://playwright.dev/" rel="noopener noreferrer"&gt;Playwright&lt;/a&gt; Model Context Protocol (MCP) Server marks a significant leap forward in frontend application development, testing, and refinement. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This powerful combination empowers developers to iterate with browsers more effectively, streamlining workflows and accelerating the delivery of high-quality web experiences.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  The New Frontend Development Workflow
&lt;/h2&gt;

&lt;p&gt;This integration creates a seamless and highly efficient feedback loop for frontend developers. With Cline embedded in VSCode, developers can leverage AI to generate code snippets, refactor existing code, and understand complex codebases. Cline can even execute commands directly in the terminal, adapting to the development environment and toolchain. This dramatically speeds up the initial coding phase and helps maintain code quality.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi19vm1sd0bx44vwzzkn1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi19vm1sd0bx44vwzzkn1.png" alt="Cline-integration"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The Playwright MCP Server is the bridge that connects the AI capabilities of Cline (or other LLMs) with actual browser environments. It exposes Playwright's browser automation functionalities, such as navigating pages, clicking elements, and filling forms, as structured tools that an AI can utilize. Instead of manually scripting every interaction, developers can use natural language prompts to instruct the AI to perform complex browser actions.&lt;/p&gt;
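&lt;p&gt;Wiring this up is lightweight. Registering the Playwright MCP Server in Cline's MCP settings is typically a small JSON entry like the following (the package name and version tag may vary with releases):&lt;/p&gt;

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```

&lt;p&gt;Once registered, Cline can call the server's browser tools (navigate, click, fill) in response to natural language prompts.&lt;/p&gt;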

&lt;blockquote&gt;
&lt;p&gt;This is where the integration truly shines. The AI, powered by Playwright MCP, can open a browser and validate changes made to the application.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8cynzx73dd18xy8r4so.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fa8cynzx73dd18xy8r4so.png" alt="MCP"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Developers can ask the AI to verify specific UI behaviors, generate automated tests based on natural language descriptions, and even suggest fixes for failing tests. Playwright's inherent capabilities, like auto-waiting for elements, web-first assertions, and detailed tracing, further enhance the reliability and debuggability of these AI-driven tests.&lt;/p&gt;

&lt;p&gt;Cline can explore the application, suggest new test cases, and automatically generate tests, significantly reducing the manual effort in quality assurance. This rapid iteration cycle means that frontend issues can be identified and resolved much faster, leading to a more robust and polished user experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real Use Case
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs7bagfuqs8iovz5muhhw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs7bagfuqs8iovz5muhhw.png" alt="Carousel-Component"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I stumbled upon a fantastic UI component on the web and, without missing a beat, grabbed a screenshot. Then, the magic happened. I pasted that image directly into Cline.&lt;br&gt;
What followed was an incredible display of AI power.&lt;br&gt;
Through just a few iterative "AI loops" within Cline, the platform intelligently analysed the screenshot, understood its structure and styling, and then, almost effortlessly, translated it into functional code. After a few minutes, that stylish component was no longer just an image on my screen but a live, interactive part of my own application. This seamless transition from a visual idea to a working element, all powered by AI, truly felt like the future of development.&lt;/p&gt;



&lt;p&gt;In conclusion, the sophisticated integration of VSCode, Cline, and the Playwright MCP Server is transforming the landscape of frontend development. Developers can now build, test, and refine frontend applications with unprecedented speed and accuracy, thanks to a new era of intelligent, efficient, and highly effective workflows, leading to superior web experiences for users.&lt;/p&gt;



&lt;p&gt;You can &lt;a href="https://github.com/gioboa" rel="noopener noreferrer"&gt;follow me on GitHub&lt;/a&gt;, where I'm creating cool projects.&lt;/p&gt;

&lt;p&gt;I hope you enjoyed this article, until next time 👋&lt;/p&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div class="ltag__user ltag__user__id__892161"&gt;
    &lt;a href="/gioboa" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F892161%2Ff7bb8d77-6568-4576-b4e5-715d424afabd.jpeg" alt="gioboa image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/gioboa"&gt;Giorgio Boa&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/gioboa"&gt;Giorgio Boa is a full stack developer and the front-end ecosystem is his passion. He is also international public speaker, active in open source ecosystem, he loves learn and studies new things.&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;




</description>
      <category>microsoft</category>
      <category>playwright</category>
      <category>mcp</category>
      <category>ai</category>
    </item>
    <item>
      <title>Agent Flows At Scale with Google’s ADK for TypeScript</title>
      <dc:creator>Giorgio Boa</dc:creator>
      <pubDate>Sat, 20 Dec 2025 22:08:47 +0000</pubDate>
      <link>https://dev.to/gioboa/agent-flows-at-scale-with-googles-adk-for-typescript-b8k</link>
      <guid>https://dev.to/gioboa/agent-flows-at-scale-with-googles-adk-for-typescript-b8k</guid>
      <description>&lt;p&gt;The landscape of Artificial Intelligence is undergoing a seismic shift. We are moving rapidly beyond simple, single-purpose chatbots toward autonomous, intelligent multi-agent systems capable of complex reasoning and task orchestration.&lt;/p&gt;

&lt;p&gt;To empower developers in this new era, Google has officially introduced the &lt;a href="https://developers.googleblog.com/introducing-agent-development-kit-for-typescript-build-ai-agents-with-the-power-of-a-code-first-approach/" rel="noopener noreferrer"&gt;Agent Development Kit (ADK) for TypeScript&lt;/a&gt;. This open-source framework marks a pivotal moment for the JavaScript ecosystem, bringing a strict "code-first" philosophy to AI development.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Code-First Revolution
&lt;/h2&gt;

&lt;p&gt;For too long, building AI agents felt like an exercise in abstract prompt engineering. The ADK changes this paradigm by allowing developers to define logic, tools, and orchestration directly in TypeScript. As highlighted in &lt;a href="https://developers.googleblog.com/introducing-agent-development-kit-for-typescript-build-ai-agents-with-the-power-of-a-code-first-approach/" rel="noopener noreferrer"&gt;Google’s December 2025 announcement&lt;/a&gt;, this approach enables engineers to apply standard software development best practices—such as version control, automated testing, and CI/CD integration—to their AI workflows.&lt;/p&gt;

&lt;p&gt;The framework offers end-to-end type safety, meaning developers can build their agent backend and application frontend in a cohesive language, drastically reducing integration errors. By utilising modular components like &lt;code&gt;Agents&lt;/code&gt;, &lt;code&gt;Instructions&lt;/code&gt;, and &lt;code&gt;Tools&lt;/code&gt;, the ADK transforms complex AI behaviours into clean, readable, and scalable code.&lt;/p&gt;

&lt;h2&gt;
  
  
  Case Study: The "Chef &amp;amp; Sommelier" Multi-Agent System
&lt;/h2&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;a href="https://github.com/gioboa/chef-agent-adk-typescript" rel="noopener noreferrer"&gt;Here&lt;/a&gt; you can find the working project, you only need to add your Gemini API Key.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;To demonstrate the power of the ADK, let’s explore a practical implementation: a hierarchical &lt;strong&gt;Chef Agent&lt;/strong&gt;. This project uses the latest &lt;a href="https://gemini.google.com/" rel="noopener noreferrer"&gt;Gemini 3&lt;/a&gt; model, &lt;code&gt;gemini-3-pro-preview&lt;/code&gt;, to create a culinary experience that goes beyond simple recipe generation.&lt;/p&gt;

&lt;p&gt;The project structure is clean and modular, a hallmark of the ADK methodology:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;chef-agent/
├── agent.ts (The Head Chef)
└── sommelier-agent/
    └── agent.ts (The Wine Expert)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  The Hierarchical Architecture
&lt;/h2&gt;

&lt;p&gt;The core of this application lies in &lt;code&gt;chef-agent/agent.ts&lt;/code&gt;. Here, the root agent is defined as the "Chef," whose instruction is to take a single ingredient input and generate a masterpiece dish, complete with a name, description, recipe, and plating instructions.&lt;/p&gt;

&lt;p&gt;However, the true power of the ADK is showcased in how it handles sub-agents. The Chef isn't working alone.&lt;/p&gt;

&lt;p&gt;The code explicitly defines a &lt;code&gt;subAgents&lt;/code&gt; array:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;LlmAgent&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@google/adk&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;sommelierAgent&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;./sommelier-agent/agent&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;rootAgent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;LlmAgent&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;chef_agent&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;gemini-3-pro-preview&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;A chef that creates amazing food based on a single ingredient.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;instruction&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;`You are a world-renowned Chef with a passion for creating culinary masterpieces.
Your specialty is taking a SINGLE INGREDIENT provided by the user and designing a complete, delicious, and amazing dish around it.

When you receive an input (which will be an ingredient):
1.  **Conceive a Dish**: Create a unique name for a dish highlighting that ingredient.
2.  **Description**: Write a mouth-watering description.
3.  **Recipe**: Provide a detailed recipe including:
    *   Ingredients list (quantities and items).
    *   Step-by-step cooking instructions.
4.  **Presentation**: Suggest how to plate the dish for maximum visual appeal.

Be enthusiastic, professional, and creative.
You also have a colleague, "sommelier_agent", who must suggest wine pairings for the dish you create.`&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;subAgents&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nx"&gt;sommelierAgent&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;This configuration tells the Chef agent that it has access to a colleague. When the Chef generates a recipe, it can automatically consult the &lt;code&gt;sommelier_agent&lt;/code&gt;, defined in &lt;code&gt;sommelier-agent/agent.ts&lt;/code&gt;. This sub-agent has a specific, narrow scope: suggesting the perfect wine pairing based on the flavour profile of the dish the Chef just created.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;sommelier_agent&lt;/code&gt; code:&lt;br&gt;
&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;LlmAgent&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;@google/adk&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;instruction&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;`You are an expert Sommelier.
Your goal is to suggest the perfect wine pairing for a given dish.
When provided with a dish name or description:
1. Suggest a specific type of wine (e.g., Cabernet Sauvignon, Chardonnay).
2. Explain why it pairs well with the dish (flavor profile, acidity, etc.).
3. Recommend a specific region if applicable.`&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;export&lt;/span&gt; &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;sommelierAgent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;LlmAgent&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;sommelier_agent&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;model&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;gemini-3-pro-preview&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;A sommelier that suggests wine pairings for a given dish.&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="nx"&gt;instruction&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;});&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h2&gt;
  
  
  The Developer Experience
&lt;/h2&gt;

&lt;p&gt;The project leverages the &lt;code&gt;@google/adk&lt;/code&gt; library and offers a seamless developer experience. Looking at the &lt;code&gt;package.json&lt;/code&gt;, we see built-in scripts that use the ADK devtools:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Terminal Mode:&lt;/strong&gt; &lt;code&gt;pnpm run run:terminal&lt;/code&gt; allows the developer to interact with the Chef directly in the command line for rapid testing.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F964sfd6npxmc9x7do5hs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F964sfd6npxmc9x7do5hs.png" alt="ADK Terminal"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt; &lt;strong&gt;Web Interface:&lt;/strong&gt; &lt;code&gt;pnpm run run:web&lt;/code&gt; launches a local server (&lt;code&gt;localhost:8000&lt;/code&gt;), providing a chat interface to visualize the interaction between the user, the Chef, and the hidden Sommelier sub-agent.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F11yappsus6y7xy2wsvcr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F11yappsus6y7xy2wsvcr.png" alt="ADK Web"&gt;&lt;/a&gt;&lt;/p&gt;



&lt;p&gt;The Chef Agent project perfectly illustrates why the ADK for TypeScript is a game-changer. It creates a structured environment where distinct AI personalities (the Chef and the Sommelier) interact through typed contracts rather than vague prompts.&lt;/p&gt;

&lt;p&gt;By combining the reasoning capabilities of the new Gemini 3 models with the reliability of TypeScript, Google’s ADK provides the foundation for the next generation of software—where code doesn't just execute commands, but thinks, creates, and collaborates.&lt;/p&gt;



&lt;p&gt;You can &lt;a href="https://github.com/gioboa" rel="noopener noreferrer"&gt;follow me on GitHub&lt;/a&gt;, where I'm creating cool projects.&lt;/p&gt;

&lt;p&gt;I hope you enjoyed this article, until next time 👋&lt;/p&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div class="ltag__user ltag__user__id__892161"&gt;
    &lt;a href="/gioboa" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F892161%2Ff7bb8d77-6568-4576-b4e5-715d424afabd.jpeg" alt="gioboa image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/gioboa"&gt;Giorgio Boa&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/gioboa"&gt;Giorgio Boa is a full stack developer and the front-end ecosystem is his passion. He is also international public speaker, active in open source ecosystem, he loves learn and studies new things.&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;





</description>
      <category>ai</category>
      <category>gemini</category>
      <category>programming</category>
      <category>typescript</category>
    </item>
    <item>
      <title>Agent Development in Hours, Not Weeks: Accelerating with Gemini 3 and Google ADK</title>
      <dc:creator>Giorgio Boa</dc:creator>
      <pubDate>Thu, 18 Dec 2025 18:13:04 +0000</pubDate>
      <link>https://dev.to/gioboa/agent-development-in-hours-not-weeks-accelerating-with-gemini-3-and-google-adk-4h0l</link>
      <guid>https://dev.to/gioboa/agent-development-in-hours-not-weeks-accelerating-with-gemini-3-and-google-adk-4h0l</guid>
      <description>&lt;p&gt;In the rapidly evolving landscape of artificial intelligence, the ability to iterate and deploy intelligent agents with speed is no longer a luxury – it's a critical competitive advantage. Traditionally, bringing a sophisticated AI agent from concept to production could take weeks, bogged down by complex model training, intricate API integrations, and the laborious process of building robust applications around the core AI.&lt;/p&gt;

&lt;p&gt;However, a revolutionary shift is underway, largely thanks to the potent synergy between &lt;strong&gt;&lt;a href="https://antigravity.google/" rel="noopener noreferrer"&gt;Gemini 3&lt;/a&gt;'s advanced capabilities&lt;/strong&gt; and the streamlined tooling provided by the &lt;a href="https://google.github.io/adk-docs/" rel="noopener noreferrer"&gt;Google Agent Development Kit (ADK)&lt;/a&gt;. This powerful combination isn't just about incremental improvements; it represents a paradigm shift, compressing development timelines from weeks of effort down to mere hours.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Powerhouse: Gemini 3
&lt;/h2&gt;

&lt;p&gt;At the heart of this acceleration lies Gemini 3, Google's next-generation multimodal AI model.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Gemini 3 is not just an incremental update; it's engineered for enhanced reasoning, complex task understanding, and superior performance across a vast array of modalities, including text, image, audio, and video.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Gemini 3 excels at understanding complex instructions and performing multi-step reasoning. This means less hand-holding and explicit programming for the agent's logic, as Gemini 3 can often infer intent and execute sophisticated workflows based on high-level prompts.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Accelerator: Google Agent Development Kit (ADK)
&lt;/h2&gt;

&lt;p&gt;While Gemini 3 provides the brain, the Google ADK provides the entire nervous system and skeletal structure, making it incredibly easy to connect that brain to the real world. The ADK is more than just a library; it's a comprehensive ecosystem designed specifically for rapid agent prototyping and deployment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Features of ADK
&lt;/h2&gt;

&lt;p&gt;The ADK offers a &lt;a href="https://github.com/google/adk-samples" rel="noopener noreferrer"&gt;rich collection&lt;/a&gt; of pre-built components, templates, and example agents that demonstrate common use cases. This allows developers to start from a solid foundation rather than building everything from scratch.&lt;/p&gt;

&lt;p&gt;The ADK provides seamless, abstracted interfaces for interacting with LLMs, allowing developers to focus on the agent's behaviour rather than the intricacies of API calls.&lt;/p&gt;

&lt;p&gt;It is designed to work hand-in-hand with &lt;a href="https://cloud.google.com/" rel="noopener noreferrer"&gt;Google Cloud&lt;/a&gt;'s robust infrastructure, offering simple paths to deploy agents as scalable, production-ready services.&lt;/p&gt;
&lt;h2&gt;
  
  
  Amazing Developer Experience
&lt;/h2&gt;

&lt;p&gt;Features like interactive playgrounds, debugging tools, and quick deployment options massively reduce the iteration cycle. Developers can test, refine, and redeploy their agents in minutes, not hours.&lt;/p&gt;

&lt;p&gt;With the ADK it is easy to orchestrate complex agent behaviours, managing state, memory, and interactions with various external tools and systems, e.g. logging tools and safety/security tools.&lt;/p&gt;



&lt;p&gt;In a world where AI innovation moves at an unprecedented pace, the combination of Gemini 3 and the Google ADK isn't just a development advantage; it's the new standard for building the intelligent agents of tomorrow, today. Start building and watch your project development time drop from weeks to hours.&lt;/p&gt;



&lt;p&gt;You can &lt;a href="https://github.com/gioboa" rel="noopener noreferrer"&gt;follow me on GitHub&lt;/a&gt;, where I'm creating cool projects.&lt;/p&gt;

&lt;p&gt;I hope you enjoyed this article, until next time 👋&lt;/p&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div class="ltag__user ltag__user__id__892161"&gt;
    &lt;a href="/gioboa" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F892161%2Ff7bb8d77-6568-4576-b4e5-715d424afabd.jpeg" alt="gioboa image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/gioboa"&gt;Giorgio Boa&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/gioboa"&gt;Giorgio Boa is a full stack developer and the front-end ecosystem is his passion. He is also international public speaker, active in open source ecosystem, he loves learn and studies new things.&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;




</description>
      <category>ai</category>
      <category>development</category>
      <category>programming</category>
      <category>agents</category>
    </item>
    <item>
      <title>Thinking, Planning, Executing: Gemini 3's Agentic Core in the Antigravity Sandbox</title>
      <dc:creator>Giorgio Boa</dc:creator>
      <pubDate>Thu, 18 Dec 2025 16:06:24 +0000</pubDate>
      <link>https://dev.to/gioboa/thinking-planning-executing-gemini-3s-agentic-core-in-the-antigravity-sandbox-1g9p</link>
      <guid>https://dev.to/gioboa/thinking-planning-executing-gemini-3s-agentic-core-in-the-antigravity-sandbox-1g9p</guid>
      <description>&lt;p&gt;In the ever-accelerating landscape of artificial intelligence, the promise of true agency, where systems not only perform tasks but proactively think, plan, and execute, has long been the holy grail. &lt;/p&gt;

&lt;p&gt;Today, we stand on the precipice of that realisation with the advent of &lt;a href="https://gemini.google.com/" rel="noopener noreferrer"&gt;Gemini 3&lt;/a&gt;’s revolutionary agentic core, particularly as its capabilities unfold within the controlled yet expansive environment of the Antigravity Sandbox.&lt;/p&gt;

&lt;p&gt;This isn't just about faster computation; it's about fundamentally reshaping how we approach complex problems, entrusting their lifecycle management from conceptualisation to completion to an intelligent system.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;At the heart of Gemini 3 lies a sophisticated architecture designed to mirror human cognitive processes, yet unbound by human limitations.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Its agentic core is not a collection of isolated functionalities, but rather an interconnected suite of modules enabling seamless transitions between ideation and action.&lt;/p&gt;

&lt;p&gt;This integration manifests in three critical phases: &lt;br&gt;
&lt;strong&gt;thinking, planning, and executing.&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;
  
  
  Thinking
&lt;/h2&gt;

&lt;p&gt;The "thinking" phase in a Gemini 3 agent is where the magic begins. When presented with a goal or a problem within the &lt;a href="https://antigravity.google/" rel="noopener noreferrer"&gt;Antigravity&lt;/a&gt; Sandbox, the agent doesn't immediately jump to the most obvious solution. Instead, it engages in a deep analysis of the context, drawing upon its vast knowledge base and learned patterns. This involves interpreting ambiguities, identifying underlying constraints, and even hypothesising potential causal relationships.&lt;/p&gt;
&lt;h2&gt;
  
  
  Planning
&lt;/h2&gt;

&lt;p&gt;This detailed understanding then seamlessly transitions into the "planning" phase. This isn't just about generating a linear sequence of steps; it's about dynamic, adaptive strategy formulation. Gemini 3 agents in the Antigravity Sandbox are capable of generating intricate workflow diagrams, complete with contingencies and alternative pathways. They anticipate potential roadblocks, perhaps a database dependency that might cause downtime, or a security policy that could bottleneck deployment. Crucially, the planning phase is iterative and self-correcting.&lt;/p&gt;
&lt;h2&gt;
  
  
  Executing
&lt;/h2&gt;

&lt;p&gt;Finally, we arrive at "execution", where the carefully crafted plans are brought to life. Within the Antigravity Sandbox, Gemini 3 agents aren't merely observers of the execution process: they leverage their ongoing monitoring capabilities to detect anomalies, analyse deviations against the planned state, and initiate an intelligent re-planning or recovery sequence.&lt;/p&gt;



&lt;p&gt;We are moving beyond simply &lt;em&gt;tooling&lt;/em&gt; intelligence, to truly &lt;em&gt;partnering&lt;/em&gt; with it. The threshold has been crossed; the landscape of possibility is now immeasurably wider, inviting us to dream with an intelligence that doesn't just assist, but truly, profoundly, leads the way.&lt;/p&gt;



&lt;p&gt;You can &lt;a href="https://github.com/gioboa" rel="noopener noreferrer"&gt;follow me on GitHub&lt;/a&gt;, where I'm creating cool projects.&lt;/p&gt;

&lt;p&gt;I hope you enjoyed this article, until next time 👋&lt;/p&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div class="ltag__user ltag__user__id__892161"&gt;
    &lt;a href="/gioboa" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F892161%2Ff7bb8d77-6568-4576-b4e5-715d424afabd.jpeg" alt="gioboa image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/gioboa"&gt;Giorgio Boa&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/gioboa"&gt;Giorgio Boa is a full stack developer and the front-end ecosystem is his passion. He is also international public speaker, active in open source ecosystem, he loves learn and studies new things.&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;




</description>
      <category>webdev</category>
      <category>programming</category>
      <category>ai</category>
      <category>development</category>
    </item>
    <item>
      <title>Antigravity for Developers: Lifting the Burden with Gemini 3 Agents</title>
      <dc:creator>Giorgio Boa</dc:creator>
      <pubDate>Tue, 16 Dec 2025 09:05:57 +0000</pubDate>
      <link>https://dev.to/gioboa/antigravity-for-developers-lifting-the-burden-with-gemini-3-agents-3o9c</link>
      <guid>https://dev.to/gioboa/antigravity-for-developers-lifting-the-burden-with-gemini-3-agents-3o9c</guid>
      <description>&lt;p&gt;The constant pressure to innovate and deliver faster is a familiar burden for every developer. Days are often consumed by repetitive tasks, debugging, and navigating complex workflows, leaving little room for the creative problem-solving that truly fuels progress.&lt;/p&gt;

&lt;p&gt;What if developers could experience a form of "antigravity," liberating them from these tedious constraints? The answer might lie in &lt;a href="https://gemini.google.com/" rel="noopener noreferrer"&gt;Gemini 3&lt;/a&gt; Agents.&lt;/p&gt;

&lt;p&gt;These intelligent agents, powered by the cutting-edge reasoning capabilities of Gemini 3, promise to revolutionise the development landscape by automating complex, multi-step tasks.&lt;/p&gt;

&lt;p&gt;They act as tireless assistants, capable of understanding intricate instructions and executing them with precision and efficiency. Imagine offloading tasks like code generation, bug fixing, documentation creation, and even complex refactoring to a dedicated agent. This is the potential unlocked by Gemini 3 Agents.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automate Impossible Tasks
&lt;/h2&gt;

&lt;p&gt;Unlike simple scripts that follow rigid instructions, Gemini 3 Agents can interpret natural language requests, reason about the underlying code, and dynamically adjust their strategies based on the context of the project. This level of intelligence allows them to tackle tasks that were previously impossible to automate, freeing up developers to focus on higher-level challenges.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The key to their transformative power lies in their ability to understand and adapt to the nuances of the development process.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  How do Gemini 3 Agents work in practice?
&lt;/h2&gt;

&lt;p&gt;Consider a scenario where a developer needs to implement a new feature. Instead of manually writing all the code, they can instruct a Gemini 3 Agent to generate the initial code structure, including necessary functions and classes, based on a brief description of the feature's requirements. The agent can then integrate this code into the existing codebase, ensuring compatibility and adherence to coding standards. Furthermore, it can even write unit tests to verify the functionality of the newly implemented feature.&lt;/p&gt;
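&lt;p&gt;To make this concrete, here is a hedged illustration (not actual agent output) of the kind of scaffold such an agent might produce for a small feature, together with a unit test that verifies it. The &lt;code&gt;apply_discount&lt;/code&gt; function and the discount codes are invented for the example.&lt;/p&gt;

```python
# Illustrative only: the kind of scaffold an agent might generate for a
# "validate discount codes" feature, plus a unit test to verify it.
import unittest


def apply_discount(price: float, code: str) -> float:
    """Return the price after applying a known discount code."""
    discounts = {"WELCOME10": 0.10, "SUMMER25": 0.25}
    rate = discounts.get(code.upper(), 0.0)
    return round(price * (1 - rate), 2)


class ApplyDiscountTest(unittest.TestCase):
    def test_known_code_is_case_insensitive(self):
        self.assertEqual(apply_discount(100.0, "welcome10"), 90.0)

    def test_unknown_code_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(100.0, "NOPE"), 100.0)
```

&lt;p&gt;Run the generated tests with &lt;code&gt;python -m unittest&lt;/code&gt;.&lt;/p&gt;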

&lt;blockquote&gt;
&lt;p&gt;The benefits extend beyond simple code generation. Gemini 3 Agents can also play a crucial role in debugging and error resolution.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;By analysing error logs and code snippets, they can identify potential causes of bugs and suggest solutions. They can even automatically implement fixes, significantly reducing the time spent on tedious debugging sessions.&lt;/p&gt;

&lt;h2&gt;
  
  
  The advantages
&lt;/h2&gt;

&lt;p&gt;Automating repetitive tasks frees up developers to focus on the more complex and creative aspects of their work, leading to a significant boost in overall productivity. These streamlined workflows and automated tasks contribute to faster development cycles, allowing teams to deliver projects more quickly. By removing the burden of tedious tasks, Gemini 3 Agents can improve developer satisfaction and create a more engaging and fulfilling work environment.&lt;/p&gt;




&lt;p&gt;The introduction of Gemini 3 Agents marks a paradigm shift in software development. It's &lt;strong&gt;not about replacing developers&lt;/strong&gt;, but about augmenting their capabilities and empowering them to achieve more. As these agents become more sophisticated and integrated into the development ecosystem, the potential for unlocking unprecedented levels of productivity and innovation in the software industry is truly limitless. The future of development is here, and it's lighter than ever before.&lt;/p&gt;




&lt;p&gt;You can &lt;a href="https://github.com/gioboa" rel="noopener noreferrer"&gt;follow me on GitHub&lt;/a&gt;, where I'm creating cool projects.&lt;/p&gt;

&lt;p&gt;I hope you enjoyed this article, until next time 👋&lt;/p&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div class="ltag__user ltag__user__id__892161"&gt;
    &lt;a href="/gioboa" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F892161%2Ff7bb8d77-6568-4576-b4e5-715d424afabd.jpeg" alt="gioboa image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/gioboa"&gt;Giorgio Boa&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/gioboa"&gt;Giorgio Boa is a full stack developer and the front-end ecosystem is his passion. He is also international public speaker, active in open source ecosystem, he loves learn and studies new things.&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;




</description>
      <category>gemini</category>
      <category>google</category>
      <category>ai</category>
      <category>development</category>
    </item>
    <item>
      <title>LLM as a Judge with Azure Foundry for Scalable Model Assessment</title>
      <dc:creator>Giorgio Boa</dc:creator>
      <pubDate>Fri, 12 Dec 2025 08:02:45 +0000</pubDate>
      <link>https://dev.to/gioboa/llm-as-a-judge-with-azure-foundry-for-scalable-model-assessment-443i</link>
      <guid>https://dev.to/gioboa/llm-as-a-judge-with-azure-foundry-for-scalable-model-assessment-443i</guid>
      <description>&lt;p&gt;The rapid advancements in large language models (LLMs) have ushered in an era of unprecedented innovation, but with it comes the critical challenge of effective model evaluation. Traditional methods often &lt;strong&gt;struggle with&lt;/strong&gt; the scale and nuance required to assess the &lt;strong&gt;complex outputs of LLMs&lt;/strong&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This is where &lt;strong&gt;LLM as a Judge&lt;/strong&gt; emerges as a transformative technique, leveraging the power of one LLM to evaluate the outputs of another.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When combined with the flexibility and control offered by platforms like &lt;a href="https://azure.microsoft.com/en-us/products/ai-foundry" rel="noopener noreferrer"&gt;Azure Foundry&lt;/a&gt;, this approach becomes an invaluable tool for developers, especially during the crucial testing phase.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is LLM as a Judge?
&lt;/h2&gt;

&lt;p&gt;At its core, LLM as a Judge involves using a sophisticated LLM to act as an automated evaluator for the responses generated by other LLMs. Instead of relying solely on human annotators, who can be costly and time-consuming, a "judge" LLM is given the original prompt, the generated response, and a set of clear instructions or a rubric.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It then assesses the response based on criteria such as accuracy, relevance, coherence, tone, and safety, providing a score, a label, or even a detailed critique.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This process automates a significant portion of the evaluation workflow, offering scalability and consistency that human review often struggles to match.&lt;/p&gt;
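&lt;p&gt;As a minimal sketch (plain Python, not the Azure Foundry SDK), the input handed to a judge model can be assembled like this; the rubric wording is an assumption for illustration.&lt;/p&gt;

```python
# Minimal sketch: assembling the input a "judge" LLM receives -- the
# original prompt, the candidate response, and an explicit rubric.
# The criteria list mirrors the ones named in the text above.

CRITERIA = ["accuracy", "relevance", "coherence", "tone", "safety"]


def build_judge_prompt(original_prompt: str, response: str) -> str:
    """Compose the evaluation instructions sent to the judge model."""
    rubric = "\n".join(f"- {c}" for c in CRITERIA)
    return (
        "You are an impartial evaluator. Score the response to the prompt "
        "below on each criterion from 1 to 5, then give a one-line verdict.\n\n"
        f"Criteria:\n{rubric}\n\n"
        f"Prompt:\n{original_prompt}\n\n"
        f"Response:\n{response}\n"
    )
```

&lt;p&gt;The resulting string is then sent to whichever judge model you have deployed.&lt;/p&gt;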

&lt;h2&gt;
  
  
  The Power of Azure Foundry for LLM as a Judge
&lt;/h2&gt;

&lt;p&gt;Azure Foundry significantly enhances the "LLM as a Judge" paradigm, particularly for testing and experimentation. One of its most compelling benefits is the ability to &lt;strong&gt;seamlessly switch between and compare different foundational models&lt;/strong&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This flexibility is paramount when you're using an LLM as a judge because the choice of the judge model itself can heavily influence the evaluation results.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Pros of LLM as a Judge in Testing
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Scalability:&lt;/strong&gt; Evaluate thousands of responses in minutes, making large-scale testing feasible.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Consistency:&lt;/strong&gt; Reduce the subjectivity inherent in human evaluations, ensuring more uniform assessments.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Speed:&lt;/strong&gt; Accelerate the feedback loop, allowing for faster iteration and improvement of models.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Cost-Effectiveness:&lt;/strong&gt; Significantly reduce the costs associated with manual review.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Benchmarking:&lt;/strong&gt; With Azure Foundry, objectively compare the performance of different models (or different versions of the same model) under various conditions.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Fine-tuning:&lt;/strong&gt; Provide targeted feedback that can be used for reinforcement learning from human feedback (RLHF) or other fine-tuning techniques.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Example: Comparing Two Answers for Sameness
&lt;/h2&gt;

&lt;p&gt;Let's say we have a question: "What is the capital of France?" and two LLMs provide answers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Answer 1 (Model X):&lt;/strong&gt; "Paris is the bustling capital and largest city of France, famous for its art, fashion, gastronomy, and culture."&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Answer 2 (Model Y):&lt;/strong&gt; "The capital city of France is Paris, a major European city and a global center for art and fashion."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;During testing, you want to verify if these two answers convey essentially the same factual information, even if phrased differently. You can use an LLM as a judge with a specific prompt:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Judge LLM Prompt:&lt;/strong&gt;&lt;br&gt;
You are a critical evaluator comparing two statements for factual equivalence. Your task is to determine if 'Statement A' and 'Statement B' convey the same core factual information, even if the wording differs. Rate their similarity on a scale of 1 to 5, where 1 means completely different information and 5 means essentially identical information. Explain your reasoning briefly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Statement A:&lt;/strong&gt; {Answer 1}&lt;br&gt;
&lt;strong&gt;Statement B:&lt;/strong&gt; {Answer 2}&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Judge LLM Output:&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Rating: 5&lt;/strong&gt;&lt;br&gt;
&lt;strong&gt;Reasoning:&lt;/strong&gt; Both statements clearly identify Paris as the capital of France and mention its significance in art and fashion. While they use slightly different descriptive words, the core factual information conveyed is identical.&lt;/p&gt;

&lt;p&gt;By running countless such comparisons through your judge LLM on Azure Foundry, you can quickly identify instances where models diverge in factual accuracy, consistency, or even subtle semantic meaning. If you then want to see if a different judge model (e.g., one specifically trained for semantic similarity) offers a different perspective, Azure Foundry makes that switch effortless.&lt;/p&gt;
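&lt;p&gt;To aggregate countless such verdicts automatically, the judge's reply can be post-processed. This small sketch assumes the judge answers in the "Rating: N" shape shown above.&lt;/p&gt;

```python
# Sketch of post-processing the judge's reply: extract the numeric
# "Rating:" so thousands of comparisons can be aggregated automatically.
# The "Rating: N" output shape is an assumption about the judge prompt.
import re
from typing import Optional


def extract_rating(judge_output: str) -> Optional[int]:
    """Return the 1-5 rating from a judge reply, or None if absent."""
    match = re.search(r"Rating:\s*(\d)", judge_output)
    return int(match.group(1)) if match else None
```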



&lt;p&gt;In conclusion, LLM as a Judge, particularly when powered by the versatile capabilities of Azure Foundry, is an indispensable tool for modern AI development. It offers a scalable, consistent, and highly flexible approach to model evaluation, transforming the testing and iteration process for LLM-powered applications.&lt;/p&gt;



&lt;p&gt;You can &lt;a href="https://github.com/gioboa" rel="noopener noreferrer"&gt;follow me on GitHub&lt;/a&gt;, where I'm creating cool projects.&lt;/p&gt;

&lt;p&gt;I hope you enjoyed this article, until next time 👋&lt;/p&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div class="ltag__user ltag__user__id__892161"&gt;
    &lt;a href="/gioboa" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F892161%2Ff7bb8d77-6568-4576-b4e5-715d424afabd.jpeg" alt="gioboa image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/gioboa"&gt;Giorgio Boa&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/gioboa"&gt;Giorgio Boa is a full stack developer and the front-end ecosystem is his passion. He is also international public speaker, active in open source ecosystem, he loves learn and studies new things.&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;




</description>
      <category>azure</category>
      <category>microsoft</category>
      <category>ai</category>
      <category>development</category>
    </item>
    <item>
      <title>Gemini 3 and Antigravity Gave Me Some Free Time</title>
      <dc:creator>Giorgio Boa</dc:creator>
      <pubDate>Thu, 04 Dec 2025 21:25:26 +0000</pubDate>
      <link>https://dev.to/gioboa/gemini-3-and-antigravity-gifted-me-a-free-time-1n7m</link>
      <guid>https://dev.to/gioboa/gemini-3-and-antigravity-gifted-me-a-free-time-1n7m</guid>
      <description>&lt;p&gt;We live in strange times for developers. Until recently, our worth was often measured in hours spent deciphering cryptic documentation, fighting with dependencies, or figuring out why an API endpoint had suddenly changed its format.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Today, I received definitive proof that the paradigm has shifted.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Instead of wasting the afternoon configuring the new &lt;strong&gt;Google Places API&lt;/strong&gt;, thanks to &lt;a href="https://blog.google/products/gemini/gemini-3/" rel="noopener noreferrer"&gt;Gemini 3&lt;/a&gt;, I went for a relaxing walk in the park.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Challenge: A New Feature To Implement
&lt;/h2&gt;

&lt;p&gt;It all started with a seemingly simple task: integrate the new version of the Google Places API into my project. Anyone who has worked with Google Maps APIs knows that the transition from "Legacy" to "New" isn't just a version upgrade; it is a shift in philosophy.&lt;/p&gt;

&lt;p&gt;The official documentation is vast, sure, but scattered. It talks about enabling the project on Cloud Platform, configuring billing, handling OAuth authentication (no longer just a simple API Key in all cases), and, most importantly, understanding the logic of &lt;em&gt;Field Masks&lt;/em&gt;. With the new client library, you can’t just call an endpoint: you must specify exactly which fields you want back to optimize costs and performance. &lt;/p&gt;

&lt;p&gt;If you mess up the mask, the API rejects you or, worse, charges you for data you don't use.&lt;/p&gt;
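&lt;p&gt;For illustration, this sketch builds (but does not send) a Nearby Search request in the new REST shape, with the &lt;code&gt;X-Goog-FieldMask&lt;/code&gt; header that controls which fields you are billed for. Verify the exact endpoint and field names against the official documentation before relying on them.&lt;/p&gt;

```python
# Sketch of a Nearby Search request for the *new* Places API, showing the
# X-Goog-FieldMask header discussed above. The request is only built here,
# not sent; field names should be checked against the official docs.
import json

API_URL = "https://places.googleapis.com/v1/places:searchNearby"


def build_nearby_search(api_key: str, lat: float, lng: float, radius_m: float):
    """Return the headers and JSON body for a Nearby Search POST."""
    headers = {
        "Content-Type": "application/json",
        "X-Goog-Api-Key": api_key,
        # Billing follows the mask: request only the fields you will use.
        "X-Goog-FieldMask": "places.displayName,places.formattedAddress,places.priceLevel",
    }
    body = {
        "includedTypes": ["restaurant"],
        "maxResultCount": 10,
        "locationRestriction": {
            "circle": {
                "center": {"latitude": lat, "longitude": lng},
                "radius": radius_m,
            }
        },
    }
    return headers, json.dumps(body)
```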

&lt;p&gt;In a "classic" afternoon a few months ago, I would have opened ten browser tabs: the Quick Start guide, Stack Overflow, and maybe a few YouTube tutorials. I would have wasted an hour just setting up the environment and figuring out why the library couldn't see my credentials.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Intervention of Gemini 3 and Antigravity
&lt;/h2&gt;

&lt;p&gt;Instead of opening the documentation, I opened &lt;a href="https://antigravity.google/" rel="noopener noreferrer"&gt;Antigravity&lt;/a&gt;, Google's new "agentic" development environment. I typed a dry, almost conversational prompt:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;"I need to call the new Google Places API. Build me a client that searches for nearby places and handles Field Masks correctly."&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Here is where the magic happened. I didn't just get a code snippet to copy-paste (and then debug). Gemini 3, integrated into Antigravity, acted as a true &lt;em&gt;pair programmer&lt;/em&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; &lt;strong&gt;Immediate Setup:&lt;/strong&gt; It understood the environment and suggested the right setup commands, avoiding system conflicts.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;Production-Ready Code:&lt;/strong&gt; It wrote the script using the correct library, &lt;code&gt;google-maps-places&lt;/code&gt; (rather than the old generic &lt;code&gt;googlemaps&lt;/code&gt;). It implemented the &lt;em&gt;Field Masks&lt;/em&gt; logic (&lt;code&gt;places.displayName&lt;/code&gt;, &lt;code&gt;places.formattedAddress&lt;/code&gt;, &lt;code&gt;places.priceLevel&lt;/code&gt;) without me having to hunt for the exact field names in the JSON schema.&lt;/li&gt;
&lt;li&gt; &lt;strong&gt;The Final Touch:&lt;/strong&gt; The initial implementation had one small problem with the &lt;em&gt;Field Masks&lt;/em&gt;, so I ran the program and debugged it the old-fashioned way. After a few tries, I fixed the issue.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The Result: Free Time
&lt;/h2&gt;

&lt;p&gt;In less than 15 minutes, the code was running. Clean results, fast calls, zero headaches.&lt;br&gt;
What was supposed to be a task taking "a solid hour plus unforeseen issues" turned into a productive coffee break.&lt;/p&gt;

&lt;p&gt;The real revolution of tools like Gemini 3 isn't that they write code for us. It’s that they &lt;strong&gt;remove friction&lt;/strong&gt;. They eliminate that frustrating phase of "reading documentation just to figure out how to do Hello World." They allow us to jump straight to business logic, to the real value we want to create.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Today, this approach changed my day, giving me some free time. Instead of staring at a screen trying to figure out the difference between "FindPlace" and "TextSearch", I took a walk in nature and felt the fresh air on my face.&lt;/p&gt;
&lt;/blockquote&gt;



&lt;p&gt;The rise of tools like Gemini 3, integrated into environments like Antigravity, signifies a profound shift: from battling syntax and obscure configurations to embracing a collaborative partnership with AI. This new era promises not just efficiency, but a more human-centric development experience, where technology truly serves to free our minds, allowing us to enjoy a walk in the park instead of being chained to the terminal.&lt;/p&gt;



&lt;p&gt;You can &lt;a href="https://github.com/gioboa" rel="noopener noreferrer"&gt;follow me on GitHub&lt;/a&gt;, where I'm creating cool projects.&lt;/p&gt;

&lt;p&gt;I hope you enjoyed this article, until next time 👋&lt;/p&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div class="ltag__user ltag__user__id__892161"&gt;
    &lt;a href="/gioboa" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F892161%2Ff7bb8d77-6568-4576-b4e5-715d424afabd.jpeg" alt="gioboa image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/gioboa"&gt;Giorgio Boa&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/gioboa"&gt;Giorgio Boa is a full stack developer and the front-end ecosystem is his passion. He is also international public speaker, active in open source ecosystem, he loves learn and studies new things.&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;




</description>
      <category>ai</category>
      <category>gemini</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>Optimize your prompts with Gemini 3 &amp; Antigravity</title>
      <dc:creator>Giorgio Boa</dc:creator>
      <pubDate>Wed, 03 Dec 2025 21:11:43 +0000</pubDate>
      <link>https://dev.to/gioboa/optimize-your-prompts-with-gemini-3-antigravity-28p5</link>
      <guid>https://dev.to/gioboa/optimize-your-prompts-with-gemini-3-antigravity-28p5</guid>
      <description>&lt;p&gt;The landscape of interacting with Large Language Models (LLMs) has seen rapid evolution, moving from simple queries to sophisticated prompt engineering.&lt;/p&gt;

&lt;p&gt;For many, particularly non-native English speakers (like me), the manual refinement of these prompts has been a significant hurdle. Crafting the perfect prompt requires not only a deep understanding of the desired output but also a nuanced command of language – a challenge that often leads to frustration and suboptimal results.&lt;/p&gt;

&lt;p&gt;In the past, the process of optimizing an LLM prompt was a painstaking, iterative journey.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;I recall spending considerable time manually constructing initial prompts, often based on trial and error.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;When the LLM's output didn't meet expectations, the real work began: dissecting the failures, identifying ambiguous phrasing, and attempting to rephrase the prompt. Each refinement was a gamble, often requiring multiple attempts and significant time investment to inch closer to the desired outcome. This method, while eventually yielding results, was inherently inefficient.&lt;/p&gt;

&lt;h2&gt;
  
  
  Antigravity with Gemini 3
&lt;/h2&gt;

&lt;p&gt;However, a revolutionary shift has emerged with the advent of "&lt;a href="https://antigravity.google/" rel="noopener noreferrer"&gt;Antigravity&lt;/a&gt;".&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Google is offering it for FREE and you can also have FREE access to Gemini 3 with its integrated "thinking" capabilities.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This development has transformed my approach to prompt engineering entirely. The time-consuming manual iterative process is no longer necessary. Now, the power of an advanced LLM, equipped with sophisticated reasoning, can be leveraged directly for prompt optimization.&lt;/p&gt;

&lt;p&gt;The new methodology is remarkably straightforward and incredibly efficient. When an LLM generates unsatisfactory results, you can now simply instruct Gemini 3, "Can you analyze the failed results and optimize the prompt?" This simple directive unleashes Gemini 3's analytical prowess. It processes the previous prompt, evaluates the undesirable outputs, and, critically, understands the underlying intent you are trying to achieve. Within a matter of minutes – a stark contrast to the hours or even days the manual method sometimes required – Gemini 3 presents an optimized prompt. This optimized prompt is often far more articulate, precise, and effective than anything I could have crafted manually in the same timeframe.&lt;/p&gt;
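&lt;p&gt;The loop is easy to script around: pack the failing prompt and its unsatisfactory outputs into a single meta-prompt for Gemini 3. This is a plain-Python sketch of the wording, not a Gemini API call.&lt;/p&gt;

```python
# Sketch of the "optimize my prompt" loop described above: bundle the
# failing prompt and its bad outputs into a meta-prompt for Gemini 3.
# The wording is illustrative only.

def build_optimization_request(prompt: str, failed_outputs: list) -> str:
    """Compose a meta-prompt asking the model to repair a failing prompt."""
    failures = "\n".join(f"{i + 1}. {out}" for i, out in enumerate(failed_outputs))
    return (
        "Can you analyze the failed results and optimize the prompt?\n\n"
        f"Current prompt:\n{prompt}\n\n"
        f"Unsatisfactory outputs:\n{failures}\n\n"
        "Explain what caused the failures, then return a revised prompt."
    )
```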

&lt;h2&gt;
  
  
  Quality And Reliability
&lt;/h2&gt;

&lt;p&gt;This directly improves the quality and reliability of LLM outputs. By leveraging Gemini 3 for prompt optimization, the resulting prompts are inherently more robust and less prone to misinterpretation by the target LLM.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This leads to a higher rate of successful outputs from the LLM, dramatically improving overall accuracy.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The continuous loop of analysis, optimization, and verification creates a virtuous cycle of improvement, pushing the boundaries of what is achievable with LLMs.&lt;/p&gt;




&lt;p&gt;The integration of Gemini 3 has transformed prompt engineering from a laborious, language-dependent chore into an accelerated, intelligent process. This innovation not only streamlines workflows but also unlocks a new level of precision and efficiency in human-AI collaboration.&lt;/p&gt;




&lt;p&gt;You can &lt;a href="https://github.com/gioboa" rel="noopener noreferrer"&gt;follow me on GitHub&lt;/a&gt;, where I'm creating cool projects.&lt;/p&gt;

&lt;p&gt;I hope you enjoyed this article, until next time 👋&lt;/p&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div class="ltag__user ltag__user__id__892161"&gt;
    &lt;a href="/gioboa" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F892161%2Ff7bb8d77-6568-4576-b4e5-715d424afabd.jpeg" alt="gioboa image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/gioboa"&gt;Giorgio Boa&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/gioboa"&gt;Giorgio Boa is a full stack developer and the front-end ecosystem is his passion. He is also international public speaker, active in open source ecosystem, he loves learn and studies new things.&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;




</description>
      <category>ai</category>
      <category>gemini</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
    <item>
      <title>Google Antigravity: The Amazing IDE Powered by Gemini 3</title>
      <dc:creator>Giorgio Boa</dc:creator>
      <pubDate>Thu, 27 Nov 2025 07:52:34 +0000</pubDate>
      <link>https://dev.to/gioboa/google-antigravity-the-amazing-ide-powered-by-gemini-3-26np</link>
      <guid>https://dev.to/gioboa/google-antigravity-the-amazing-ide-powered-by-gemini-3-26np</guid>
      <description>&lt;p&gt;The landscape of AI-assisted development has evolved rapidly, moving from simple code completion to fully integrated "agentic" environments. The latest entrant to this competitive space is &lt;a href="https://antigravity.google/" rel="noopener noreferrer"&gt;Google Antigravity&lt;/a&gt;, a public preview release that promises to redefine how developers interact with their IDEs. &lt;/p&gt;

&lt;p&gt;Antigravity offers a familiar VS Code-like interface but introduces a sophisticated "Agent Manager" designed to spawn, coordinate, and test autonomous coding tasks. At the heart of this system lies a diverse selection of large language models (LLMs).&lt;/p&gt;

&lt;h2&gt;
  
  
  The Power of Gemini 3
&lt;/h2&gt;

&lt;p&gt;The core engine driving Google Antigravity is the Gemini 3 Pro model, which is available in two distinct configurations: "High" and "Low." &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;This tiered approach allows developers to balance computational cost and speed against reasoning depth, depending on the complexity of the task at hand.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Perhaps the most intriguing aspect of the Gemini 3 integration is its multimodal potential. While the current usage focuses on code and image context, the implication is that future iterations could allow developers to feed video context—such as a screen recording of a bug or a feature demo—directly into the agent to drive development.&lt;/p&gt;

&lt;h2&gt;
  
  
  Beyond Google: A Multi-Model Approach
&lt;/h2&gt;

&lt;p&gt;One of Antigravity’s most surprising features is its willingness to step outside the Google ecosystem. The platform includes access to Claude Sonnet 4.5, widely regarded as one of the top-tier models for coding tasks. This inclusion suggests that Antigravity aims to be a model-agnostic platform where the best tool can be used for the job, rather than a walled garden for Google products.&lt;br&gt;
However, the model selection also includes some curiosities. The platform lists "GPT-OSS 120B", described as an open-weight model from OpenAI.&lt;/p&gt;
&lt;h2&gt;
  
  
  Planning, Fast Mode, and Autonomous Testing
&lt;/h2&gt;

&lt;p&gt;The choice of model heavily influences the two primary modes of operation: "Fast Mode" and "Planning Mode." In Planning Mode, the models generate a step-by-step roadmap before writing code, allowing the user to intervene, skip steps, or provide feedback on specific images or architectural decisions.&lt;/p&gt;

&lt;p&gt;However, the true "killer feature" powered by these models is the autonomous testing capability. Unlike standard IDEs, Antigravity uses its agents to physically interact with the browser. It simulates mouse movements, clicks buttons (like "Shop Now"), scrolls, and hovers over elements to verify UI responsiveness. This level of semantic understanding—where the model reasons through the UX flow—sets a new standard for what developers can expect from an AI pair programmer.&lt;/p&gt;



&lt;p&gt;While Google Antigravity is still in a rate-limited public preview, its integration of Gemini 3 Pro alongside Claude Sonnet 4.5 offers a glimpse into a future where IDEs are not just text editors, but command centers for intelligent, multimodal agents.&lt;/p&gt;



&lt;p&gt;You can &lt;a href="https://github.com/gioboa" rel="noopener noreferrer"&gt;follow me on GitHub&lt;/a&gt;, where I'm creating cool projects.&lt;/p&gt;

&lt;p&gt;I hope you enjoyed this article, until next time 👋&lt;/p&gt;

&lt;p&gt;

&lt;/p&gt;
&lt;div class="ltag__user ltag__user__id__892161"&gt;
    &lt;a href="/gioboa" class="ltag__user__link profile-image-link"&gt;
      &lt;div class="ltag__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F892161%2Ff7bb8d77-6568-4576-b4e5-715d424afabd.jpeg" alt="gioboa image"&gt;
      &lt;/div&gt;
    &lt;/a&gt;
  &lt;div class="ltag__user__content"&gt;
    &lt;h2&gt;
&lt;a class="ltag__user__link" href="/gioboa"&gt;Giorgio Boa&lt;/a&gt;Follow
&lt;/h2&gt;
    &lt;div class="ltag__user__summary"&gt;
      &lt;a class="ltag__user__link" href="/gioboa"&gt;Giorgio Boa is a full stack developer and the front-end ecosystem is his passion. He is also international public speaker, active in open source ecosystem, he loves learn and studies new things.&lt;/a&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;




</description>
      <category>gemini</category>
      <category>google</category>
      <category>webdev</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
