Flow DevOps Toolkit - Langflow 1.9 Feature
The Flow DevOps Toolkit (LFX SDK) provides a CLI-based workflow for versioning, testing, and deploying Langflow flows. Instead of exporting and importing flow JSON through the UI, flows are managed as code within a project structure.
Using the CLI, you can:
- pull flows from a Langflow instance into local files
- track changes through version control
- validate flows before deployment
- push updates back to the server
- generate dependencies (requirements.txt)
- serve flows as API endpoints
A typical workflow:
lfx pull
lfx status
lfx validate
lfx push
Flows are stored as JSON and can be versioned and tested alongside application code. The toolkit supports multiple environments (local, staging, production), using environment variables for API keys and instance configuration. Flows move across environments using the same deployment patterns as application services.
What the video shows:
In the video, the workflow is executed in a local environment using the CLI. A project is initialized, generating a structured directory with folders for flows, configuration, and test scaffolding. A flow is then pulled from a Langflow instance into the flows/ directory, where it is stored as a JSON file. An API key is configured as an environment variable, enabling authenticated communication with the Langflow server. The flow is then validated locally, ensuring that its structure and dependencies are correct before execution. After validation, the flow is served locally, exposing it as a runnable endpoint. Finally, the same flow is opened in the Langflow interface, confirming that the version managed via the CLI is synchronized with the runtime.
Available in Langflow: https://lnkd.in/diHc5mWn
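Because flows live in the repository as plain JSON, the pull → validate → push loop can be backed by a simple local sanity check before pushing. A minimal sketch, assuming flows are stored under a flows/ directory and that a flow export carries its graph under a "data" key (both assumptions for illustration, not part of the LFX SDK):

```python
import json
from pathlib import Path

def check_flows(flow_dir: str) -> list[str]:
    """Return a list of flow files that fail basic JSON sanity checks."""
    errors = []
    for path in Path(flow_dir).glob("*.json"):
        try:
            flow = json.loads(path.read_text())
        except json.JSONDecodeError as exc:
            errors.append(f"{path.name}: invalid JSON ({exc.msg})")
            continue
        # Assumption for this sketch: a flow export keeps its graph
        # under a "data" key; treat its absence as a broken export.
        if "data" not in flow:
            errors.append(f"{path.name}: missing 'data' key")
    return errors

if __name__ == "__main__":
    for problem in check_flows("flows"):
        print(problem)
```

A check like this can run as a pre-commit hook or CI step alongside `lfx validate`.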
Langflow
Software Development
Uberlândia, Minas Gerais 15,917 followers
Langflow is a low-code app builder for RAG and multi-agent AI applications. It’s Python-based and agnostic to any model, API, or database.
About us
Langflow is a new, visual way to build, iterate and deploy AI apps.
- Website
- https://www.langflow.org/
- Industry
- Software Development
- Company size
- 11-50 employees
- Headquarters
- Uberlândia, Minas Gerais
- Type
- Self-Owned
- Founded
- 2020
- Specialties
- AI, Generative AI, GenAI, RAG, and Machine Learning
Locations
- Primary
Uberlândia, Minas Gerais, BR
Updates
Block Custom Components - Langflow 1.9 Feature
Langflow allows custom components to execute Python code inside the runtime. The Block Custom Components setting disables this capability at the server level.
When the environment variable is set:
LANGFLOW_ALLOW_CUSTOM_COMPONENTS=false
the system prevents:
- creating new custom components
- editing component code in the visual editor
This restricts arbitrary code execution within the Langflow environment. When unset or set to true, custom components remain enabled and behave as before.
In controlled environments, this can be combined with an allow-list by specifying trusted component paths, ensuring that only predefined components are available while blocking others.
This feature is useful when running Langflow in shared or production environments where executing untrusted or generated code is not allowed.
Available in Langflow: https://lnkd.in/diHc5mWn
Token Usage Display - Langflow 1.9 Feature
LLM components now expose token usage directly in the flow interface after execution. For each run, the system displays:
- input tokens
- output tokens
This removes the need for external logging or manual estimation when analyzing model usage. Token counts are tied to each component execution, making it possible to inspect usage at specific steps within the flow.
This allows you to:
- identify high-cost components
- compare token usage across different steps
- debug prompt size and response behavior
- optimize flows based on real usage data
The information is available immediately after execution, without requiring additional configuration.
Available in Langflow: https://lnkd.in/d2QDaJCv
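Per-component token counts make cost comparison across steps a matter of arithmetic. A sketch using placeholder per-million-token prices (the rates and step names below are illustrative, not real pricing or Langflow output):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_m: float, output_price_per_m: float) -> float:
    """Estimate the cost of one component run from its displayed token counts."""
    return (input_tokens * input_price_per_m +
            output_tokens * output_price_per_m) / 1_000_000

# Compare two steps of a flow using their displayed (input, output) counts.
runs = {
    "summarize": (12_000, 800),  # example values, not real measurements
    "classify": (1_500, 50),
}
costs = {name: estimate_cost(i, o, 2.50, 10.00) for name, (i, o) in runs.items()}
```

Sorting `costs` immediately surfaces the high-cost components the post mentions.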
Langflow Assistant - Langflow 1.9 Feature
The Langflow Assistant generates and updates custom components from natural language prompts directly within the Playground. It has access to the current flow structure, allowing it to produce components that integrate with existing inputs, outputs, and data types.
Instead of writing component code manually, you define the behavior in a prompt, and the assistant generates executable code that can be inspected, added to the canvas, and connected to the flow.
In the example shown, a prompt defines a component that:
- extracts and validates URLs
- fetches page titles with concurrency control
- handles timeouts and per-URL errors
- returns a structured DataFrame
The generated component is immediately available in the canvas, where it can be connected, executed, and iterated on. If changes are needed, the prompt can be updated and a new version of the component is generated, replacing or extending the previous implementation.
This introduces a fast iteration loop: prompt → component → run → refine
The assistant runs a separate internal flow with its own language model, using the currently active flow as context.
Available in Langflow 1.9: https://lnkd.in/dGdEws_s
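The behavior described in the example can be sketched in plain Python. This is an illustrative stand-in for what such a generated component might do, not the assistant's actual output: the function names and the injectable fetcher are assumptions, and it returns a list of row dicts rather than a Langflow DataFrame:

```python
import re
from concurrent.futures import ThreadPoolExecutor
from urllib.parse import urlparse
from urllib.request import urlopen

def valid_urls(text: str) -> list[str]:
    """Extract http(s) URL candidates from text and keep well-formed ones."""
    found = re.findall(r"https?://\S+", text)
    return [u for u in found if urlparse(u).netloc]

def fetch_title(url: str, timeout: float = 5.0) -> str:
    """Fetch a page and pull its <title>; errors propagate to the caller."""
    html = urlopen(url, timeout=timeout).read(65536).decode("utf-8", "replace")
    match = re.search(r"<title[^>]*>(.*?)</title>", html, re.I | re.S)
    return match.group(1).strip() if match else ""

def fetch_titles(text: str, fetch=fetch_title, max_workers: int = 4) -> list[dict]:
    """Concurrently resolve titles, recording per-URL errors instead of failing."""
    rows = []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [(u, pool.submit(fetch, u)) for u in valid_urls(text)]
        for url, future in futures:
            try:
                rows.append({"url": url, "title": future.result(), "error": ""})
            except Exception as exc:
                rows.append({"url": url, "title": "", "error": str(exc)})
    return rows
```

The injectable `fetch` argument also makes the concurrency and error handling easy to test without network access.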
🚀 Langflow 1.9 is live
Langflow 1.9 introduces new capabilities for building, operating, and integrating AI workflows, with updates focused on in-product AI assistance, flow deployment tooling, and MCP-based interoperability. This release adds native support for AI-assisted component generation, standardized tooling for managing flows outside the visual builder, and new interfaces that allow external agents to create and execute Langflow flows programmatically.
What’s new in this release:
🔹 Langflow Assistant
A native AI assistant embedded directly into Langflow that allows users to generate custom components from natural language, troubleshoot flows in context, and get real-time guidance directly on the canvas, turning Langflow into an interactive AI-assisted development environment.
🔹 Flow DevOps Toolkit
A new SDK and tooling layer that brings software engineering practices to Langflow flows, enabling exportable flow artifacts, version control, CI validation, automated testing, and structured deployment workflows beyond the visual builder.
🔹 MCP support for IDEs and coding agents
Langflow can now be used programmatically by coding agents such as Claude Code, Cursor, and Copilot, enabling them to create, configure, and execute flows through MCP-based integrations.
🔹 Token Usage Display
LLM components now expose input and output token counts directly in the flow interface after execution, giving developers immediate visibility into token consumption for cost monitoring and prompt optimization.
Updates in 1.9:
🔹 Policies Component (Beta)
A new ToolGuard-powered component that converts natural language business policies into executable guard code, enabling runtime validation over tool execution without requiring custom rule scripting.
🔹 Environment variables to block custom component execution
Adds runtime controls that allow administrators to disable execution of custom Python components through environment configuration, improving governance and security in controlled deployments.
🔹 Gemini 3 tool calling support
Adds native tool-calling compatibility for Gemini 3 models, enabling Langflow agents to invoke tools directly through Gemini-powered workflows.
🔹 Data renamed to JSON / DataFrame renamed to Table
Updated naming improves semantic clarity in flow design, making structured data components easier to understand and more intuitive for developers building complex pipelines.
👉 Explore Langflow 1.9: https://lnkd.in/djy7q85z
Use Google Sheets inside Langflow agents
With Composio components in Langflow, you can connect Google Sheets directly into a Langflow pipeline and let agents read from and write to spreadsheets as part of their execution logic.
Typical operations include:
- Reading rows from spreadsheets for processing
- Appending new structured data automatically
- Updating existing records based on agent output
- Creating rows from form submissions, emails, or API events
- Using spreadsheet data as input context for LLM workflows
All of this can be composed visually in a flow, alongside models, APIs, parsers, and other tools.
Example: automated structured data workflow
A common pattern is using Google Sheets as a live data layer for operational workflows:
- Read incoming rows from a spreadsheet
- Process each row with LLM or agent logic
- Classify, enrich, or transform the data
- Write results back into new columns automatically
This makes Google Sheets useful for workflows such as lead qualification, ticket triage, CRM enrichment, reporting pipelines, and structured data tracking.
👉 Learn more about Langflow: https://lnkd.in/dTacbYrM
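The read → process → write-back pattern above can be sketched as a plain loop. In Langflow the Composio components handle the actual Sheets reads and writes, so the in-memory rows and the classification rule here are stand-ins for illustration:

```python
def classify_lead(row: dict) -> str:
    """Stand-in for the LLM/agent step; a real flow would call a model here."""
    return "qualified" if row.get("budget", 0) >= 10_000 else "unqualified"

def enrich_rows(rows: list[dict]) -> list[dict]:
    """Read rows, classify each one, and write the result into a new column."""
    return [{**row, "status": classify_lead(row)} for row in rows]

sheet_rows = [  # stand-in for rows read from a spreadsheet
    {"company": "Acme", "budget": 25_000},
    {"company": "Initech", "budget": 3_000},
]
enriched = enrich_rows(sheet_rows)
```

The enriched rows would then be written back to the sheet as a new status column, matching the lead-qualification example.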
How to Build Structured AI Outputs for Image Analysis in Langflow
Language models often return unstructured text, making outputs inconsistent and harder to process. This workflow solves that by forcing the model to generate structured, predictable results for image sentiment analysis.
What this flow solves:
Free-form model responses can be difficult to parse and automate. With this approach, you can:
• Standardize model outputs into fixed fields
• Improve consistency across responses
• Format results for downstream automation
• Turn raw model reasoning into structured data
Step-by-step Setup
Chat Input
Receives the image-related input or image description to analyze.
Prompt Template
Defines the task instructions for the model. Example: classify the image into positive, neutral, or negative sentiment.
Language Model
Processes the prompt and generates the initial response.
Structured Output
Transforms the model response into a defined schema. Example schema:
• Sentiment
• Description
Parser
Formats the structured data into readable text using a template.
How It Works
Instead of returning unpredictable free-text answers, the model is constrained to a structured schema. This makes outputs easier to validate, display, and integrate into applications.
Key Takeaway
Structured Output makes AI workflows more reliable by transforming flexible model responses into predictable data formats. That’s how you build production-ready AI systems with consistency and control.
How to get started
This template is already available inside Langflow. Simply click New Flow, select the Image Sentiment Analysis template, and follow the same structure shown above.
Learn more about Langflow: https://lnkd.in/diHc5mWn
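The Structured Output and Parser steps can be approximated outside Langflow as "validate model JSON against a fixed schema, then format it with a template". A sketch assuming a two-field schema matching the example (the field names, allowed values, and template are illustrative):

```python
import json
from dataclasses import dataclass

@dataclass
class ImageSentiment:
    sentiment: str   # assumed values: positive, neutral, negative
    description: str

def parse_structured(raw: str) -> ImageSentiment:
    """The Structured Output step: validate the model's JSON response
    against the fixed schema, rejecting unexpected sentiment values."""
    data = json.loads(raw)
    sentiment = data["sentiment"].lower()
    if sentiment not in {"positive", "neutral", "negative"}:
        raise ValueError(f"unexpected sentiment: {sentiment}")
    return ImageSentiment(sentiment=sentiment, description=data["description"])

def render(result: ImageSentiment) -> str:
    """The Parser step: format structured data into readable text."""
    return f"Sentiment: {result.sentiment}\nDescription: {result.description}"

# Example: a well-formed model response flowing through both steps.
raw_response = '{"sentiment": "Positive", "description": "A sunny beach scene."}'
report = render(parse_structured(raw_response))
```

Constraining the response this way is what makes downstream validation and automation predictable, as the post argues.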
Langflow reposted this
There have been so many deep dives into how Claude Code works so well. I tried to gather a few of them, along with a bunch of other useful links, in this week's AI++ newsletter.
Sure, Anthropic leaked the source to Claude Code, but what can we learn from it when building our own agents? Also, MCP tool annotations, agentic RAG and more in this week's AI++ newsletter.