Argilla

Software Development

Madrid, Madrid · 10,907 followers

The Platform where experts improve AI models

About us

Human and AI-powered feedback for AI engineers

Website
https://www.argilla.io
Industry
Software Development
Company size
11-50 employees
Headquarters
Madrid, Madrid
Type
Self-Owned
Founded
2017
Specialties
NLP, artificial intelligence, data science, and open source


Updates

  • Argilla reposted this

    You can now edit datasets directly on the Hugging Face Hub. The future of datasets for AI is here. 🔥 No more downloading a 500MB CSV just to fix 3 mislabeled rows.

    The workflow:
    1. Spot an error in your dataset
    2. Click edit in Data Studio
    3. Fix the cells
    4. Commit with a message
    5. Repeat!

    Every change is versioned like code. Full traceability. The interesting part: collaborative curation. Your team can make commits to the same dataset, review each other's changes, and improve data quality together. I just published a blog post with a practical example: https://lnkd.in/d3JVvvh9 This will change how we maintain datasets for AI. Please leave a comment with your feedback and ideas to help us shape the future.
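The five-step flow above happens in the Data Studio UI. As a rough stdlib-only sketch of what "fix the cells, then commit with a message" amounts to, here is a tiny CSV with three mislabeled rows being corrected in place (the rows, labels, and the commented-out Hub call are illustrative assumptions, not actual Hub data):

```python
import csv
import io

# A tiny stand-in for a dataset CSV with three mislabeled rows.
raw = """text,label
great product,negative
terrible service,positive
works as expected,negative
truly awful,negative
"""

# Corrections keyed by row text (hypothetical example data).
fixes = {
    "great product": "positive",
    "terrible service": "negative",
    "works as expected": "positive",
}

rows = list(csv.DictReader(io.StringIO(raw)))
for row in rows:
    if row["text"] in fixes:
        row["label"] = fixes[row["text"]]

# On the Hub, the same edit would land as a versioned commit, e.g. with
# huggingface_hub (sketch only, requires auth and a real repo):
# api.create_commit(repo_id="org/dataset", operations=[...],
#                   commit_message="Fix 3 mislabeled rows")

print([r["label"] for r in rows])  # → ['positive', 'negative', 'positive', 'negative']
```

The point of the commit step is the same as in code review: each fix carries a message and a diff, so teammates can audit and revert individual label changes.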

  • Argilla reposted this

    Looking for the best open model for your use case? Want to evaluate different inference providers? Search no more: I am excited to announce the integration of Inference Providers into the fastest-growing eval framework, used by Anthropic, Groq, and many others. Run evals across thousands of open models and inference providers with Hugging Face Inference Providers and InspectAI. Get started: https://lnkd.in/diCQqK_M

  • Argilla reposted this

    🚀 Launching a new tutorial: Extract text and knowledge from your photos with Open Vision Language Models

    With the recent surge of open OCR models, there has been a new wave of content and tutorials for document-oriented use cases. In this blog post, I focus on using general VLMs on less structured images, like handwritten recipes, where you want to perform several transformations, ask questions, and combine them with LLMs. https://lnkd.in/d9JDSEnz

  • Argilla reposted this

    What do you do with the photos you take to remember something? I usually snap quick photos at exhibitions, museums, or conferences to keep track of what interests me. It could be a slide, a poster, a piece of art, or even a short note I want to remember.

    With AI Sheets, I can organize these fragments and turn them into practical knowledge instead of letting them pile up. With just a few clicks and short prompts, I can extract text from the images (even the hard-to-read ones) and add context about the artwork. The result is a structured spreadsheet: a personal knowledge base built without endless prompt iterations.

    Another advantage is that AI Sheets can run different open-source models within the same workflow. Some models handle complex tasks like text extraction or image interpretation better, while others are more efficient for simpler tasks such as translation or classification.

    Curious to see how your photos might be useful to you :) We wrote a post about it in case you want to explore it further and try it out. https://lnkd.in/dP53dsZ4

  • Argilla reposted this

    We’re releasing vision support in AI Sheets 🚀 Import your photos from events, scanned manuscripts, receipt tickets, image corpus… and unlock the information they contain. Adding vision models to AI Sheets enables you to extract, analyze, expand, and organize data directly from your images. You can:
    • Describe and categorize images
    • Extract text
    • Detect objects

    As you did with text data, you can iterate on prompts, edit outputs, and use feedback as few-shot examples to improve results. The outcome is a structured dataset that can serve as a knowledge base, a foundation for fine-tuning datasets, or a resource to support document creation. Read the announcement: https://lnkd.in/dP53dsZ4

  • Argilla reposted this

    🔥 Excited to announce Hugging Face AI Sheets 2.0: Extract text, detect objects, edit, and classify images with vision and language models

    AI Sheets is an open-source tool for supercharging datasets with open AI models, no code required. Now with vision support: extract data from images (receipts, documents), generate visuals from text, and edit images, all in a spreadsheet. Powered by thousands of open models via Inference Providers. Read all about it: https://lnkd.in/dDK7kzmR

  • Argilla reposted this

    ✨ New community resource on Hugging Face: Complete HunyuanImage 3 multilingual prompting guide with images

    Tencent's HunyuanImage 3 shows impressive results as an open text-to-image model, but the official prompt handbook was Chinese-only. I've created a translated dataset with:
    - 129 prompt templates across 15+ categories
    - Original Chinese + English translations
    - Generated images for both versions (quality verification)
    - Full config files for reproduction

    Technical stack:
    - AI Sheets for data transformation
    - Kimi K2-Instruct for translation
    - HunyuanImage 3.0 for image generation
    - Hugging Face Jobs for running the complete data generation pipeline

    Perfect for developers building image generation applications or researchers comparing multilingual prompt performance. It's free, open, and full of details on how I built it: https://lnkd.in/dDicm23k

  • Argilla reposted this

    🔥 Let's go! Combining LLMs, Vision Language Models, and text-to-image with the best open models on Hugging Face. Starting today, you can use VLMs directly on Hugging Face AISheets. Try it: https://lnkd.in/gdKeV-zW

    Here's an example: One of the most exciting use cases of VLMs is assisting people who are blind or have low vision. I asked myself the following: Can I use VLMs to describe artworks helpfully? If so, how can I test whether the descriptions are accurate or useful without reading them all?

    So I built the following workflow without writing a line of code:
    1. Import the wikiart dataset from HF
    2. Write a prompt to describe the image using Qwen-2.5-VL
    3. Use the description to generate images using Qwen-Image

    The results are pretty surprising and fun! Stay tuned in the coming days, I'll share the dataset and prompts to reproduce it.
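Outside AISheets, steps 2 and 3 of that workflow can be approximated with the `huggingface_hub` inference client. The model ids, prompt wording, and provider defaults below are assumptions for illustration, not the exact AISheets setup, and running it requires an HF token with inference access:

```python
# pip install huggingface_hub
from huggingface_hub import InferenceClient

client = InferenceClient()  # reads HF_TOKEN from the environment

def describe_artwork(image_url: str) -> str:
    """Step 2: ask a vision-language model for an accessible description."""
    resp = client.chat_completion(
        model="Qwen/Qwen2.5-VL-7B-Instruct",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this artwork for a blind or low-vision reader."},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
        max_tokens=300,
    )
    return resp.choices[0].message.content

def reimagine(description: str):
    """Step 3: turn the description back into an image for comparison."""
    # text_to_image returns a PIL.Image under the default provider.
    return client.text_to_image(description, model="Qwen/Qwen-Image")
```

Comparing the original artwork with the regenerated image gives a cheap, no-reading proxy for whether the description captured the piece.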



Funding

Argilla: 3 total rounds

Last round: Seed, US$ 5.5M
