Timbr.ai

Software Development

Model Smarter. Query Faster. Decide Better.

About us

Timbr is an ontology-based semantic layer used by leading enterprises to make faster, better decisions by transforming structured data into AI-ready knowledge. By unifying enterprise data into a SQL-queryable knowledge graph, Timbr makes relationships, metrics, and context explicit, enabling both humans and AI to reason over data with accuracy and speed. Its open, modular architecture connects directly to existing data sources, virtualizing and governing them without replication. The result is a dynamic, easily accessible model that powers analytics, automation, and LLMs through SQL, APIs, SDKs, and natural language. Timbr lets organizations operationalize AI on their data - securely, transparently, and without dependence on proprietary stacks - maximizing data ROI and letting teams focus on solving problems instead of managing complexity.

Website
https://timbr.ai/
Industry
Software Development
Company size
11-50 employees
Headquarters
Ra'anana, Israel
Type
Privately Held
Founded
2018
Specialties
SQL, Big Data, Artificial Intelligence, Business Intelligence, Knowledge Representation, Ontologies, Semantic SQL, SQL Ontologies, AI, Data Management, Data Virtualization, Semantic Layer, Metric Store, Virtualization, Federation, Data Modeling, Knowledge Graph, and LLM NL2SQL


Updates

  • Databricks has made serious investments in agentic AI. Mosaic AI, Agent Bricks, Genie Code. The tools to build, deploy, and orchestrate agents are all there.

    But every one of those agents still queries raw tables. And raw tables don't know what "revenue" means. They don't know which joins are valid, which metric definition is authoritative, or how your business concepts actually relate to each other. The agent generates SQL. It runs. The number looks reasonable. But it reasoned from schema, not from meaning.

    Databricks tells agents where data is and who can access it. Timbr tells agents what the data means, how business concepts relate to each other, and how to reason across your entire data estate, including the 60% that isn't in the Lakehouse.

    We just published a breakdown of where the gap actually lives, why Databricks' native AI tools need help, and what changes when agents operate with the context provided by an ontology-based semantic layer.

    👉 https://lnkd.in/dMwbwGhG

    #Databricks #SemanticLayer #AIAgents #EnterpriseAI #NL2SQL #KnowledgeGraph #Ontology

  • When SQL ontologies make a difference...

    A drunk walks into a cockfight arena and asks, "Which one's the good one?" He bets on the white rooster, the "good one", and loses everything. Turns out, "good" meant good-natured. Not good at fighting.

    It's a joke. But it's also the exact problem we're ignoring in agentic AI.

    When an AI agent is asked to "pull the best customer accounts," what does "best" mean? Highest revenue? Lowest churn? Longest tenure? The agent picks one interpretation and runs with full confidence. Nobody told it there was another option.

    This is the semantic gap - and prompt engineering alone won't fix it.

    In my latest article, I break down why semantics are the biggest blind spot in enterprise AI, and how ontology-based semantic layers give agents something they desperately need: a shared, governed map of meaning. Because when your AI agent bets on the wrong rooster, the consequences are a lot worse than a lost wager.

    👇 Full article in the comments

    #AgenticAI #KnowledgeGraphs #SemanticLayer #NL2SQL #DataEngineering #Ontology #LLM #AI #Timbr #EnterpriseAI
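The failure mode described above can be made concrete. In this sketch (all names hypothetical, not Timbr's API), a governed catalog holds several definitions of "best customer," and the resolver surfaces the ambiguity instead of silently picking one:

```python
# Hypothetical governed metric catalog. Without it, an agent picks one
# interpretation of "best" and runs with full confidence; with it, the
# ambiguity is surfaced so a human (or a policy) can resolve it.
CATALOG = {
    "best_customers_by_revenue": "SELECT id FROM customers ORDER BY total_revenue DESC",
    "best_customers_by_churn":   "SELECT id FROM customers ORDER BY churn_risk ASC",
    "best_customers_by_tenure":  "SELECT id FROM customers ORDER BY tenure_days DESC",
}

class AmbiguousMetric(Exception):
    """Raised when a request matches more than one governed definition."""

def resolve(request: str) -> str:
    """Map a natural-language term onto exactly one governed definition."""
    matches = [name for name in CATALOG if request in name]
    if len(matches) != 1:
        raise AmbiguousMetric(f"{request!r} matches {sorted(matches)}; pick one")
    return CATALOG[matches[0]]

# An unambiguous request resolves to executable SQL...
sql = resolve("churn")
# ...but "best" alone raises, instead of betting on the wrong rooster.
try:
    resolve("best")
except AmbiguousMetric as err:
    print(err)
```

The point is not the string matching, which is deliberately naive, but the contract: the agent consumes governed definitions, and ambiguity becomes an explicit error rather than a silent guess.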

    • No alternative text description for this image
  • Interested in Context Graphs? A well-designed SQL ontology is the ideal context graph to drive AI and contextual intelligence. #ontologies #knowledgegraph #contextgraph #agenticai #llm #aiarchitecture

    The real shift isn’t from semantic layer to ontology. It’s from description to execution.

    A recent and excellent Metadata Weekly article by Jessica Talisman explores the shift from traditional semantic layers to ontologies and context graphs for AI. It makes a valid point: many semantic layers were built for dashboards. They define metrics and standardize business logic, but primarily for human reporting.

    AI systems require something deeper. They need structured meaning: entities, relationships, constraints, inheritance, and context. In short, they need ontology.

    But there’s an important nuance in this evolution. Traditional ontologies (such as OWL-based models) are excellent at describing and reasoning over domain knowledge. Yet they weren’t designed to operationalize enterprise metrics at warehouse scale. If metrics are to power AI, not just dashboards, they must be more than definitions. They must be executable.

    That’s where SQL ontologies introduce a meaningful shift:
    • Relationships are modeled as first-class constructs
    • Metrics are governed and reusable
    • Measures execute natively in the data warehouse
    • AI consumes executable business logic, not just metadata

    The conversation is no longer “semantic layer vs ontology.” It’s about building a semantic foundation where meaning and execution coexist. As AI becomes embedded in enterprise workflows, treating metrics as first-class, SQL-native semantic objects may be the real differentiator.

    #Ontology #SemanticLayer #EnterpriseAI #KnowledgeGraph #DataArchitecture #GenAI
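As an illustrative sketch of what "executable" means here (hypothetical names, not Timbr's actual API), a metric can carry both a description and the SQL it expands to, with relationships as first-class objects the compiler traverses:

```python
from dataclasses import dataclass

@dataclass
class Relationship:
    """A first-class, named join between two business concepts."""
    name: str
    target_table: str
    sql_on: str  # join condition, stated once, reused everywhere

@dataclass
class Metric:
    """A governed metric: not just a description, but executable SQL."""
    name: str
    description: str
    sql_expr: str    # runs natively in the data warehouse
    traverses: list  # relationships the metric depends on

# Hypothetical model: the relationship and metric are defined once.
customer_has_order = Relationship(
    name="customer_has_order",
    target_table="orders",
    sql_on="orders.customer_id = customers.id",
)

revenue = Metric(
    name="revenue",
    description="Net revenue recognized at order completion",
    sql_expr="SUM(orders.amount - orders.discount)",
    traverses=[customer_has_order],
)

def compile_metric(metric, base_table="customers"):
    """Expand the governed definition into warehouse-executable SQL."""
    joins = " ".join(
        f"JOIN {r.target_table} ON {r.sql_on}" for r in metric.traverses
    )
    return f"SELECT {metric.sql_expr} AS {metric.name} FROM {base_table} {joins}"

print(compile_metric(revenue))
```

A purely descriptive OWL-style definition stops at the `description` field; the shift the post describes is that `sql_expr` and the join path are part of the governed object itself.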

  • Claude in Excel is genuinely impressive. ⚡ But there’s a quiet problem underneath the excitement.

    When AI works inside Microsoft Excel, it doesn’t see your business. It sees a spreadsheet that has already lost its relationships, definitions, and rules in transit from the data warehouse. That’s not an AI limitation. It’s a data foundation problem.

    AI agents like Claude can reason, explain, and model beautifully if the data carries meaning. Most enterprise spreadsheets don’t. That's why the Gold Layer still isn't enough for AI. And why the Diamond Layer matters if you want Excel to be reliable, not just impressive.

    The teams moving fastest with AI aren’t chasing smarter agents. They’re fixing the foundation first.

    🔗 Read the full breakdown here: Why Claude in Excel Needs Ontologies, And How Timbr Delivers Them - https://lnkd.in/d5vaRsEU

    #EnterpriseAI #DataStrategy #SemanticLayer #MicrosoftExcel #Ontology #DataEngineering #Claude

  • Most enterprise ontologies fail after initial success. Not because they were modeled incorrectly. But because they were architecturally isolated from the systems they were meant to govern.

    Here's what happens: the data warehouse schema changes. A migration runs. DBT models update. Everything stays aligned. Except the ontology. It sits in a separate triplestore, unchanged. Nobody on the data engineering team knows it exists. The knowledge engineer who maintains it doesn't see the schema migration PRs.

    Over time, "Active Contract" means one thing in the warehouse, another in the ontology, and something different in the BI layer. Nothing breaks. Queries still run. Dashboards still load. But Finance and Sales now show different numbers for the same metric.

    This is semantic divergence. And it's not a modeling problem. It's an operational design problem. Traditional ontology architectures treat semantics as separate from data systems. That separation creates a sync gap that widens over time.

    SQL-based, co-located ontologies solve this structurally. When the ontology lives in the same operational world as the data stack, it evolves using the same migration workflows, version control, and CI/CD practices teams already rely on. The ontology doesn't need manual synchronization. It changes in the same commit as the schema.

    🔗 Read the full article: https://lnkd.in/dcXerswE

    #SemanticLayer #DataArchitecture #EnterpriseData #KnowledgeGraphs #DataEngineering #OntologyManagement #DataGovernance
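One way to picture the co-location argument (a minimal sketch with hypothetical names, not a Timbr feature): when the ontology's table and column mappings live in the same repository as the schema, a CI assertion can fail the very commit whose migration breaks a mapping:

```python
# Hypothetical CI check: fail the build when the ontology maps a column
# that the (post-migration) warehouse schema no longer contains.
warehouse_schema = {
    "contracts": {"id", "customer_id", "status", "end_date"},
}

ontology_mappings = {
    # concept property -> (table, column) it is mapped to
    "ActiveContract.id": ("contracts", "id"),
    "ActiveContract.is_active": ("contracts", "status"),
}

def find_broken_mappings(schema, mappings):
    """Return every ontology property whose backing column is missing."""
    broken = []
    for prop, (table, column) in mappings.items():
        if column not in schema.get(table, set()):
            broken.append(prop)
    return broken

# In CI this assertion runs on every PR, so a schema migration that
# renames `status` cannot merge without updating the ontology too.
assert find_broken_mappings(warehouse_schema, ontology_mappings) == []
```

The sync gap the post describes exists precisely because, in a separate triplestore, no equivalent of this check runs in the schema's pipeline.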

  • Build governed enterprise AI data agents with Timbr ontologies. 🤖

    Timbr now delivers native data agent creation and management, built directly into the platform. Most AI agent frameworks focus on how agents act - chat, workflows, tool orchestration. Timbr focuses on what agents know about your data. Timbr Data Agents are built on a governed semantic ontology that understands business entities, relationships, and metrics by design.

    How data agents differ from generic agent frameworks 👇
    • Agents reason over business concepts (Customer, Order, SLA) - not raw tables or schemas
    • Data access is governed through a semantic layer, not prompt rules
    • Metrics and business logic are defined once and reused consistently
    • Queries are generated deterministically using SQL ontologies
    • Agents operate safely across warehouses, lakes, and operational systems

    Instead of asking an agent to figure out your data model, Timbr gives agents a clear, structured understanding of enterprise data from day one. This is how teams move from experimental copilots to production-grade data agents.

    👀 Want to see how data agents work in practice? See how Timbr turns your data into an agent-ready semantic layer: 👉 https://lnkd.in/dNk4xfN3

    #DataAgents #SemanticLayer #KnowledgeGraph #EnterpriseAI #LLM #NL2SQL

  • 🚀 Timbr now supports ServiceNow as a native data source.

    Automatically generate a semantic ontology from your ServiceNow schema. Incidents, Problems, Changes, Assets, and CMDB tables become business concepts with explicit relationships and meaning - not just operational records.

    What this means for your IT operations:
    ✔️ See the full picture - ServiceNow stores relationships, but you query flat tables. The ontology exposes how incidents connect to services, assets, owners, and SLAs explicitly.
    ✔️ Define once, reuse everywhere - Build metrics and operational logic in one place. Reuse them consistently across analytics, BI, and AI agents without rebuilding per use case.
    ✔️ Query semantically - Ask "Which critical services have repeated incidents impacting SLAs?" without writing complex joins.
    ✔️ Enterprise integration - Connect ServiceNow to data warehouses, data lakes, and business systems through one unified semantic model. Your ServiceNow data becomes part of a complete enterprise knowledge graph.

    👀 Want to see it in action? Join a live demo: https://lnkd.in/dKDPjHSe

    #ServiceNow #ITSM #Ontology #SemanticLayer #ITOperations #EnterpriseData #KnowledgeGraph
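The "without writing complex joins" claim boils down to join-path discovery over declared relationships: once each connection is stated once, the chain of joins between any two concepts can be derived rather than hand-written. A minimal sketch (hypothetical table and column names, not Timbr's generated model):

```python
from collections import deque

# Hypothetical relationship graph extracted from a ServiceNow-style
# ontology: each edge is a named, declared join between two concepts.
edges = {
    ("incident", "service"): "incident.business_service = service.sys_id",
    ("service", "asset"): "service.sys_id = asset.service_id",
    ("service", "sla"): "service.sys_id = sla.service_id",
}

graph = {}
for (src, dst), on in edges.items():
    graph.setdefault(src, []).append((dst, on))

def join_path(start, goal):
    """BFS over declared relationships: derive the join chain, if any."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for nxt, on in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [f"JOIN {nxt} ON {on}"]))
    return None  # no declared path between the concepts

print(join_path("incident", "sla"))
```

Asking about incidents impacting SLAs then needs only the start and end concepts; the two intermediate joins come from the model, not from the analyst.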

  • Knowledge graphs didn't fail in enterprises. They got isolated.

    SPARQL-based semantic platforms delivered governed meaning, explicit relationships, and strong modeling foundations. Technically, they worked. But analysts didn't query them. BI tools worked around them. And AI systems never learned to use them.

    Meanwhile, everything else in the enterprise data stack standardized on SQL. Not because SQL is perfect. But because it became the interface. That interface decision quietly determined adoption.

    In a new Medium article, we look at SQL vs. SPARQL in the age of AI: why semantics stayed isolated for years, why SQL quietly won, and why bringing semantic intelligence into standard SQL may be the shift that finally makes knowledge graphs operational at scale.

    🔗 Read the article: https://lnkd.in/dPyXNpHg

    #SemanticLayer #KnowledgeGraphs #SQL #AI #EnterpriseData

  • You open Snowflake and see hundreds of views, dozens of dashboards, and at least three definitions of "active customer." Someone asks: "What's our Q4 revenue by region?" Five different teams give five different answers. Same data. Same timeframe. Different SQL.

    Now GenAI enters the picture. The LLM generates a query. It runs. The number looks reasonable. And no one can verify if it's right because the business logic lives in scattered JOIN clauses that were never modeled explicitly.

    This isn't a Snowflake problem. It's not even an AI problem. It's a meaning problem.

    We just published why ontology-based semantic layers are becoming critical for Snowflake environments where AI needs to be trustworthy, not just fast.

    👉 https://lnkd.in/dqtvRtGH

    #Snowflake #SemanticLayer #GenAI #Analytics

  • Relationships are first-class citizens in SQL ontologies. This is key for accurate LLM data retrieval via SQL queries in large-scale implementations. Why? Read on.

    Your database has 847 views. 23 are actively used. 114 haven't been queried in over a year. The rest? No one's entirely sure. Someone created vw_customer_analysis_final_v3 in 2021. It's still running. The person who built it left in 2022. You're afraid to delete it because something might break.

    This is the view graveyard problem. Every new use case gets a new view. Every change spawns another version. The namespace gets cluttered. Documentation falls behind. And eventually, your data team spends more time managing views than actually analyzing data.

    The issue isn't the views themselves. It's that they don't scale as an architecture. When every analysis requires a custom view, you're not building infrastructure. You're accumulating technical debt with a SQL extension.

    This is why we built Timbr around relationships instead of views. Define the connections once, let users traverse them dynamically. No view sprawl. No version creep. No archaeological digs through stored procedures.

    How many views does your production database have? And how many can you confidently explain?

    We wrote about why relationships scale where views don't (and what happens when you try to manage 1,000+ views manually). 👉 https://lnkd.in/dupgqY8b

    #DataEngineering #SemanticLayer #DataArchitecture #Analytics
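The scaling argument can be sketched with back-of-the-envelope arithmetic (all numbers illustrative, not measurements):

```python
# Illustrative arithmetic only: in a view-per-use-case model, artifacts
# accumulate with every new question asked of the data; with declared
# relationships, modeling cost is bounded by the number of connections,
# each defined once and traversed dynamically.
relationships = 15          # declared joins between concepts, defined once
new_use_cases_per_year = 40 # each spawning a custom view in the old model
years = 5

views_accumulated = new_use_cases_per_year * years  # plus _v2, _v3 copies
relationships_maintained = relationships            # unchanged over time

print(views_accumulated, relationships_maintained)
```

The view count grows linearly with questions asked (before counting versions), while the relationship count grows only when the business model itself gains a new connection.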

