Welcome to the CKAN Ecosystem Catalog!

Discover the portals, extensions, and communities using CKAN to share open data worldwide.

The CKAN Ecosystem, All in One Place

With data on 1066 extensions and 691 sites from around the globe.

CKAN is a trusted open-source platform for sharing and managing open data, used by communities, governments, and organizations around the world.

The Ecosystem Catalog was created to showcase the data, tools, and sites built with CKAN — and to support the people building them. Whether you're here to explore what's possible or to keep your own metadata up to date, this is where you can discover, share, and stay connected.

Highlighted extensions

ckanext-datatablesplus

Stars: 0
Forks: 0
Extended datatables.net CKAN resource viewer
  • datatables
  • dataviewer
  • usability

ckanext-scheming

Stars: 100
Forks: 178
Easy, shareable custom CKAN schemas
  • configuration
  • metadata
  • schema

ckanext-harvest

Stars: 140
Forks: 209
Remote harvesting extension for CKAN
  • data harvesting
  • metadata aggregation
  • remote CKAN

ckanext-dcat

Stars: 192
Forks: 152
The DCAT extension for CKAN enhances data portals by enabling the exposure and consumption of metadata using the DCAT vocabulary, facilitating interoperability with other data catalogs. It provides tools for serializing CKAN datasets as RDF documents and harvesting RDF data from external sources, promoting data sharing and reuse. The extension supports various DCAT Application Profiles and includes features for adapting schemas, validating data, and integrating with search engines like Google Dataset Search.
  • DCAT
  • Metadata
  • RDF
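
To make the "exposure" side concrete, the sketch below fetches a dataset's RDF serialization from a portal running ckanext-dcat and lists the dcat:Dataset entries it describes. The portal URL, dataset name, and available formats are placeholders and assumptions; exact endpoints vary with the extension's version and configuration.

```python
# Minimal sketch: consume the RDF that ckanext-dcat exposes for a dataset.
# The portal URL and dataset name below are placeholders, not real datasets.
import requests
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

PORTAL = "https://demo.ckan.org"   # placeholder portal with ckanext-dcat enabled
DATASET = "sample-dataset"         # placeholder dataset name
DCAT = Namespace("http://www.w3.org/ns/dcat#")
DCT = Namespace("http://purl.org/dc/terms/")

# ckanext-dcat typically serves per-dataset RDF at /dataset/<name>.<format>
# (e.g. .ttl, .rdf, .jsonld) and the whole catalog at /catalog.<format>.
resp = requests.get(f"{PORTAL}/dataset/{DATASET}.ttl", timeout=30)
resp.raise_for_status()

# Parse the Turtle document and print each dcat:Dataset with its title.
g = Graph()
g.parse(data=resp.text, format="turtle")
for ds in g.subjects(RDF.type, DCAT.Dataset):
    print(ds, g.value(ds, DCT.title))
```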

ckanext-spatial

Stars: 135
Forks: 203
Geospatial extension for CKAN
  • geospatial
  • harvesting
  • metadata

datapusher-plus

Stars: 43
Forks: 33
A standalone web service that pushes data into the CKAN DataStore quickly and reliably. It pushes real good!
  • data-processing
  • data-validation
  • metadata-generation
  • performance
  • postgresql-optimization
  • qsv
  • scheming-integration
  • type-inference
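
As a concrete follow-on: once a loader such as Datapusher+ has pushed a file into the DataStore, the rows become queryable through CKAN's standard datastore_search action. A minimal sketch, with a placeholder portal URL and resource id:

```python
# Minimal sketch: query rows that have been loaded into the CKAN DataStore
# (by Datapusher+ or any other loader) via the core datastore_search action.
import requests

PORTAL = "https://demo.ckan.org"                       # placeholder CKAN portal
RESOURCE_ID = "00000000-0000-0000-0000-000000000000"   # placeholder resource id

resp = requests.get(
    f"{PORTAL}/api/3/action/datastore_search",
    params={"resource_id": RESOURCE_ID, "limit": 5},
    timeout=30,
)
resp.raise_for_status()
result = resp.json()["result"]

# The DataStore returns the typed fields (inferred at push time) plus the rows.
print([field["id"] for field in result["fields"]])
for row in result["records"]:
    print(row)
```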

Highlighted sites

Western Pennsylvania Regional Data Center (WPRDC)

Location: Pittsburgh, Pennsylvania, USA
The Western Pennsylvania Regional Data Center (WPRDC) provides a shared technological and legal infrastructure to support research, analysis, decision making, and community engagement. It was created in 2015 and is managed by the University of Pittsburgh Center for Urban and Social Research, in partnership with Allegheny County and the City of Pittsburgh. WPRDC would not be possible without the trust of our partners and support from the Richard King Mellon Foundation, The Heinz Endowments, and the University of Pittsburgh.
  • api_enabled
  • ckan_based
  • demographic_data
  • economic_indicators
  • government_portal
  • metadata_standards
  • municipal_data
  • public_sector_information
  • transparency_initiative
  • us_data

datHere Data Catalog

Location: USA
datHere is a leading provider of data infrastructure engineering services, offering standards-based, best-of-breed, open-source solutions to make data useful, usable, and used. Focusing on data integration, data application development, and data engineering, datHere helps organizations harness the power of their data to drive informed decision-making and achieve their strategic goals.

Key solutions:
  • CKAN DMS-as-a-Service: a Data Management System (DMS) based on CKAN, a comprehensive open-source platform for cataloging, securing, versioning, and collaborating on disparate data assets. It helps organizations manage their data assets effectively, ensuring data governance and facilitating data-driven decision-making.
  • Data Enrichment: datHere links and enriches client data with a variety of data feeds, ensuring high data quality while minimizing the need for manual data cleaning and preparation.
  • Data Feeds: API-ready, high-quality data and metadata sourced from diverse and reliable sources such as the US Census, the Bureau of Labor Statistics (BLS), the Centers for Disease Control and Prevention (CDC), data.gov, USAFacts, and client-specific systems. These feeds are regularly updated, providing clients with valuable insights for their analytics and reporting needs.

Key services:
  • Data Integration: datHere ingests, transforms, and integrates raw data and delivers it into clients' preferred business intelligence (BI) platforms, leveraging a library of ETL (Extract, Transform, Load) tools built on industry-leading open-source technologies such as Apache Airflow, Great Expectations, OpenRefine, and pandas.
  • Data Application Development: datHere develops self-service data applications and visualizations designed for non-analysts, enabling users to explore large datasets with popular tools like Snowflake, Power BI, CARTO, Plotly, Tableau, and Superset.
  • Data Engineering: the team creates reusable and maintainable data pipelines that bring together data from diverse sources, including high-value sources maintained by datHere, and converts raw data into API-ready data feeds for analytical and operational use.

datHere's competitive advantage is its expertise in open-source data solutions: it is a leading provider of CKAN, an open-source data portal platform, and has a deep understanding of data standards and best practices.

Humanitarian Data Exchange

Location: International
The HDX platform has three components: the HDX repository, where data providers can upload their raw data spreadsheets for others to find and use; HDX analytics, a database of high-value data that can be compared across countries and crises, with tools for analysis and visualisation; and standards to help share humanitarian data through the use of a consensus Humanitarian Exchange Language. We are designing the HDX system with the following principles in mind:
  • HDX will aggregate data that already exists. We are not working on primary data collection or the creation of new indicators.
  • HDX will provide technical support for (a) sharing any data, and (b) allowing data providers to decide not to share some data for privacy, security, or ethical reasons. Read our Terms of Service.
  • As selected high-value data moves from the dataset repository into the curated analytic database, we will take it through a quality-review process to ensure that it is sourced, trusted, and can be combined and compared with data from other sources.
  • HDX will use open-source software, open content, and open data as often as possible to reduce costs and in the spirit of transparency. We are using open-source software called CKAN for the dataset repository. We partner with ScraperWiki for data transformation and operations support. You can find all of our code on GitHub.

The plan for 2014 is to create a place where users can easily find humanitarian data and understand the data's source, collection methodology, and license for reuse. We will be working with three countries, Colombia, Kenya (for Eastern Africa), and Yemen, to introduce the platform to partners and to integrate local systems. Our initial public beta will allow users to find and share data through the HDX repository. We will continue to build on this foundation into 2015, eventually adding functionality for data visualization and custom analytics. The HDX project is ambitious, but it presents an excellent opportunity to change the way humanitarians share, access, and use data, with positive implications for those who need assistance. We want to ensure that users are at the centre of our design process, so please join the conversation on our blog, follow us on Twitter, and send us feedback.
  • api_enabled
  • ckan_based
  • data_visualization
  • disaster_response
  • emergency_management
  • global_coverage
  • international_organization
  • metadata_standards
  • non_profit_portal
  • open_source_tools

data.gov.au

Location: Canberra, Australian Capital Territory, Australia
Data.gov.au is the central source of Australian open government data. Anyone can access the public data published by federal, state, and local government agencies. Data.gov.au is not responsible for creating, maintaining, or updating these published datasets: agencies are the custodians of the data they collect, and they make decisions about how it can be shared safely, often guided by legislation and policy requirements that define what data can be shared and how. This data is a national resource that holds considerable value for growing the economy, improving service delivery, and transforming policy outcomes. In addition to government data, you can also find publicly funded research data and datasets from private institutions that are in the public interest. The federal government's Data and Digital Government Strategy requires all government agencies to make non-sensitive data open by default.
  • api_enabled
  • asia_pacific_focus
  • citizen_oriented
  • ckan_based
  • data_visualization
  • government_portal
  • national_data
  • open_data_charter
  • public_sector_information
  • transparency_initiative

Data.gov

Location: USA
The home of the U.S. Government's open data: here you will find data, tools, and resources to conduct research, develop web and mobile applications, design data visualizations, and more.
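
As a hedged illustration of working with a CKAN-based catalog like this one, the sketch below runs a keyword search through the standard CKAN package_search action. The base URL (Data.gov's catalog has historically exposed a CKAN API at catalog.data.gov) and the query are assumptions for illustration only.

```python
# Minimal sketch: search a CKAN-based catalog through the core Action API.
# The base URL and query are assumptions; package_search is part of core CKAN.
import requests

BASE = "https://catalog.data.gov"   # assumed CKAN endpoint behind Data.gov

resp = requests.get(
    f"{BASE}/api/3/action/package_search",
    params={"q": "air quality", "rows": 5},
    timeout=30,
)
resp.raise_for_status()
result = resp.json()["result"]

# Print the total hit count and the titles of the first few datasets.
print(f"{result['count']} matching datasets")
for pkg in result["results"]:
    print("-", pkg["title"])
```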

NASA Open Data Portal

Location: Washington, DC 20546, United States
Data.nasa.gov is NASA's publicly available metadata repository, a catalog of diverse datasets related to science, space exploration, aeronautics, and more. Making NASA's metadata publicly accessible, in compliance with the OPEN Government Data Act, fosters transparency, collaboration, and scientific advancement. Most dataset pages house only metadata: the actual data is typically hosted on other NASA archive sites, and data.nasa.gov provides that metadata along with links to where the data lives.