Releases: pinecone-io/cli

v0.3.1

12 Feb 17:00
213c636

Changelog

  • 1654bb0 Fix --file/--body aliasing in pc vector upsert (#64)
  • 213c636 make sure NewIndexConnection uses index.private_host if it's available (#65)

v0.3.0

19 Dec 21:03
9a4ea15

This release introduces a number of new features:

Backup and restore serverless indexes

You can now back up and restore serverless indexes using the pc backup command. A backup is a static, non-queryable copy of a serverless index that only consumes storage. You can create a backup of a serverless index, and you can create a new serverless index from a backup, restoring the index with the same or a different configuration.

# Create a backup from an existing index
pc backup create --index-name my-index --name my-index-backup --description "my index backup"

# List all backups in the current project, or filter by index
pc backup list
pc backup list --index-name my-index

# Describe a specific backup
pc backup describe --id backup-id-123

# Restore an index from a backup
pc backup restore --id backup-id-123 --name my-index-restored

# List all restore jobs for the current project
pc backup restore list

# Describe a specific restore job
pc backup restore describe --id restore-id-123

# Delete a backup
pc backup delete --id backup-id-123

Work with index namespaces

The pc index namespace command allows you to explicitly work with namespaces within an index.

# Create a namespace
pc index namespace create --index-name my-index --name ns-1

# Describe a namespace
pc index namespace describe --index-name my-index --name ns-1

# List index namespaces
pc index namespace list --index-name my-index

# Delete a namespace, including all of its data
pc index namespace delete --index-name my-index --name ns-1

Indexes with Dedicated Read Node configuration

The CLI now supports creating indexes with dedicated read node configurations. Indexes built on dedicated read nodes use provisioned read hardware to provide predictable, consistent performance at sustained, high query volumes. They’re designed for large-scale vector workloads such as semantic search, recommendation engines, and mission-critical services.

# Create a dedicated serverless index and an on-demand index
pc index create \
  --name dedicated-index \
  --dimension 1824 \
  --metric cosine \
  --region us-east-1 \
  --cloud aws \
  --read-node-type b1 \
  --read-shards 1 \
  --read-replicas 1

pc index create \
  --name on-demand-index \
  --dimension 1824 \
  --metric cosine \
  --region us-east-1 \
  --cloud aws

# Convert a dedicated index to an on-demand index
pc index configure --name dedicated-index --read-mode ondemand

# Convert an on-demand index to a dedicated index
pc index configure --name on-demand-index --read-mode dedicated

BYOC Indexes

If you have set up your own environment for deploying Pinecone (bring your own cloud), you can create a BYOC index using the --byoc-environment flag.

pc index create --name byoc-index --byoc-environment aws-us-east-1-b921 --metric cosine --dimension 1824

Serverless index metadata schema

You can now create serverless indexes with defined metadata schemas.

pc index create \
  --name on-demand-index \
  --dimension 1824 \
  --metric cosine \
  --region us-east-1 \
  --cloud aws \
  --schema genre,year,director

Changelog

  • 9a4ea15 Allow attribution tags through go-pinecone (PINECONE_CLI_ATTRIBUTION_TAG) (#63)
  • aeec157 Implement BYOC, DRN, and Metadata Indexing (#60)
  • d091eea Implement backup command (serverless index backups / restore jobs) (#62)
  • 254adb4 Implement pc index namespace command (#61)

v0.2.0

27 Nov 04:16
132b30f

Vector Data Operations

This release introduces the pc index vector command suite, which lets you manage the data inside your Pinecone indexes directly from the CLI.

Vector Command Suite

Work with data inside an index. These commands require --index-name and optionally accept --namespace. Use the --help flag with any command for detailed documentation on flags and usage:

Manage vector records

  • pc index vector upsert - insert or update vectors from JSON/JSONL
  • pc index vector list - list vectors (with pagination)
  • pc index vector fetch - fetch by IDs or metadata filter
  • pc index vector update - update a vector by ID or update many via metadata filter
  • pc index vector delete - delete by IDs, by filter, or delete all in a namespace
  • pc index vector query - nearest-neighbor search by values or vector ID

Index statistics

  • pc index stats - show dimension, total vector count, and namespace summaries for an index
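For example, stats takes the same --index-name flag as the vector commands (the index name below is a placeholder):

```shell
# Show dimension, total vector count, and per-namespace summaries
pc index stats --index-name my-index
```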

JSON Input Formats & Flag Ergonomics

Many vector commands accept JSON input in three formats:

1. Inline JSON (smaller payloads)

pc index vector fetch \
  --index-name my-index \
  --namespace my-namespace \
  --ids '["vec-1","vec-2"]'

2. JSON or JSONL files (.json, .jsonl)

You can pass JSON data by file path. For vector upsert operations, JSONL files can be used instead of JSON.

pc index vector upsert \
  --index-name my-index \
  --namespace my-namespace \
  --body ./vectors.jsonl

3. From stdin using -

Passing - as a flag's value reads that value from stdin. Only one flag per command can use stdin.

cat vectors.jsonl | pc index vector upsert \
  --index-name my-index \
  --namespace my-namespace \
  --body -

JSON Schemas

Commands that support a --body flag use types in the vector package, which wrap types in the go-pinecone SDK. The --body flag allows you to provide JSON payloads in lieu of flags:

  • UpsertBody — object with an array vectors of pinecone.Vector objects
  • QueryBody — fields: id, vector, sparse_values, filter, top_k, include_values, include_metadata
  • FetchBody — fields: ids, filter, limit, pagination_token
  • UpdateBody — fields: id, values, sparse_values, metadata, filter, dry_run

Example vectors.json (UpsertBody - dense vectors)

{
  "vectors": [
    {
      "id": "vec-1",
      "values": [0.1, 0.2, 0.3],
      "metadata": { "genre": "sci-fi", "title": "Voyager" }
    },
    {
      "id": "vec-2",
      "values": [0.3, 0.1, 0.2],
      "metadata": { "genre": "fantasy", "title": "Dragon" }
    }
  ]
}

Example JSONL format

{"id":"vec-1","values":[0.1,0.2,0.3],"metadata":{"genre":"sci-fi","title":"Voyager"}}
{"id":"vec-2","values":[0.3,0.1,0.2],"metadata":{"genre":"fantasy","title":"Dragon"}}
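For comparison, a QueryBody payload using the fields listed above might look like the following sketch (the values are illustrative, and the filter assumes the same metadata as the upsert example):

```json
{
  "vector": [0.1, 0.2, 0.3],
  "top_k": 3,
  "filter": { "genre": { "$eq": "sci-fi" } },
  "include_values": false,
  "include_metadata": true
}
```

You pass it the same way as other bodies, e.g. with --body ./query.json or --body - for stdin.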

Usage Examples

Upsert data

pc index vector upsert \
  --index-name my-index \
  --namespace my-namespace \
  --body ./vectors.json

List vectors

pc index vector list --index-name my-index --namespace my-namespace

Fetch vectors by ID or by metadata filter

pc index vector fetch \
  --index-name my-index \
  --namespace my-namespace \
  --ids '["vec-1"]'

pc index vector fetch \
  --index-name my-index \
  --namespace my-namespace \
  --filter '{"genre":{"$eq":"drama"}}'

Query by vector values or by an existing vector ID

pc index vector query \
  --index-name my-index \
  --namespace my-namespace \
  --vector '[0.1,0.2,0.3]' \
  --top-k 3 \
  --include-metadata

pc index vector query \
  --index-name my-index \
  --namespace my-namespace \
  --id vec-1 \
  --top-k 3

Changelog

  • 132b30f Clean up presenters pointer handling (#59)
  • 63b72c1 Finalize README for new vector operations (#58)
  • 845bd1c Implement Vector Upsert, Query, Fetch, List, Delete, and Update (#54)
  • 723dd6c Implement sdk.NewIndexConnection, clean up context.Context passing (#55)
  • 8b2d473 Refactor ingestion for file/stdin (#56)
  • 65859ab Rename describe-stats -> stats (#57)

Full Changelog: v0.1.3...v0.2.0

v0.1.3

15 Nov 03:09
587914f

Changelog

  • 587914f Refactor / fix exit package (#53)
  • a36e3d8 Update to go-pinecone@v5.0.0, add e2e testing harness (#52)

v0.1.2

28 Oct 06:21
94f4bc8

Changelog

  • 94f4bc8 Add TokenError, improve exit package utilities, improve error logging (#50)
  • 1e572b9 Fix --tags not being applied to integrated indexes, refactor /index/create.go for unit tests (#51)

v0.1.1

21 Oct 03:15
44c51c0

Changelog

  • 44c51c0 [Bug] Resolve invalid memory address or nil pointer dereference when building clients (#49)

v0.1.0

09 Oct 18:21
08bfb80

We've released v0.1.0 of the Pinecone CLI. The CLI lets you manage Pinecone infrastructure (organizations, projects, indexes, and API keys) directly from your terminal and in CI/CD.

This feature is in public preview. We'll be adding more features to the CLI over time, and we'd love your feedback on this early version.

For more information, see the CLI overview.

Changelog

v0.0.60

09 Oct 07:07
a44fc31

Changelog

  • a44fc31 Fix man pages symlink during brew install (#47)

v0.0.59

09 Oct 06:09
d191dea

Changelog

  • d191dea Add --project-id and --organization-id flags to target command (#44)
  • e83067f Add man pages generation utility, include in packaging (#42)
  • b72536b Add more robust Long help text for commands (#43)
  • 33be8e4 Clean up Cobra Example blocks (#41)
  • f58beec Update docs URLs in help text (#45)

v0.0.58

26 Sep 07:04
f136d97

This release overhauls working with different authentication credentials when interacting with Pinecone resources through the CLI. There are now three ways to authenticate with Pinecone services.

User Token $ pc auth login

Logging in via browser with your Pinecone account gives you access to the admin API, allowing you to work with organizations, projects, and API keys. Logging in clears any previously configured service account credentials. After logging in, you can target an organization and a project, and re-target using $ pc target.

Service Account $ pc auth configure --client-id --client-secret

Service account credentials (client ID and secret) can be configured for accessing the admin API. Service accounts are scoped to a single organization, and you can work with projects and API keys inside that organization. Configuring a service account will clear any previous user login tokens.

The organization the service account belongs to will be set in the target context, and you will be able to select a target project either interactively or by using the --project-id flag with $ pc auth configure.

Global API Key $ pc auth configure --global-api-key

Configuring an API key override allows working directly with index and collection resources. API keys are scoped to a single project, and any existing organization and project target context will be ignored in favor of the API key override.
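A minimal sketch of this flow (the key value is a placeholder):

```shell
# Configure a project-scoped API key override
pc auth configure --global-api-key "YOUR_API_KEY"

# Index and collection commands now resolve against the key's project
pc index list
```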

Working with project API keys

You can store API keys locally for use by the CLI with the --store flag: $ pc api-key create --project-id your-project-id --store. This stores the key value locally, allowing you to work with index and collection commands. If you do not explicitly associate an API key with a project, the CLI will handle this for you.

New sub-commands have been added to the auth command that let you list and prune the API keys the CLI is locally managing for projects: $ pc auth local-keys list, $ pc auth local-keys prune. These utilities, alongside the general $ pc api-key command, offer flexibility in working with project resources.

$ pc auth configure --client-id my-client-id --client-secret my-client-secret --project-id my-project-id
$ pc project list

$ pc target --project test-staging-1
$ pc api-key create --name api-key-1 --store
$ pc index list

# List all of the API keys the CLI has stored locally
$ pc auth local-keys list

# Prune / cleanup managed keys
$ pc auth local-keys prune --skip-confirmation --origin 

Changelog

  • f136d97 Handle various auth credentials, remove dashboard and network packages (#39)