Releases: pinecone-io/cli
v0.3.1
v0.3.0
This release introduces a number of new features:
- Serverless index backup and restore job management.
- Index namespace management.
- BYOC (Bring Your Own Cloud) Indexes.
- Indexes with dedicated read nodes.
- Index metadata schema support.
Backup and restore serverless indexes
You can now back up and restore serverless indexes using the pc backup command. A backup is a static copy of a serverless index that only consumes storage. It is a non-queryable representation of a set of records. You can create a backup of a serverless index, and you can create a new serverless index from a backup. This allows you to restore the index with the same or a different configuration.
# Create a backup from an existing index
pc backup create --index-name my-index --name my-index-backup --description "my index backup"
# List all backups in the current project, or filter by index
pc backup list
pc backup list --index-name my-index
# Describe a specific backup
pc backup describe --id backup-id-123
# Restore an index from a backup
pc backup restore --id backup-id-123 --name my-index-restored
# List all restore jobs for the current project
pc backup restore list
# Describe a specific restore job
pc backup restore describe --id restore-id-123
# Delete a backup
pc backup delete --id backup-id-123

Work with index namespaces
The pc index namespace command allows you to explicitly work with namespaces within an index.
# Create a namespace
pc index namespace create --index-name my-index --name ns-1
# Describe a namespace
pc index namespace describe --index-name my-index --name ns-1
# List index namespaces
pc index namespace list --index-name my-index
# Delete a namespace, including all of its data
pc index namespace delete --index-name my-index --name ns-1

Indexes with Dedicated Read Node configuration
The CLI now supports creating indexes with dedicated read node configurations. Indexes built on dedicated read nodes use provisioned read hardware to provide predictable, consistent performance at sustained, high query volumes. They’re designed for large-scale vector workloads such as semantic search, recommendation engines, and mission-critical services.
# Create a dedicated serverless index, and an on demand index
pc index create \
--name dedicated-index \
--dimension 1824 \
--metric cosine \
--region us-east-1 \
--cloud aws \
--read-node-type b1 \
--read-shards 1 \
--read-replicas 1
pc index create \
--name on-demand-index \
--dimension 1824 \
--metric cosine \
--region us-east-1 \
--cloud aws
# Convert a dedicated index to an on demand index
pc index configure --name dedicated-index --read-mode ondemand
# Convert an on demand index to a dedicated index
pc index configure --name on-demand-index --read-mode dedicated

BYOC Indexes
If you have gone through the process of setting up your own environment for deploying Pinecone, you can create a BYOC index using the --byoc-environment flag.
$ pc index create --name byoc-index --byoc-environment aws-us-east-1-b921 --metric cosine --dimension 1824

Serverless index metadata schema
You can now create serverless indexes with defined metadata schemas.
pc index create \
--name on-demand-index \
--dimension 1824 \
--metric cosine \
--region us-east-1 \
--cloud aws \
--schema genre,year,director

Changelog
v0.2.0
Vector Data Operations
This release introduces the pc index vector command suite, which lets you manage data inside your Pinecone indexes via the CLI.
Vector Command Suite
Work with data inside an index. These commands require an --index-name and optionally accept a --namespace. Use the --help flag with any command to get detailed documentation on flags and usage:
Manage vector records
- pc index vector upsert - insert or update vectors from JSON/JSONL
- pc index vector list - list vectors (with pagination)
- pc index vector fetch - fetch by IDs or metadata filter
- pc index vector update - update a vector by ID or update many via metadata filter
- pc index vector delete - delete by IDs, by filter, or delete all in a namespace
- pc index vector query - nearest-neighbor search by values or vector ID
Index statistics
- pc index stats - show dimension, total vector count, and namespace summaries for an index
JSON Input Formats & Flag Ergonomics
Many vector commands accept JSON input in three different formats:
1. Inline JSON (smaller payloads)
pc index vector fetch \
--index-name my-index \
--namespace my-namespace \
--ids '["vec-1","vec-2"]'

2. JSON or JSONL files (.json, .jsonl)
You can pass JSON data via a file path. For vector upsert operations, JSONL files can be used instead of JSON.
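If your records start as a single UpsertBody-style JSON document, converting it to JSONL is one object per line. A minimal sketch using only the Python standard library (file names are illustrative):

```python
import json

def json_to_jsonl(src_path: str, dest_path: str) -> int:
    """Convert an UpsertBody-style JSON file ({"vectors": [...]})
    into one compact JSON object per line (JSONL)."""
    with open(src_path) as src:
        vectors = json.load(src)["vectors"]
    with open(dest_path, "w") as dest:
        for record in vectors:
            dest.write(json.dumps(record, separators=(",", ":")) + "\n")
    return len(vectors)
```

The resulting .jsonl file can then be passed to --body as shown in the upsert example below.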
pc index vector upsert \
--index-name my-index \
--namespace my-namespace \
--body ./vectors.jsonl

3. From stdin using -
Passing a - for a flag requests stdin for that value. Only one flag can use stdin per command.
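When records are generated programmatically, a small script can emit JSONL on stdout and pipe it straight into the CLI. A sketch with made-up record contents:

```python
import json
import sys

def emit_jsonl(records, out=sys.stdout):
    """Write one compact JSON object per line -- the JSONL format
    that upsert accepts on stdin via --body -."""
    for record in records:
        out.write(json.dumps(record, separators=(",", ":")) + "\n")

if __name__ == "__main__":
    # Illustrative records only; real payloads come from your pipeline.
    emit_jsonl(
        {"id": f"vec-{i}", "values": [0.1 * i, 0.2 * i, 0.3 * i]}
        for i in range(1, 4)
    )
```

Assuming the script is saved as emit.py, it could be piped as: python emit.py | pc index vector upsert --index-name my-index --body -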
cat vectors.jsonl | pc index vector upsert \
--index-name my-index \
--namespace my-namespace \
--body -

JSON Schemas
Commands that support a --body flag use types in the vector package, which wrap types in the go-pinecone SDK. The --body flag allows you to provide JSON payloads in lieu of flags:
- UpsertBody - object with an array vectors of pinecone.Vector objects
- QueryBody - fields: id, vector, sparse_values, filter, top_k, include_values, include_metadata
- FetchBody - fields: ids, filter, limit, pagination_token
- UpdateBody - fields: id, values, sparse_values, metadata, filter, dry_run
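For instance, a QueryBody payload built from the fields listed above might look like the following sketch (values are illustrative):

```json
{
  "vector": [0.1, 0.2, 0.3],
  "top_k": 3,
  "filter": { "genre": { "$eq": "sci-fi" } },
  "include_values": false,
  "include_metadata": true
}
```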
Example vectors.json (UpsertBody - dense vectors)
{
"vectors": [
{
"id": "vec-1",
"values": [0.1, 0.2, 0.3],
"metadata": { "genre": "sci-fi", "title": "Voyager" }
},
{
"id": "vec-2",
"values": [0.3, 0.1, 0.2],
"metadata": { "genre": "fantasy", "title": "Dragon" }
}
]
}

Example JSONL format
{"id":"vec-1","values":[0.1,0.2,0.3],"metadata":{"genre":"sci-fi","title":"Voyager"}}
{"id":"vec-2","values":[0.3,0.1,0.2],"metadata":{"genre":"fantasy","title":"Dragon"}}

Usage Examples
Upsert data
pc index vector upsert \
--index-name my-index \
--namespace my-namespace \
--body ./vectors.json

List Vectors
pc index vector list --index-name my-index --namespace my-namespace

Fetch vectors by ID and metadata filter
pc index vector fetch \
--index-name my-index \
--namespace my-namespace \
--ids '["vec-1"]'
pc index vector fetch \
--index-name my-index \
--namespace my-namespace \
--filter '{"genre":{"$eq":"drama"}}'

Query by vector and existing vector ID
pc index vector query \
--index-name my-index \
--namespace my-namespace \
--vector '[0.1,0.2,0.3]' \
--top-k 3 \
--include-metadata
pc index vector query \
--index-name my-index \
--namespace my-namespace \
--id vec-1 \
--top-k 3

Changelog
- 132b30f Clean up presenters pointer handling (#59)
- 63b72c1 Finalize README for new vector operations (#58)
- 845bd1c Implement Vector Upsert, Query, Fetch, List, Delete, and Update (#54)
- 723dd6c Implement sdk.NewIndexConnection, clean up context.Context passing (#55)
- 8b2d473 Refactor ingestion for file/stdin (#56)
- 65859ab Rename describe-stats -> stats (#57)
Full Changelog: v0.1.3...v0.2.0
v0.1.3
v0.1.2
v0.1.1
v0.1.0
We've released v0.1.0 of the Pinecone CLI. The CLI lets you manage Pinecone infrastructure (organizations, projects, indexes, and API keys) directly from your terminal and in CI/CD.
This feature is in public preview. We'll be adding more features to the CLI over time, and we'd love your feedback on this early version.
For more information, see the CLI overview.
Changelog
v0.0.60
v0.0.59
Changelog
v0.0.58
This release overhauls how the CLI handles authentication credentials when interacting with Pinecone resources. There are now three methods for authenticating with Pinecone services.
User Token $ pc auth login
Logging in via the browser with your Pinecone account gives you access to the admin API, allowing you to work with organizations, projects, and API keys. Logging in will clear any previously configured service account credentials. After logging in, you will be able to target an organization and a project, and re-target using $ pc target.
Service Account $ pc auth configure --client-id --client-secret
Service account credentials (client ID and secret) can be configured for accessing the admin API. Service accounts are scoped to a single organization, and you can work with projects and API keys inside of that organization. Configuring a service account will clear any previous user login tokens.
The organization the service account belongs to will be set in the target context, and you will be able to select a target project either interactively or by using the --project-id flag with $ pc auth configure.
Global API Key $ pc auth configure --global-api-key
Configuring an API key override allows working directly with index and collection resources. API keys are scoped to a single project, and any existing organization and project target context will be ignored in favor of the API key override.
Working with project API keys
You can store API keys locally for use by the CLI by using the --store flag when calling $ pc api-key create --project-id your-project-id --store. This will store the key value locally, allowing you to work with index and collection commands. If you do not explicitly associate an API key with a project, the CLI will handle this for you.
New sub-commands have been introduced under the auth command that let you list and prune the API keys the CLI manages locally for projects: $ pc auth local-keys list and $ pc auth local-keys prune. These utilities, alongside the general $ pc api-key command, offer flexibility in working with project resources.
$ pc auth configure --client-id my-client-id --client-secret my-client-secret --project-id my-project-id
$ pc project list
$ pc target --project test-staging-1
$ pc api-key create --name api-key-1 --store
$ pc index list
# List all of the API keys the CLI has stored locally
$ pc auth local-keys list
# Prune / cleanup managed keys
$ pc auth local-keys prune --skip-confirmation --origin