
feature: idiomatic table backup management implementation #761

Closed
zamnuts wants to merge 18 commits into master from feature/introduce-backups

Conversation

@zamnuts
Contributor

@zamnuts zamnuts commented Jul 13, 2020

This is the hand-written implementation that introduces backup support for tables.

  • A backup represents a table on a specific cluster
  • Backups reside local to a cluster
  • A backup is restored as a table to the cluster from where it originated
  • proto methods have been mirrored to the relevant classes based on parent resource requirements:
    • createBackup, getBackup, listBackups, updateBackup, and deleteBackup are in Cluster
    • restoreTable is in Instance
  • Friendlier, more intuitive methods wrap the aforementioned proto proxies and require less input:
    • table.backup() to create a backup
    • Introduced a Backup class with delete, get, restore, and update methods
    • Dates are JS-friendly and params/responses convert to/from Timestamp Struct, Date, and PreciseDate
  • Some methods return a Backup or Backup[] where applicable, but Longrunning Operations (LROs) do not
    • LROs do not transform their result into a Backup; this is left to the user
    • LROs return POJOs instead
    • Added cluster.asBackup casting helper to convert POJOs into Backup
  • Metadata results were sparse in the backup response structs; in most cases the "backup" itself is the metadata
  • Aside: wrote tests using async/await
    • promisify is refactored as a pass-through to allow for promises
    • Changed this in unrelated test suites and fixed signature tests where they were incorrect
  • When testing, instances cannot be deleted if they have backups defined
    • Remove all backups before removing the instances
    • A fully parallel backup reaper was introduced to help; it runs before the instance reaper
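The Date/Timestamp conversion noted in the description above can be sketched roughly as follows. This is a minimal illustration; the interface and helper names here are assumptions, not the PR's actual code:

```typescript
// A minimal sketch (assumed names, not the PR's actual helpers) of the
// JS-friendly Date <-> protobuf Timestamp conversion described above.
interface ITimestamp {
  seconds: number; // whole seconds since the Unix epoch
  nanos: number;   // sub-second remainder in nanoseconds
}

function dateToTimestamp(date: Date): ITimestamp {
  const millis = date.getTime();
  return {
    seconds: Math.floor(millis / 1000),
    nanos: (millis % 1000) * 1e6,
  };
}

function timestampToDate(ts: ITimestamp): Date {
  return new Date(ts.seconds * 1000 + Math.round(ts.nanos / 1e6));
}

const expireTime = dateToTimestamp(new Date('2020-07-13T00:00:00.500Z'));
console.log(expireTime.seconds, expireTime.nanos); // 1594598400 500000000
```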

PR checklist:

  • Make sure to open an issue as a bug/issue before writing your code! That way we can discuss the change, evaluate designs, and agree on the general idea
  • Ensure the tests and linter pass
  • Code coverage does not decrease (if any source code was changed)
  • Appropriate docs were updated (if necessary)

Fixes ? 🦕

@zamnuts zamnuts added the type: enhancement, priority: p0, and api: bigtable labels Jul 13, 2020
@zamnuts zamnuts requested a review from a team as a code owner July 13, 2020 11:43
@zamnuts zamnuts self-assigned this Jul 13, 2020
@googlebot googlebot added the cla: yes label Jul 13, 2020
@zamnuts
Contributor Author

zamnuts commented Jul 13, 2020

System and samples tests are both currently failing due to:

Error: 12 UNIMPLEMENTED: Method not found.

All necessary methods are present (in master) within v2.BigtableTableAdminClient, and these work locally against my test cluster. I expect this is an issue with the RPC version available to kokoro?

@AVaksman
Contributor

It seems the Backup class structure here is a departure from the pattern followed by other classes (Instance, Cluster, Table, AppProfile) in the library, where an instance of a class represents an object that can interact with a Cloud Bigtable object.

A factory reference usually takes a pathway to the service (e.g. the instance object, instance.bigtable.request) and an id; from there the object can interact with Cloud Bigtable.

const backup = instance.backup('my-backup', 'my-cluster');
backup.getMetadata();
backup.update(...);

src/backup.ts Outdated
* @param {Bigtable} bigtable
* @param {google.bigtable.admin.v2.IBackup} backup A Backup or
*/
constructor(bigtable: Bigtable, backup: google.bigtable.admin.v2.IBackup) {
Contributor

Looking at the pattern of other classes in the library, the constructor should take an instance of its parent class (seems Cluster in this case) and a string id.

A user may pass just 'my-backup' as an id, or a full name such as 'projects/{my-project}/instances/{my-instance}/clusters/{my-cluster}/backups/{my-backup}'. The pattern is then to

let name;
if (id.includes('/')) {
  if (id.startsWith(`${cluster.name}/backups/`)) {
    name = id;
  } else {
    throw new Error(`Backup id '${id}' is not formatted correctly.
Please use the format 'my-backup' or '${cluster.name}/backups/my-backup'.`);
  }
} else {
  name = `${cluster.name}/backups/${id}`;
}

this.id = name.split('/').pop()!;
this.name = name;

src/backup.ts Outdated
constructor(bigtable: Bigtable, backup: google.bigtable.admin.v2.IBackup) {
Object.assign(this, backup);

this.metadata = backup.valueOf();
Contributor

As per the pattern, this.metadata usually gets set/updated during the getMetadata call.

src/backup.ts Outdated
Object.assign(this, backup);

this.metadata = backup.valueOf();
this._bigtable = bigtable;
Contributor

This would be cluster.bigtable if following the suggestion above.

Also, this.cluster = cluster is useful for delegating backup.create.

src/backup.ts Outdated
Comment on lines +33 to +38
sourceTable?: string | null;
expireTime?: google.protobuf.ITimestamp | null;
startTime?: google.protobuf.ITimestamp | null;
endTime?: google.protobuf.ITimestamp | null;
sizeBytes?: number | Long | string | null;
state?:
Contributor

If following the pattern of other classes in the library, shouldn't these values be located in this.metadata?

Contributor

@AVaksman I don't quite follow this comment. Could you explain a bit? Or maybe this comment is showing in the wrong place?

Contributor

What I meant to say is that usually all proto properties of a resource are located in the metadata property of the resource object.
Instance.metadata has type, labels, displayName, state
Cluster.metadata has defaultStorageType, serveNodes, location, state
Table.metadata has clusterStates, columnFamilies, granularity
Class objects usually have the properties id, name, metadata, bigtable, and instance (if it is not an instance of class Instance).
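The conventional shape described above can be sketched like this. It is an illustrative, self-contained stand-in (not the library's actual source): a parent reference plus id/name, with proto fields living on `metadata`, populated by a getMetadata() call rather than the constructor:

```typescript
// Illustrative sketch of the conventional class shape: id, name, a parent
// Cluster reference, and proto fields on `metadata` (set via getMetadata()).
interface IBackupMetadata {
  sourceTable?: string;
  expireTime?: {seconds: number; nanos: number};
  state?: string;
}

class Cluster {
  constructor(public name: string) {}
}

class Backup {
  id: string;
  name: string;
  metadata?: IBackupMetadata; // set during a getMetadata() call, per the pattern

  constructor(public cluster: Cluster, id: string) {
    if (id.includes('/')) {
      if (!id.startsWith(`${cluster.name}/backups/`)) {
        throw new Error(`Backup id '${id}' is not formatted correctly.`);
      }
      this.name = id;
    } else {
      this.name = `${cluster.name}/backups/${id}`;
    }
    this.id = this.name.split('/').pop()!;
  }
}

const cluster = new Cluster('projects/p/instances/i/clusters/c');
const backup = new Backup(cluster, 'my-backup');
console.log(backup.name); // projects/p/instances/i/clusters/c/backups/my-backup
```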

src/backup.ts Outdated
* @readonly
* @return {string}
*/
get projectId(): string {
Contributor

projectId might not be available at the time the object is instantiated; it is usually resolved during the first network call in prepareGaxRequest via getProjectId_, and subsequently updated across reqOpts.

src/backup.ts Outdated
*/
update(
fields: ModifiableBackupFields,
options?: UpdateBackupOptions
Contributor

Could probably use a top-level gaxOptions?: CallOptions as in other setMetadata methods.

src/backup.ts Outdated
update(
fields: ModifiableBackupFields,
options?: UpdateBackupOptions
): Promise<UpdateBackupResponse> {
Contributor

Would need overloads here as well
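The overloads being requested for update might look like this minimal sketch. The names and the stubbed response are illustrative; the real method would issue the updateBackup RPC:

```typescript
// Sketch of the overload pattern: promise when no callback is given,
// callback otherwise. The "response" here is a stand-in, not a real RPC.
type CallOptions = {timeout?: number};
type UpdateBackupCallback = (err: Error | null, metadata?: object) => void;

function update(fields: object, gaxOptions?: CallOptions): Promise<object>;
function update(fields: object, callback: UpdateBackupCallback): void;
function update(
  fields: object,
  gaxOptions: CallOptions,
  callback: UpdateBackupCallback
): void;
function update(
  fields: object,
  optionsOrCallback?: CallOptions | UpdateBackupCallback,
  cb?: UpdateBackupCallback
): void | Promise<object> {
  const callback =
    typeof optionsOrCallback === 'function' ? optionsOrCallback : cb;
  // gaxOptions would be forwarded to the RPC layer in a real implementation.
  const result = {...fields, updated: true}; // stand-in for the RPC response
  if (callback) {
    callback(null, result);
    return;
  }
  return Promise.resolve(result);
}

update({expireTime: '...'}, (err, metadata) => {
  if (!err) console.log('updated', metadata);
});
```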

src/backup.ts Outdated
Comment on lines +283 to +287
return this._bigtable
.instance(this.instanceId)
.cluster(this.clusterId)
.updateBackup(this.backupId, fields, options);
}
Contributor

Usually setMetadata functions belong to the object itself.

src/backup.ts Outdated

return this._bigtable
.instance(this.instanceId)
.restoreTable(this.backupId, this.clusterId, tableId, options);
Contributor

Please refer to comment for instance.restoreTable

src/backup.ts Outdated
* A plain-old-javascript-object representation of this backup.
* @return {google.bigtable.admin.v2.IBackup}
*/
valueOf(): google.bigtable.admin.v2.IBackup {
Contributor

I do not think we utilize this method in other classes. Usually this value would be assigned to this.metadata

src/cluster.ts Outdated
Comment on lines +104 to +106
export interface CreateBackupOptions {
gaxOptions?: CallOptions;
}
Contributor

Perhaps we should package all but the id params in one object:

export interface CreateBackupOptions {
  tableId: string; // tableId might be enough since we can build full table path
  expireTime: BackupTimestamp;
  gaxOptions?: CallOptions;
}

Similar to the createInstance, createCluster, and createTable functions.

src/cluster.ts Outdated
* minimum 6 hours from the time of the backup request and a maximum of 30
* days.
*/
expireTime?: BackupTimestamp;
Contributor

expireTime could be required here since it is required for creation of a backup, and is the only field that can be updated on an existing backup.

src/instance.ts Outdated
Comment on lines +142 to +151
export type RestoreTableCallback = (
err: ServiceError | null,
apiResponse?: LROperation<
google.bigtable.admin.v2.ITable,
google.bigtable.admin.v2.IRestoreTableMetadata
>
) => void;
Contributor

Long running operation functions that return a resource object usually have the return signatures as in this example

export interface LongRunningResourceCallback<Resource> {
  (
    err: ServiceError | null,
    resource?: Resource,
    operation?: Operation,
    apiResponse?: IOperation
  ): void;
}
export type CreateInstanceCallback = LongRunningResourceCallback<Instance>;
export type CreateInstanceResponse = [Instance, Operation, IOperation];

src/cluster.ts Outdated
Comment on lines +107 to +119
export type CreateBackupCallback = (
err: ServiceError | null,
apiResponse?: LROperation<
google.bigtable.admin.v2.IBackup,
google.bigtable.admin.v2.ICreateBackupMetadata
>
) => void;
export type CreateBackupResponse = [
LROperation<
google.bigtable.admin.v2.IBackup,
google.bigtable.admin.v2.ICreateBackupMetadata
>
];
Contributor

Should use LRO function return signatures similar to the example above.

src/cluster.ts Outdated
* described at https://aip.dev/132#ordering.
*/
orderBy?: string;

Contributor

should have autoPaginate, pageSize and pageToken as well.

src/cluster.ts Outdated
Comment on lines +175 to +179
export type ListBackupsResponse = [Backup[]];
export type ListBackupsCallback = (
err: ServiceError | null,
metadata?: Backup[]
) => void;
Contributor

Paged functions callbacks/responses usually are of the form

export interface PagedCallback {
  (
    err: ServiceError | null,
    results?: Backup[] | null,
    nextQuery?: GetBackupsOptions | null,
    response?: Response | null
  ): void;
}

src/cluster.ts Outdated
* `name` should be supplied to be useful in most cases.
* @return {Backup}
*/
asBackup(backup: google.bigtable.admin.v2.IBackup): Backup {
Contributor

Should we introduce a factory backup function similar to Instance.table or Instance.cluster?

src/cluster.ts Outdated
Comment on lines +619 to +622
table: Table | string,
id: string,
fields: Required<ModifiableBackupFields>,
optionsOrCallback?: CreateBackupOptions | CreateBackupCallback,
Contributor

Commented above about possibly combining params.

src/cluster.ts Outdated
);
}

listBackups(options?: ListBackupsOptions): Promise<ListBackupsResponse>;
Contributor

Probably should be named getBackups, like other similar paged functions (getTables, getClusters, ...).

Contributor

Are the other underlying RPCs similarly listTables and not getTables? Either way this may make sense for interface consistency.

Contributor

Yes, API has getInstances, getClusters, getTables, getAppProfiles, ... that wrap underlying list RPCs.

src/cluster.ts Outdated
* @example <caption>include:samples/document-snippets/cluster.js</caption>
* region_tag:bigtable_cluster_list_backups
*/
listBackups(
Contributor

Suggesting to implement getBackups in the Instance class to get all backups across all clusters on the instance by specifying reqOpts.parent as projects/{project}/instances/{instance}/clusters/-, and have cluster.getBackups delegate to instance.getBackups with the specific cluster name as reqOpts.parent.
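The delegation being suggested can be sketched with a small helper that builds the reqOpts.parent path, using `-` as the wildcard cluster id. The helper name is hypothetical, for illustration only:

```typescript
// Hypothetical helper: build the parent path for a ListBackups request.
// `-` is the wildcard cluster id, matching backups across all clusters.
function backupsParent(instanceName: string, clusterId = '-'): string {
  return `${instanceName}/clusters/${clusterId}`;
}

console.log(backupsParent('projects/p/instances/i'));
// projects/p/instances/i/clusters/-
console.log(backupsParent('projects/p/instances/i', 'c1'));
// projects/p/instances/i/clusters/c1
```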

src/cluster.ts Outdated
Comment on lines +875 to +878
id: string,
fields: ModifiableBackupFields,
optionsOrCallback?: UpdateBackupOptions | UpdateBackupCallback,
cb?: UpdateBackupCallback
Contributor

Pattern-wise it seems we do not have similar methods in other classes, just object.setMetadata

Otherwise would propose to bundle up the params in one options object.

src/cluster.ts Outdated
*
* These methods can be auto-paginated.
*/
paginator.extend(Cluster, ['listBackups']);
Contributor

I believe gapic supports auto-pagination now. I do not think this is needed?

Contributor Author

I went through pagination thoroughly and recall finding that gapic was incompatible with what was expected from the backups RPC, at least per the generated types and protobufs. The implementation as it stands using the paginator library works as expected.

src/instance.ts Outdated
reqOpts,
gaxOpts: options.gaxOptions,
},
callback // TODO cast as Table
Contributor

Should splice a Table object into the callback.
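The splice described here might look like the following sketch: wrap the raw RPC callback so the user receives a constructed Table. The names and response shape are simplified assumptions, not the library's actual code:

```typescript
// Sketch: wrap a raw RPC callback so it yields a Table object built from the
// response's name field. Types are simplified stand-ins.
class Table {
  constructor(public name: string) {}
}

type RawResponse = {name?: string};
type RawCallback = (err: Error | null, resp?: RawResponse) => void;
type TableCallback = (
  err: Error | null,
  table?: Table,
  resp?: RawResponse
) => void;

function withTable(callback: TableCallback): RawCallback {
  return (err, resp) => {
    if (err || !resp || !resp.name) {
      callback(err, undefined, resp);
      return;
    }
    callback(null, new Table(resp.name), resp); // splice in the Table object
  };
}

// Simulated RPC completion:
withTable((err, table) => console.log(table && table.name))(
  null,
  {name: 'projects/p/instances/i/tables/restored'}
); // projects/p/instances/i/tables/restored
```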

src/backup.ts Outdated

/**
* @param {Bigtable} bigtable
* @param {google.bigtable.admin.v2.IBackup} backup A Backup or
Contributor

Seems this comment is incomplete?


// Alternatively, if a backup object is available, use the `delete` helper.
const backupPartial = cluster.asBackup({
name: backupFromTable.name, // used from another example
Contributor

I don't understand this comment?

cluster
.deleteBackup(backupId, {
gaxOptions: {
timeout: 50 * 1000, // this op takes a good 30-40 seconds
Contributor

nit: time cannot itself be good or bad ;) also, probably shouldn't abbreviate op


src/cluster.ts Outdated
/**
* The ITimestamp (Date or PreciseDate will be converted) representing
* when the backup will automatically be deleted. This must be at a
* minimum 6 hours from the time of the backup request and a maximum of 30
Contributor

this min/max is mentioned but I didn't see anything validating this.

Contributor Author

From what I found, this min/max was not documented. The gRPC response will return an error outside of these bounds. I opted not to perform client-side validation in case this changes in the future.


zamnuts added 16 commits July 28, 2020 05:10
…expire to field encapsulation for flexibility
…for backup timestamps, show Backup usage in sample
…le name instead of reference, orig backup as public metadata
@zamnuts zamnuts force-pushed the feature/introduce-backups branch from 4d8a372 to 1b69be9 July 28, 2020 12:11
@codecov

codecov bot commented Jul 28, 2020

Codecov Report

Merging #761 into master will increase coverage by 0.04%.
The diff coverage is 99.81%.


@@            Coverage Diff             @@
##           master     #761      +/-   ##
==========================================
+ Coverage   99.14%   99.18%   +0.04%     
==========================================
  Files          17       18       +1     
  Lines       15880    16938    +1058     
  Branches      955     1039      +84     
==========================================
+ Hits        15744    16800    +1056     
- Misses        133      135       +2     
  Partials        3        3              
Impacted Files Coverage Δ
src/cluster.ts 99.79% <99.63%> (-0.21%) ⬇️
src/backup.ts 100.00% <100.00%> (ø)
src/index.ts 100.00% <100.00%> (ø)
src/instance.ts 100.00% <100.00%> (ø)
src/table.ts 99.64% <100.00%> (+0.01%) ⬆️

Continue to review full report at Codecov.

Legend:
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update c694ae5...1be71e2.

"api-documenter": "api-documenter yaml --input-folder=temp"
},
"dependencies": {
"@google-cloud/paginator": "^3.0.0",
Contributor

@google-cloud/paginator was explicitly removed here:
8a2a74e

@AVaksman want to work with @zamnuts on what we're supposed to be doing here now?

Contributor Author

@google-cloud/paginator was explicitly removed here:
8a2a74e

@AVaksman want to work with @zamnuts on what we're supposed to be doing here now?

I noticed that commit. I've successfully refactored the stream portion, but am still having trouble understanding the pagination.

Contributor

Sure,

  • add gapic method name (listBackupsStream: true) to gapicStreamingMethods
  • pipe, or rather pumpify, received objects through a Transform stream that converts proto objects into this library's Backup objects. Similar reference below.

const self = this;
const transformToAppProfile = (
  chunk: google.bigtable.admin.v2.IAppProfile,
  enc: string,
  callback: Function
) => {
  const appProfile = self.appProfile(chunk.name!.split('/').pop()!);
  appProfile.metadata = chunk;
  callback(null, appProfile);
};
return pumpify.obj([
  this.bigtable.request({
    client: 'BigtableInstanceAdminClient',
    method: 'listAppProfilesStream',
    reqOpts,
    gaxOpts: gaxOptions,
  }),
  new Transform({objectMode: true, transform: transformToAppProfile}),
]);

const uuid = require('uuid');
const {inspect} = require('util');

function generateBackupId() {
Contributor

These almost always belong in the samples test, and are passed in as a parameter to the main function (will keep reading though)

return `gcloud-tests-${uuid.v4()}`.substr(0, 48);
}

async function runBackupOperations(
Contributor

It's important that we have an outer main function, with an inner async function due to the way these get embedded in cloudsite. Please follow the pattern here:
https://github.com/googleapis/nodejs-bigtable/blob/master/samples/quickstart.js
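The referenced pattern (an outer main function with an inner async function, as in quickstart.js) reduces to roughly this skeleton. The ids are placeholders and the backup calls are elided; this is a sketch of the layout, not the actual sample:

```typescript
// Skeleton of the samples layout: an outer main() with an inner async
// function, so the snippet embeds cleanly. The backup calls themselves
// are elided; the return value is a stand-in for demonstration.
function main(instanceId = 'my-instance', backupId = 'my-backup') {
  async function runBackupOperations(): Promise<string> {
    // ...create/get/restore/delete backup calls would go here...
    return `${instanceId}/${backupId}`;
  }
  return runBackupOperations();
}

main().then(result => console.log('ran backup operations for', result));
```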

});
console.log(
'Started a table backup operation:',
inspect(backupOperation.latestResponse.name, {depth: null, colors: true})
Contributor

Typically we don't drop down to util.inspect, but just drop individual properties in the logs. Since this is a name, is it just a string type?

// [END bigtable_delete_backup]
}

require('yargs')
Contributor

In our modern samples we do not use yargs, and instead use the model found in https://github.com/googleapis/nodejs-bigtable/blob/master/samples/quickstart.js

// [END bigtable_cluster_set_meta]
},

createBackup: (instanceId, clusterId, tableId, backupId) => {
Contributor

These snippets are tech debt :/ I believe we don't have tests for many of them. For any new sample code, please create a new file, and follow https://github.com/googleapis/nodejs-bigtable/blob/master/samples/quickstart.js as a guide. One per file!

Contributor Author

Thanks for reviewing the samples. There were competing patterns, and I wasn't sure what to implement. I'll revise based on your feedback.


describe('backups', () => {
before(async () => {
await instance.create({
Contributor

I seem to remember there being a fairly low cap on the number of clusters you can have at a time. I created samples/test/util.js to help check clusters in and out for re-use, to keep the number of clusters we create down where possible.

});

it('should create a backup of a table', () => {
clusterSnippets.createBackup(INSTANCE_ID, CLUSTER_ID, TABLE_ID, BACKUP_ID);
Contributor

Are these all async functions? Do we need any assertions to make sure they ran successfully? Is there a chance these tests are all skipped in the describe block?

src/backup.ts Outdated
* @example <caption>include:samples/document-snippets/cluster.js</caption>
* region_tag:bigtable_cluster_delete_backup
*/
delete(options?: DeleteBackupOptions): Promise<DeleteBackupResponse> {
Contributor

You should keep the overloads as is

delete(gaxOptions?: CallOptions): Promise<DeleteBackupResponse>;
delete(callback: DeleteBackupCallback): void;
delete(gaxOptions: CallOptions, callback: DeleteBackupCallback): void;

and use async for the function implementation

async delete(
  optionsOrCallback?: CallOptions | DeleteBackupCallback
): Promise<DeleteBackupResponse> {

Provide callback support via callbackifyAll

src/instance.ts Outdated
export interface RestoreTableOptions {
gaxOptions?: CallOptions;
}
export type RestoreTableCallback = (
Contributor

should these types be on table rather than instance?

@kolea2
Contributor

kolea2 commented Aug 18, 2020

@zamnuts can you take a look at the build failures? Thanks!

@zamnuts
Contributor Author

zamnuts commented Aug 18, 2020

@zamnuts can you take a look at the build failures? Thanks!

Build failures are due to automated tests not being updated to reflect the refactored implementation.

Comment on lines +41 to +42
export type DeleteBackupCallback = GenericBackupCallback<IEmpty>;
export type DeleteBackupResponse = [Backup, IEmpty];
Contributor

DeleteCallback/Response should just return IEmpty. See the same in Instance and Table.

Comment on lines +48 to +60
export type CreateBackupCallback = (
err: ServiceError | null,
apiResponse?: LROperation<
google.bigtable.admin.v2.IBackup,
google.bigtable.admin.v2.ICreateBackupMetadata
>
) => void;
export type CreateBackupResponse = [
LROperation<
google.bigtable.admin.v2.IBackup,
google.bigtable.admin.v2.ICreateBackupMetadata
>
];
Contributor

Create backup callback/response should follow the following pattern

export interface LongRunningResourceCallback<Resource> {
  (
    err: ServiceError | null,
    resource?: Resource,
    operation?: Operation,
    apiResponse?: IOperation
  ): void;
}
export type CreateInstanceCallback = LongRunningResourceCallback<Instance>;
export type CreateInstanceResponse = [Instance, Operation, IOperation];

where an Operation is imported from google-gax and IOperation is google.longrunning.IOperation

Use case

const [backup, operation] = await backup.create(...);
await operation.promise();
console.log('finished creating backup');

Or

backup.create(..., (err, backup, operation) => {
  if (err) {...}
  operation
    .on('error', (err) => {...})
    .on('complete', () => console.log('finished creating backup'));
});

Comment on lines +62 to +76
export type RestoreTableCallback = (
err: ServiceError | null,
table: Table | null,
apiResponse?: LROperation<
google.bigtable.admin.v2.ITable,
google.bigtable.admin.v2.IRestoreTableMetadata
>
) => void;
export type RestoreTableResponse = [
Table,
LROperation<
google.bigtable.admin.v2.ITable,
google.bigtable.admin.v2.IRestoreTableMetadata
>
];
Contributor

restoreTable callback/response should follow the same pattern as createBackup callback/response above.

* described at https://aip.dev/132#ordering.
*/
orderBy?: string;

Contributor

Should add pageSize?: number and pageToken?: string here as well.
Should respect the same values if passed within gaxOptions (until pageSize and pageToken are removed from CallOptions).
Top-level values should take precedence:

      reqOpts = extend(
        {},
        {
          pageSize: gaxOpts.pageSize,
          pageToken: gaxOpts.pageToken,
        },
        reqOpts
      );
      delete gaxOpts.pageSize;
      delete gaxOpts.pageToken;

Comment on lines +100 to +105
export type GetBackupsResponse = [Backup[], IBackup[]];
export type GetBackupsCallback = (
err: ServiceError | null,
backups?: Backup[],
apiResponse?: IBackup[]
) => void;
Contributor

GetBackupCallback/Response should follow the following pattern

export type GetBackupsCallback = (
  err: ServiceError | null,
  backups?: Backup[],
  nextQuery?: GetBackupsOptions | null,
  apiResponse?: google.bigtable.admin.v2.IListBackupsResponse | null
) => void;
export type GetBackupsResponse = [
  Backup[],
  GetBackupsOptions,
  google.bigtable.admin.v2.IListBackupsResponse
];

backup(
idOrName: string,
cluster: string | Cluster,
fields?: ModifiableBackupFields
Contributor

As per my comment above, I do not think fields are needed in Backup constructor

if (!clusterId) {
throw new Error('A complete backup name (path) is required.');
}
backupFrom = new Backup(this.cluster(clusterId), backup);
Contributor

Suggested change:

- backupFrom = new Backup(this.cluster(clusterId), backup);
+ backupFrom = this.backup(this.cluster(clusterId), backup);

Comment on lines +1107 to +1108
tableId: string,
backup: Backup | string,
Contributor

Should wrap these in an options object.

Comment on lines +1121 to +1123
const tableAdminClient = this.bigtable.api[
'BigtableTableAdminClient'
] as BigtableTableAdminClient;
Contributor

this.bigtable.api may not contain a BigtableTableAdminClient at this point.
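One generic way to avoid assuming the client exists is to create it lazily on first use. This is a lazy-initialization sketch under assumed names, not the library's actual mechanism:

```typescript
// Generic lazy-init sketch: fetch a cached client, creating it via the
// supplied factory on first use instead of assuming it was set up earlier.
class ApiRegistry {
  private clients = new Map<string, object>();

  getClient<T extends object>(name: string, factory: () => T): T {
    let client = this.clients.get(name) as T | undefined;
    if (!client) {
      client = factory(); // first use: create and cache
      this.clients.set(name, client);
    }
    return client;
  }
}

const api = new ApiRegistry();
const admin = api.getClient('BigtableTableAdminClient', () => ({ok: true}));
console.log(admin.ok); // true
```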

* @example <caption>include:samples/document-snippets/table.js</caption>
* region_tag:bigtable_create_table
*/
backup(
Contributor

Not sure about this convenience function, as it internally sends an additional RPC.

@stephenplusplus stephenplusplus added the do not merge label Sep 1, 2020
@stephenplusplus stephenplusplus mentioned this pull request Sep 2, 2020
@zamnuts zamnuts closed this Sep 14, 2020