Conversation
Caution: Review failed. The pull request is closed.

Walkthrough
The pull request introduces extensive modifications across several files related to database schema management. Key changes include the addition of numerous methods to the

Changes
Codecov Report
Attention: Patch coverage is

Additional details and impacted files

@@ Coverage Diff @@
## master #741 +/- ##
==========================================
- Coverage 69.47% 69.41% -0.07%
==========================================
Files 210 210
Lines 17257 17745 +488
==========================================
+ Hits 11990 12318 +328
- Misses 4610 4766 +156
- Partials        657      661      +4

☔ View full report in Codecov by Sentry.
Actionable comments posted: 18
🧹 Outside diff range and nitpick comments (40)
database/schema/processors/sqlserver.go (2)
19-19: Consider using pointer receivers for consistency
The method ProcessColumns uses a value receiver (r Sqlserver). To maintain consistency and avoid unnecessary copies, consider changing the receiver to a pointer (r *Sqlserver).
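For readers unfamiliar with the distinction: a value receiver operates on a copy of the struct, so the copy cost grows with struct size and any mutation inside the method is lost to the caller. A minimal standalone sketch (the counter type is illustrative, not from this codebase):

```go
package main

import "fmt"

type counter struct{ n int }

// IncValue uses a value receiver: it mutates a copy, so the caller's struct is unchanged.
func (c counter) IncValue() { c.n++ }

// IncPointer uses a pointer receiver: it mutates the caller's struct directly.
func (c *counter) IncPointer() { c.n++ }

func main() {
	c := counter{}
	c.IncValue()
	fmt.Println(c.n) // 0
	c.IncPointer()
	fmt.Println(c.n) // 1
}
```

The processors here are stateless, so the difference is copy cost and consistency rather than correctness, which matches the reviewer's framing.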
54-54: Ensure consistent use of receivers
The method ProcessIndexes uses a value receiver (r Sqlserver). For consistency and to avoid unnecessary copying of the struct, consider changing it to a pointer receiver (r *Sqlserver).

database/schema/processors/postgres.go (2)
19-19: Consider using pointer receivers for consistency
The method ProcessColumns uses a value receiver (r Postgres). To maintain consistency and improve performance, consider changing the receiver to a pointer (r *Postgres).

68-68: Ensure consistent use of receivers
The method ProcessIndexes uses a value receiver (r Postgres). For consistency and efficiency, consider changing it to a pointer receiver (r *Postgres).

database/schema/schema.go (2)
31-32: Group related struct fields together
The newly added fields processor and schema are placed after other fields. For improved readability, consider grouping related fields together within the Schema struct.

135-144: Enhance error handling in GetForeignKeys
The GetForeignKeys method could benefit from enhanced error handling. Consider wrapping errors with more context or logging errors to aid in debugging if issues arise during foreign key retrieval.

database/schema/grammars/sqlite_test.go (2)
10-11: Organize import statements
Consider grouping the imports to enhance readability and maintain consistency with the rest of the codebase.

160-272: Improve test structure in TestCompileRenameIndex
The TestCompileRenameIndex function contains several subtests. Consider using s.Run within each subtest for better isolation and clearer test output.

database/schema/grammars/sqlite.go (2)
10-11: Organize imports for clarity
Group the import statements to enhance readability and maintain consistency.

179-219: Refactor complex logic in CompileRenameIndex
The method CompileRenameIndex contains complex conditional logic and relies on logging warnings. Consider refactoring to simplify the logic, improve readability, and ensure all scenarios are properly handled and tested.

database/schema/grammars/mysql.go (3)
Line range hint 284-299: Optimize type assertions in ModifyOnUpdate method
There are redundant type assertions for onUpdate. Refactor the switch statement to eliminate unnecessary assertions and improve readability.

Suggested refactor:

func (r *Mysql) ModifyOnUpdate(_ schema.Blueprint, column schema.ColumnDefinition) string {
	onUpdate := column.GetOnUpdate()
	if onUpdate != nil {
		switch value := onUpdate.(type) {
		case Expression:
			return " on update " + string(value)
		case string:
			if value != "" {
				return " on update " + value
			}
		}
	}
	return ""
}
300-302: Remove unused parameter name in TypeBigInteger method
The parameter name is unnecessary since it's not used within the method. Removing it improves code clarity.

Suggested change:

-func (r *Mysql) TypeBigInteger(_ schema.ColumnDefinition) string {
+func (r *Mysql) TypeBigInteger(schema.ColumnDefinition) string {
 	return "bigint"
 }
442-454: Ensure compileKey method promotes code reuse
The compileKey method is designed for reuse in key compilation. Verify that it's utilized effectively in methods like CompileUnique and CompileFullText to reduce code duplication.

database/schema/grammars/sqlserver.go (1)
115-126: Address the TODO in CompileDropDefaultConstraint method
There's a TODO comment indicating additional logic is needed. Implement the required logic or provide an explanation if it's intentionally left unimplemented.
Do you need assistance in implementing the logic for dropping default constraints?
database/schema/schema_test.go (2)
135-153: Consider renaming TestColumnMethods for clarity
The test seems to focus on dropping columns rather than general column methods. Renaming it to reflect its purpose would improve readability.

1451-1480: Enhance TestFullText to handle driver differences
The TestFullText method has conditional assertions based on the driver. Consider expanding the tests to handle and document the behavior for drivers that do not support full-text indexing.

database/schema/processors/utils.go (1)
Line range hint 9-19: LGTM with a suggestion for error handling.
The type change and index processing logic look good. However, consider adding validation for empty column strings to prevent potential issues with the strings.Split operation.

Consider adding this validation:

 func processIndexes(dbIndexes []schema.DBIndex) []schema.Index {
 	var indexes []schema.Index
 	for _, dbIndex := range dbIndexes {
+		if dbIndex.Columns == "" {
+			continue
+		}
 		indexes = append(indexes, schema.Index{
 			Columns: strings.Split(dbIndex.Columns, ","),
 			Name:    strings.ToLower(dbIndex.Name),

database/schema/constants/constants.go (1)
7-20: LGTM with a documentation suggestion.
The new constants follow a consistent naming pattern and provide comprehensive coverage for index operations. Consider adding documentation comments for each constant group to explain their purpose and usage.

Consider adding documentation like this:

 const (
+	// Table operations
 	CommandAdd     = "add"
 	CommandComment = "comment"
 	CommandCreate  = "create"
 	CommandDrop    = "drop"
+
+	// Column and index operations
 	CommandDropColumn   = "dropColumn"
 	CommandDropForeign  = "dropForeign"
 	CommandDropFullText = "dropFullText"

database/schema/processors/utils_test.go (1)
Line range hint 13-31: Consider enhancing test coverage with additional cases.
While the current tests cover basic functionality, consider adding more test cases using a table-driven approach to improve coverage.

Consider refactoring the tests like this:

 func TestProcessIndexes(t *testing.T) {
-	// Test with valid indexes
-	input := []schema.DBIndex{
-		{Name: "INDEX_A", Type: "BTREE", Columns: "a,b"},
-		{Name: "INDEX_B", Type: "HASH", Columns: "c,d"},
-	}
-	expected := []schema.Index{
-		{Name: "index_a", Type: "btree", Columns: []string{"a", "b"}},
-		{Name: "index_b", Type: "hash", Columns: []string{"c", "d"}},
-	}
-
-	result := processIndexes(input)
-
-	assert.Equal(t, expected, result)
-
-	// Test with empty input
-	input = []schema.DBIndex{}
-
-	result = processIndexes(input)
-
-	assert.Nil(t, result)
+	tests := []struct {
+		name     string
+		input    []schema.DBIndex
+		expected []schema.Index
+	}{
+		{
+			name: "valid indexes",
+			input: []schema.DBIndex{
+				{Name: "INDEX_A", Type: "BTREE", Columns: "a,b"},
+				{Name: "INDEX_B", Type: "HASH", Columns: "c,d"},
+			},
+			expected: []schema.Index{
+				{Name: "index_a", Type: "btree", Columns: []string{"a", "b"}},
+				{Name: "index_b", Type: "hash", Columns: []string{"c", "d"}},
+			},
+		},
+		{
+			name:     "empty input",
+			input:    []schema.DBIndex{},
+			expected: nil,
+		},
+		{
+			name: "empty columns",
+			input: []schema.DBIndex{
+				{Name: "INDEX_A", Type: "BTREE", Columns: ""},
+			},
+			expected: []schema.Index{},
+		},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			result := processIndexes(tt.input)
+			assert.Equal(t, tt.expected, result)
+		})
+	}
 }

contracts/database/schema/index.go (1)
18-20: Consider documenting PostgreSQL-specific features
The new methods Deferrable() and InitiallyImmediate() appear to be PostgreSQL-specific features for constraint management. Consider adding documentation to clarify their intended use and database compatibility.

database/schema/processors/mysql.go (1)
60-62: Document the processIndexes helper function
The ProcessIndexes method delegates to a helper function processIndexes. Consider adding documentation to explain this shared functionality.

database/schema/processors/sqlite.go (1)
35-49: Consider adding validation for foreign key actions and column names
While the implementation is generally sound, consider these improvements:
- Validate OnUpdate/OnDelete actions against allowed values (RESTRICT, CASCADE, SET NULL, etc.)
- Consider using a more robust column splitting mechanism in case column names contain commas
Here's a suggested improvement:
 func (r Sqlite) ProcessForeignKeys(dbForeignKeys []schema.DBForeignKey) []schema.ForeignKey {
+	// Define allowed actions
+	allowedActions := map[string]bool{
+		"restrict":  true,
+		"cascade":   true,
+		"set null":  true,
+		"no action": true,
+	}
+
 	var foreignKeys []schema.ForeignKey
 	for _, dbForeignKey := range dbForeignKeys {
+		onUpdate := strings.ToLower(dbForeignKey.OnUpdate)
+		onDelete := strings.ToLower(dbForeignKey.OnDelete)
+
+		// Validate actions
+		if onUpdate != "" && !allowedActions[onUpdate] {
+			onUpdate = "restrict" // default to restrict if invalid
+		}
+		if onDelete != "" && !allowedActions[onDelete] {
+			onDelete = "restrict" // default to restrict if invalid
+		}
+
 		foreignKeys = append(foreignKeys, schema.ForeignKey{
 			Name:           dbForeignKey.Name,
 			Columns:        strings.Split(dbForeignKey.Columns, ","),
 			ForeignTable:   dbForeignKey.ForeignTable,
 			ForeignColumns: strings.Split(dbForeignKey.ForeignColumns, ","),
-			OnUpdate:       strings.ToLower(dbForeignKey.OnUpdate),
-			OnDelete:       strings.ToLower(dbForeignKey.OnDelete),
+			OnUpdate:       onUpdate,
+			OnDelete:       onDelete,
 		})
 	}

database/schema/index.go (1)
94-110: Consider database compatibility documentation for new index features
The new methods (Deferrable, InitiallyImmediate, Language) appear to be PostgreSQL-specific features. Consider adding documentation comments to clarify database compatibility.

Example documentation:

+// Deferrable sets the index to be deferrable. This is only supported in PostgreSQL.
 func (r *IndexDefinition) Deferrable() schema.IndexDefinition {
 	r.command.Deferrable = convert.Pointer(true)
 	return r
 }

+// InitiallyImmediate sets the deferrable index to be initially immediate. This is only supported in PostgreSQL.
 func (r *IndexDefinition) InitiallyImmediate() schema.IndexDefinition {
 	r.command.InitiallyImmediate = convert.Pointer(true)
 	return r
 }

+// Language sets the index language. This is only supported in PostgreSQL for full-text search indexes.
 func (r *IndexDefinition) Language(name string) schema.IndexDefinition {
 	r.command.Language = name
 	return r
 }

database/schema/grammars/wrap.go (1)
46-54: Consider optimizing PrefixArray for large slices
While the current implementation using collect.Map is clean, for large slices, it might be more efficient to use a pre-allocated slice with direct assignment.

Consider this alternative implementation:

 func (r *Wrap) PrefixArray(prefix string, values []string) []string {
-	return collect.Map(values, func(value string, _ int) string {
-		return prefix + " " + value
-	})
+	result := make([]string, len(values))
+	for i, value := range values {
+		result[i] = prefix + " " + value
+	}
+	return result
 }

database/schema/processors/mysql_test.go (1)
73-99: Enhance foreign key test coverage
While the basic test cases are good, consider adding more test scenarios:
- Multiple foreign keys on a table
- Foreign keys with multiple columns
- Edge cases for ON UPDATE/DELETE actions (RESTRICT, NO ACTION)
Example additional test case:
 tests := []struct {
 	name          string
 	dbForeignKeys []schema.DBForeignKey
 	expected      []schema.ForeignKey
 }{
+	{
+		name: "MultipleColumnsFK",
+		dbForeignKeys: []schema.DBForeignKey{
+			{
+				Name:           "fk_composite",
+				Columns:        "tenant_id,user_id",
+				ForeignSchema:  "public",
+				ForeignTable:   "tenants_users",
+				ForeignColumns: "tenant_id,user_id",
+				OnUpdate:       "RESTRICT",
+				OnDelete:       "NO ACTION",
+			},
+		},
+		expected: []schema.ForeignKey{
+			{
+				Name:           "fk_composite",
+				Columns:        []string{"tenant_id", "user_id"},
+				ForeignSchema:  "public",
+				ForeignTable:   "tenants_users",
+				ForeignColumns: []string{"tenant_id", "user_id"},
+				OnUpdate:       "restrict",
+				OnDelete:       "no action",
+			},
+		},
+	},

database/schema/postgres_schema.go (1)
Line range hint 148-152: Consider error wrapping for better context
When scanning query results, consider wrapping errors with additional context to aid debugging.

 var dbIndexes []contractsschema.DBIndex
-if err := r.orm.Query().Raw(r.grammar.CompileIndexes(schema, table)).Scan(&dbIndexes); err != nil {
+if err := r.orm.Query().Raw(r.grammar.CompileIndexes(schema, table)).Scan(&dbIndexes); err != nil {
+	return nil, fmt.Errorf("failed to scan indexes for table %s.%s: %w", schema, table, err)
+}

database/schema/processors/sqlite_test.go (2)
Line range hint 24-72: Consider adding edge cases to strengthen test coverage.
The test cases are well-structured, but consider adding these scenarios:
- Column with maximum length values
- Column with special characters in name
- Column with empty/null default values
Line range hint 104-143: Refactor test structure to use table-driven tests consistently.
Instead of separate test sections with comments, consider using a table-driven approach similar to other tests:

-	// Test with valid indexes
-	input := []schema.DBIndex{
+	tests := []struct {
+		name     string
+		input    []schema.DBIndex
+		expected []schema.Index
+	}{
+		{
+			name: "ValidIndexes",
+			input: []schema.DBIndex{
+				{Name: "INDEX_A", Type: "BTREE", Columns: "a,b"},
+				{Name: "INDEX_B", Type: "HASH", Columns: "c,d"},
+				{Name: "INDEX_C", Type: "HASH", Columns: "e,f", Primary: true},
+			},
+			expected: []schema.Index{
+				{Name: "index_a", Columns: []string{"a", "b"}},
+				{Name: "index_b", Columns: []string{"c", "d"}},
+				{Name: "index_c", Columns: []string{"e", "f"}, Primary: true},
+			},
+		},
+		// Add more test cases here
+	}
+
+	for _, tt := range tests {
+		s.Run(tt.name, func() {
+			result := sqlite.ProcessIndexes(tt.input)
+			s.Equal(tt.expected, result)
+		})
+	}

database/schema/processors/postgres_test.go (2)
Line range hint 24-72: Add PostgreSQL-specific column type test cases.
Consider adding test cases for PostgreSQL-specific features:
- Array types
- JSON/JSONB types
- Custom types/domains
- Interval types
Line range hint 103-151: Standardize test structure using table-driven tests.
Similar to the suggestion for SQLite tests, consider refactoring to use a consistent table-driven approach:

-	// ValidTypes_ReturnsProcessedTypes
-	input := []schema.Type{
+	tests := []struct {
+		name     string
+		input    []schema.Type
+		expected []schema.Type
+	}{
+		{
+			name: "ValidTypes",
+			input: []schema.Type{
+				{Type: "b", Category: "a"},
+				{Type: "c", Category: "b"},
+				{Type: "d", Category: "c"},
+			},
+			expected: []schema.Type{
+				{Type: "base", Category: "array"},
+				{Type: "composite", Category: "boolean"},
+				{Type: "domain", Category: "composite"},
+			},
+		},
+		// Add more test cases here
+	}
+
+	for _, tt := range tests {
+		s.Run(tt.name, func() {
+			result := postgres.ProcessTypes(tt.input)
+			s.Equal(tt.expected, result)
+		})
+	}

database/schema/processors/sqlserver_test.go (1)
Line range hint 21-103: Standardize SQL Server instance creation across tests.
Currently, some tests use the instance from SetupTest while others create new instances. Standardize by using the suite's instance consistently:

-	sqlserver := NewSqlserver()
-	for _, tt := range tests {
+	for _, tt := range tests {
 		s.Run(tt.name, func() {
-			result := sqlserver.ProcessColumns(tt.dbColumns)
+			result := s.sqlserver.ProcessColumns(tt.dbColumns)
 			s.Equal(tt.expected, result)
 		})
 	}

contracts/database/schema/grammar.go (1)
22-35: Consider adding validation parameters for drop operations
The drop operations (DropColumn, DropForeign, DropFullText, DropIndex, DropPrimary, DropUnique) should include validation to prevent accidental drops of critical schema elements.

Consider adding a force parameter or implementing a validation mechanism in the grammar implementations to confirm these destructive operations, especially for production environments.

contracts/database/schema/blueprint.go (2)
28-31: Consider adding safety checks for drop operations
The Drop and DropColumn operations are destructive. Consider adding mechanisms to:
- Verify the existence of tables/columns before dropping
- Implement a dry-run option
- Add backup capabilities
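The first two mechanisms above can be combined in a small guard. A hedged sketch against a hypothetical narrow interface; the names (tableChecker, safeDrop, memSchema) are illustrative and not the framework's actual Schema contract:

```go
package main

import (
	"errors"
	"fmt"
)

// tableChecker is a hypothetical narrow interface; the real Schema contract differs.
type tableChecker interface {
	HasTable(name string) bool
}

// memSchema is an in-memory stand-in used only for this sketch.
type memSchema struct{ tables map[string]bool }

func (m memSchema) HasTable(name string) bool { return m.tables[name] }

// safeDrop refuses to emit a drop statement for a missing table,
// and supports a dry-run mode that reports the SQL without executing it.
func safeDrop(s tableChecker, table string, dryRun bool) (string, error) {
	if !s.HasTable(table) {
		return "", errors.New("table does not exist: " + table)
	}
	stmt := "drop table " + table
	if dryRun {
		return "-- dry-run: " + stmt, nil
	}
	return stmt, nil
}

func main() {
	s := memSchema{tables: map[string]bool{"users": true}}
	fmt.Println(safeDrop(s, "users", true))
	_, err := safeDrop(s, "missing", false)
	fmt.Println(err)
}
```

Backup capability would sit a layer above this, since it needs access to the data, not just the schema.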
48-57: Consider grouping related drop operations
The drop operations for timestamps and soft deletes could be grouped into a more generic method to improve maintainability.

Consider something like:

+// DropColumns Drop multiple columns with a specific type
+DropColumns(columnType string, columns ...string)

database/schema/grammars/sqlserver_test.go (1)
85-92: Consider adding more test cases for edge scenarios.
While the test covers the basic functionality of dropping multiple columns, consider adding test cases for:
- Dropping a single column
- Handling non-existent columns
- Case sensitivity in column names
Here's a suggested expansion of the test:
 func (s *SqlserverSuite) TestCompileDropColumn() {
 	mockBlueprint := mocksschema.NewBlueprint(s.T())
-	mockBlueprint.EXPECT().GetTableName().Return("users").Twice()
+	tests := []struct {
+		name      string
+		columns   []string
+		tableName string
+		expected  []string
+	}{
+		{
+			name:      "multiple columns",
+			columns:   []string{"id", "name"},
+			tableName: "users",
+			expected:  []string{`DECLARE @sql NVARCHAR(MAX) = '';SELECT @sql += 'ALTER TABLE $table DROP CONSTRAINT ' + OBJECT_NAME([default_object_id]) + ';' FROM sys.columns WHERE [object_id] = OBJECT_ID('"goravel_users"') AND [name] in ("id", "name") AND [default_object_id] <> 0;EXEC(@sql); alter table "goravel_users" drop column "id", "name"`},
+		},
+		{
+			name:      "single column",
+			columns:   []string{"id"},
+			tableName: "users",
+			expected:  []string{`DECLARE @sql NVARCHAR(MAX) = '';SELECT @sql += 'ALTER TABLE $table DROP CONSTRAINT ' + OBJECT_NAME([default_object_id]) + ';' FROM sys.columns WHERE [object_id] = OBJECT_ID('"goravel_users"') AND [name] in ("id") AND [default_object_id] <> 0;EXEC(@sql); alter table "goravel_users" drop column "id"`},
+		},
+	}

-	s.Equal([]string{`DECLARE @sql NVARCHAR(MAX) = '';SELECT @sql += 'ALTER TABLE $table DROP CONSTRAINT ' + OBJECT_NAME([default_object_id]) + ';' FROM sys.columns WHERE [object_id] = OBJECT_ID('"goravel_users"') AND [name] in ("id", "name") AND [default_object_id] <> 0;EXEC(@sql); alter table "goravel_users" drop column "id", "name"`}, s.grammar.CompileDropColumn(mockBlueprint, &contractsschema.Command{
-		Columns: []string{"id", "name"},
-	}))
+	for _, test := range tests {
+		s.Run(test.name, func() {
+			mockBlueprint.EXPECT().GetTableName().Return(test.tableName).Twice()
+			result := s.grammar.CompileDropColumn(mockBlueprint, &contractsschema.Command{
+				Columns: test.columns,
+			})
+			s.Equal(test.expected, result)
+		})
+	}
 }
108-117: Consider expanding test coverage for drop column scenarios.
Similar to the SQL Server tests, this test could benefit from additional test cases covering:
- Single column drops
- Error cases
- Different table name formats
Here's a suggested expansion:
 func (s *MysqlSuite) TestCompileDropColumn() {
-	mockBlueprint := mocksschema.NewBlueprint(s.T())
-	mockBlueprint.EXPECT().GetTableName().Return("users").Once()
-
-	s.Equal([]string{
-		"alter table `goravel_users` drop `id`, drop `name`",
-	}, s.grammar.CompileDropColumn(mockBlueprint, &contractsschema.Command{
-		Columns: []string{"id", "name"},
-	}))
+	tests := []struct {
+		name      string
+		tableName string
+		columns   []string
+		expected  []string
+	}{
+		{
+			name:      "multiple columns",
+			tableName: "users",
+			columns:   []string{"id", "name"},
+			expected:  []string{"alter table `goravel_users` drop `id`, drop `name`"},
+		},
+		{
+			name:      "single column",
+			tableName: "users",
+			columns:   []string{"id"},
+			expected:  []string{"alter table `goravel_users` drop `id`"},
+		},
+		{
+			name:      "table with schema",
+			tableName: "public.users",
+			columns:   []string{"id"},
+			expected:  []string{"alter table `public`.`goravel_users` drop `id`"},
+		},
+	}
+
+	for _, test := range tests {
+		s.Run(test.name, func() {
+			mockBlueprint := mocksschema.NewBlueprint(s.T())
+			mockBlueprint.EXPECT().GetTableName().Return(test.tableName).Once()
+			result := s.grammar.CompileDropColumn(mockBlueprint, &contractsschema.Command{
+				Columns: test.columns,
+			})
+			s.Equal(test.expected, result)
+		})
+	}
 }
119-126: Consider adding more test cases for CompileDropColumn
While the current test case covers dropping multiple columns, consider adding test cases for:
- Dropping a single column
- Edge cases like dropping columns with special characters or reserved keywords
135-140: Add error case handling for CompileDropPrimary
Consider adding test cases to verify behavior when:
- The table doesn't have a primary key
- The table name contains special characters
189-197: Add test cases for different language configurations in CompileFullText
The current test only covers the 'english' language configuration. Consider adding test cases for:
- Different language configurations
- Custom text search configurations
- Multiple columns with different weights
Line range hint 119-291: Improve test consistency and utilities
Consider the following improvements for better maintainability:
- Use table-driven tests consistently across all test methods
- Extract common mock setup into helper methods
- Consider adding a test utility function for creating pointer values (used in deferrable/initially immediate flags)
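The pointer-value utility mentioned in point 3 can be a one-line generic helper (Go 1.18+). The codebase already uses convert.Pointer for this in production code (see the index.go snippet above), so a local test helper would only be convenience sugar; the ptr name below is illustrative:

```go
package main

import "fmt"

// ptr returns a pointer to any value, handy for fixture fields
// such as Deferrable *bool or InitiallyImmediate *bool in test tables.
func ptr[T any](v T) *T { return &v }

func main() {
	deferrable := ptr(true)
	name := ptr("users_pkey")
	fmt.Println(*deferrable, *name)
}
```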
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
⛔ Files ignored due to path filters (5)

- mocks/database/schema/Blueprint.go is excluded by !mocks/**
- mocks/database/schema/Grammar.go is excluded by !mocks/**
- mocks/database/schema/IndexDefinition.go is excluded by !mocks/**
- mocks/database/schema/Processor.go is excluded by !mocks/**
- mocks/database/schema/Schema.go is excluded by !mocks/**
📒 Files selected for processing (35)
- contracts/database/schema/blueprint.go (3 hunks)
- contracts/database/schema/grammar.go (2 hunks)
- contracts/database/schema/index.go (1 hunks)
- contracts/database/schema/processor.go (2 hunks)
- contracts/database/schema/schema.go (3 hunks)
- database/schema/blueprint.go (6 hunks)
- database/schema/blueprint_test.go (1 hunks)
- database/schema/constants/constants.go (1 hunks)
- database/schema/grammars/mysql.go (13 hunks)
- database/schema/grammars/mysql_test.go (2 hunks)
- database/schema/grammars/postgres.go (6 hunks)
- database/schema/grammars/postgres_test.go (3 hunks)
- database/schema/grammars/sqlite.go (6 hunks)
- database/schema/grammars/sqlite_test.go (3 hunks)
- database/schema/grammars/sqlserver.go (5 hunks)
- database/schema/grammars/sqlserver_test.go (1 hunks)
- database/schema/grammars/wrap.go (2 hunks)
- database/schema/index.go (3 hunks)
- database/schema/mysql_schema.go (2 hunks)
- database/schema/postgres_schema.go (2 hunks)
- database/schema/processors/mysql.go (3 hunks)
- database/schema/processors/mysql_test.go (4 hunks)
- database/schema/processors/postgres.go (3 hunks)
- database/schema/processors/postgres_test.go (7 hunks)
- database/schema/processors/sqlite.go (2 hunks)
- database/schema/processors/sqlite_test.go (6 hunks)
- database/schema/processors/sqlserver.go (3 hunks)
- database/schema/processors/sqlserver_test.go (4 hunks)
- database/schema/processors/utils.go (1 hunks)
- database/schema/processors/utils_test.go (2 hunks)
- database/schema/schema.go (7 hunks)
- database/schema/schema_test.go (8 hunks)
- database/schema/sqlite_schema.go (2 hunks)
- database/schema/sqlserver_schema.go (2 hunks)
- errors/list.go (1 hunks)
🔇 Additional comments (52)
database/schema/processors/postgres.go (1)
56-57: Ensure correct splitting of column names
When splitting dbForeignKey.Columns and dbForeignKey.ForeignColumns using strings.Split, columns with commas in their names could cause issues. Verify that column names do not contain commas, or use a parsing method that accounts for this.
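A parsing method that tolerates commas inside quoted identifiers could look like the sketch below. This is a hedged alternative to strings.Split, not the framework's current behavior; it assumes the aggregated string uses double quotes for identifiers, which matches PostgreSQL's quoting convention:

```go
package main

import "fmt"

// splitColumns splits an aggregated column list on commas,
// but keeps commas inside double-quoted identifiers intact.
func splitColumns(s string) []string {
	var cols []string
	var cur []rune
	inQuote := false
	for _, r := range s {
		switch {
		case r == '"':
			inQuote = !inQuote
			cur = append(cur, r)
		case r == ',' && !inQuote:
			cols = append(cols, string(cur))
			cur = cur[:0]
		default:
			cur = append(cur, r)
		}
	}
	if len(cur) > 0 {
		cols = append(cols, string(cur))
	}
	return cols
}

func main() {
	fmt.Println(splitColumns(`a,b`))     // [a b]
	fmt.Println(splitColumns(`"x,y",z`)) // ["x,y" z]
}
```

In practice, verifying that column names never contain commas (as the comment suggests) is simpler; this only matters if quoted identifiers can reach the aggregation.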
database/schema/schema.go (2)
41-42: Verify processor initialization for all drivers
In the NewSchema function, the processor is initialized based on the driver. Ensure that all supported drivers have a corresponding processor initialized, especially for drivers that might be added in the future.
244-244: Check for unintended side effects when passing Schema reference
Passing the entire Schema instance to NewBlueprint instead of just the prefix and table name might introduce unintended side effects. Verify that this change does not impact other parts of the system relying on Blueprint.
database/schema/grammars/sqlite_test.go (3)
17-17: Initialize mockLog field properly
The mockLog field is added to the SqliteSuite struct. Ensure that it is properly initialized and utilized in the test cases that require logging.
25-27: Pass mockLog to NewSqlite in SetupTest
The SetupTest method initializes the grammar with the mockLog. This ensures that logging within the Sqlite grammar can be tested and verified.
129-140: Add test coverage for CompileDropColumn
The TestCompileDropColumn function adds test coverage for the CompileDropColumn method. Ensure that it tests various scenarios, including dropping multiple columns and handling errors.
database/schema/grammars/sqlite.go (3)
16-16: Add log field to Sqlite struct
The addition of the log field allows for logging within the Sqlite grammar, aiding in debugging and error reporting.
23-26: Ensure logger is properly initialized
In the NewSqlite constructor, the log parameter is assigned to the log field. Verify that a non-nil logger is passed to prevent potential nil pointer dereferences when logging.
66-68: Implement CompileDrop method
The CompileDrop method is properly implemented to generate the SQL for dropping a table. Ensure that table names are correctly quoted to handle special characters.
database/schema/grammars/mysql.go (4)
50-52: Verify the necessity of the empty CompileComment method
The CompileComment method currently returns an empty string. If comments are not supported for MySQL in this context, consider adding a comment explaining why it's empty or implementing the functionality if needed.
74-76: Correct implementation of CompileDrop method
The CompileDrop method correctly generates the SQL statement to drop a table.
94-100: Accurate SQL generation in CompileDropColumn method
The CompileDropColumn method correctly constructs the SQL statement to drop specified columns from a table.
102-104: Ensure correct constraint name in CompileDropForeign method
The method drops a foreign key constraint using command.Index. Verify that command.Index accurately reflects the constraint name defined in the database to prevent errors during execution.
database/schema/grammars/sqlserver.go (6)
71-73: Correct implementation of CompileDrop method
The CompileDrop method correctly generates the SQL statement to drop a table in SQL Server.
105-113: Verify constraint existence in CompileDropColumn method
Ensure that the script generated by CompileDropDefaultConstraint correctly handles scenarios where default constraints may not exist for the specified columns, to prevent errors during execution.
132-134: Implement or document CompileDropFullText method
The CompileDropFullText method currently returns an empty string. If full-text index dropping is unsupported or unnecessary, document this; otherwise, consider implementing the functionality.
142-153: Ensure compatibility of CompileForeignKeys SQL query
The SQL query in CompileForeignKeys should be compatible with different versions of SQL Server. Verify that it functions correctly across supported versions.
241-245: Correct usage of sp_rename in CompileRenameIndex method
The CompileRenameIndex method appropriately utilizes sp_rename to rename indexes.
260-265: Consider using ALTER TABLE for unique constraints
Using CREATE UNIQUE INDEX in CompileUnique may differ from adding a unique constraint via ALTER TABLE. Confirm that this approach meets the application's requirements and consider using ALTER TABLE if necessary.
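The practical difference is that a constraint added via ALTER TABLE can later be removed with ALTER TABLE ... DROP CONSTRAINT and participates in constraint metadata, whereas a bare unique index is managed through index DDL. A hedged sketch of what an ALTER TABLE-based compile could emit (the function name and simplified quoting are illustrative, not the grammar's actual API):

```go
package main

import (
	"fmt"
	"strings"
)

// compileUniqueConstraint builds the ALTER TABLE form of a unique constraint,
// as an alternative to CREATE UNIQUE INDEX; identifier quoting is simplified here.
func compileUniqueConstraint(table, name string, columns []string) string {
	quoted := make([]string, len(columns))
	for i, c := range columns {
		quoted[i] = `"` + c + `"`
	}
	return fmt.Sprintf(`alter table "%s" add constraint "%s" unique (%s)`,
		table, name, strings.Join(quoted, ", "))
}

func main() {
	fmt.Println(compileUniqueConstraint("goravel_users", "users_email_unique", []string{"email"}))
}
```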
database/schema/blueprint.go (7)
17-17: Ensure all usages of Blueprint are updated with new schema field
Adding schema schema.Schema to the Blueprint struct may affect existing code. Verify that all instances where Blueprint is instantiated are updated accordingly.
95-99: Correct addition of Drop command
The Drop() method correctly adds a CommandDrop to the blueprint's commands.
101-106: Validate columns in DropColumn method
The DropColumn() method adds a CommandDropColumn with specified columns. Ensure that the columns exist in the table before attempting to drop them to prevent runtime errors.
128-130: Handle non-existent indexes in DropFullTextByName method
Ensure that dropping a full-text index by name handles cases where the index might not exist, possibly by adding error handling or informative messages.
290-292: Confirm parameter change in Primary method
The Primary method parameter has changed to column ...string. Verify that this change doesn't adversely affect existing implementations and that all usages are updated.
294-301: Addition of RenameIndex method
The new RenameIndex method correctly constructs a CommandRenameIndex. Ensure that this functionality is supported across all target database platforms.
426-453: Updated ToSql method correctly handles new commands
The ToSql method now includes the new command types, and the implementation appears accurate.
database/schema/grammars/postgres.go (5)
68-70: Correct implementation of CompileDrop method
The CompileDrop method appropriately generates the SQL statement to drop a table in PostgreSQL.
88-94: Accurate SQL construction in CompileDropColumn method
The CompileDropColumn method correctly creates the SQL statement to drop specified columns.
140-163: Verify compatibility of CompileForeignKeys SQL query
Ensure that the SQL used in CompileForeignKeys is compatible with all supported PostgreSQL versions, especially regarding functions like string_agg and lateral joins.
214-219: Correct usage of ALTER INDEX in CompileRenameIndex
The CompileRenameIndex method correctly implements index renaming in PostgreSQL.
112-118:
Potential issue with hardcoded index naming in CompileDropPrimary
The primary key constraint name is assumed to be <table_name>_pkey. This may not hold if the table name includes uppercase letters or special characters. Verify that constraint names are accurately determined.
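One way to avoid relying on the naming convention is to resolve the real constraint name from the pg_constraint catalog. A sketch of the query construction only; pg_constraint, pg_class, and pg_namespace are standard PostgreSQL system catalogs, but the surrounding query-execution plumbing is omitted and the function name is illustrative:

```go
package main

import "fmt"

// compilePrimaryKeyName returns a query that resolves the actual primary-key
// constraint name for a table, instead of assuming "<table>_pkey".
func compilePrimaryKeyName(schema, table string) string {
	return fmt.Sprintf(
		"select con.conname from pg_constraint con "+
			"join pg_class rel on rel.oid = con.conrelid "+
			"join pg_namespace nsp on nsp.oid = rel.relnamespace "+
			"where con.contype = 'p' and nsp.nspname = '%s' and rel.relname = '%s'",
		schema, table)
}

func main() {
	fmt.Println(compilePrimaryKeyName("public", "goravel_users"))
}
```

Production code should bind schema and table as query parameters rather than interpolating them, which is elided here for brevity.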
database/schema/schema_test.go (2)
64-133: Comprehensive testing in TestColumnExtraAttributes
The test method effectively validates column attributes across different drivers.
Line range hint 701-1005: Ensure consistent test coverage in TestColumnTypes_Mysql
The MySQL-specific test provides detailed assertions. Verify that similar detailed tests exist for other database drivers to maintain consistent coverage.
Would you like help in reviewing or creating tests for other drivers to ensure consistency?
database/schema/processors/sqlite.go (2)
Line range hint 19-34: LGTM: Type standardization looks good
The change to use schema.DBColumn aligns with the standardization of types across the codebase while maintaining correct column processing logic.
Line range hint 51-77: LGTM: Robust index processing implementation
The implementation correctly handles primary key deduplication and uses functional programming patterns effectively.
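One common shape for that deduplication, sketched with bare index names (the real processor works on index structs, not strings):

```go
package main

import "fmt"

// dedupePrimary keeps only the first index named "primary" and
// preserves the order of everything else. A simplified sketch of the
// deduplication described above, not the actual goravel code.
func dedupePrimary(names []string) []string {
	seenPrimary := false
	result := make([]string, 0, len(names))
	for _, n := range names {
		if n == "primary" {
			if seenPrimary {
				continue // drop duplicate primary entries
			}
			seenPrimary = true
		}
		result = append(result, n)
	}
	return result
}

func main() {
	fmt.Println(dedupePrimary([]string{"primary", "idx_email", "primary"}))
}
```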
database/schema/sqlserver_schema.go (2)
Line range hint 59-63: LGTM: Type alignment with schema package
The change to use contractsschema.DBColumn aligns with the standardization effort.
Line range hint 75-79: LGTM: Consistent type usage
The change to use contractsschema.DBIndex maintains consistency with the schema package.
database/schema/sqlite_schema.go (2)
Line range hint 72-76: LGTM: Schema package type alignment
The change to use schema.DBColumn maintains consistency with the schema package.
Line range hint 83-87: LGTM: Consistent type usage
The change to use schema.DBIndex aligns with the schema package standardization.
database/schema/mysql_schema.go (1)
86-86: LGTM! Type migration to contractsschema package
The change from processors.DBColumn/DBIndex to contractsschema.DBColumn/DBIndex aligns with the broader refactoring effort to use contract types consistently.
Let's verify the consistency of this type migration across the codebase:
Also applies to: 97-97
✅ Verification successful
Type migration to contractsschema package is consistent
The verification shows that all database schema files (MySQL, PostgreSQL, and SQL Server) are consistently using contractsschema.DBColumn and contractsschema.DBIndex types. There are no remaining references to the old processors.DBColumn/DBIndex types, confirming a complete migration.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check for any remaining references to processors.DBColumn/DBIndex
rg "processors\.(DBColumn|DBIndex)" -l
# Check for adoption of contractsschema.DBColumn/DBIndex
rg "contractsschema\.(DBColumn|DBIndex)" -l
Length of output: 189
Script:
#!/bin/bash
# Let's check the actual usage context in these files to ensure the migration is complete
rg "processors\.(DBColumn|DBIndex)" -A 2 -B 2
rg "contractsschema\.(DBColumn|DBIndex)" -A 2 -B 2
Length of output: 2298
database/schema/index.go (1)
18-75: LGTM! Consistent receiver variable renaming
The change from f to r as the receiver variable name improves consistency across the codebase.
database/schema/grammars/wrap.go (1)
32-37: LGTM! Improved slice handling in Columns method
The refactoring to use a new pre-allocated slice is more idiomatic and prevents mutation of the input slice.
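The benefit of this pattern is easy to see in isolation: building a new slice with capacity pre-allocated leaves the caller's slice untouched. A minimal sketch (the wrapping rule is simplified; the real Wrap logic handles more cases):

```go
package main

import "fmt"

// wrapColumns builds a new, pre-allocated slice instead of writing back
// into the input, so the caller's data is never mutated.
func wrapColumns(columns []string) []string {
	wrapped := make([]string, 0, len(columns))
	for _, c := range columns {
		wrapped = append(wrapped, `"`+c+`"`)
	}
	return wrapped
}

func main() {
	in := []string{"id", "email"}
	out := wrapColumns(in)
	fmt.Println(in, out) // the input slice is unchanged
}
```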
contracts/database/schema/schema.go (1)
83-99: Consider validation for Command struct fields
The new fields in Command struct might need validation:
- `Algorithm` should probably be an enum/const
- `Deferrable` and `InitiallyImmediate` being pointers suggests nullable fields, but the use case isn't clear
- `Language` might need validation against supported languages
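An enum-style validation for the algorithm field could look like this (a sketch; the constant values are assumptions, not the set goravel actually supports):

```go
package main

import "fmt"

// Algorithm is a typed string so only known index algorithms pass
// validation. The value set here is illustrative.
type Algorithm string

const (
	AlgorithmBTree Algorithm = "btree"
	AlgorithmHash  Algorithm = "hash"
	AlgorithmGin   Algorithm = "gin"
)

// Valid reports whether the algorithm is empty (engine default) or one
// of the known constants.
func (a Algorithm) Valid() bool {
	switch a {
	case "", AlgorithmBTree, AlgorithmHash, AlgorithmGin:
		return true
	}
	return false
}

func main() {
	fmt.Println(AlgorithmHash.Valid(), Algorithm("quantum").Valid())
}
```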
database/schema/postgres_schema.go (1)
Line range hint 132-136: Verify SQL injection prevention for table and schema names
While the code uses Raw queries, ensure that table and schema names are properly escaped:
- Check if `grammar.CompileColumns` properly escapes inputs
- Consider using parameterized queries where possible
✅ Verification successful
Let me gather more information about the escaping implementation by checking the wrap.Quote functionality and any other relevant escaping mechanisms.
Let me check one more aspect - the actual implementation of how these schema and table parameters are used in the Postgres grammar's CompileColumns method.
Based on the gathered information, I can now provide the final response:
SQL injection is properly prevented through parameter escaping
The implementation in Postgres grammar shows proper SQL injection prevention:
- Schema and table parameters are escaped using the `wrap.Quote()` method
- The `Quote()` method wraps values in single quotes
- The query uses system catalog tables (pg_attribute, pg_class, etc.) which are safe from injection
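For readers unfamiliar with this escaping style, here is a minimal sketch of a single-quote wrapper. This is an assumption of how `wrap.Quote` behaves based on the description above, not the verbatim goravel source; it also doubles embedded quotes, the standard SQL string-literal rule:

```go
package main

import (
	"fmt"
	"strings"
)

// quote wraps a value in single quotes and doubles any embedded single
// quote, so the value cannot terminate the SQL literal early.
func quote(value string) string {
	return "'" + strings.ReplaceAll(value, "'", "''") + "'"
}

func main() {
	fmt.Println(quote("users"), quote("o'brien"))
}
```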
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check grammar implementation for escaping
rg -A 5 'func.*CompileColumns|func.*EscapeNames' --type go
Length of output: 4932
Script:
#!/bin/bash
# Check wrap.Quote implementation and related escaping mechanisms
rg -A 5 'func.*Quote|func.*Escape' --type go
# Also check if there are any direct usages of these schema/table parameters
ast-grep --pattern 'CompileColumns($schema, $table)'
Length of output: 4082
Script:
#!/bin/bash
# Get the full implementation of Postgres CompileColumns
rg -A 15 'func \(r \*Postgres\) CompileColumns' database/schema/grammars/postgres.go
# Also check if there's any specific table/schema name validation or sanitization
rg -A 5 'func.*ValidateTable|func.*ValidateSchema|func.*Sanitize' --type go
Length of output: 1193
database/schema/processors/sqlite_test.go (1)
11-22: LGTM! Well-structured test suite setup.
The test suite is properly organized using testify's suite package with appropriate setup method.
contracts/database/schema/grammar.go (2)
12-13: LGTM: CompileDrop method is well-defined
The method signature and documentation follow the established pattern.
38-41: Verify foreign key and fulltext index compilation methods
The new compilation methods for foreign keys and fulltext indexes need careful implementation across different database engines.
✅ Verification successful
Implementation of foreign keys and fulltext indexes verified
Based on the examination of the PostgreSQL implementation and tests:
- The `CompileFullText` method is properly implemented with language support and test coverage
- The `CompileForeignKeys` method is implemented with proper schema and table name handling
- The test suite includes comprehensive test cases for both features, including edge cases
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check if these features are consistently implemented across different database processors
ast-grep --pattern 'func $_(b Blueprint, command *Command) string {
$$$
CompileFullText($$$)
$$$
}'
Length of output: 107
Script:
#!/bin/bash
# Let's check for implementations of CompileFullText and CompileForeignKeys across different database processors
echo "=== Searching for CompileFullText implementations ==="
ast-grep --pattern 'func (p *$_) CompileFullText(blueprint Blueprint, command *Command) string {
$$$
}'
echo -e "\n=== Searching for CompileForeignKeys implementations ==="
ast-grep --pattern 'func (p *$_) CompileForeignKeys(schema string, table string) string {
$$$
}'
# Also check for any test files related to these features
echo -e "\n=== Searching for related test files ==="
fd -e go -E vendor . | grep -i "test" | xargs rg -l "CompileFullText|CompileForeignKeys"
Length of output: 682
Script:
#!/bin/bash
# Let's examine the postgres test file and check the actual implementations in the postgres grammar
echo "=== Checking postgres_test.go ==="
cat database/schema/grammars/postgres_test.go
echo -e "\n=== Checking postgres grammar implementation ==="
rg -A 5 "CompileFullText|CompileForeignKeys" database/schema/grammars/postgres.go
# Let's also check other database grammar implementations
echo -e "\n=== Checking other grammar implementations ==="
fd -e go -E vendor . | grep "grammars" | grep -v "test" | xargs rg -l "type.*Grammar"
Length of output: 17035
contracts/database/schema/blueprint.go (2)
32-45: LGTM: Well-structured index and foreign key operations
The methods for managing indexes and foreign keys are well-organized and follow consistent naming patterns.
98-99: Verify index operations across different database engines
The index operations (RenameIndex and Unique) need to be tested across different database engines to ensure compatibility.
Also applies to: 134-135
✅ Verification successful
Let me search for the database-specific implementations to verify the index operations across different engines.
Index operations are properly implemented across database engines
Based on the search results, the index operations are consistently implemented across all supported database engines:
- MySQL: Uses `alter table {table} rename index {from} to {to}` for renaming and `alter table {table} add unique {index} ({columns})` for unique indexes
- PostgreSQL: Uses `alter index {from} rename to {to}` for renaming and `alter table {table} add constraint {index} unique ({columns})` for unique indexes
- SQLite: Handles index renaming by dropping and recreating the index, and uses `create unique index {index} on {table} ({columns})` for unique indexes
- SQL Server: Uses `sp_rename` for renaming and `create unique index {index} on {table} ({columns})` for unique indexes
Each implementation follows the specific syntax requirements of its respective database engine.
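The per-engine rename statements can be sketched as a small dispatch (templates simplified and identifiers left unquoted for brevity; the function name is illustrative):

```go
package main

import "fmt"

// renameIndexSQL returns the engine-specific rename statement described
// above. SQLite is the odd one out: it has no rename-index statement,
// so the index must be dropped and recreated instead.
func renameIndexSQL(driver, table, from, to string) string {
	switch driver {
	case "mysql":
		return fmt.Sprintf("alter table %s rename index %s to %s", table, from, to)
	case "postgres":
		return fmt.Sprintf("alter index %s rename to %s", from, to)
	case "sqlserver":
		return fmt.Sprintf("sp_rename '%s.%s', '%s', 'index'", table, from, to)
	default:
		return "" // sqlite: drop and recreate, no single statement
	}
}

func main() {
	fmt.Println(renameIndexSQL("mysql", "users", "old_idx", "new_idx"))
	fmt.Println(renameIndexSQL("postgres", "users", "old_idx", "new_idx"))
}
```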
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check implementation consistency across database processors
ast-grep --pattern 'func $_(b Blueprint, command *Command) string {
$$$
CompileUnique($$$)
$$$
}'
Length of output: 105
Script:
#!/bin/bash
# Search for index-related operations in database implementations
rg -A 5 "RenameIndex|CompileRenameIndex" --type go
# Search for unique index operations
rg -A 5 "CompileUnique|CreateUnique" --type go
# Find all database processor implementations
fd -e go -E "test" . | rg "processor|grammar"
Length of output: 20267
errors/list.go (1)
117-117: LGTM: Error constant follows established pattern
The new error constant SchemaFailedToChangeTable is well-placed and follows the existing error message pattern.
database/schema/blueprint_test.go (1)
33-33: Verify schema parameter usage across the codebase.
The addition of the nil schema parameter suggests a change in the NewBlueprint constructor signature. Let's verify its usage.
✅ Verification successful
Let me gather more specific information about the schema parameter usage.
Schema parameter is correctly used as nil in the test
The verification shows that the NewBlueprint constructor in database/schema/blueprint.go accepts three parameters: schema schema.Schema, prefix, table string. The test's usage of nil for the schema parameter is valid as:
- The schema parameter is of interface type `schema.Schema`
- Passing nil for an interface parameter is a valid Go pattern for testing
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check NewBlueprint usage across the codebase to ensure consistent parameter usage
# Search for NewBlueprint calls
rg -A 2 'NewBlueprint\('
Length of output: 12455
Script:
#!/bin/bash
# Check the actual implementation of NewBlueprint and its schema parameter usage
ast-grep --pattern 'func NewBlueprint($_, $_, $_, $_) $_'
Length of output: 59
Script:
#!/bin/bash
# Let's check the actual implementation with correct number of parameters
ast-grep --pattern 'func NewBlueprint($_, $_, $_) $_'
# Also get the implementation details
rg -A 5 '^func NewBlueprint'
Length of output: 745
database/schema/grammars/mysql_test.go (1)
224-239: LGTM! Well-structured test for key compilation.
The test covers both cases (with and without algorithm) effectively. The assertions are clear and the mock expectations are properly set.
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (2)
database/schema/grammars/postgres_test.go (2)
119-126: Consider enhancing test coverage with edge cases
While the basic test case is good, consider adding more test cases using table-driven tests to cover:
- Single column drops
- Column names with special characters
- Schema-qualified tables
Here's a suggested refactor:
```diff
 func (s *PostgresSuite) TestCompileDropColumn() {
-	mockBlueprint := mocksschema.NewBlueprint(s.T())
-	mockBlueprint.EXPECT().GetTableName().Return("users").Once()
-
-	s.Equal([]string([]string{`alter table "goravel_users" drop column "id", drop column "email"`}), s.grammar.CompileDropColumn(mockBlueprint, &contractsschema.Command{
-		Columns: []string{"id", "email"},
-	}))
+	var mockBlueprint *mocksschema.Blueprint
+
+	beforeEach := func() {
+		mockBlueprint = mocksschema.NewBlueprint(s.T())
+	}
+
+	tests := []struct {
+		name      string
+		tableName string
+		columns   []string
+		expectSql []string
+	}{
+		{
+			name:      "multiple columns",
+			tableName: "users",
+			columns:   []string{"id", "email"},
+			expectSql: []string{`alter table "goravel_users" drop column "id", drop column "email"`},
+		},
+		{
+			name:      "single column",
+			tableName: "users",
+			columns:   []string{"id"},
+			expectSql: []string{`alter table "goravel_users" drop column "id"`},
+		},
+		{
+			name:      "schema qualified table",
+			tableName: "public.users",
+			columns:   []string{"id"},
+			expectSql: []string{`alter table "public"."goravel_users" drop column "id"`},
+		},
+	}
+
+	for _, test := range tests {
+		s.Run(test.name, func() {
+			beforeEach()
+			mockBlueprint.EXPECT().GetTableName().Return(test.tableName).Once()
+
+			sql := s.grammar.CompileDropColumn(mockBlueprint, &contractsschema.Command{
+				Columns: test.columns,
+			})
+			s.Equal(test.expectSql, sql)
+		})
+	}
 }
```
135-140: Consider adding test cases for schema-qualified tables
The test covers the basic case but could be enhanced to handle schema-qualified tables.
Here's a suggested refactor:
```diff
 func (s *PostgresSuite) TestCompileDropPrimary() {
-	mockBlueprint := mocksschema.NewBlueprint(s.T())
-	mockBlueprint.EXPECT().GetTableName().Return("users").Once()
-
-	s.Equal(`alter table "goravel_users" drop constraint "goravel_users_pkey"`, s.grammar.CompileDropPrimary(mockBlueprint, &contractsschema.Command{}))
+	var mockBlueprint *mocksschema.Blueprint
+
+	beforeEach := func() {
+		mockBlueprint = mocksschema.NewBlueprint(s.T())
+	}
+
+	tests := []struct {
+		name      string
+		tableName string
+		expectSql string
+	}{
+		{
+			name:      "simple table",
+			tableName: "users",
+			expectSql: `alter table "goravel_users" drop constraint "goravel_users_pkey"`,
+		},
+		{
+			name:      "schema qualified table",
+			tableName: "public.users",
+			expectSql: `alter table "public"."goravel_users" drop constraint "goravel_users_pkey"`,
+		},
+	}
+
+	for _, test := range tests {
+		s.Run(test.name, func() {
+			beforeEach()
+			mockBlueprint.EXPECT().GetTableName().Return(test.tableName).Once()
+
+			sql := s.grammar.CompileDropPrimary(mockBlueprint, &contractsschema.Command{})
+			s.Equal(test.expectSql, sql)
+		})
+	}
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (2)
- `database/schema/grammars/postgres_test.go` (3 hunks)
- `database/schema/grammars/sqlserver_test.go` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- database/schema/grammars/sqlserver_test.go
🔇 Additional comments (1)
database/schema/grammars/postgres_test.go (1)
250-291: 🛠️ Refactor suggestion
Refactor to use table-driven tests
A previous review already suggested this refactor. The current implementation is repetitive and could be more maintainable.
Applying the previously suggested refactor pattern would improve the test structure and make it easier to add new test cases.
📑 Description
Closes https://github.com/goravel/goravel/issues/?
Summary by CodeRabbit
New Features
Bug Fixes
Documentation
✅ Checks