Fix all compilation errors #11612
Conversation
CodSpeed Performance Report: Merging #11612 will not alter performance.
Force-pushed: 7040795 to a40012c, 37faa87 to 68e781e, a40012c to c137200, 68e781e to dc0ed20, c137200 to b356439, dc0ed20 to 5f634e1
crates/ruff_linter/src/linter.rs
Outdated
```rust
pycodestyle::rules::syntax_error(&mut diagnostics, &parse_error, locator);
error = Some(parse_error);
CommentRanges::default()
```
I'm not sure it makes sense to consume the `Program` and include `CommentRanges` in `LinterResult`. I'm thinking of just cloning the `ParseError` here; it shouldn't affect performance at all.
Yeah we could do that too. Or you could consider adding a take_errors method to Program.
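For illustration, a `take_errors` method could follow the standard `std::mem::take` pattern, moving the errors out without cloning. This is a minimal self-contained sketch: the `Program` and `ParseError` types here are stand-ins, not the real `ruff_python_parser` types.

```rust
/// Stand-in for the parser's error type (hypothetical, for illustration).
#[derive(Debug, Clone, PartialEq)]
struct ParseError {
    message: String,
}

/// Stand-in for the parser's `Program`, which owns any syntax errors.
struct Program {
    errors: Vec<ParseError>,
}

impl Program {
    /// Move the errors out, leaving an empty list behind, so callers
    /// get owned `ParseError`s without consuming the whole `Program`.
    fn take_errors(&mut self) -> Vec<ParseError> {
        std::mem::take(&mut self.errors)
    }
}

fn main() {
    let mut program = Program {
        errors: vec![ParseError {
            message: "unexpected indent".to_string(),
        }],
    };
    let errors = program.take_errors();
    assert_eq!(errors.len(), 1);
    // The program is still usable afterwards; its error list is now empty.
    assert!(program.errors.is_empty());
}
```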
```diff
  let program =
-     ruff_python_parser::parse_unchecked_source(source_kind.source_code(), source_type);
+     ruff_python_parser::parse_unchecked_source(transformed.source_code(), source_type);
```
Um, this is actually a bug. Sorry, it's difficult to separate all these changes carefully.
```rust
/// Create a new [`Lexer`] for the given source code and [`Mode`].
pub fn lex(source: &str, mode: Mode) -> Lexer {
    Lexer::new(source, mode, TextSize::default())
}
```
We need to expose the lexer because the benchmark crate requires it.
```diff
  fn tokenize_reverse(&self) -> Vec<SimpleToken> {
-     let program = parse_module(self.source).expect("Input to be a valid Python program");
+     let program = parse_unchecked(self.source, Mode::Module);
      BackwardsTokenizer::new(self.source, self.range, program.comment_ranges()).collect()
```
We're only interested in the comment ranges and the source can contain syntax errors.
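As an aside, the reason this is safe is that a comment scan never needs a successful parse. Here is a self-contained toy sketch of that idea; it is not ruff's `CommentRanges` implementation, and unlike a real lexer it does not handle `#` inside string literals.

```rust
/// Byte ranges of `#`-comments, computed without parsing, so
/// syntactically invalid source is fine. Toy version: a real
/// implementation must also skip `#` inside string literals.
fn comment_ranges(source: &str) -> Vec<(usize, usize)> {
    let mut ranges = Vec::new();
    let lines_with_offsets = source.lines().scan(0usize, |pos, line| {
        let start = *pos;
        *pos += line.len() + 1; // advance past the line and its newline
        Some((start, line))
    });
    for (offset, line) in lines_with_offsets {
        if let Some(hash) = line.find('#') {
            ranges.push((offset + hash, offset + line.len()));
        }
    }
    ranges
}

fn main() {
    // `def f(:` is a syntax error, but the comment is still found.
    let src = "def f(:  # broken but commented\nx = 1\n";
    let ranges = comment_ranges(src);
    assert_eq!(ranges.len(), 1);
    let (start, end) = ranges[0];
    assert_eq!(&src[start..end], "# broken but commented");
}
```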
```diff
  pub fn tokens(&self, contents: &str) -> Result<String, Error> {
-     let program = ruff_python_parser::parse_module(contents)?;
+     let program = parse_unchecked(contents, Mode::Module);
```
This is just maintaining the existing behavior where the playground displays the token stream even if it contains syntax errors.
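A minimal sketch of that error-tolerant pattern, with a toy lexer standing in for ruff's: unknown input is recorded as an error but tokenization continues, so the full token stream can still be displayed. The token names and error handling here are invented for illustration.

```rust
/// Error-tolerant token display: unknown "tokens" are reported but do
/// not abort tokenization, mirroring the playground behavior of showing
/// the token stream even when the source has syntax errors.
fn tokens_display(contents: &str) -> (String, Vec<String>) {
    let mut out = Vec::new();
    let mut errors = Vec::new();
    for word in contents.split_whitespace() {
        if word.chars().all(|c| c.is_ascii_digit()) {
            out.push(format!("Int({word})"));
        } else if word.chars().all(|c| c.is_ascii_alphanumeric() || c == '_') {
            out.push(format!("Name({word})"));
        } else {
            // Record the error but keep going.
            errors.push(format!("unexpected token: {word}"));
            out.push(format!("Err({word})"));
        }
    }
    (out.join("\n"), errors)
}

fn main() {
    // `=` is "invalid" in this toy lexer, yet all three tokens show up.
    let (display, errors) = tokens_display("x = 1");
    assert_eq!(display.lines().count(), 3);
    assert_eq!(errors.len(), 1);
}
```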
Force-pushed: 5f634e1 to b422554
Force-pushed: 7aef3c9 to 39a9845
## Summary

This PR fixes all the compilation errors raised by `clippy`. It doesn't look at the tests. Apart from that, it also does the following:

* Expose a `lex` method which creates the `Lexer` object. This is used in the benchmark crate. Ideally, we shouldn't expose the lexer, and in the future we should move the lexer benchmark into the parser crate.
* Update the parser reference in the `red_knot` crate.
* Change all references of `&[LexResult]` to `&Tokens`.
* Add `CommentRanges` to the `LinterResult`, as it's required after the linter run. The `Program` is consumed because it requires an owned `ParseError`.
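The `&[LexResult]` to `&Tokens` migration suggests a newtype wrapper over the token list. Below is a minimal sketch of that pattern, using an invented toy `Token` enum rather than the real parser types; `Deref` keeps slice methods available at call sites.

```rust
use std::ops::Deref;

/// Toy token type standing in for the parser's token (illustrative only).
#[derive(Debug, PartialEq)]
enum Token {
    Name,
    Eq,
    Int,
}

/// Newtype over the token list: call sites take `&Tokens` instead of a
/// raw slice, leaving room to grow the API later without churn.
struct Tokens(Vec<Token>);

impl Deref for Tokens {
    type Target = [Token];
    fn deref(&self) -> &[Token] {
        &self.0
    }
}

/// A consumer that previously would have taken a raw token slice.
fn count_names(tokens: &Tokens) -> usize {
    tokens.iter().filter(|t| **t == Token::Name).count()
}

fn main() {
    let tokens = Tokens(vec![Token::Name, Token::Eq, Token::Int]);
    assert_eq!(count_names(&tokens), 1);
    // Slice methods still work through deref coercion.
    assert_eq!(tokens.len(), 3);
}
```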