
Conversation

@numToStr (Owner) commented May 8, 2022

Previously, there were two lexers: 1) frontend and 2) emmy. The frontend lexer was responsible for parsing the file, extracting the nodes that make sense, and converting them into the emmy format as a string. In the next step, that emmy-formatted string was fed to the emmy lexer, which finally converted it into actual tokens. The main downside is that tokenizing a file took two passes.
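
To make the cost concrete, here is a minimal sketch of what a two-step pipeline like this looks like. All names (`Token`, `frontend_lex`, `emmy_lex`) are hypothetical illustrations, not taken from this codebase:

```rust
#[derive(Debug)]
enum Token {
    Tag(String),  // e.g. `param` from `@param`
    Text(String), // everything else
}

/// Step 1: the frontend lexer walks the whole file and re-emits the
/// doc-comment nodes as an intermediate emmy-formatted string.
fn frontend_lex(source: &str) -> String {
    source
        .lines()
        .filter_map(|line| line.trim_start().strip_prefix("---"))
        .map(str::trim)
        .collect::<Vec<_>>()
        .join("\n")
}

/// Step 2: the emmy lexer walks that intermediate string again and
/// only then produces actual tokens.
fn emmy_lex(emmy: &str) -> Vec<Token> {
    emmy.split_whitespace()
        .map(|w| match w.strip_prefix('@') {
            Some(tag) => Token::Tag(tag.to_string()),
            None => Token::Text(w.to_string()),
        })
        .collect()
}

/// Two full passes (plus an intermediate allocation) before any tokens exist.
fn tokenize(source: &str) -> Vec<Token> {
    emmy_lex(&frontend_lex(source))
}
```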

Now, that frontend lexer is removed. The file is tokenized in a single step, making it 1.5x faster than before.
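
And a sketch of the single-pass version, reusing the same hypothetical `Token` type as above: filtering and tokenizing happen in one walk over the source, so the intermediate string and the second scan disappear.

```rust
fn tokenize_single_pass(source: &str) -> Vec<Token> {
    let mut tokens = Vec::new();
    for line in source.lines() {
        // Non-doc lines are skipped on the fly instead of being
        // rewritten into an intermediate string first.
        let body = match line.trim_start().strip_prefix("---") {
            Some(body) => body,
            None => continue,
        };
        for word in body.split_whitespace() {
            match word.strip_prefix('@') {
                Some(tag) => tokens.push(Token::Tag(tag.to_string())),
                None => tokens.push(Token::Text(word.to_string())),
            }
        }
    }
    tokens
}
```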

@numToStr numToStr merged commit dd3abfe into master May 8, 2022
@numToStr numToStr deleted the merge branch May 8, 2022 06:41
numToStr added a commit that referenced this pull request May 8, 2022
