Implements the lexical analyzer, which converts source code into lexical tokens.

Specification: Lexical


Walter Bright

Source: lexer.d

  • Declaration

    class Lexer;

    • Declaration

      pure nothrow this(const(char)* filename, const(char)* base, size_t begoffset, size_t endoffset, bool doDocComment, bool commentToken);

      Creates a Lexer for the source code base[begoffset..endoffset+1]. The last character, base[endoffset], must be null (0) or EOF (0x1A).


      const(char)* filename

      used for error messages

      const(char)* base

      source code, must be terminated by a null (0) or EOF (0x1A) character

      size_t begoffset

      starting offset into base[]

      size_t endoffset

      the last offset to read into base[]

      bool doDocComment

      handle documentation comments

      bool commentToken

      comments are returned as TOK.comment tokens
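
      The constructor and scan together support a simple lexing loop. A minimal sketch, assuming DMD is built as a library (e.g. the dmd dub package); the module paths dmd.lexer and dmd.tokens and the TOK member name endOfFile are assumptions, not part of this page:

      ```d
      import dmd.lexer;
      import dmd.tokens;

      void lexExample()
      {
          // Per the constructor contract, base[endoffset] must be a null (0)
          // or EOF (0x1A) character, so the buffer carries its own terminator.
          string src = "int x = 42;\0";
          scope lexer = new Lexer("example.d".ptr, src.ptr, 0, src.length - 1,
                                  false /*doDocComment*/, false /*commentToken*/);
          Token tok;
          do
          {
              lexer.scan(&tok); // fill tok with the next token
          } while (tok.value != TOK.endOfFile);
      }
      ```

      Passing true for commentToken would make comments show up in this loop as tokens of their own instead of being skipped.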

    • Declaration

      pure nothrow @safe Token* allocateToken();

      Return Value

      a newly allocated Token.

    • Declaration

      final nothrow TOK peekNext();

      Look ahead at next token's value.

    • Declaration

      final nothrow TOK peekNext2();

      Look two tokens ahead and return that token's value.
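
      The two peek functions allow bounded lookahead without consuming tokens, which is how a parser can disambiguate constructs before committing. A hedged sketch; the TOK member names identifier and assign are assumptions based on dmd.tokens:

      ```d
      import dmd.lexer;
      import dmd.tokens;

      // True if the tokens after the current one look like "identifier =",
      // e.g. to tell an assignment apart from a bare expression, without
      // consuming anything from the lexer.
      bool looksLikeAssignment(Lexer lexer)
      {
          return lexer.peekNext() == TOK.identifier
              && lexer.peekNext2() == TOK.assign;
      }
      ```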

    • Declaration

      final nothrow void scan(Token* t);

      Scan the characters at the current position in the buffer into the next token, storing it in *t.

    • Declaration

      final nothrow Token* peekPastParen(Token* tk);

      tk is on the opening (. Look ahead and return the token just past the matching closing ).
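
      A typical use is skipping a parenthesized list to see what comes after it. A sketch under the same library assumptions as above; the TOK member names leftParenthesis and leftCurly are assumptions based on dmd.tokens:

      ```d
      import dmd.lexer;
      import dmd.tokens;

      // With *openParen sitting on '(', ask for the token just past the
      // matching ')' -- e.g. to check whether a function body follows a
      // parameter list.
      bool braceFollowsParens(Lexer lexer, Token* openParen)
      {
          assert(openParen.value == TOK.leftParenthesis);
          Token* past = lexer.peekPastParen(openParen);
          return past.value == TOK.leftCurly;
      }
      ```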

    • Declaration

      static pure nothrow const(char)* combineComments(const(char)[] c1, const(char)[] c2, bool newParagraph);

      Combine two document comments into one, separated by an extra newline if newParagraph is true.
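
      Since combineComments is static, it can be called without a Lexer instance. A hedged sketch, assuming the dmd.lexer module path; the comment strings are made up for illustration:

      ```d
      import dmd.lexer;

      // Merge two documentation comments into one DDoc comment, starting
      // the second as a new paragraph.
      const(char)* mergeDocs()
      {
          return Lexer.combineComments("Frees the buffer.",
                                       "Safe to call twice.",
                                       true /*newParagraph*/);
      }
      ```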