From 6e74f7b506935f18a33cd5918a1c6c4f4c9cddc4 Mon Sep 17 00:00:00 2001
From: Luca Di Sera
Date: Thu, 30 Sep 2021 13:15:17 +0200
Subject: QDoc: Double the buffer size for Tokenizer

`Tokenizer` uses a fixed-size buffer when parsing the sources. When the
buffer is filled, parsing continues, but all characters of the currently
parsed token that do not fit into the buffer are discarded.

The limit was recently surpassed by some of the auto-generated QDoc
comment blocks from the Squish documentation. While the offending
comment blocks will be reduced in size in the future, it was decided to
increase the buffer size to allow for some more breathing room.

Hence, the size of the buffer was doubled, to 1 MiB.

Pick-to: 6.2
Change-Id: I0962e367470e57386bc0e15f58be979e4a1a3692
Reviewed-by: Paul Wicking
---
 src/qdoc/tokenizer.h | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

(limited to 'src/qdoc/tokenizer.h')

diff --git a/src/qdoc/tokenizer.h b/src/qdoc/tokenizer.h
index 77b6bb193..a7b1728fb 100644
--- a/src/qdoc/tokenizer.h
+++ b/src/qdoc/tokenizer.h
@@ -127,13 +127,14 @@ private:
     void init();
     void start(const Location &loc);
 
     /*
-      Represents the maximum amount of characters that can appear in a
-      block-comment.
+      Represents the maximum amount of characters that a token can be composed
+      of.
 
-      When a block-comment with more characters than the maximum amount is
-      encountered, a warning is issued.
+      When a token with more characters than the maximum amount is encountered, a
+      warning is issued and parsing continues, discarding all characters from the
+      currently parsed token that don't fit into the buffer.
     */
-    enum { yyLexBufSize = 524288 };
+    enum { yyLexBufSize = 1048576 };
 
     int getch() { return m_pos == m_in.size() ? EOF : m_in[m_pos++]; }
-- 
cgit v1.2.3
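The behavior documented in the updated comment can be sketched as a small standalone example. This is a hypothetical illustration of a fixed-size lexer buffer that warns once and then discards overflowing characters; the `LexBuffer` type and its members are invented for this sketch and are not QDoc's actual implementation, though `kLexBufSize` mirrors the new `yyLexBufSize` value.

```cpp
#include <cstdio>
#include <string>

// Mirrors the new yyLexBufSize: 1 MiB.
enum { kLexBufSize = 1048576 };

// Hypothetical fixed-size token buffer. Characters that do not fit are
// discarded; a warning is issued once per oversized token, and parsing
// (i.e., calls to put()) continues normally.
struct LexBuffer {
    char data[kLexBufSize];
    int len = 0;
    bool warned = false;

    void put(char ch)
    {
        if (len < kLexBufSize - 1) {
            data[len++] = ch; // normal case: append to the current token
        } else if (!warned) {
            std::fprintf(stderr,
                         "warning: token exceeds %d characters; "
                         "extra characters discarded\n", kLexBufSize);
            warned = true; // warn once, then silently keep discarding
        }
    }

    std::string token() const { return std::string(data, len); }
};
```

Doubling the enum value is enough because the buffer is a plain fixed-size array: no reallocation logic changes, only the point at which the discard path is taken.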