public class MockTokenizer extends Tokenizer
This tokenizer is a replacement for the WHITESPACE, SIMPLE, and KEYWORD
tokenizers. If you are writing a component such as a TokenFilter, it's a good idea to test
it by wrapping this tokenizer, for the extra checks it performs. This tokenizer checks that
its consumer follows the proper TokenStream workflow; these checks can be disabled with
setEnableChecks(boolean).
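The consumer workflow that these checks enforce (reset(), then incrementToken() until it returns false, then end() and close()) can be sketched with a self-contained toy tokenizer. This is an illustrative stand-in, not Lucene's MockTokenizer: the class name and implementation are hypothetical, but the state machine mirrors the check described above.

```java
// Toy stand-in for MockTokenizer's consumer-workflow check (hypothetical
// class, not Lucene's API): incrementToken() may only be called between
// reset() and end(), mirroring the state machine MockTokenizer enforces.
import java.util.ArrayList;
import java.util.List;

public class WorkflowCheckingTokenizer {
    private enum State { CREATED, RESET, INCREMENT, END, CLOSED }
    private State state = State.CREATED;
    private final String[] tokens;
    private int pos;
    private String term;

    public WorkflowCheckingTokenizer(String input) {
        this.tokens = input.trim().isEmpty() ? new String[0] : input.trim().split("\\s+");
    }

    public void reset() {
        state = State.RESET;
        pos = 0;
    }

    public boolean incrementToken() {
        if (state != State.RESET && state != State.INCREMENT) {
            // A consumer that forgot reset() fails fast, like MockTokenizer's checks.
            throw new IllegalStateException("incrementToken() called in state " + state);
        }
        state = State.INCREMENT;
        if (pos >= tokens.length) return false;
        term = tokens[pos++];
        return true;
    }

    public String term() { return term; }

    public void end()   { state = State.END; }
    public void close() { state = State.CLOSED; }

    /** The canonical consumer workflow: reset, drain, end, close. */
    public static List<String> consume(WorkflowCheckingTokenizer t) {
        List<String> out = new ArrayList<>();
        t.reset();
        while (t.incrementToken()) out.add(t.term());
        t.end();
        t.close();
        return out;
    }
}
```

A test that skips reset() before the first incrementToken() call would trip the IllegalStateException, which is exactly the kind of consumer bug these checks are designed to surface.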
| Modifier and Type | Field and Description |
|---|---|
| static int | DEFAULT_MAX_TOKEN_LENGTH |
| static AttributeFactory | DEFAULT_TOKEN_ATTRIBUTE_FACTORY |
| static CharacterRunAutomaton | KEYWORD Acts like KeywordTokenizer. |
| static CharacterRunAutomaton | SIMPLE Acts like LetterTokenizer. |
| static CharacterRunAutomaton | WHITESPACE Acts like WhitespaceTokenizer. |

| Constructor and Description |
|---|
| MockTokenizer() |
| MockTokenizer(AttributeFactory factory) |
| MockTokenizer(AttributeFactory factory, CharacterRunAutomaton runAutomaton, boolean lowerCase) |
| MockTokenizer(AttributeFactory factory, CharacterRunAutomaton runAutomaton, boolean lowerCase, int maxTokenLength) |
| MockTokenizer(CharacterRunAutomaton runAutomaton, boolean lowerCase) |
| MockTokenizer(CharacterRunAutomaton runAutomaton, boolean lowerCase, int maxTokenLength) |
| Modifier and Type | Method and Description |
|---|---|
| void | close() Releases resources associated with this stream. |
| void | end() This method is called by the consumer after the last token has been consumed, after TokenStream.incrementToken() returned false (using the new TokenStream API). |
| boolean | incrementToken() Consumers (i.e., IndexWriter) use this method to advance the stream to the next token. |
| protected boolean | isTokenChar(int c) |
| protected int | normalize(int c) |
| protected int | readChar() |
| protected int | readCodePoint() |
| void | reset() This method is called by a consumer before it begins consumption using TokenStream.incrementToken(). |
| void | setEnableChecks(boolean enableChecks) Toggle consumer workflow checking: if your test consumes token streams normally, you should leave this enabled. |
Methods inherited from class Tokenizer: correctOffset, setReader

Methods inherited from class AttributeSource: addAttribute, addAttributeImpl, captureState, clearAttributes, cloneAttributes, copyTo, equals, getAttribute, getAttributeClassesIterator, getAttributeFactory, getAttributeImplsIterator, hasAttribute, hasAttributes, hashCode, reflectAsString, reflectWith, restoreState, toString

public static final CharacterRunAutomaton WHITESPACE
public static final CharacterRunAutomaton KEYWORD
public static final CharacterRunAutomaton SIMPLE
public static final int DEFAULT_MAX_TOKEN_LENGTH
public MockTokenizer(AttributeFactory factory, CharacterRunAutomaton runAutomaton, boolean lowerCase, int maxTokenLength)
public MockTokenizer(CharacterRunAutomaton runAutomaton, boolean lowerCase, int maxTokenLength)
public MockTokenizer(CharacterRunAutomaton runAutomaton, boolean lowerCase)
public MockTokenizer()
public MockTokenizer(AttributeFactory factory, CharacterRunAutomaton runAutomaton, boolean lowerCase)
public MockTokenizer(AttributeFactory factory)
public final boolean incrementToken() throws IOException
Description copied from class: TokenStream
Consumers (i.e., IndexWriter) use this method to advance the stream to
the next token. Implementing classes must implement this method and update
the appropriate AttributeImpls with the attributes of the next
token.
The producer must make no assumptions about the attributes after the method
has returned: the caller may arbitrarily change them. If the producer
needs to preserve the state for subsequent calls, it can use
AttributeSource.captureState() to create a copy of the current attribute state.
This method is called for every token of a document, so an efficient
implementation is crucial for good performance. To avoid calls to
AttributeSource.addAttribute(Class) and AttributeSource.getAttribute(Class),
references to all AttributeImpls that this stream uses should be
retrieved during instantiation.
To ensure that filters and consumers know which attributes are available,
the attributes must be added during instantiation. Filters and consumers
are not required to check for availability of attributes in
TokenStream.incrementToken().
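The attribute-caching advice above can be sketched with a toy stream. CharTermAttr and TinyStream are illustrative names, not Lucene's API; the point is that the attribute instance is looked up once and then mutated in place inside incrementToken():

```java
// Hypothetical sketch of the attribute-caching pattern described above:
// retrieve the attribute reference once at construction time, then only
// mutate it inside incrementToken(). Not Lucene code.
import java.util.ArrayList;
import java.util.List;

public class TinyStream {
    // Stands in for a CharTermAttribute-style attribute implementation.
    public static class CharTermAttr { public String term; }

    private final CharTermAttr termAtt = new CharTermAttr(); // cached once, not per token
    private final String[] tokens;
    private int pos;

    public TinyStream(String... tokens) { this.tokens = tokens; }

    public boolean incrementToken() {
        if (pos >= tokens.length) return false;
        termAtt.term = tokens[pos++]; // update the cached attribute in place
        return true;
    }

    public CharTermAttr getTermAttribute() { return termAtt; }

    /** The consumer caches the attribute reference too, then drains the stream. */
    public static List<String> drain(TinyStream s) {
        CharTermAttr att = s.getTermAttribute();
        List<String> out = new ArrayList<>();
        while (s.incrementToken()) out.add(att.term);
        return out;
    }
}
```

Avoiding per-token attribute lookups matters because incrementToken() runs once for every token in every document.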
Specified by: incrementToken in class TokenStream
Throws: IOException

protected int readCodePoint() throws IOException
Throws: IOException

protected int readChar() throws IOException
Throws: IOException

protected boolean isTokenChar(int c)
protected int normalize(int c)
public void reset() throws IOException
Description copied from class: TokenStream
This method is called by a consumer before it begins consumption using
TokenStream.incrementToken().
Resets this stream to a clean state. Stateful implementations must implement this method so that they can be reused, just as if they had been created fresh.
If you override this method, always call super.reset(), otherwise
some internal state will not be correctly reset (e.g., Tokenizer will
throw IllegalStateException on further usage).
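The super.reset() contract can be illustrated with a minimal, self-contained pair of classes (names are hypothetical, not Lucene's): the subclass clears its own state, but without the super call the inherited state would leak across reuses.

```java
// Sketch of the reset() contract: a subclass clears its own state and must
// also call super.reset() so inherited state is cleared. Hypothetical classes.
public class ResettableBase {
    protected int consumed;              // inherited state that super.reset() clears
    public void reset() { consumed = 0; }
    public void consume() { consumed++; }
}

class CountingSub extends ResettableBase {
    private int localCalls;              // subclass-specific state

    @Override
    public void reset() {
        super.reset();                   // omit this and 'consumed' leaks between reuses
        localCalls = 0;
    }

    public void use() { consume(); localCalls++; }
    public int totals() { return consumed + localCalls; }
}
```

After reset(), the instance behaves as if freshly created, which is what makes Tokenizer reuse safe.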
Overrides: reset in class Tokenizer
Throws: IOException

public void close() throws IOException
Description copied from class: Tokenizer
Releases resources associated with this stream.
If you override this method, always call super.close(), otherwise
some internal state will not be correctly reset (e.g., Tokenizer will
throw IllegalStateException on reuse).
NOTE:
The default implementation closes the input Reader, so
be sure to call super.close() when overriding this method.
Specified by: close in interface Closeable
Specified by: close in interface AutoCloseable
Overrides: close in class Tokenizer
Throws: IOException

public void end() throws IOException
Description copied from class: TokenStream
This method is called by the consumer after the last token has been
consumed, after TokenStream.incrementToken() returned false
(using the new TokenStream API). Streams implementing the old API
should upgrade to use this feature.
This method can be used to perform any end-of-stream operations, such as setting the final offset of a stream. The final offset can differ from the offset of the last token, e.g. when one or more trailing whitespace characters followed the last token and a WhitespaceTokenizer was used.
Additionally, any skipped positions (such as those removed by a stop filter) can be applied to the position increment, as can any other attribute adjustments where the end-of-stream value is important.
If you override this method, always call super.end().
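A small, self-contained sketch of the trailing-whitespace case (hypothetical helper, not Lucene code): for the input "foo bar  ", the last token "bar" ends at offset 7, while the final offset reported at end-of-stream is the full input length, 9.

```java
// Illustrates why end()'s final offset can differ from the last token's end
// offset under whitespace tokenization. Hypothetical helper, not Lucene code.
public class FinalOffsetDemo {
    /** Returns {lastTokenEndOffset, finalOffset} for whitespace tokenization. */
    public static int[] offsets(String input) {
        int lastEnd = 0;
        int i = 0;
        while (i < input.length()) {
            if (!Character.isWhitespace(input.charAt(i))) {
                while (i < input.length() && !Character.isWhitespace(input.charAt(i))) i++;
                lastEnd = i;                      // end offset of the most recent token
            } else {
                i++;
            }
        }
        return new int[] { lastEnd, input.length() }; // end() reports the full length
    }
}
```

Consumers that highlight or slice the original text rely on this distinction, which is why end() must set the final offset correctly.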
Overrides: end in class TokenStream
Throws: IOException - If an I/O error occurs

public void setEnableChecks(boolean enableChecks)
Toggle consumer workflow checking: if your test consumes token streams normally, you should leave this enabled.
Copyright © 2000–2015 The Apache Software Foundation. All rights reserved.