<?xml version="1.0"?>
<!-- Apache-Lucene.Net-3.0.3-RC2.bin/NET40/doc/Lucene.Net.XML -->
<doc>
<assembly>
<name>Lucene.Net</name>
</assembly>
<members>
<member name="T:Lucene.Net.Analysis.Analyzer">
<summary>An Analyzer builds TokenStreams, which analyze text. It thus represents a
policy for extracting index terms from text.
<p/>
Typical implementations first build a Tokenizer, which breaks the stream of
characters from the Reader into raw Tokens. One or more TokenFilters may
then be applied to the output of the Tokenizer.
</summary>
</member>
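The Tokenizer-plus-TokenFilter pipeline described above can be sketched in plain Java (a toy stand-in, not the real Lucene/Lucene.Net API): a "tokenizer" breaks the character stream on non-letter boundaries, and a "filter" post-processes its output, here by lowercasing.

```java
import java.util.ArrayList;
import java.util.List;

// Toy stand-in for the Tokenizer -> TokenFilter pipeline; not the real Lucene API.
class ToyAnalyzer {
    // "Tokenizer": break the character stream into raw tokens on non-letter boundaries.
    static List<String> tokenize(String text) {
        List<String> tokens = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        for (char c : text.toCharArray()) {
            if (Character.isLetter(c)) {
                current.append(c);
            } else if (current.length() > 0) {
                tokens.add(current.toString());
                current.setLength(0);
            }
        }
        if (current.length() > 0) tokens.add(current.toString());
        return tokens;
    }

    // "TokenFilter": applied to the tokenizer's output, here lowercasing each token.
    static List<String> lowercaseFilter(List<String> in) {
        List<String> out = new ArrayList<>();
        for (String t : in) out.add(t.toLowerCase());
        return out;
    }

    // The "Analyzer" is just the composed policy: tokenize, then filter.
    static List<String> analyze(String text) {
        return lowercaseFilter(tokenize(text));
    }

    public static void main(String[] args) {
        System.out.println(analyze("The Quick, Brown FOX"));  // [the, quick, brown, fox]
    }
}
```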
<member name="M:Lucene.Net.Analysis.Analyzer.TokenStream(System.String,System.IO.TextReader)">
<summary>Creates a TokenStream which tokenizes all the text in the provided
Reader. Must be able to handle null field name for
backward compatibility.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.Analyzer.ReusableTokenStream(System.String,System.IO.TextReader)">
<summary>Creates a TokenStream that is allowed to be re-used
from the previous time that the same thread called
this method. Callers that do not need to use more
than one TokenStream at the same time from this
analyzer should use this method for better
performance.
</summary>
</member>
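The per-thread reuse pattern behind ReusableTokenStream (and the PreviousTokenStream property below) can be sketched with thread-local storage; `ReusableStream` here is a hypothetical stand-in, not a Lucene type.

```java
// Sketch of per-thread stream reuse, as in ReusableTokenStream/PreviousTokenStream.
// ReusableStream is a hypothetical stand-in: the real analyzer keeps its previously
// created TokenStream in thread-local storage and resets it instead of reallocating.
class ReusePattern {
    static class ReusableStream {
        String text;
        ReusableStream reset(String text) { this.text = text; return this; }
    }

    // One cached stream per thread, created lazily on first use.
    private static final ThreadLocal<ReusableStream> previous =
        ThreadLocal.withInitial(ReusableStream::new);

    // Same thread, same instance: later calls only pay for reset().
    static ReusableStream reusableTokenStream(String text) {
        return previous.get().reset(text);
    }

    public static void main(String[] args) {
        ReusableStream a = reusableTokenStream("first call");
        ReusableStream b = reusableTokenStream("second call");
        System.out.println(a == b);  // true: the same thread reuses the same instance
    }
}
```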
<member name="M:Lucene.Net.Analysis.Analyzer.SetOverridesTokenStreamMethod``1">
<deprecated> This is only present to preserve
back-compat of classes that subclass a core analyzer
and override tokenStream but not reusableTokenStream
</deprecated>
<summary>
Java uses <c>Class&lt;? extends Analyzer&gt;</c> to constrain <typeparamref name="TClass"/> to
types that inherit from Analyzer. C# has no equivalent constraint on a
<c>Type</c> value, so the method signature stays the same, and an exception may
still be thrown if the method does not exist.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.Analyzer.GetPositionIncrementGap(System.String)">
<summary> Invoked before indexing a Fieldable instance if
terms have already been added to that field. This allows custom
analyzers to place an automatic position increment gap between
Fieldable instances using the same field name. The default
position increment gap is 0. With a 0 position increment gap and
the typical default token position increment of 1, all terms in a field,
including across Fieldable instances, are in successive positions, allowing
exact PhraseQuery matches, for instance, across Fieldable instance boundaries.
</summary>
<param name="fieldName">Fieldable name being indexed.
</param>
<returns> position increment gap, added to the next token emitted from <see cref="M:Lucene.Net.Analysis.Analyzer.TokenStream(System.String,System.IO.TextReader)"/>
</returns>
</member>
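The effect of the position increment gap can be shown with a small worked example (a sketch of the position arithmetic only, not the Lucene API): positions advance by the token's increment, and the gap is added once before each Fieldable instance after the first.

```java
import java.util.ArrayList;
import java.util.List;

// Worked example of how a position increment gap separates multiple field instances.
// Token positions advance by the token's increment (typically 1); the gap is added
// once before the first token of each instance after the first.
class PositionGap {
    static List<Integer> positions(List<List<String>> fieldInstances, int gap) {
        List<Integer> result = new ArrayList<>();
        int pos = -1;                       // start so the first increment lands on 0
        boolean first = true;
        for (List<String> instance : fieldInstances) {
            if (!first) pos += gap;         // the analyzer's position increment gap
            first = false;
            for (String ignored : instance) {
                pos += 1;                   // default token position increment
                result.add(pos);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<List<String>> doc = List.of(List.of("new", "york"), List.of("city", "hall"));
        System.out.println(positions(doc, 0));    // [0, 1, 2, 3]: phrases match across instances
        System.out.println(positions(doc, 100));  // [0, 1, 102, 103]: instances kept apart
    }
}
```

With gap 0 the terms occupy successive positions across instance boundaries, which is why an exact PhraseQuery can match across them; a large gap prevents that.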
<member name="M:Lucene.Net.Analysis.Analyzer.GetOffsetGap(Lucene.Net.Documents.IFieldable)">
<summary> Just like <see cref="M:Lucene.Net.Analysis.Analyzer.GetPositionIncrementGap(System.String)"/>, except for
Token offsets instead. By default this returns 1 for
tokenized fields (as if the fields were joined with an
extra space character) and 0 for un-tokenized fields.
This method is only called if the field
produced at least one token for indexing.
</summary>
<param name="field">the field just indexed
</param>
<returns> offset gap, added to the next token emitted from <see cref="M:Lucene.Net.Analysis.Analyzer.TokenStream(System.String,System.IO.TextReader)"/>
</returns>
</member>
<member name="M:Lucene.Net.Analysis.Analyzer.Close">
<summary>Frees persistent resources used by this Analyzer </summary>
</member>
<member name="P:Lucene.Net.Analysis.Analyzer.PreviousTokenStream">
<summary>Used by Analyzers that implement reusableTokenStream
to retrieve previously saved TokenStreams for re-use
by the same thread.
</summary>
</member>
<member name="T:Lucene.Net.Analysis.ASCIIFoldingFilter">
<summary> This class converts alphabetic, numeric, and symbolic Unicode characters
which are not in the first 127 ASCII characters (the "Basic Latin" Unicode
block) into their ASCII equivalents, if one exists.
Characters from the following Unicode blocks are converted; however, only
those characters with reasonable ASCII alternatives are converted:
<list type="bullet">
<item>C1 Controls and Latin-1 Supplement: <a href="http://www.unicode.org/charts/PDF/U0080.pdf">http://www.unicode.org/charts/PDF/U0080.pdf</a></item>
<item>Latin Extended-A: <a href="http://www.unicode.org/charts/PDF/U0100.pdf">http://www.unicode.org/charts/PDF/U0100.pdf</a></item>
<item>Latin Extended-B: <a href="http://www.unicode.org/charts/PDF/U0180.pdf">http://www.unicode.org/charts/PDF/U0180.pdf</a></item>
<item>Latin Extended Additional: <a href="http://www.unicode.org/charts/PDF/U1E00.pdf">http://www.unicode.org/charts/PDF/U1E00.pdf</a></item>
<item>Latin Extended-C: <a href="http://www.unicode.org/charts/PDF/U2C60.pdf">http://www.unicode.org/charts/PDF/U2C60.pdf</a></item>
<item>Latin Extended-D: <a href="http://www.unicode.org/charts/PDF/UA720.pdf">http://www.unicode.org/charts/PDF/UA720.pdf</a></item>
<item>IPA Extensions: <a href="http://www.unicode.org/charts/PDF/U0250.pdf">http://www.unicode.org/charts/PDF/U0250.pdf</a></item>
<item>Phonetic Extensions: <a href="http://www.unicode.org/charts/PDF/U1D00.pdf">http://www.unicode.org/charts/PDF/U1D00.pdf</a></item>
<item>Phonetic Extensions Supplement: <a href="http://www.unicode.org/charts/PDF/U1D80.pdf">http://www.unicode.org/charts/PDF/U1D80.pdf</a></item>
<item>General Punctuation: <a href="http://www.unicode.org/charts/PDF/U2000.pdf">http://www.unicode.org/charts/PDF/U2000.pdf</a></item>
<item>Superscripts and Subscripts: <a href="http://www.unicode.org/charts/PDF/U2070.pdf">http://www.unicode.org/charts/PDF/U2070.pdf</a></item>
<item>Enclosed Alphanumerics: <a href="http://www.unicode.org/charts/PDF/U2460.pdf">http://www.unicode.org/charts/PDF/U2460.pdf</a></item>
<item>Dingbats: <a href="http://www.unicode.org/charts/PDF/U2700.pdf">http://www.unicode.org/charts/PDF/U2700.pdf</a></item>
<item>Supplemental Punctuation: <a href="http://www.unicode.org/charts/PDF/U2E00.pdf">http://www.unicode.org/charts/PDF/U2E00.pdf</a></item>
<item>Alphabetic Presentation Forms: <a href="http://www.unicode.org/charts/PDF/UFB00.pdf">http://www.unicode.org/charts/PDF/UFB00.pdf</a></item>
<item>Halfwidth and Fullwidth Forms: <a href="http://www.unicode.org/charts/PDF/UFF00.pdf">http://www.unicode.org/charts/PDF/UFF00.pdf</a></item>
</list>
See: <a href="http://en.wikipedia.org/wiki/Latin_characters_in_Unicode">http://en.wikipedia.org/wiki/Latin_characters_in_Unicode</a>
The set of character conversions supported by this class is a superset of
those supported by Lucene's <see cref="T:Lucene.Net.Analysis.ISOLatin1AccentFilter"/> which strips
accents from Latin1 characters. For example, 'à' will be replaced by
'a'.
</summary>
</member>
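A rough approximation of this folding for many accented Latin characters is Unicode NFD decomposition followed by stripping combining marks. Note this is only a sketch: the real ASCIIFoldingFilter uses a hand-written mapping table and covers far more (ligatures, fullwidth forms, enclosed alphanumerics, dingbats), which decomposition alone does not.

```java
import java.text.Normalizer;

// Rough approximation of ASCII folding for many accented Latin characters:
// decompose to NFD, then strip combining marks. The real ASCIIFoldingFilter uses
// a hand-written mapping table and handles many characters this sketch misses.
class FoldSketch {
    static String fold(String s) {
        String decomposed = Normalizer.normalize(s, Normalizer.Form.NFD);
        return decomposed.replaceAll("\\p{M}", "");  // drop combining diacritical marks
    }

    public static void main(String[] args) {
        System.out.println(fold("àéîõü"));  // aeiou
    }
}
```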
<member name="T:Lucene.Net.Analysis.TokenFilter">
<summary> A TokenFilter is a TokenStream whose input is another TokenStream.
<p/>
This is an abstract class; subclasses must override <see cref="M:Lucene.Net.Analysis.TokenStream.IncrementToken"/>.
</summary>
<seealso cref="T:Lucene.Net.Analysis.TokenStream">
</seealso>
</member>
<member name="T:Lucene.Net.Analysis.TokenStream">
<summary> A <c>TokenStream</c> enumerates the sequence of tokens, either from
<see cref="T:Lucene.Net.Documents.Field"/>s of a <see cref="T:Lucene.Net.Documents.Document"/> or from query text.
<p/>
This is an abstract class. Concrete subclasses are:
<list type="bullet">
<item><see cref="T:Lucene.Net.Analysis.Tokenizer"/>, a <c>TokenStream</c> whose input is a Reader; and</item>
<item><see cref="T:Lucene.Net.Analysis.TokenFilter"/>, a <c>TokenStream</c> whose input is another
<c>TokenStream</c>.</item>
</list>
A new <c>TokenStream</c> API has been introduced with Lucene 2.9. This API
has moved from being <see cref="T:Lucene.Net.Analysis.Token"/> based to <see cref="T:Lucene.Net.Util.IAttribute"/> based. While
<see cref="T:Lucene.Net.Analysis.Token"/> still exists in 2.9 as a convenience class, the preferred way
to store the information of a <see cref="T:Lucene.Net.Analysis.Token"/> is to use <see cref="T:Lucene.Net.Util.Attribute"/>s.
<p/>
<c>TokenStream</c> now extends <see cref="T:Lucene.Net.Util.AttributeSource"/>, which provides
access to all of the token <see cref="T:Lucene.Net.Util.IAttribute"/>s for the <c>TokenStream</c>.
Note that only one instance per <see cref="T:Lucene.Net.Util.Attribute"/> is created and reused
for every token. This approach reduces object creation and allows local
caching of references to the <see cref="T:Lucene.Net.Util.Attribute"/>s. See
<see cref="M:Lucene.Net.Analysis.TokenStream.IncrementToken"/> for further details.
<p/>
<b>The workflow of the new <c>TokenStream</c> API is as follows:</b>
<list type="bullet">
<item>Instantiation of <c>TokenStream</c>/<see cref="T:Lucene.Net.Analysis.TokenFilter"/>s which add/get
attributes to/from the <see cref="T:Lucene.Net.Util.AttributeSource"/>.</item>
<item>The consumer calls <see cref="M:Lucene.Net.Analysis.TokenStream.Reset"/>.</item>
<item>The consumer retrieves attributes from the stream and stores local
references to all attributes it wants to access</item>
<item>The consumer calls <see cref="M:Lucene.Net.Analysis.TokenStream.IncrementToken"/> until it returns false and
consumes the attributes after each call.</item>
<item>The consumer calls <see cref="M:Lucene.Net.Analysis.TokenStream.End"/> so that any end-of-stream operations
can be performed.</item>
<item>The consumer calls <see cref="M:Lucene.Net.Analysis.TokenStream.Close"/> to release any resource when finished
using the <c>TokenStream</c></item>
</list>
To make sure that filters and consumers know which attributes are available,
the attributes must be added during instantiation. Filters and consumers are
not required to check for availability of attributes in
<see cref="M:Lucene.Net.Analysis.TokenStream.IncrementToken"/>.
<p/>
You can find some example code for the new API in the analysis package level
Javadoc.
<p/>
Sometimes it is desirable to capture the current state of a <c>TokenStream</c>,
e.g. for buffering purposes (see <see cref="T:Lucene.Net.Analysis.CachingTokenFilter"/>,
<see cref="T:Lucene.Net.Analysis.TeeSinkTokenFilter"/>). For this use case,
<see cref="M:Lucene.Net.Util.AttributeSource.CaptureState"/> and <see cref="M:Lucene.Net.Util.AttributeSource.RestoreState(Lucene.Net.Util.AttributeSource.State)"/>
can be used.
</summary>
</member>
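The consumer workflow listed above (reset, incrementToken until false, end, close, with a single reused attribute instance rather than per-token objects) can be sketched with a minimal stand-in stream; `MiniStream` and its `term` field are hypothetical, not the real API.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for the TokenStream consumer workflow (not the real Lucene API):
// reset -> incrementToken until false -> end -> close. The consumer reads one
// reused "attribute" (here just the current term) that is refilled per token.
class WorkflowSketch {
    static class MiniStream {
        private final String[] terms;
        private int i;
        String term;                          // reused "TermAttribute" stand-in

        MiniStream(String... terms) { this.terms = terms; }
        void reset() { i = 0; }
        boolean incrementToken() {            // advances and refills the attribute
            if (i >= terms.length) return false;
            term = terms[i++];
            return true;
        }
        void end() { /* end-of-stream work, e.g. set the final offset */ }
        void close() { /* release resources */ }
    }

    static List<String> consume(MiniStream stream) {
        List<String> out = new ArrayList<>();
        stream.reset();
        while (stream.incrementToken()) {     // consume attributes after each call
            out.add(stream.term);
        }
        stream.end();
        stream.close();
        return out;
    }

    public static void main(String[] args) {
        System.out.println(consume(new MiniStream("lucene", "in", "action")));
    }
}
```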
<member name="T:Lucene.Net.Util.AttributeSource">
<summary> An AttributeSource contains a list of different <see cref="T:Lucene.Net.Util.Attribute"/>s,
and methods to add and get them. There can only be a single instance
of an attribute in the same AttributeSource instance. This is ensured
by passing in the actual type of the Attribute (<c>Class&lt;Attribute&gt;</c>) to
the <see cref="M:Lucene.Net.Util.AttributeSource.AddAttribute``1"/> method, which then checks whether an instance of
that type is already present. If yes, it returns the instance, otherwise
it creates a new instance and returns it.
</summary>
</member>
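The one-instance-per-attribute-type contract can be sketched as a map keyed by type; `MiniAttributeSource` is a hypothetical stand-in, and `StringBuilder` below merely plays the role of an attribute class with a no-argument constructor.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the AttributeSource contract: at most one instance per attribute type.
// addAttribute(Class) returns the existing instance if present, else creates one.
class MiniAttributeSource {
    private final Map<Class<?>, Object> attributes = new HashMap<>();

    @SuppressWarnings("unchecked")
    <T> T addAttribute(Class<T> type) {
        return (T) attributes.computeIfAbsent(type, t -> {
            try {
                return t.getDeclaredConstructor().newInstance();
            } catch (ReflectiveOperationException e) {
                throw new IllegalArgumentException("cannot instantiate " + t, e);
            }
        });
    }

    boolean hasAttribute(Class<?> type) { return attributes.containsKey(type); }

    public static void main(String[] args) {
        MiniAttributeSource source = new MiniAttributeSource();
        StringBuilder a = source.addAttribute(StringBuilder.class);
        StringBuilder b = source.addAttribute(StringBuilder.class);
        System.out.println(a == b);  // true: the second call returns the cached instance
    }
}
```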
<member name="M:Lucene.Net.Util.AttributeSource.#ctor">
<summary> An AttributeSource using the default attribute factory <see cref="F:Lucene.Net.Util.AttributeSource.AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY"/>.</summary>
</member>
<member name="M:Lucene.Net.Util.AttributeSource.#ctor(Lucene.Net.Util.AttributeSource)">
<summary> An AttributeSource that uses the same attributes as the supplied one.</summary>
</member>
<member name="M:Lucene.Net.Util.AttributeSource.#ctor(Lucene.Net.Util.AttributeSource.AttributeFactory)">
<summary> An AttributeSource using the supplied <see cref="T:Lucene.Net.Util.AttributeSource.AttributeFactory"/> for creating new <see cref="T:Lucene.Net.Util.IAttribute"/> instances.</summary>
</member>
<member name="M:Lucene.Net.Util.AttributeSource.GetAttributeTypesIterator">
<summary>Returns a new iterator that iterates the attribute classes
in the same order they were added in.
Signature for Java 1.5: <c>public Iterator&lt;Class&lt;? extends Attribute&gt;&gt; getAttributeClassesIterator()</c>
Note that this return value is different from Java in that it enumerates over the values
and not the keys
</summary>
</member>
<member name="M:Lucene.Net.Util.AttributeSource.GetAttributeImplsIterator">
<summary>Returns a new iterator that iterates all unique Attribute implementations.
This iterator may contain fewer entries than <see cref="M:Lucene.Net.Util.AttributeSource.GetAttributeTypesIterator"/>,
if one instance implements more than one Attribute interface.
Signature for Java 1.5: <c>public Iterator&lt;AttributeImpl&gt; getAttributeImplsIterator()</c>
</summary>
</member>
<member name="F:Lucene.Net.Util.AttributeSource.knownImplClasses">
<summary>A cache that stores the interfaces implemented by each known implementation class, to avoid repeated (slow) reflection. </summary>
</member>
<member name="M:Lucene.Net.Util.AttributeSource.AddAttributeImpl(Lucene.Net.Util.Attribute)">
<summary>
<b>Expert:</b> Adds a custom AttributeImpl instance with one or more Attribute interfaces.
<p><font color="red"><b>Please note:</b> It is not guaranteed that <c>att</c> is added to
the <c>AttributeSource</c>, because the provided attributes may already exist.
You should always retrieve the wanted attributes using <see cref="M:Lucene.Net.Util.AttributeSource.GetAttribute``1"/> after adding
with this method, and cast to your class.
The recommended way to use custom implementations is using an <see cref="T:Lucene.Net.Util.AttributeSource.AttributeFactory"/>
</font></p>
</summary>
</member>
<member name="M:Lucene.Net.Util.AttributeSource.AddAttribute``1">
<summary> The caller must pass in a <c>Class&lt;? extends Attribute&gt;</c> value.
This method first checks if an instance of that class is
already in this AttributeSource and returns it. Otherwise a
new instance is created, added to this AttributeSource and returned.
</summary>
</member>
<member name="M:Lucene.Net.Util.AttributeSource.HasAttribute``1">
<summary> The caller must pass in a <c>Class&lt;? extends Attribute&gt;</c> value.
Returns true, iff this AttributeSource contains the passed-in Attribute.
</summary>
</member>
<member name="M:Lucene.Net.Util.AttributeSource.GetAttribute``1">
<summary>
The caller must pass in a <c>Class&lt;? extends Attribute&gt;</c> value.
Returns the instance of the passed-in Attribute contained in this AttributeSource.
</summary>
<throws>
IllegalArgumentException if this AttributeSource does not contain the Attribute.
It is recommended to always use <see cref="M:Lucene.Net.Util.AttributeSource.AddAttribute``1"/> even in consumers
of TokenStreams, because you cannot know if a specific TokenStream really uses
a specific Attribute. <see cref="M:Lucene.Net.Util.AttributeSource.AddAttribute``1"/> will automatically make the attribute
available. If you want to only use the attribute, if it is available (to optimize
consuming), use <see cref="M:Lucene.Net.Util.AttributeSource.HasAttribute``1"/>.
</throws>
</member>
<member name="M:Lucene.Net.Util.AttributeSource.ClearAttributes">
<summary> Resets all Attributes in this AttributeSource by calling
<see cref="M:Lucene.Net.Util.Attribute.Clear"/> on each Attribute implementation.
</summary>
</member>
<member name="M:Lucene.Net.Util.AttributeSource.CaptureState">
<summary> Captures the state of all Attributes. The return value can be passed to
<see cref="M:Lucene.Net.Util.AttributeSource.RestoreState(Lucene.Net.Util.AttributeSource.State)"/> to restore the state of this or another AttributeSource.
</summary>
</member>
<member name="M:Lucene.Net.Util.AttributeSource.RestoreState(Lucene.Net.Util.AttributeSource.State)">
<summary> Restores this state by copying the values of all attribute implementations
that this state contains into the attributes implementations of the targetStream.
The targetStream must contain a corresponding instance for each argument
contained in this state (e.g. it is not possible to restore the state of
an AttributeSource containing a TermAttribute into an AttributeSource using
a Token instance as implementation).
Note that this method does not affect attributes of the targetStream
that are not contained in this state. In other words, if for example
the targetStream contains an OffsetAttribute, but this state doesn't, then
the value of the OffsetAttribute remains unchanged. It might be desirable to
reset its value to the default, in which case the caller should first
call <see cref="M:Lucene.Net.Util.AttributeSource.ClearAttributes"/> on the targetStream.
</summary>
</member>
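The capture/restore semantics described above (restore copies only what the state contains; other attributes are untouched) can be sketched over a plain map of attribute values; `StateSketch` and its string-keyed map are hypothetical stand-ins.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of captureState/restoreState semantics over a map of attribute values:
// restore copies back only the captured keys; attributes absent from the state
// (like "type" below) keep their current values, as the documentation describes.
class StateSketch {
    final Map<String, String> attributes = new HashMap<>();

    Map<String, String> captureState() {
        return new HashMap<>(attributes);          // an independent snapshot
    }

    void restoreState(Map<String, String> state) {
        attributes.putAll(state);                  // overwrites captured keys only
    }

    public static void main(String[] args) {
        StateSketch source = new StateSketch();
        source.attributes.put("term", "lucene");
        source.attributes.put("offset", "0-6");
        Map<String, String> state = source.captureState();

        source.attributes.put("term", "changed");
        source.attributes.put("type", "word");     // not in the captured state
        source.restoreState(state);

        System.out.println(source.attributes.get("term"));  // back to "lucene"
        System.out.println(source.attributes.get("type"));  // still "word": untouched
    }
}
```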
<member name="M:Lucene.Net.Util.AttributeSource.CloneAttributes">
<summary> Performs a clone of all <see cref="T:Lucene.Net.Util.Attribute"/> instances returned in a new
AttributeSource instance. This method can be used to e.g. create another TokenStream
with exactly the same attributes (using <see cref="M:Lucene.Net.Util.AttributeSource.#ctor(Lucene.Net.Util.AttributeSource)"/>)
</summary>
</member>
<member name="P:Lucene.Net.Util.AttributeSource.Factory">
<summary>Returns the used AttributeFactory.</summary>
</member>
<member name="P:Lucene.Net.Util.AttributeSource.HasAttributes">
<summary>Returns true, iff this AttributeSource has any attributes </summary>
</member>
<member name="T:Lucene.Net.Util.AttributeSource.AttributeFactory">
<summary> An AttributeFactory creates instances of <see cref="T:Lucene.Net.Util.Attribute"/>s.</summary>
</member>
<member name="M:Lucene.Net.Util.AttributeSource.AttributeFactory.CreateAttributeInstance``1">
<summary> Returns an <see cref="T:Lucene.Net.Util.Attribute"/> for the supplied <see cref="T:Lucene.Net.Util.IAttribute"/> interface class.</summary>
</member>
<member name="F:Lucene.Net.Util.AttributeSource.AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY">
<summary> This is the default factory that creates <see cref="T:Lucene.Net.Util.Attribute"/>s using the
class name of the supplied <see cref="T:Lucene.Net.Util.IAttribute"/> interface class by appending <c>Impl</c> to it.
</summary>
</member>
<member name="T:Lucene.Net.Util.AttributeSource.State">
<summary> This class holds the state of an AttributeSource.</summary>
<seealso cref="M:Lucene.Net.Util.AttributeSource.CaptureState">
</seealso>
<seealso cref="M:Lucene.Net.Util.AttributeSource.RestoreState(Lucene.Net.Util.AttributeSource.State)">
</seealso>
</member>
<member name="M:Lucene.Net.Analysis.TokenStream.#ctor">
<summary> A TokenStream using the default attribute factory.</summary>
</member>
<member name="M:Lucene.Net.Analysis.TokenStream.#ctor(Lucene.Net.Util.AttributeSource)">
<summary> A TokenStream that uses the same attributes as the supplied one.</summary>
</member>
<member name="M:Lucene.Net.Analysis.TokenStream.#ctor(Lucene.Net.Util.AttributeSource.AttributeFactory)">
<summary> A TokenStream using the supplied AttributeFactory for creating new <see cref="T:Lucene.Net.Util.IAttribute"/> instances.</summary>
</member>
<member name="M:Lucene.Net.Analysis.TokenStream.IncrementToken">
<summary> Consumers (i.e., <see cref="T:Lucene.Net.Index.IndexWriter"/>) use this method to advance the stream to
the next token. Implementing classes must implement this method and update
the appropriate <see cref="T:Lucene.Net.Util.Attribute"/>s with the attributes of the next
token.
The producer must make no assumptions about the attributes after the
method has returned: the caller may arbitrarily change them. If the
producer needs to preserve the state for subsequent calls, it can use
<see cref="M:Lucene.Net.Util.AttributeSource.CaptureState"/> to create a copy of the current attribute state.
This method is called for every token of a document, so an efficient
implementation is crucial for good performance. To avoid calls to
<see cref="M:Lucene.Net.Util.AttributeSource.AddAttribute``1"/> and <see cref="M:Lucene.Net.Util.AttributeSource.GetAttribute``1"/>,
references to all <see cref="T:Lucene.Net.Util.Attribute"/>s that this stream uses should be
retrieved during instantiation.
To ensure that filters and consumers know which attributes are available,
the attributes must be added during instantiation. Filters and consumers
are not required to check for availability of attributes in
<see cref="M:Lucene.Net.Analysis.TokenStream.IncrementToken"/>.
</summary>
<returns> false for end of stream; true otherwise</returns>
</member>
<member name="M:Lucene.Net.Analysis.TokenStream.End">
<summary> This method is called by the consumer after the last token has been
consumed, after <see cref="M:Lucene.Net.Analysis.TokenStream.IncrementToken"/> returned <c>false</c>
(using the new <c>TokenStream</c> API). Streams implementing the old API
should upgrade to use this feature.
<p/>
This method can be used to perform any end-of-stream operations, such as
setting the final offset of a stream. The final offset of a stream might
differ from the offset of the last token, e.g. if one or more whitespace
characters followed the last token and a <see cref="T:Lucene.Net.Analysis.WhitespaceTokenizer"/> was used.
</summary>
<throws> IOException </throws>
</member>
<member name="M:Lucene.Net.Analysis.TokenStream.Reset">
<summary> Resets this stream to the beginning. This is an optional operation, so
subclasses may or may not implement this method. <see cref="M:Lucene.Net.Analysis.TokenStream.Reset"/> is not needed for
the standard indexing process. However, if the tokens of a
<c>TokenStream</c> are intended to be consumed more than once, it is
necessary to implement <see cref="M:Lucene.Net.Analysis.TokenStream.Reset"/>. Note that if your TokenStream
caches tokens and feeds them back again after a reset, it is imperative
that you clone the tokens when you store them away (on the first pass) as
well as when you return them (on future passes after <see cref="M:Lucene.Net.Analysis.TokenStream.Reset"/>).
</summary>
</member>
<member name="M:Lucene.Net.Analysis.TokenStream.Close">
<summary>Releases resources associated with this stream. </summary>
</member>
<member name="F:Lucene.Net.Analysis.TokenFilter.input">
<summary>The source of tokens for this filter. </summary>
</member>
<member name="M:Lucene.Net.Analysis.TokenFilter.#ctor(Lucene.Net.Analysis.TokenStream)">
<summary>Construct a token stream filtering the given input. </summary>
</member>
<member name="M:Lucene.Net.Analysis.TokenFilter.End">
<summary>Performs end-of-stream operations, if any, and then calls <c>end()</c> on the
input TokenStream.<p/>
<b>NOTE:</b> Be sure to call <c>super.end()</c> first when overriding this method.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.TokenFilter.Reset">
<summary>Reset the filter as well as the input TokenStream. </summary>
</member>
<member name="M:Lucene.Net.Analysis.ASCIIFoldingFilter.FoldToASCII(System.Char[],System.Int32)">
<summary> Converts characters above ASCII to their ASCII equivalents. For example,
accents are removed from accented characters.
</summary>
<param name="input">The string to fold
</param>
<param name="length">The number of characters in the input string
</param>
</member>
<member name="T:Lucene.Net.Analysis.BaseCharFilter">
<summary>
Base utility class for implementing a <see cref="T:Lucene.Net.Analysis.CharFilter"/>.
Subclass this, record mappings by calling
<see cref="M:Lucene.Net.Analysis.BaseCharFilter.AddOffCorrectMap(System.Int32,System.Int32)"/>, and then invoke the offset
correction method to correct an offset.
</summary>
</member>
<member name="T:Lucene.Net.Analysis.CharFilter">
<summary> Subclasses of CharFilter can be chained to filter CharStream.
They can be used as <see cref="T:System.IO.TextReader"/> with additional offset
correction. <see cref="T:Lucene.Net.Analysis.Tokenizer"/>s will automatically use <see cref="M:Lucene.Net.Analysis.CharFilter.CorrectOffset(System.Int32)"/>
if a CharFilter/CharStream subclass is used.
</summary>
</member>
<member name="T:Lucene.Net.Analysis.CharStream">
<summary> CharStream adds <see cref="M:Lucene.Net.Analysis.CharStream.CorrectOffset(System.Int32)"/>
functionality over <see cref="T:System.IO.TextReader"/>. All Tokenizers accept a
CharStream instead of <see cref="T:System.IO.TextReader"/> as input, which enables
arbitrary character based filtering before tokenization.
The <see cref="M:Lucene.Net.Analysis.CharStream.CorrectOffset(System.Int32)"/> method fixes offsets to account for
removal or insertion of characters, so that the offsets
reported in the tokens match the character offsets of the
original Reader.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.CharStream.CorrectOffset(System.Int32)">
<summary> Called by CharFilter(s) and Tokenizer to correct a token offset.
</summary>
<param name="currentOff">offset as seen in the output
</param>
<returns> corrected offset based on the input
</returns>
</member>
<member name="M:Lucene.Net.Analysis.CharFilter.Correct(System.Int32)">
<summary>Subclasses may want to override this to correct the current offset.</summary>
<param name="currentOff">current offset</param>
<returns>corrected offset</returns>
</member>
<member name="M:Lucene.Net.Analysis.CharFilter.CorrectOffset(System.Int32)">
<summary> Chains the corrected offset through the input
CharFilter.
</summary>
</member>
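The AddOffCorrectMap/CorrectOffset mechanism can be sketched as a sorted map of (output offset, cumulative diff) pairs: correcting an offset adds the cumulative diff in effect at that point, so token offsets point back into the original reader. The class and numbers below are illustrative stand-ins, not the real BaseCharFilter internals.

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch of BaseCharFilter-style offset correction: when a filter deletes or
// inserts characters, it records (outputOffset, cumulativeDiff) pairs; correcting
// an offset adds the cumulative diff in effect at that output offset.
class OffsetCorrection {
    private final TreeMap<Integer, Integer> corrections = new TreeMap<>();

    // From output offset `off` onward, the input is `cumulativeDiff` chars ahead.
    void addOffCorrectMap(int off, int cumulativeDiff) {
        corrections.put(off, cumulativeDiff);
    }

    int correctOffset(int currentOff) {
        Map.Entry<Integer, Integer> entry = corrections.floorEntry(currentOff);
        return currentOff + (entry == null ? 0 : entry.getValue());
    }

    public static void main(String[] args) {
        // Suppose a filter removed 3 chars that appeared before output offset 5.
        OffsetCorrection c = new OffsetCorrection();
        c.addOffCorrectMap(5, 3);
        System.out.println(c.correctOffset(2));  // 2: before the removal, unchanged
        System.out.println(c.correctOffset(7));  // 10: after it, shifted by 3
    }
}
```

Chaining CharFilters simply applies each filter's correction to the output of the next, which is what CorrectOffset's chained form does.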
<member name="T:Lucene.Net.Analysis.CachingTokenFilter">
<summary> This class can be used if the token attributes of a TokenStream
are intended to be consumed more than once. It caches
all token attribute states locally in a List.
<p/>CachingTokenFilter implements the optional method
<see cref="M:Lucene.Net.Analysis.TokenStream.Reset"/>, which repositions the
stream to the first Token.
</summary>
</member>
<member name="T:Lucene.Net.Analysis.CharArraySet">
<summary> A simple class that stores Strings as char[]'s in a
hash table. Note that this is not a general purpose
class. For example, it cannot remove items from the
set, nor does it resize its hash table to be smaller,
etc. It is designed to be quick to test if a char[]
is in the set without the necessity of converting it
to a String first.
<p/>
<em>Please note:</em> This class implements <see cref="T:System.Collections.Generic.ISet`1"/> but
does not behave as one in all cases. The generic type is
<see cref="T:System.Collections.Generic.ICollection`1"/> because you can add any object
that has a string representation. The add methods use
<see cref="M:System.Object.ToString"/> and store the result in a <see cref="T:System.Char"/>
buffer; the <see cref="M:Lucene.Net.Analysis.CharArraySet.Contains(System.Object)"/> methods behave the same way.
The <see cref="M:Lucene.Net.Analysis.CharArraySet.GetEnumerator"/> method enumerates <see cref="T:System.String"/> values.
For type safety, <c>stringIterator()</c> is also provided.
</summary>
</member>
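The point of the class, testing whether a slice of a char[] is in the set without first converting it to a String, can be sketched as below. This stand-in linear-scans stored char[] entries, which is enough to show the allocation-free comparison; the real CharArraySet uses its own hash table and supports an ignoreCase mode.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the CharArraySet idea: membership test on a char[] slice without
// allocating a String. A stand-in, not the real implementation (which hashes).
class CharSliceSet {
    private final List<char[]> entries = new ArrayList<>();

    void add(char[] word) { entries.add(word); }

    // True if text[off, off+len) matches a stored entry; no String is built.
    boolean contains(char[] text, int off, int len) {
        next:
        for (char[] entry : entries) {
            if (entry.length != len) continue;
            for (int i = 0; i < len; i++) {
                if (entry[i] != text[off + i]) continue next;
            }
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        CharSliceSet stopWords = new CharSliceSet();
        stopWords.add("the".toCharArray());
        char[] buffer = "over the lazy dog".toCharArray();
        System.out.println(stopWords.contains(buffer, 5, 3));  // true: "the"
        System.out.println(stopWords.contains(buffer, 9, 4));  // false: "lazy"
    }
}
```

This is the pattern a tokenizer needs: the token lives in a reusable char buffer, and a stop-word check should not allocate per token.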
<member name="M:Lucene.Net.Analysis.CharArraySet.#ctor(System.Int32,System.Boolean)">
<summary>Create set with enough capacity to hold startSize
terms
</summary>
</member>
<member name="M:Lucene.Net.Analysis.CharArraySet.#ctor(System.Collections.Generic.IEnumerable{System.Object},System.Boolean)">
<summary>Create set from a Collection of char[] or String </summary>
</member>
<member name="M:Lucene.Net.Analysis.CharArraySet.#ctor(System.Char[][],System.Boolean,System.Int32)">
<summary>Create set from entries </summary>
</member>
<member name="M:Lucene.Net.Analysis.CharArraySet.Contains(System.Char[],System.Int32,System.Int32)">
<summary>Returns true if the <c>len</c> chars of <c>text</c> starting at <c>off</c>
are in the set
</summary>
</member>
<member name="M:Lucene.Net.Analysis.CharArraySet.GetSlot(System.String)">
<summary>Returns true if the String is in the set </summary>
</member>
<member name="M:Lucene.Net.Analysis.CharArraySet.Add(System.Char[])">
<summary>Add this char[] directly to the set.
If ignoreCase is true for this Set, the text array will be directly modified.
The user should never modify this text array after calling this method.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.CharArraySet.UnmodifiableSet(Lucene.Net.Analysis.CharArraySet)">
<summary>
Returns an unmodifiable <see cref="T:Lucene.Net.Analysis.CharArraySet"/>. This allows providing
unmodifiable views of internal sets for "read-only" use.
</summary>
<param name="set">The set for which to return an unmodifiable view.</param>
<returns>A new unmodifiable <see cref="T:Lucene.Net.Analysis.CharArraySet"/></returns>
<throws>ArgumentNullException if the given set is <c>null</c></throws>
</member>
<member name="M:Lucene.Net.Analysis.CharArraySet.Copy``1(System.Collections.Generic.ISet{``0})">
<summary>
Returns a copy of the given set as a <see cref="T:Lucene.Net.Analysis.CharArraySet"/>. If the given set
is a <see cref="T:Lucene.Net.Analysis.CharArraySet"/> the ignoreCase property will be preserved.
</summary>
<param name="set">A set to copy</param>
<returns>a copy of the given set as a <see cref="T:Lucene.Net.Analysis.CharArraySet"/>. If the given set
is a <see cref="T:Lucene.Net.Analysis.CharArraySet"/> the ignoreCase property will be preserved.</returns>
</member>
<member name="M:Lucene.Net.Analysis.CharArraySet.UnionWith(System.Collections.Generic.IEnumerable{System.String})">
<summary>Adds all of the elements in the specified collection to this collection </summary>
</member>
<member name="M:Lucene.Net.Analysis.CharArraySet.AddAll(System.Collections.Generic.IEnumerable{System.String})">
<summary>Wrapper that calls UnionWith</summary>
</member>
<member name="T:Lucene.Net.Analysis.CharArraySet.CharArraySetEnumerator">
<summary>
The <c>IEnumerator&lt;String&gt;</c> for this set. Strings are constructed on the fly,
so use <c>NextCharArray</c> for more efficient access.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.CharArraySet.CharArraySetEnumerator.NextCharArray">
<summary>Do not modify the returned char[]. </summary>
</member>
<member name="T:Lucene.Net.Analysis.CharReader">
<summary> CharReader is a Reader wrapper. It reads chars from
Reader and outputs a <see cref="T:Lucene.Net.Analysis.CharStream"/>, defining an
identity <see cref="M:Lucene.Net.Analysis.CharReader.CorrectOffset(System.Int32)"/> method that
simply returns the provided offset.
</summary>
</member>
<member name="T:Lucene.Net.Analysis.CharTokenizer">
<summary>An abstract base class for simple, character-oriented tokenizers.</summary>
</member>
<member name="T:Lucene.Net.Analysis.Tokenizer">
<summary> A Tokenizer is a TokenStream whose input is a Reader.
<p/>
This is an abstract class; subclasses must override <see cref="M:Lucene.Net.Analysis.TokenStream.IncrementToken"/>
<p/>
NOTE: Subclasses overriding <see cref="M:Lucene.Net.Analysis.TokenStream.IncrementToken"/> must call
<see cref="M:Lucene.Net.Util.AttributeSource.ClearAttributes"/> before setting attributes.
</summary>
</member>
<member name="F:Lucene.Net.Analysis.Tokenizer.input">
<summary>The text source for this Tokenizer. </summary>
</member>
<member name="M:Lucene.Net.Analysis.Tokenizer.#ctor">
<summary>Construct a tokenizer with null input. </summary>
</member>
<member name="M:Lucene.Net.Analysis.Tokenizer.#ctor(System.IO.TextReader)">
<summary>Construct a token stream processing the given input. </summary>
</member>
<member name="M:Lucene.Net.Analysis.Tokenizer.#ctor(Lucene.Net.Util.AttributeSource.AttributeFactory)">
<summary>Construct a tokenizer with null input using the given AttributeFactory. </summary>
</member>
<member name="M:Lucene.Net.Analysis.Tokenizer.#ctor(Lucene.Net.Util.AttributeSource.AttributeFactory,System.IO.TextReader)">
<summary>Construct a token stream processing the given input using the given AttributeFactory. </summary>
</member>
<member name="M:Lucene.Net.Analysis.Tokenizer.#ctor(Lucene.Net.Util.AttributeSource)">
<summary>Construct a token stream processing the given input using the given AttributeSource. </summary>
</member>
<member name="M:Lucene.Net.Analysis.Tokenizer.#ctor(Lucene.Net.Util.AttributeSource,System.IO.TextReader)">
<summary>Construct a token stream processing the given input using the given AttributeSource. </summary>
</member>
<member name="M:Lucene.Net.Analysis.Tokenizer.CorrectOffset(System.Int32)">
<summary>Return the corrected offset. If <see cref="F:Lucene.Net.Analysis.Tokenizer.input"/> is a <see cref="T:Lucene.Net.Analysis.CharStream"/> subclass
this method calls <see cref="M:Lucene.Net.Analysis.CharStream.CorrectOffset(System.Int32)"/>, else returns <c>currentOff</c>.
</summary>
<param name="currentOff">offset as seen in the output
</param>
<returns> corrected offset based on the input
</returns>
<seealso cref="M:Lucene.Net.Analysis.CharStream.CorrectOffset(System.Int32)">
</seealso>
</member>
<member name="M:Lucene.Net.Analysis.Tokenizer.Reset(System.IO.TextReader)">
<summary>Expert: Reset the tokenizer to a new reader. Typically, an
analyzer (in its reusableTokenStream method) will use
this to re-use a previously created tokenizer.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.CharTokenizer.IsTokenChar(System.Char)">
<summary>Returns true iff a character should be included in a token. This
tokenizer generates tokens from maximal runs of adjacent characters that
satisfy this predicate. Characters for which this is false delimit
token boundaries and are not included in tokens.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.CharTokenizer.Normalize(System.Char)">
<summary>Called on each token character to normalize it before it is added to the
token. The default implementation does nothing. Subclasses may use this
to, e.g., lowercase tokens.
</summary>
</member>
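<!-- The two hooks above, IsTokenChar and Normalize, fully determine a character tokenizer's behavior. The following is a rough standalone sketch of that contract in plain Java (not the Lucene API), using Character.isLetter as the predicate and lowercasing as the normalization:

```java
import java.util.ArrayList;
import java.util.List;

public class CharTokenizerSketch {
    // Predicate deciding which characters belong inside tokens.
    static boolean isTokenChar(char c) { return Character.isLetter(c); }

    // Per-character normalization applied before the char joins a token.
    static char normalize(char c) { return Character.toLowerCase(c); }

    // Emits maximal runs of token characters; everything else is a boundary.
    static List<String> tokenize(String text) {
        List<String> tokens = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        for (char c : text.toCharArray()) {
            if (isTokenChar(c)) {
                current.append(normalize(c));
            } else if (current.length() > 0) {
                tokens.add(current.toString());
                current.setLength(0);
            }
        }
        if (current.length() > 0) tokens.add(current.toString());
        return tokens;
    }
}
```

Emitting maximal runs of predicate-satisfying characters is, in essence, how LetterTokenizer and LowerCaseTokenizer below behave.
-->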
<member name="T:Lucene.Net.Analysis.ISOLatin1AccentFilter">
<summary> A filter that replaces accented characters in the ISO Latin 1 character set
(ISO-8859-1) by their unaccented equivalent. The case will not be altered.
<p/>
For instance, 'À' will be replaced by 'A'.
<p/>
</summary>
<deprecated> If you build a new index, use <see cref="T:Lucene.Net.Analysis.ASCIIFoldingFilter"/>
which covers a superset of Latin 1.
This class is included for use with existing indexes and will be removed
in a future release (possibly Lucene 4.0).
</deprecated>
</member>
<member name="M:Lucene.Net.Analysis.ISOLatin1AccentFilter.RemoveAccents(System.Char[],System.Int32)">
<summary> Replaces accented characters in the given char buffer with their unaccented equivalents.</summary>
</member>
<member name="T:Lucene.Net.Analysis.KeywordAnalyzer">
<summary> "Tokenizes" the entire stream as a single token. This is useful
for data like zip codes, ids, and some product names.
</summary>
</member>
<member name="T:Lucene.Net.Analysis.KeywordTokenizer">
<summary> Emits the entire input as a single token.</summary>
</member>
<member name="T:Lucene.Net.Analysis.LengthFilter">
<summary>Removes words that are too long or too short from the stream.</summary>
</member>
<member name="M:Lucene.Net.Analysis.LengthFilter.#ctor(Lucene.Net.Analysis.TokenStream,System.Int32,System.Int32)">
<summary> Build a filter that removes words that are too long or too
short from the text.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.LengthFilter.IncrementToken">
<summary> Returns the next input token whose term is of the right length.</summary>
</member>
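<!-- The length rule above is just a predicate over term length. A standalone sketch in plain Java (not the Lucene API), keeping only tokens whose length falls within [min, max]:

```java
import java.util.List;
import java.util.stream.Collectors;

public class LengthFilterSketch {
    // Keep only tokens whose length lies within [min, max], inclusive.
    static List<String> filter(List<String> tokens, int min, int max) {
        return tokens.stream()
                .filter(t -> t.length() >= min && t.length() <= max)
                .collect(Collectors.toList());
    }
}
```
-->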
<member name="T:Lucene.Net.Analysis.LetterTokenizer">
<summary>A LetterTokenizer is a tokenizer that divides text at non-letters. That is,
it defines tokens as maximal strings of adjacent letters, as defined
by the <see cref="M:System.Char.IsLetter(System.Char)"/> predicate.
Note: this does a decent job for most European languages, but does a terrible
job for some Asian languages, where words are not separated by spaces.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.LetterTokenizer.#ctor(System.IO.TextReader)">
<summary>Construct a new LetterTokenizer. </summary>
</member>
<member name="M:Lucene.Net.Analysis.LetterTokenizer.#ctor(Lucene.Net.Util.AttributeSource,System.IO.TextReader)">
<summary>Construct a new LetterTokenizer using a given <see cref="T:Lucene.Net.Util.AttributeSource"/>. </summary>
</member>
<member name="M:Lucene.Net.Analysis.LetterTokenizer.#ctor(Lucene.Net.Util.AttributeSource.AttributeFactory,System.IO.TextReader)">
<summary>Construct a new LetterTokenizer using a given <see cref="T:Lucene.Net.Util.AttributeSource.AttributeFactory"/>. </summary>
</member>
<member name="M:Lucene.Net.Analysis.LetterTokenizer.IsTokenChar(System.Char)">
<summary>Collects only characters which satisfy
<see cref="M:System.Char.IsLetter(System.Char)"/>.
</summary>
</member>
<member name="T:Lucene.Net.Analysis.LowerCaseFilter">
<summary>Normalizes token text to lower case.</summary>
</member>
<member name="T:Lucene.Net.Analysis.LowerCaseTokenizer">
<summary> LowerCaseTokenizer performs the function of LetterTokenizer
and LowerCaseFilter together. It divides text at non-letters and converts
the letters to lower case. While it is functionally equivalent to the combination
of LetterTokenizer and LowerCaseFilter, there is a performance advantage
to doing the two tasks at once, hence this (redundant) implementation.
<p/>
Note: this does a decent job for most European languages, but does a terrible
job for some Asian languages, where words are not separated by spaces.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.LowerCaseTokenizer.#ctor(System.IO.TextReader)">
<summary>Construct a new LowerCaseTokenizer. </summary>
</member>
<member name="M:Lucene.Net.Analysis.LowerCaseTokenizer.#ctor(Lucene.Net.Util.AttributeSource,System.IO.TextReader)">
<summary>Construct a new LowerCaseTokenizer using a given <see cref="T:Lucene.Net.Util.AttributeSource"/>. </summary>
</member>
<member name="M:Lucene.Net.Analysis.LowerCaseTokenizer.#ctor(Lucene.Net.Util.AttributeSource.AttributeFactory,System.IO.TextReader)">
<summary>Construct a new LowerCaseTokenizer using a given <see cref="T:Lucene.Net.Util.AttributeSource.AttributeFactory"/>. </summary>
</member>
<member name="M:Lucene.Net.Analysis.LowerCaseTokenizer.Normalize(System.Char)">
<summary>Converts char to lower case
<see cref="M:System.Char.ToLower(System.Char)"/>.
</summary>
</member>
<member name="T:Lucene.Net.Analysis.MappingCharFilter">
<summary> Simplistic <see cref="T:Lucene.Net.Analysis.CharFilter"/> that applies the mappings
contained in a <see cref="T:Lucene.Net.Analysis.NormalizeCharMap"/> to the character
stream, correcting the resulting changes to the
offsets.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.MappingCharFilter.#ctor(Lucene.Net.Analysis.NormalizeCharMap,Lucene.Net.Analysis.CharStream)">
<summary>Default constructor that takes a <see cref="T:Lucene.Net.Analysis.CharStream"/>.</summary>
</member>
<member name="M:Lucene.Net.Analysis.MappingCharFilter.#ctor(Lucene.Net.Analysis.NormalizeCharMap,System.IO.TextReader)">
<summary>Easy-use constructor that takes a <see cref="T:System.IO.TextReader"/>.</summary>
</member>
<member name="T:Lucene.Net.Analysis.NormalizeCharMap">
<summary> Holds a map of String input to String output, to be used
with <see cref="T:Lucene.Net.Analysis.MappingCharFilter"/>.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.NormalizeCharMap.Add(System.String,System.String)">
<summary>Records a replacement to be applied to the input
stream. Whenever <c>singleMatch</c> occurs in
the input, it will be replaced with
<c>replacement</c>.
</summary>
<param name="singleMatch">input String to be replaced
</param>
<param name="replacement">output String
</param>
</member>
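<!-- Conceptually, the map drives a longest-match replacement pass over the input. A standalone sketch of that matching in plain Java (not the Lucene API, and without the offset correction that MappingCharFilter performs):

```java
import java.util.Map;

public class NormalizeCharMapSketch {
    // Apply longest-match replacements from the map, scanning left to right.
    static String apply(String input, Map<String, String> map) {
        StringBuilder out = new StringBuilder();
        int i = 0;
        while (i < input.length()) {
            String replacement = null;
            int matched = 0;
            // Prefer the longest key matching at position i.
            for (Map.Entry<String, String> e : map.entrySet()) {
                String key = e.getKey();
                if (key.length() > matched && input.startsWith(key, i)) {
                    matched = key.length();
                    replacement = e.getValue();
                }
            }
            if (replacement == null) {
                out.append(input.charAt(i));
                i++;
            } else {
                out.append(replacement);
                i += matched;
            }
        }
        return out.toString();
    }
}
```
-->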
<member name="T:Lucene.Net.Analysis.NumericTokenStream">
<summary> <b>Expert:</b> This class provides a <see cref="T:Lucene.Net.Analysis.TokenStream"/>
for indexing numeric values that can be used by <see cref="T:Lucene.Net.Search.NumericRangeQuery`1"/>
or <see cref="T:Lucene.Net.Search.NumericRangeFilter`1"/>.
<p/>Note that for simple usage, <see cref="T:Lucene.Net.Documents.NumericField"/> is
recommended. <see cref="T:Lucene.Net.Documents.NumericField"/> disables norms and
term freqs, as they are not usually needed during
searching. If you need to change these settings, you
should use this class.
<p/>See <see cref="T:Lucene.Net.Documents.NumericField"/> for capabilities of fields
indexed numerically.<p/>
<p/>Here's an example usage, for an <c>int</c> field:
<code>
Field field = new Field(name, new NumericTokenStream(precisionStep).SetIntValue(value));
field.OmitNorms = true;
field.OmitTermFreqAndPositions = true;
document.Add(field);
</code>
<p/>For optimal performance, re-use the TokenStream and Field instance
for more than one document:
<code>
NumericTokenStream stream = new NumericTokenStream(precisionStep);
Field field = new Field(name, stream);
field.OmitNorms = true;
field.OmitTermFreqAndPositions = true;
Document document = new Document();
document.Add(field);
foreach (var value in values) // one iteration per document
{
stream.SetIntValue(value);
writer.AddDocument(document);
}
</code>
<p/>This stream is not intended to be used in analyzers;
it exists to iterate over the different precisions of a
specific numeric value during indexing.<p/>
<p/><b>NOTE</b>: as token streams are only consumed once
the document is added to the index, if you index more
than one numeric field, use a separate <c>NumericTokenStream</c>
instance for each.<p/>
<p/>See <see cref="T:Lucene.Net.Search.NumericRangeQuery`1"/> for more details on the
<a href="../search/NumericRangeQuery.html#precisionStepDesc"><c>precisionStep</c></a>
parameter as well as how numeric fields work under the hood.<p/>
<p/><font color="red"><b>NOTE:</b> This API is experimental and
might change in incompatible ways in the next release.</font>
Since 2.9
</summary>
</member>
<member name="F:Lucene.Net.Analysis.NumericTokenStream.TOKEN_TYPE_FULL_PREC">
<summary>The full precision token gets this token type assigned. </summary>
</member>
<member name="F:Lucene.Net.Analysis.NumericTokenStream.TOKEN_TYPE_LOWER_PREC">
<summary>The lower precision tokens get this token type assigned. </summary>
</member>
<member name="M:Lucene.Net.Analysis.NumericTokenStream.#ctor">
<summary> Creates a token stream for numeric values using the default <c>precisionStep</c>
<see cref="F:Lucene.Net.Util.NumericUtils.PRECISION_STEP_DEFAULT"/> (4). The stream is not yet initialized;
before use, set a value with one of the Set<em>???</em>Value() methods.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.NumericTokenStream.#ctor(System.Int32)">
<summary> Creates a token stream for numeric values with the specified
<c>precisionStep</c>. The stream is not yet initialized;
before use, set a value with one of the Set<em>???</em>Value() methods.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.NumericTokenStream.#ctor(Lucene.Net.Util.AttributeSource,System.Int32)">
<summary> Expert: Creates a token stream for numeric values with the specified
<c>precisionStep</c> using the given <see cref="T:Lucene.Net.Util.AttributeSource"/>.
The stream is not yet initialized;
before use, set a value with one of the Set<em>???</em>Value() methods.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.NumericTokenStream.#ctor(Lucene.Net.Util.AttributeSource.AttributeFactory,System.Int32)">
<summary> Expert: Creates a token stream for numeric values with the specified
<c>precisionStep</c> using the given
<see cref="T:Lucene.Net.Util.AttributeSource.AttributeFactory"/>.
The stream is not yet initialized;
before use, set a value with one of the Set<em>???</em>Value() methods.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.NumericTokenStream.SetLongValue(System.Int64)">
<summary> Initializes the token stream with the supplied <c>long</c> value.</summary>
<param name="value_Renamed">the value, for which this TokenStream should enumerate tokens.
</param>
<returns> this instance, because of this you can use it the following way:
<c>new Field(name, new NumericTokenStream(precisionStep).SetLongValue(value))</c>
</returns>
</member>
<member name="M:Lucene.Net.Analysis.NumericTokenStream.SetIntValue(System.Int32)">
<summary> Initializes the token stream with the supplied <c>int</c> value.</summary>
<param name="value_Renamed">the value, for which this TokenStream should enumerate tokens.
</param>
<returns> this instance, because of this you can use it the following way:
<c>new Field(name, new NumericTokenStream(precisionStep).SetIntValue(value))</c>
</returns>
</member>
<member name="M:Lucene.Net.Analysis.NumericTokenStream.SetDoubleValue(System.Double)">
<summary> Initializes the token stream with the supplied <c>double</c> value.</summary>
<param name="value_Renamed">the value, for which this TokenStream should enumerate tokens.
</param>
<returns> this instance, because of this you can use it the following way:
<c>new Field(name, new NumericTokenStream(precisionStep).SetDoubleValue(value))</c>
</returns>
</member>
<member name="M:Lucene.Net.Analysis.NumericTokenStream.SetFloatValue(System.Single)">
<summary> Initializes the token stream with the supplied <c>float</c> value.</summary>
<param name="value_Renamed">the value, for which this TokenStream should enumerate tokens.
</param>
<returns> this instance, because of this you can use it the following way:
<c>new Field(name, new NumericTokenStream(precisionStep).SetFloatValue(value))</c>
</returns>
</member>
<member name="T:Lucene.Net.Analysis.PerFieldAnalyzerWrapper">
<summary> This analyzer is used to facilitate scenarios where different
fields require different analysis techniques. Use <see cref="M:Lucene.Net.Analysis.PerFieldAnalyzerWrapper.AddAnalyzer(System.String,Lucene.Net.Analysis.Analyzer)"/>
to add a non-default analyzer on a field name basis.
<p/>Example usage:
<code>
PerFieldAnalyzerWrapper aWrapper =
new PerFieldAnalyzerWrapper(new StandardAnalyzer(Version.LUCENE_30));
aWrapper.AddAnalyzer("firstname", new KeywordAnalyzer());
aWrapper.AddAnalyzer("lastname", new KeywordAnalyzer());
</code>
<p/>In this example, StandardAnalyzer will be used for all fields except "firstname"
and "lastname", for which KeywordAnalyzer will be used.
<p/>A PerFieldAnalyzerWrapper can be used like any other analyzer, for both indexing
and query parsing.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.PerFieldAnalyzerWrapper.#ctor(Lucene.Net.Analysis.Analyzer)">
<summary> Constructs with default analyzer.
</summary>
<param name="defaultAnalyzer">Any fields not specifically
defined to use a different analyzer will use the one provided here.
</param>
</member>
<member name="M:Lucene.Net.Analysis.PerFieldAnalyzerWrapper.#ctor(Lucene.Net.Analysis.Analyzer,System.Collections.Generic.IEnumerable{System.Collections.Generic.KeyValuePair{System.String,Lucene.Net.Analysis.Analyzer}})">
<summary> Constructs with default analyzer and a map of analyzers to use for
specific fields.
</summary>
<param name="defaultAnalyzer">Any fields not specifically
defined to use a different analyzer will use the one provided here.
</param>
<param name="fieldAnalyzers">a Map (String field name to the Analyzer) to be
used for those fields
</param>
</member>
<member name="M:Lucene.Net.Analysis.PerFieldAnalyzerWrapper.AddAnalyzer(System.String,Lucene.Net.Analysis.Analyzer)">
<summary> Defines an analyzer to use for the specified field.
</summary>
<param name="fieldName">field name requiring a non-default analyzer
</param>
<param name="analyzer">non-default analyzer to use for field
</param>
</member>
<member name="M:Lucene.Net.Analysis.PerFieldAnalyzerWrapper.GetPositionIncrementGap(System.String)">
<summary>Return the positionIncrementGap from the analyzer assigned to fieldName </summary>
</member>
<member name="M:Lucene.Net.Analysis.PerFieldAnalyzerWrapper.GetOffsetGap(Lucene.Net.Documents.IFieldable)">
<summary> Return the offsetGap from the analyzer assigned to field </summary>
</member>
<member name="T:Lucene.Net.Analysis.PorterStemFilter">
<summary>Transforms the token stream as per the Porter stemming algorithm.
Note: the input to the stemming filter must already be in lower case,
so you will need to use LowerCaseFilter or LowerCaseTokenizer farther
down the Tokenizer chain in order for this to work properly!
<p/>
To use this filter with other analyzers, you'll want to write an
Analyzer class that sets up the TokenStream chain as you want it.
To use this with LowerCaseTokenizer, for example, you'd write an
analyzer like this:
<p/>
<code>
class MyAnalyzer : Analyzer {
public override TokenStream TokenStream(string fieldName, TextReader reader) {
return new PorterStemFilter(new LowerCaseTokenizer(reader));
}
}
</code>
</summary>
</member>
<member name="T:Lucene.Net.Analysis.PorterStemmer">
<summary>
Stemmer, implementing the Porter Stemming Algorithm
The Stemmer class transforms a word into its root form. The input
word can be provided a character at a time (by calling Add()), or all at once
by calling one of the various Stem() overloads.
</summary>
</member>
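<!-- The algorithm is a fixed cascade of suffix-rewriting rules. For flavor, here is a standalone sketch in plain Java (not this class's API) of step 1a of Porter's algorithm, which strips plural endings; the real stemmer applies several further rule steps with measure-based conditions:

```java
public class PorterStep1aSketch {
    // Porter step 1a, rules tried in order: SSES to SS, IES to I, SS kept, trailing S dropped.
    static String step1a(String w) {
        if (w.endsWith("sses")) return w.substring(0, w.length() - 2);
        if (w.endsWith("ies"))  return w.substring(0, w.length() - 2);
        if (w.endsWith("ss"))   return w;
        if (w.endsWith("s"))    return w.substring(0, w.length() - 1);
        return w;
    }
}
```
-->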
<member name="M:Lucene.Net.Analysis.PorterStemmer.Reset">
<summary> Reset() resets the stemmer so it can stem another word. If you invoke
the stemmer by calling Add(char) and then Stem(), you must call Reset()
before starting another word.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.PorterStemmer.Add(System.Char)">
<summary> Add a character to the word being stemmed. When you are finished
adding characters, you can call Stem() to process the word.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.PorterStemmer.ToString">
<summary> After a word has been stemmed, it can be retrieved by ToString(),
or a reference to the internal buffer can be obtained via the ResultBuffer
and ResultLength properties (which is generally more efficient).
</summary>
</member>
<member name="M:Lucene.Net.Analysis.PorterStemmer.Stem(System.String)">
<summary> Stem a word provided as a String. Returns the result as a String.</summary>
</member>
<member name="M:Lucene.Net.Analysis.PorterStemmer.Stem(System.Char[])">
<summary>Stem a word contained in a char[]. Returns true if the stemming process
resulted in a word different from the input. You can retrieve the
result via the ResultLength/ResultBuffer properties or ToString().
</summary>
</member>
<member name="M:Lucene.Net.Analysis.PorterStemmer.Stem(System.Char[],System.Int32,System.Int32)">
<summary>Stem a word contained in a portion of a char[] array. Returns
true if the stemming process resulted in a word different from
the input. You can retrieve the result via
the ResultLength/ResultBuffer properties or ToString().
</summary>
</member>
<member name="M:Lucene.Net.Analysis.PorterStemmer.Stem(System.Char[],System.Int32)">
<summary>Stem a word contained in a leading portion of a char[] array.
Returns true if the stemming process resulted in a word different
from the input. You can retrieve the result via
the ResultLength/ResultBuffer properties or ToString().
</summary>
</member>
<member name="M:Lucene.Net.Analysis.PorterStemmer.Stem">
<summary>Stem the word placed into the Stemmer buffer through calls to Add().
Returns true if the stemming process resulted in a word different
from the input. You can retrieve the result via
the ResultLength/ResultBuffer properties or ToString().
</summary>
</member>
<member name="M:Lucene.Net.Analysis.PorterStemmer.Main(System.String[])">
<summary>Test program for demonstrating the Stemmer. It reads a file and
stems each word, writing the result to standard out.
Usage: Stemmer file-name
</summary>
</member>
<member name="P:Lucene.Net.Analysis.PorterStemmer.ResultLength">
<summary> Returns the length of the word resulting from the stemming process.</summary>
</member>
<member name="P:Lucene.Net.Analysis.PorterStemmer.ResultBuffer">
<summary> Returns a reference to a character buffer containing the results of
the stemming process. You also need to consult the ResultLength property
to determine the length of the result.
</summary>
</member>
<member name="T:Lucene.Net.Analysis.SimpleAnalyzer">
<summary>An <see cref="T:Lucene.Net.Analysis.Analyzer"/> that filters <see cref="T:Lucene.Net.Analysis.LetterTokenizer"/>
with <see cref="T:Lucene.Net.Analysis.LowerCaseFilter"/>
</summary>
</member>
<member name="T:Lucene.Net.Analysis.Standard.StandardAnalyzer">
<summary> Filters <see cref="T:Lucene.Net.Analysis.Standard.StandardTokenizer"/> with <see cref="T:Lucene.Net.Analysis.Standard.StandardFilter"/>,
<see cref="T:Lucene.Net.Analysis.LowerCaseFilter"/> and <see cref="T:Lucene.Net.Analysis.StopFilter"/>, using a list of English stop
words.
<a name="version"/>
<p/>
You must specify the required <see cref="T:Lucene.Net.Util.Version"/> compatibility when creating
StandardAnalyzer:
<list type="bullet">
<item>As of 2.9, StopFilter preserves position increments</item>
<item>As of 2.4, Tokens incorrectly identified as acronyms are corrected (see
<a href="https://issues.apache.org/jira/browse/LUCENE-1068">LUCENE-1068</a>)</item>
</list>
</summary>
</member>
<member name="F:Lucene.Net.Analysis.Standard.StandardAnalyzer.DEFAULT_MAX_TOKEN_LENGTH">
<summary>Default maximum allowed token length </summary>
</member>
<member name="F:Lucene.Net.Analysis.Standard.StandardAnalyzer.replaceInvalidAcronym">
<summary> Specifies whether deprecated acronyms should be replaced with HOST type.
See <a href="https://issues.apache.org/jira/browse/LUCENE-1068">https://issues.apache.org/jira/browse/LUCENE-1068</a>
</summary>
</member>
<member name="F:Lucene.Net.Analysis.Standard.StandardAnalyzer.enableStopPositionIncrements">
<summary> Specifies whether StopFilter should preserve token position increments.
</summary>
</member>
<member name="F:Lucene.Net.Analysis.Standard.StandardAnalyzer.STOP_WORDS_SET">
<summary>An unmodifiable set containing some common English words that are usually not
useful for searching.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.#ctor(Lucene.Net.Util.Version)">
<summary>Builds an analyzer with the default stop words (<see cref="F:Lucene.Net.Analysis.Standard.StandardAnalyzer.STOP_WORDS_SET"/>).
</summary>
<param name="matchVersion">Lucene version to match see <see cref="T:Lucene.Net.Util.Version">above</see></param>
</member>
<member name="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.#ctor(Lucene.Net.Util.Version,System.Collections.Generic.ISet{System.String})">
<summary>Builds an analyzer with the given stop words.</summary>
<param name="matchVersion">Lucene version to match. See <see cref="T:Lucene.Net.Util.Version">above</see>
</param>
<param name="stopWords">stop words
</param>
</member>
<member name="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.#ctor(Lucene.Net.Util.Version,System.IO.FileInfo)">
<summary>Builds an analyzer with the stop words from the given file.</summary>
<seealso cref="M:Lucene.Net.Analysis.WordlistLoader.GetWordSet(System.IO.FileInfo)">
</seealso>
<param name="matchVersion">Lucene version to match. See <see cref="T:Lucene.Net.Util.Version">above</see>
</param>
<param name="stopwords">File to read stop words from
</param>
</member>
<member name="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.#ctor(Lucene.Net.Util.Version,System.IO.TextReader)">
<summary>Builds an analyzer with the stop words from the given reader.</summary>
<seealso cref="M:Lucene.Net.Analysis.WordlistLoader.GetWordSet(System.IO.TextReader)">
</seealso>
<param name="matchVersion">Lucene version to match. See <see cref="T:Lucene.Net.Util.Version">above</see>
</param>
<param name="stopwords">Reader to read stop words from
</param>
</member>
<member name="M:Lucene.Net.Analysis.Standard.StandardAnalyzer.TokenStream(System.String,System.IO.TextReader)">
<summary>Constructs a <see cref="T:Lucene.Net.Analysis.Standard.StandardTokenizer"/> filtered by a <see cref="T:Lucene.Net.Analysis.Standard.StandardFilter"/>
, a <see cref="T:Lucene.Net.Analysis.LowerCaseFilter"/> and a <see cref="T:Lucene.Net.Analysis.StopFilter"/>.
</summary>
</member>
<member name="P:Lucene.Net.Analysis.Standard.StandardAnalyzer.MaxTokenLength">
<summary> Set maximum allowed token length. If a token is seen
that exceeds this length then it is discarded. This
setting only takes effect the next time TokenStream or
ReusableTokenStream is called.
</summary>
</member>
<member name="T:Lucene.Net.Analysis.Standard.StandardFilter">
<summary>Normalizes tokens extracted with <see cref="T:Lucene.Net.Analysis.Standard.StandardTokenizer"/>. </summary>
</member>
<member name="M:Lucene.Net.Analysis.Standard.StandardFilter.#ctor(Lucene.Net.Analysis.TokenStream)">
<summary>Constructs a StandardFilter over the given input TokenStream. </summary>
</member>
<member name="M:Lucene.Net.Analysis.Standard.StandardFilter.IncrementToken">
<summary>Advances to the next token in the stream; returns <c>false</c> at end of stream.
<p/>Removes <tt>'s</tt> from the end of words.
<p/>Removes dots from acronyms.
</summary>
</member>
<member name="T:Lucene.Net.Analysis.Standard.StandardTokenizer">
<summary>A grammar-based tokenizer constructed with JFlex.
<p/> This should be a good tokenizer for most European-language documents:
<list type="bullet">
<item>Splits words at punctuation characters, removing punctuation. However, a
dot that's not followed by whitespace is considered part of a token.</item>
<item>Splits words at hyphens, unless there's a number in the token, in which case
the whole token is interpreted as a product number and is not split.</item>
<item>Recognizes email addresses and internet hostnames as one token.</item>
</list>
<p/>Many applications have specific tokenizer needs. If this tokenizer does
not suit your application, please consider copying this source code
directory to your project and maintaining your own grammar-based tokenizer.
<a name="version"/>
<p/>
You must specify the required <see cref="T:Lucene.Net.Util.Version"/> compatibility when creating
StandardTokenizer:
<list type="bullet">
<item>As of 2.4, Tokens incorrectly identified as acronyms are corrected (see
<a href="https://issues.apache.org/jira/browse/LUCENE-1068">LUCENE-1068</a>)</item>
</list>
</summary>
</member>
<member name="F:Lucene.Net.Analysis.Standard.StandardTokenizer.ACRONYM_DEP">
<deprecated> this solves a bug where HOSTs that end with '.' are identified
as ACRONYMs.
</deprecated>
</member>
<member name="F:Lucene.Net.Analysis.Standard.StandardTokenizer.scanner">
<summary>A private instance of the JFlex-constructed scanner </summary>
</member>
<member name="F:Lucene.Net.Analysis.Standard.StandardTokenizer.TOKEN_TYPES">
<summary>String token types that correspond to token type int constants </summary>
</member>
<member name="M:Lucene.Net.Analysis.Standard.StandardTokenizer.#ctor(Lucene.Net.Util.Version,System.IO.TextReader)">
<summary> Creates a new instance of the
<see cref="T:Lucene.Net.Analysis.Standard.StandardTokenizer"/>. Attaches
the <c>input</c> to the newly created JFlex scanner.
</summary>
<param name="matchVersion"></param>
<param name="input">The input reader
See http://issues.apache.org/jira/browse/LUCENE-1068
</param>
</member>
<member name="M:Lucene.Net.Analysis.Standard.StandardTokenizer.#ctor(Lucene.Net.Util.Version,Lucene.Net.Util.AttributeSource,System.IO.TextReader)">
<summary> Creates a new StandardTokenizer with a given <see cref="T:Lucene.Net.Util.AttributeSource"/>.</summary>
</member>
<member name="M:Lucene.Net.Analysis.Standard.StandardTokenizer.#ctor(Lucene.Net.Util.Version,Lucene.Net.Util.AttributeSource.AttributeFactory,System.IO.TextReader)">
<summary> Creates a new StandardTokenizer with a given
<see cref="T:Lucene.Net.Util.AttributeSource.AttributeFactory"/>
</summary>
</member>
<member name="M:Lucene.Net.Analysis.Standard.StandardTokenizer.IncrementToken">
<summary>
<see cref="M:Lucene.Net.Analysis.TokenStream.IncrementToken"/>
</summary>
</member>
<member name="M:Lucene.Net.Analysis.Standard.StandardTokenizer.SetReplaceInvalidAcronym(System.Boolean)">
<summary>
Deprecated: to be removed in 3.x, when true becomes the only valid value.
See https://issues.apache.org/jira/browse/LUCENE-1068
</summary>
<param name="replaceInvalidAcronym">Set to true to replace mischaracterized acronyms as HOST.
</param>
</member>
<member name="P:Lucene.Net.Analysis.Standard.StandardTokenizer.MaxTokenLength">
<summary>Set the max allowed token length. Any token longer
than this is skipped.
</summary>
</member>
<member name="T:Lucene.Net.Analysis.Standard.StandardTokenizerImpl">
<summary> This class is a scanner generated by
<a href="http://www.jflex.de/">JFlex</a> 1.4.1
on 9/4/08 6:49 PM from the specification file
<tt>/tango/mike/src/lucene.standarddigit/src/java/org/apache/lucene/analysis/standard/StandardTokenizerImpl.jflex</tt>
</summary>
</member>
<member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.YYEOF">
<summary>This character denotes the end of file </summary>
</member>
<member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.ZZ_BUFFERSIZE">
<summary>initial size of the lookahead buffer </summary>
</member>
<member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.YYINITIAL">
<summary>lexical states </summary>
</member>
<member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.ZZ_CMAP_PACKED">
<summary> Translates characters to character classes</summary>
</member>
<member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.ZZ_CMAP">
<summary> Translates characters to character classes</summary>
</member>
<member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.ZZ_ACTION">
<summary> Translates DFA states to action switch labels.</summary>
</member>
<member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.ZZ_ROWMAP">
<summary> Translates a state to a row index in the transition table</summary>
</member>
<member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.ZZ_TRANS">
<summary> The transition table of the DFA</summary>
</member>
<member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.ZZ_ATTRIBUTE">
<summary> ZZ_ATTRIBUTE[aState] contains the attributes of state <c>aState</c></summary>
</member>
<member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.zzReader">
<summary>the input device </summary>
</member>
<member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.zzState">
<summary>the current state of the DFA </summary>
</member>
<member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.zzLexicalState">
<summary>the current lexical state </summary>
</member>
<member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.zzBuffer">
<summary>this buffer contains the current text to be matched and is
the source of the yytext() string
</summary>
</member>
<member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.zzMarkedPos">
<summary>the text position at the last accepting state </summary>
</member>
<member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.zzPushbackPos">
<summary>the text position at the last state to be included in yytext </summary>
</member>
<member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.zzCurrentPos">
<summary>the current text position in the buffer </summary>
</member>
<member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.zzStartRead">
<summary>startRead marks the beginning of the yytext() string in the buffer </summary>
</member>
<member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.zzEndRead">
<summary>endRead marks the last character in the buffer that has been read
from input
</summary>
</member>
<member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.yyline">
<summary>number of newlines encountered up to the start of the matched text </summary>
</member>
<member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.yychar">
<summary>the number of characters up to the start of the matched text </summary>
</member>
<member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.yycolumn">
<summary> the number of characters from the last newline up to the start of the
matched text
</summary>
</member>
<member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.zzAtBOL">
<summary> zzAtBOL == true if and only if the scanner is currently at the beginning of a line</summary>
</member>
<member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.zzAtEOF">
<summary>zzAtEOF == true if and only if the scanner is at end of file </summary>
</member>
<member name="F:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.ACRONYM_DEP">
<deprecated> this solves a bug where HOSTs that end with '.' are identified
as ACRONYMs.
</deprecated>
</member>
<member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.GetText(Lucene.Net.Analysis.Token)">
<summary> Fills Lucene token with the current token text.</summary>
</member>
<member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.GetText(Lucene.Net.Analysis.Tokenattributes.ITermAttribute)">
<summary> Fills TermAttribute with the current token text.</summary>
</member>
<member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.#ctor(System.IO.TextReader)">
<summary> Creates a new scanner.
There is also a System.IO.Stream version of this constructor.
</summary>
<param name="in_Renamed">the System.IO.TextReader to read input from.
</param>
</member>
<member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.#ctor(System.IO.Stream)">
<summary> Creates a new scanner.
There is also a System.IO.TextReader version of this constructor.
</summary>
<param name="in_Renamed">the System.IO.Stream to read input from.
</param>
</member>
<member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.ZzUnpackCMap(System.String)">
<summary> Unpacks the compressed character translation table.
</summary>
<param name="packed"> the packed character translation table
</param>
<returns> the unpacked character translation table
</returns>
</member>
<member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.ZzRefill">
<summary> Refills the input buffer.
</summary>
<returns><c>false</c> if and only if there was new input.
</returns>
<exception cref="T:System.IO.IOException"> if an I/O error occurs
</exception>
</member>
<member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.Yyclose">
<summary> Closes the input stream.</summary>
</member>
<member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.Yyreset(System.IO.TextReader)">
<summary> Resets the scanner to read from a new input stream.
Does not close the old reader.
All internal variables are reset, the old input stream
<b>cannot</b> be reused (internal buffer is discarded and lost).
Lexical state is set to <tt>ZZ_INITIAL</tt>.
</summary>
<param name="reader"> the new input stream
</param>
</member>
<member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.Yystate">
<summary> Returns the current lexical state.</summary>
</member>
<member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.Yybegin(System.Int32)">
<summary> Enters a new lexical state
</summary>
<param name="newState">the new lexical state
</param>
</member>
<member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.Yytext">
<summary> Returns the text matched by the current regular expression.</summary>
</member>
<member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.Yycharat(System.Int32)">
<summary> Returns the character at position <tt>pos</tt> from the
matched text.
It is equivalent to yytext().charAt(pos), but faster.
</summary>
<param name="pos">the position of the character to fetch.
A value from 0 to yylength()-1.
</param>
<returns> the character at position pos
</returns>
</member>
<member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.Yylength">
<summary> Returns the length of the matched text region.</summary>
</member>
<member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.ZzScanError(System.Int32)">
<summary> Reports an error that occurred while scanning.
In a well-formed scanner (no or only correct usage of
yypushback(int) and a match-all fallback rule) this method
will only be called with things that "Can't Possibly Happen".
If this method is called, something is seriously wrong
(e.g. a JFlex bug producing a faulty scanner etc.).
Usual syntax/scanner-level error handling should be done
in error fallback rules.
</summary>
<param name="errorCode"> the code of the error message to display
</param>
</member>
<member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.Yypushback(System.Int32)">
<summary> Pushes the specified number of characters back into the input stream.
They will be read again by the next call of the scanning method.
</summary>
<param name="number"> the number of characters to be read again.
This number must not be greater than yylength()!
</param>
</member>
<member name="M:Lucene.Net.Analysis.Standard.StandardTokenizerImpl.GetNextToken">
<summary> Resumes scanning until the next regular expression is matched,
the end of input is encountered, or an I/O error occurs.
</summary>
<returns> the next token
</returns>
<exception cref="T:System.IO.IOException"> if an I/O error occurs
</exception>
</member>
<member name="T:Lucene.Net.Analysis.StopAnalyzer">
<summary> Filters <see cref="T:Lucene.Net.Analysis.LetterTokenizer"/> with <see cref="T:Lucene.Net.Analysis.LowerCaseFilter"/> and
<see cref="T:Lucene.Net.Analysis.StopFilter"/>.
<a name="version"/>
<p/>
You must specify the required <see cref="T:Lucene.Net.Util.Version"/> compatibility when creating
StopAnalyzer:
<list type="bullet">
<item>As of 2.9, position increments are preserved</item>
</list>
</summary>
</member>
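The analyzer above can be exercised as in the following sketch. This is illustrative only: the `AddAttribute&lt;T&gt;()` call and the `Version.LUCENE_30` value are assumptions about the Lucene.Net 3.x API surface, not confirmed by this file.

```csharp
// Sketch: tokenizing text with StopAnalyzer; "The" should be dropped by the
// English stop word set while the remaining words pass through lower-cased.
using System;
using System.IO;
using Lucene.Net.Analysis;
using Lucene.Net.Analysis.Tokenattributes;

class StopAnalyzerSketch
{
    static void Main()
    {
        var analyzer = new StopAnalyzer(Lucene.Net.Util.Version.LUCENE_30);
        using (var reader = new StringReader("The quick brown fox"))
        {
            TokenStream stream = analyzer.TokenStream("content", reader);
            ITermAttribute term = stream.AddAttribute<ITermAttribute>();
            while (stream.IncrementToken())
            {
                Console.WriteLine(term.Term); // quick, brown, fox
            }
        }
    }
}
```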
<member name="F:Lucene.Net.Analysis.StopAnalyzer.ENGLISH_STOP_WORDS_SET">
<summary>An unmodifiable set containing some common English words that are not usually useful
for searching.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.StopAnalyzer.#ctor(Lucene.Net.Util.Version)">
<summary> Builds an analyzer which removes words in <see cref="F:Lucene.Net.Analysis.StopAnalyzer.ENGLISH_STOP_WORDS_SET"/>.</summary>
</member>
<member name="M:Lucene.Net.Analysis.StopAnalyzer.#ctor(Lucene.Net.Util.Version,System.Collections.Generic.ISet{System.String})">
<summary>Builds an analyzer with the stop words from the given set.</summary>
</member>
<member name="M:Lucene.Net.Analysis.StopAnalyzer.#ctor(Lucene.Net.Util.Version,System.IO.FileInfo)">
<summary> Builds an analyzer with the stop words from the given file.
</summary>
<seealso cref="M:Lucene.Net.Analysis.WordlistLoader.GetWordSet(System.IO.FileInfo)">
</seealso>
<param name="matchVersion">See <a href="#version">above</a>
</param>
<param name="stopwordsFile">File to load stop words from
</param>
</member>
<member name="M:Lucene.Net.Analysis.StopAnalyzer.#ctor(Lucene.Net.Util.Version,System.IO.TextReader)">
<summary>Builds an analyzer with the stop words from the given reader. </summary>
<seealso cref="M:Lucene.Net.Analysis.WordlistLoader.GetWordSet(System.IO.TextReader)">
</seealso>
<param name="matchVersion">See <a href="#version">above</a>
</param>
<param name="stopwords">Reader to load stop words from
</param>
</member>
<member name="M:Lucene.Net.Analysis.StopAnalyzer.TokenStream(System.String,System.IO.TextReader)">
<summary>Filters LowerCaseTokenizer with StopFilter. </summary>
</member>
<member name="T:Lucene.Net.Analysis.StopAnalyzer.SavedStreams">
<summary>Holds a saved tokenizer and filtered stream for reuse by ReusableTokenStream. </summary>
</member>
<member name="T:Lucene.Net.Analysis.StopFilter">
<summary> Removes stop words from a token stream.</summary>
</member>
<member name="M:Lucene.Net.Analysis.StopFilter.#ctor(System.Boolean,Lucene.Net.Analysis.TokenStream,System.Collections.Generic.ISet{System.String},System.Boolean)">
<summary> Construct a token stream filtering the given input.
If <c>stopWords</c> is an instance of <see cref="T:Lucene.Net.Analysis.CharArraySet"/> (true if
<c>makeStopSet()</c> was used to construct the set) it will be directly used
and <c>ignoreCase</c> will be ignored since <c>CharArraySet</c>
directly controls case sensitivity.
<p/>
If <c>stopWords</c> is not an instance of <see cref="T:Lucene.Net.Analysis.CharArraySet"/>,
a new CharArraySet will be constructed and <c>ignoreCase</c> will be
used to specify the case sensitivity of that set.
</summary>
<param name="enablePositionIncrements">true if token positions should record the removed stop words</param>
<param name="input">Input TokenStream</param>
<param name="stopWords">A Set of strings or char[] or any other ToString()-able set representing the stopwords</param>
<param name="ignoreCase">if true, all words are lower cased first</param>
</member>
<member name="M:Lucene.Net.Analysis.StopFilter.#ctor(System.Boolean,Lucene.Net.Analysis.TokenStream,System.Collections.Generic.ISet{System.String})">
<summary> Constructs a filter which removes words from the input
TokenStream that are named in the Set.
</summary>
<param name="enablePositionIncrements">true if token positions should record the removed stop words</param>
<param name="in">Input stream</param>
<param name="stopWords">A Set of strings or char[] or any other ToString()-able set representing the stopwords</param>
<seealso cref="M:Lucene.Net.Analysis.StopFilter.MakeStopSet(System.String[])"/>
</member>
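A minimal wiring sketch for the constructors above, assuming the MakeStopSet overloads documented below; note that because MakeStopSet builds a CharArraySet, case sensitivity is fixed when the set is constructed.

```csharp
// Sketch: filtering stop words out of a whitespace-tokenized stream.
using System.IO;
using Lucene.Net.Analysis;

class StopFilterSketch
{
    static TokenStream Build(TextReader reader)
    {
        // Build the stop word set once; suitable for caching in an Analyzer.
        var stopSet = StopFilter.MakeStopSet(new[] { "a", "an", "the" });
        return new StopFilter(
            true,                             // enablePositionIncrements
            new WhitespaceTokenizer(reader),  // raw tokens
            stopSet);
    }
}
```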
<member name="M:Lucene.Net.Analysis.StopFilter.MakeStopSet(System.String[])">
<summary> Builds a Set from an array of stop words,
appropriate for passing into the StopFilter constructor.
This permits the stop word set to be constructed once and cached
when an Analyzer is constructed.
</summary>
<seealso cref="M:Lucene.Net.Analysis.StopFilter.MakeStopSet(System.String[],System.Boolean)">passing false to ignoreCase</seealso>
</member>
<member name="M:Lucene.Net.Analysis.StopFilter.MakeStopSet(System.Collections.Generic.IList{System.Object})">
<summary> Builds a Set from a list of stop words,
appropriate for passing into the StopFilter constructor.
This permits the stop word set to be constructed once and cached
when an Analyzer is constructed.
</summary>
<param name="stopWords">A list of strings or char[] or any other ToString()-able list representing the stop words</param>
<seealso cref="M:Lucene.Net.Analysis.StopFilter.MakeStopSet(System.String[],System.Boolean)">passing false to ignoreCase</seealso>
</member>
<member name="M:Lucene.Net.Analysis.StopFilter.MakeStopSet(System.String[],System.Boolean)">
<summary> Builds a Set from an array of stop words,
appropriate for passing into the StopFilter constructor.
</summary>
<param name="stopWords">An array of stopwords</param>
<param name="ignoreCase">If true, all words are lower cased first.</param>
<returns> a Set containing the words</returns>
</member>
<member name="M:Lucene.Net.Analysis.StopFilter.MakeStopSet(System.Collections.Generic.IList{System.Object},System.Boolean)">
<summary> Builds a Set from a list of stop words,
appropriate for passing into the StopFilter constructor.
</summary>
<param name="stopWords">A List of Strings or char[] or any other toString()-able list representing the stopwords </param>
<param name="ignoreCase">if true, all words are lower cased first</param>
<returns>A Set (<see cref="T:Lucene.Net.Analysis.CharArraySet"/>) containing the words</returns>
</member>
<member name="M:Lucene.Net.Analysis.StopFilter.IncrementToken">
<summary> Returns the next input Token whose term() is not a stop word.</summary>
</member>
<member name="M:Lucene.Net.Analysis.StopFilter.GetEnablePositionIncrementsVersionDefault(Lucene.Net.Util.Version)">
<summary> Returns version-dependent default for enablePositionIncrements. Analyzers
that embed StopFilter use this method when creating the StopFilter. Prior
to 2.9, this returns false. On 2.9 or later, it returns true.
</summary>
</member>
<member name="P:Lucene.Net.Analysis.StopFilter.EnablePositionIncrements">
<summary> If <c>true</c>, this StopFilter will preserve
positions of the incoming tokens (i.e., accumulate and
set position increments of the removed stop tokens).
Generally, <c>true</c> is best as it does not
lose information (positions of the original tokens)
during indexing.
<p/> When enabled, the position increment of the
token following a stopped (omitted) token is incremented.
<p/> <b>NOTE</b>: be sure to also
set <see cref="P:Lucene.Net.QueryParsers.QueryParser.EnablePositionIncrements"/> if
you use QueryParser to create queries.
</summary>
</member>
<member name="T:Lucene.Net.Analysis.TeeSinkTokenFilter">
<summary> This TokenFilter provides the ability to set aside attribute states
that have already been analyzed. This is useful in situations where multiple fields share
many common analysis steps and then go their separate ways.
<p/>
It is also useful for doing things like entity extraction or proper noun analysis as
part of the analysis workflow and saving off those tokens for use in another field.
<code>
TeeSinkTokenFilter source1 = new TeeSinkTokenFilter(new WhitespaceTokenizer(reader1));
TeeSinkTokenFilter.SinkTokenStream sink1 = source1.NewSinkTokenStream();
TeeSinkTokenFilter.SinkTokenStream sink2 = source1.NewSinkTokenStream();
TeeSinkTokenFilter source2 = new TeeSinkTokenFilter(new WhitespaceTokenizer(reader2));
source2.AddSinkTokenStream(sink1);
source2.AddSinkTokenStream(sink2);
TokenStream final1 = new LowerCaseFilter(source1);
TokenStream final2 = source2;
TokenStream final3 = new EntityDetect(sink1);
TokenStream final4 = new URLDetect(sink2);
d.Add(new Field("f1", final1));
d.Add(new Field("f2", final2));
d.Add(new Field("f3", final3));
d.Add(new Field("f4", final4));
</code>
In this example, <c>sink1</c> and <c>sink2</c> will both get tokens from both
<c>reader1</c> and <c>reader2</c> after the whitespace tokenizer,
and we can further wrap any of these in extra analysis; more "sources" can be inserted if desired.
It is important that tees are consumed before sinks (in the above example, the tee
field names must sort before the sink field names). If you are not sure which stream
is consumed first, you can simply add another sink and then pass all tokens to the
sinks at once using <see cref="M:Lucene.Net.Analysis.TeeSinkTokenFilter.ConsumeAllTokens"/>.
This TokenFilter is exhausted after that. In that case, change the example above to:
<code>
...
TokenStream final1 = new LowerCaseFilter(source1.NewSinkTokenStream());
TokenStream final2 = source2.NewSinkTokenStream();
sink1.ConsumeAllTokens();
sink2.ConsumeAllTokens();
...
</code>
In this case, the fields can be added in any order, because the sources are not used anymore and all sinks are ready.
<p/>Note, the EntityDetect and URLDetect TokenStreams are for the example and do not currently exist in Lucene.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.TeeSinkTokenFilter.#ctor(Lucene.Net.Analysis.TokenStream)">
<summary> Instantiates a new TeeSinkTokenFilter.</summary>
</member>
<member name="M:Lucene.Net.Analysis.TeeSinkTokenFilter.NewSinkTokenStream">
<summary> Returns a new <see cref="T:Lucene.Net.Analysis.TeeSinkTokenFilter.SinkTokenStream"/> that receives all tokens consumed by this stream.</summary>
</member>
<member name="M:Lucene.Net.Analysis.TeeSinkTokenFilter.NewSinkTokenStream(Lucene.Net.Analysis.TeeSinkTokenFilter.SinkFilter)">
<summary> Returns a new <see cref="T:Lucene.Net.Analysis.TeeSinkTokenFilter.SinkTokenStream"/> that receives all tokens consumed by this stream
that pass the supplied filter.
</summary>
<seealso cref="T:Lucene.Net.Analysis.TeeSinkTokenFilter.SinkFilter">
</seealso>
</member>
<member name="M:Lucene.Net.Analysis.TeeSinkTokenFilter.AddSinkTokenStream(Lucene.Net.Analysis.TeeSinkTokenFilter.SinkTokenStream)">
<summary> Adds a <see cref="T:Lucene.Net.Analysis.TeeSinkTokenFilter.SinkTokenStream"/> created by another <c>TeeSinkTokenFilter</c>
to this one. The supplied stream will also receive all consumed tokens.
This method can be used to pass tokens from two different tees to one sink.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.TeeSinkTokenFilter.ConsumeAllTokens">
<summary> <c>TeeSinkTokenFilter</c> passes all tokens to the added sinks
when itself is consumed. To be sure that all tokens from the input
stream are passed to the sinks, you can call this method.
This instance is exhausted after this, but all sinks become immediately available.
</summary>
</member>
<member name="T:Lucene.Net.Analysis.TeeSinkTokenFilter.SinkFilter">
<summary> A filter that decides which <see cref="T:Lucene.Net.Util.AttributeSource"/> states to store in the sink.</summary>
</member>
<member name="M:Lucene.Net.Analysis.TeeSinkTokenFilter.SinkFilter.Accept(Lucene.Net.Util.AttributeSource)">
<summary> Returns true if and only if the current state of the passed-in <see cref="T:Lucene.Net.Util.AttributeSource"/> shall be stored
in the sink.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.TeeSinkTokenFilter.SinkFilter.Reset">
<summary> Called by <see cref="M:Lucene.Net.Analysis.TeeSinkTokenFilter.SinkTokenStream.Reset"/>. This method does nothing by default
and can optionally be overridden.
</summary>
</member>
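The SinkFilter contract above can be sketched with a hypothetical subclass; the `AddAttribute&lt;T&gt;()` call on AttributeSource is an assumption based on the attribute API described elsewhere in this file.

```csharp
// Sketch: a SinkFilter that stores only tokens whose lexical type is "word",
// so the sink receives default-typed tokens and skips e.g. ACRONYM or HOST.
using Lucene.Net.Analysis;
using Lucene.Net.Analysis.Tokenattributes;
using Lucene.Net.Util;

class WordOnlySinkFilter : TeeSinkTokenFilter.SinkFilter
{
    public override bool Accept(AttributeSource source)
    {
        ITypeAttribute type = source.AddAttribute<ITypeAttribute>();
        return type.Type == "word"; // store only tokens of the default type
    }
}
```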
<member name="T:Lucene.Net.Analysis.Token">
<summary>A Token is an occurrence of a term from the text of a field. It consists of
a term's text, the start and end offset of the term in the text of the field,
and a type string.
<p/>
The start and end offsets permit applications to re-associate a token with
its source text, e.g., to display highlighted query terms in a document
browser, or to show matching text fragments in a <abbr title="KeyWord In Context">KWIC</abbr> display, etc.
<p/>
The type is a string, assigned by a lexical analyzer
(a.k.a. tokenizer), naming the lexical or syntactic class that the token
belongs to. For example an end of sentence marker token might be implemented
with type "eos". The default token type is "word".
<p/>
A Token can optionally have metadata (a.k.a. Payload) in the form of a variable
length byte array. Use <see cref="P:Lucene.Net.Index.TermPositions.PayloadLength"/> and
<see cref="M:Lucene.Net.Index.TermPositions.GetPayload(System.Byte[],System.Int32)"/> to retrieve the payloads from the index.
</summary>
<summary><p/><b>NOTE:</b> As of 2.9, Token implements all <see cref="T:Lucene.Net.Util.IAttribute"/> interfaces
that are part of core Lucene and can be found in the <see cref="N:Lucene.Net.Analysis.Tokenattributes"/> namespace.
Even though it is not necessary to use Token anymore, with the new TokenStream API it can
be used as a convenience class that implements all <see cref="T:Lucene.Net.Util.IAttribute"/>s, which is especially useful
to easily switch from the old to the new TokenStream API.
<br/><br/>
<p/>Tokenizers and TokenFilters should try to re-use a Token instance when
possible for best performance, by implementing the
<see cref="M:Lucene.Net.Analysis.TokenStream.IncrementToken"/> API.
Failing that, to create a new Token you should first use
one of the constructors that starts with null text. To load
the token from a char[] use <see cref="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.Char[],System.Int32,System.Int32)"/>.
To load from a String use <see cref="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.String)"/> or <see cref="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.String,System.Int32,System.Int32)"/>.
Alternatively you can get the Token's termBuffer by calling either <see cref="M:Lucene.Net.Analysis.Token.TermBuffer"/>,
if you know that your text is shorter than the capacity of the termBuffer,
or <see cref="M:Lucene.Net.Analysis.Token.ResizeTermBuffer(System.Int32)"/>, if there is any possibility
that you may need to grow the buffer. Fill in the characters of your term into this
buffer, with <see cref="M:System.String.ToCharArray(System.Int32,System.Int32)"/> if loading from a string,
or with <see cref="M:System.Array.Copy(System.Array,System.Int64,System.Array,System.Int64,System.Int64)"/>, and finally call <see cref="M:Lucene.Net.Analysis.Token.SetTermLength(System.Int32)"/> to
set the length of the term text. See <a target="_top" href="https://issues.apache.org/jira/browse/LUCENE-969">LUCENE-969</a>
for details.<p/>
<p/>Typical Token reuse patterns:
<list type="bullet">
<item> Copying text from a string (type is reset to <see cref="F:Lucene.Net.Analysis.Token.DEFAULT_TYPE"/> if not
specified):<br/>
<code>
return reusableToken.reinit(string, startOffset, endOffset[, type]);
</code>
</item>
<item> Copying some text from a string (type is reset to <see cref="F:Lucene.Net.Analysis.Token.DEFAULT_TYPE"/>
if not specified):<br/>
<code>
return reusableToken.reinit(string, 0, string.length(), startOffset, endOffset[, type]);
</code>
</item>
<item> Copying text from char[] buffer (type is reset to <see cref="F:Lucene.Net.Analysis.Token.DEFAULT_TYPE"/>
if not specified):<br/>
<code>
return reusableToken.reinit(buffer, 0, buffer.length, startOffset, endOffset[, type]);
</code>
</item>
<item> Copying some text from a char[] buffer (type is reset to
<see cref="F:Lucene.Net.Analysis.Token.DEFAULT_TYPE"/> if not specified):<br/>
<code>
return reusableToken.reinit(buffer, start, end - start, startOffset, endOffset[, type]);
</code>
</item>
<item> Copying from one Token to another (type is reset to
<see cref="F:Lucene.Net.Analysis.Token.DEFAULT_TYPE"/> if not specified):<br/>
<code>
return reusableToken.reinit(source.termBuffer(), 0, source.termLength(), source.startOffset(), source.endOffset()[, source.type()]);
</code>
</item>
</list>
A few things to note:
<list type="bullet">
<item>clear() initializes all of the fields to default values. This is a change from Lucene 2.4, but should affect no one.</item>
<item>Because <c>TokenStreams</c> can be chained, one cannot assume that the <c>Token's</c> current type is correct.</item>
<item>The startOffset and endOffset represent the start and end offset of the token in the
source text, so be careful in adjusting them.</item>
<item>When caching a reusable token, clone it. When injecting a cached token into a stream that can be reset, clone it again.</item>
</list>
<p/>
</summary>
<seealso cref="T:Lucene.Net.Index.Payload">
</seealso>
</member>
<member name="T:Lucene.Net.Util.Attribute">
<summary> Base class for Attributes that can be added to a
<see cref="T:Lucene.Net.Util.AttributeSource"/>.
<p/>
Attributes are used to add data in a dynamic, yet type-safe way to a source
of usually streamed objects, e.g. a <see cref="T:Lucene.Net.Analysis.TokenStream"/>.
</summary>
</member>
<member name="T:Lucene.Net.Util.IAttribute">
<summary> Base interface for attributes.</summary>
</member>
<member name="M:Lucene.Net.Util.Attribute.Clear">
<summary> Clears the values in this Attribute and resets it to its
default value. If this implementation implements more than one Attribute interface,
it clears all of them.
</summary>
</member>
<member name="M:Lucene.Net.Util.Attribute.ToString">
<summary> The default implementation of this method accesses all declared
fields of this object and prints the values in the following syntax:
<code>
public override string ToString() {
    return "start=" + startOffset + ",end=" + endOffset;
}
</code>
This method may be overridden by subclasses.
</summary>
</member>
<member name="M:Lucene.Net.Util.Attribute.GetHashCode">
<summary> Subclasses must implement this method and should compute
a hash code similar to this:
<code>
public override int GetHashCode() {
    int code = startOffset;
    code = code * 31 + endOffset;
    return code;
}
</code>
See also <see cref="M:Lucene.Net.Util.Attribute.Equals(System.Object)"/>.
</summary>
</member>
<member name="M:Lucene.Net.Util.Attribute.Equals(System.Object)">
<summary> All values used for computation of <see cref="M:Lucene.Net.Util.Attribute.GetHashCode"/>
should be checked here for equality.
see also <see cref="M:System.Object.Equals(System.Object)"/>
</summary>
</member>
<member name="M:Lucene.Net.Util.Attribute.CopyTo(Lucene.Net.Util.Attribute)">
<summary> Copies the values from this Attribute into the passed-in
target attribute. The target implementation must support all the
Attributes this implementation supports.
</summary>
</member>
<member name="M:Lucene.Net.Util.Attribute.Clone">
<summary> Shallow clone. Subclasses must override this if they
need to clone any members deeply.
</summary>
</member>
<member name="T:Lucene.Net.Analysis.Tokenattributes.ITermAttribute">
<summary> The term text of a Token.</summary>
</member>
<member name="M:Lucene.Net.Analysis.Tokenattributes.ITermAttribute.SetTermBuffer(System.Char[],System.Int32,System.Int32)">
<summary>Copies the contents of buffer, starting at offset for
length characters, into the termBuffer array.
</summary>
<param name="buffer">the buffer to copy
</param>
<param name="offset">the index in the buffer of the first character to copy
</param>
<param name="length">the number of characters to copy
</param>
</member>
<member name="M:Lucene.Net.Analysis.Tokenattributes.ITermAttribute.SetTermBuffer(System.String)">
<summary>Copies the contents of buffer into the termBuffer array.</summary>
<param name="buffer">the buffer to copy
</param>
</member>
<member name="M:Lucene.Net.Analysis.Tokenattributes.ITermAttribute.SetTermBuffer(System.String,System.Int32,System.Int32)">
<summary>Copies the contents of buffer, starting at offset and continuing
for length characters, into the termBuffer array.
</summary>
<param name="buffer">the buffer to copy
</param>
<param name="offset">the index in the buffer of the first character to copy
</param>
<param name="length">the number of characters to copy
</param>
</member>
<member name="M:Lucene.Net.Analysis.Tokenattributes.ITermAttribute.TermBuffer">
<summary>Returns the internal termBuffer character array which
you can then directly alter. If the array is too
small for your token, use <see cref="M:Lucene.Net.Analysis.Tokenattributes.ITermAttribute.ResizeTermBuffer(System.Int32)"/>
to increase it. After
altering the buffer be sure to call <see cref="M:Lucene.Net.Analysis.Tokenattributes.ITermAttribute.SetTermLength(System.Int32)"/>
to record the number of valid
characters that were placed into the termBuffer.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.Tokenattributes.ITermAttribute.ResizeTermBuffer(System.Int32)">
<summary>Grows the termBuffer to at least size newSize, preserving the
existing content. Note: If the next operation is to change
the contents of the term buffer use
<see cref="M:Lucene.Net.Analysis.Tokenattributes.ITermAttribute.SetTermBuffer(System.Char[],System.Int32,System.Int32)"/>,
<see cref="M:Lucene.Net.Analysis.Tokenattributes.ITermAttribute.SetTermBuffer(System.String)"/>, or
<see cref="M:Lucene.Net.Analysis.Tokenattributes.ITermAttribute.SetTermBuffer(System.String,System.Int32,System.Int32)"/>
to optimally combine the resize with the setting of the termBuffer.
</summary>
<param name="newSize">minimum size of the new termBuffer
</param>
<returns> newly created termBuffer with length >= newSize
</returns>
</member>
<member name="M:Lucene.Net.Analysis.Tokenattributes.ITermAttribute.TermLength">
<summary>Returns the number of valid characters (the length of the term)
in the termBuffer array.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.Tokenattributes.ITermAttribute.SetTermLength(System.Int32)">
<summary>Sets the number of valid characters (the length of the term) in
the termBuffer array. Use this to truncate the termBuffer
or to synchronize with external manipulation of the termBuffer.
Note: to grow the size of the array,
use <see cref="M:Lucene.Net.Analysis.Tokenattributes.ITermAttribute.ResizeTermBuffer(System.Int32)"/> first.
</summary>
<param name="length">the truncated length
</param>
</member>
<member name="P:Lucene.Net.Analysis.Tokenattributes.ITermAttribute.Term">
<summary>Returns the Token's term text.
This method has a performance penalty
because the text is stored internally in a char[]. If
possible, use <see cref="M:Lucene.Net.Analysis.Tokenattributes.ITermAttribute.TermBuffer"/> and <see cref="M:Lucene.Net.Analysis.Tokenattributes.ITermAttribute.TermLength"/>
directly instead. If you really need a
String, use this method, which is nothing more than
a convenience call to <b>new String(token.termBuffer(), 0, token.termLength())</b>
</summary>
</member>
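The buffer-oriented workflow described above (resize, fill, then set the length) can be sketched as a helper; the `SetTerm` method name is hypothetical.

```csharp
// Sketch: replacing a token's text via the termBuffer instead of allocating
// a new string, following the resize-fill-set-length sequence described above.
using Lucene.Net.Analysis.Tokenattributes;

static class TermBufferSketch
{
    public static void SetTerm(ITermAttribute termAtt, string newText)
    {
        // Grow the buffer if needed; existing content is preserved.
        char[] buffer = termAtt.ResizeTermBuffer(newText.Length);
        // Fill in the characters of the term.
        newText.CopyTo(0, buffer, 0, newText.Length);
        // Record how many characters in the buffer are valid.
        termAtt.SetTermLength(newText.Length);
    }
}
```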
<member name="T:Lucene.Net.Analysis.Tokenattributes.ITypeAttribute">
<summary> A Token's lexical type. The default value is "word". </summary>
</member>
<member name="P:Lucene.Net.Analysis.Tokenattributes.ITypeAttribute.Type">
<summary>Gets or sets this Token's lexical type. Defaults to "word". </summary>
</member>
<member name="T:Lucene.Net.Analysis.Tokenattributes.IPositionIncrementAttribute">
<summary>The positionIncrement determines the position of this token
relative to the previous Token in a TokenStream, used in phrase
searching.
<p/>The default value is one.
<p/>Some common uses for this are:<list type="bullet">
<item>Set it to zero to put multiple terms in the same position. This is
useful if, e.g., a word has multiple stems. Searches for phrases
including either stem will match. In this case, all but the first stem's
increment should be set to zero: the increment of the first instance
should be one. Repeating a token with an increment of zero can also be
used to boost the scores of matches on that token.</item>
<item>Set it to values greater than one to inhibit exact phrase matches.
If, for example, one does not want phrases to match across removed stop
words, then one could build a stop word filter that removes stop words and
also sets the increment to the number of stop words removed before each
non-stop word. Then exact phrase queries will only match when the terms
occur with no intervening stop words.</item>
</list>
</summary>
<seealso cref="T:Lucene.Net.Index.TermPositions">
</seealso>
</member>
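The zero-increment technique from the first bullet above can be sketched as the core of a synonym-stacking filter step (hypothetical fragment; how the attributes are obtained inside a real TokenFilter is assumed).

```csharp
// Sketch: stacking a synonym at the same position as the token just emitted,
// by setting the synonym's position increment to zero.
using Lucene.Net.Analysis.Tokenattributes;

static class SynonymSketch
{
    public static void EmitStackedSynonym(
        ITermAttribute termAtt,
        IPositionIncrementAttribute posIncrAtt,
        string synonym)
    {
        termAtt.SetTermBuffer(synonym);   // replace the term text
        posIncrAtt.PositionIncrement = 0; // zero: same position as prior token
    }
}
```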
<member name="P:Lucene.Net.Analysis.Tokenattributes.IPositionIncrementAttribute.PositionIncrement">
<summary>Gets or sets the position increment. The default value is one.
</summary>
<value> the distance from the prior term </value>
</member>
<member name="T:Lucene.Net.Analysis.Tokenattributes.IFlagsAttribute">
<summary> This attribute can be used to pass different flags down the <see cref="T:Lucene.Net.Analysis.Tokenizer"/> chain,
e.g. from one TokenFilter to another.
</summary>
</member>
<member name="P:Lucene.Net.Analysis.Tokenattributes.IFlagsAttribute.Flags">
<summary> EXPERIMENTAL: While we think this is here to stay, we may want to change it to be a long.
<p/>
Get the bitset for any bits that have been set. This is completely distinct from <see cref="P:Lucene.Net.Analysis.Tokenattributes.ITypeAttribute.Type"/>, although they do share similar purposes.
The flags can be used to encode information about the token for use by other <see cref="T:Lucene.Net.Analysis.TokenFilter"/>s.
</summary>
<value> The bits </value>
</member>
<member name="T:Lucene.Net.Analysis.Tokenattributes.IOffsetAttribute">
<summary> The start and end character offset of a Token. </summary>
</member>
<member name="M:Lucene.Net.Analysis.Tokenattributes.IOffsetAttribute.SetOffset(System.Int32,System.Int32)">
<summary>Sets the starting and ending offset.
See <see cref="P:Lucene.Net.Analysis.Tokenattributes.IOffsetAttribute.StartOffset"/> and <see cref="P:Lucene.Net.Analysis.Tokenattributes.IOffsetAttribute.EndOffset"/>.
</summary>
</member>
<member name="P:Lucene.Net.Analysis.Tokenattributes.IOffsetAttribute.StartOffset">
<summary>Returns this Token's starting offset, the position of the first character
corresponding to this token in the source text.
Note that the difference between endOffset() and startOffset() may not be
equal to termText.length(), as the term text may have been altered by a
stemmer or some other filter.
</summary>
</member>
<member name="P:Lucene.Net.Analysis.Tokenattributes.IOffsetAttribute.EndOffset">
<summary>Returns this Token's ending offset, one greater than the position of the
last character corresponding to this token in the source text. The length
of the token in the source text is (endOffset - startOffset).
</summary>
</member>
<member name="T:Lucene.Net.Analysis.Tokenattributes.IPayloadAttribute">
<summary> The payload of a Token. See also <see cref="P:Lucene.Net.Analysis.Tokenattributes.IPayloadAttribute.Payload"/>.</summary>
</member>
<member name="P:Lucene.Net.Analysis.Tokenattributes.IPayloadAttribute.Payload">
<summary> Returns this Token's payload.</summary>
</member>
<member name="M:Lucene.Net.Analysis.Token.#ctor">
<summary>Constructs a Token with null text. </summary>
</member>
<member name="M:Lucene.Net.Analysis.Token.#ctor(System.Int32,System.Int32)">
<summary>Constructs a Token with null text and start and end
offsets.
</summary>
<param name="start">start offset in the source text</param>
<param name="end">end offset in the source text</param>
</member>
<member name="M:Lucene.Net.Analysis.Token.#ctor(System.Int32,System.Int32,System.String)">
<summary>Constructs a Token with null text, start and end
offsets, plus the Token type.
</summary>
<param name="start">start offset in the source text</param>
<param name="end">end offset in the source text</param>
<param name="typ">the lexical type of this Token</param>
</member>
<member name="M:Lucene.Net.Analysis.Token.#ctor(System.Int32,System.Int32,System.Int32)">
<summary> Constructs a Token with null text, start and end
offsets, plus flags. NOTE: flags is EXPERIMENTAL.
</summary>
<param name="start">start offset in the source text</param>
<param name="end">end offset in the source text</param>
<param name="flags">The bits to set for this token</param>
</member>
<member name="M:Lucene.Net.Analysis.Token.#ctor(System.String,System.Int32,System.Int32)">
<summary>Constructs a Token with the given term text, and start
and end offsets. The type defaults to "word."
<b>NOTE:</b> for better indexing speed you should
instead use the char[] termBuffer methods to set the
term text.
</summary>
<param name="text">term text</param>
<param name="start">start offset</param>
<param name="end">end offset</param>
</member>
<member name="M:Lucene.Net.Analysis.Token.#ctor(System.String,System.Int32,System.Int32,System.String)">
<summary>Constructs a Token with the given text, start and end
offsets, and type. <b>NOTE:</b> for better indexing
speed you should instead use the char[] termBuffer
methods to set the term text.
</summary>
<param name="text">term text</param>
<param name="start">start offset</param>
<param name="end">end offset</param>
<param name="typ">token type</param>
</member>
<member name="M:Lucene.Net.Analysis.Token.#ctor(System.String,System.Int32,System.Int32,System.Int32)">
<summary> Constructs a Token with the given term text, start and end
offsets, plus flags. <b>NOTE:</b> for better indexing
speed you should instead use the char[] termBuffer
methods to set the term text.
</summary>
<param name="text">term text</param>
<param name="start">start offset in the source text</param>
<param name="end">end offset in the source text</param>
<param name="flags">The bits to set for this token</param>
</member>
<member name="M:Lucene.Net.Analysis.Token.#ctor(System.Char[],System.Int32,System.Int32,System.Int32,System.Int32)">
<summary> Constructs a Token with the given term buffer (offset
and length), plus start and end offsets.
</summary>
<param name="startTermBuffer">buffer containing the term text</param>
<param name="termBufferOffset">index in the buffer of the first character of the term</param>
<param name="termBufferLength">number of characters of the term in the buffer</param>
<param name="start">start offset in the source text</param>
<param name="end">end offset in the source text</param>
</member>
<member name="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.Char[],System.Int32,System.Int32)">
<summary>Copies the contents of buffer, starting at offset for
length characters, into the termBuffer array.
</summary>
<param name="buffer">the buffer to copy</param>
<param name="offset">the index in the buffer of the first character to copy</param>
<param name="length">the number of characters to copy</param>
</member>
<member name="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.String)">
<summary>Copies the contents of buffer into the termBuffer array.</summary>
<param name="buffer">the buffer to copy
</param>
</member>
<member name="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.String,System.Int32,System.Int32)">
<summary>Copies the contents of buffer, starting at offset and continuing
for length characters, into the termBuffer array.
</summary>
<param name="buffer">the buffer to copy
</param>
<param name="offset">the index in the buffer of the first character to copy
</param>
<param name="length">the number of characters to copy
</param>
</member>
<member name="M:Lucene.Net.Analysis.Token.TermBuffer">
<summary>Returns the internal termBuffer character array which
you can then directly alter. If the array is too
small for your token, use <see cref="M:Lucene.Net.Analysis.Token.ResizeTermBuffer(System.Int32)"/>
to increase it. After
altering the buffer be sure to call <see cref="M:Lucene.Net.Analysis.Token.SetTermLength(System.Int32)"/>
to record the number of valid
characters that were placed into the termBuffer.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.Token.ResizeTermBuffer(System.Int32)">
<summary>Grows the termBuffer to at least size newSize, preserving the
existing content. Note: If the next operation is to change
the contents of the term buffer use
<see cref="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.Char[],System.Int32,System.Int32)"/>,
<see cref="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.String)"/>, or
<see cref="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.String,System.Int32,System.Int32)"/>
to optimally combine the resize with the setting of the termBuffer.
</summary>
<param name="newSize">minimum size of the new termBuffer
</param>
<returns> newly created termBuffer with length >= newSize
</returns>
</member>
<member name="M:Lucene.Net.Analysis.Token.GrowTermBuffer(System.Int32)">
<summary>Allocates a buffer char[] of at least newSize, without preserving the existing content.
It is always used in places that set the content.
</summary>
<param name="newSize">minimum size of the buffer
</param>
</member>
<member name="M:Lucene.Net.Analysis.Token.TermLength">
<summary>Return number of valid characters (length of the term)
in the termBuffer array.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.Token.SetTermLength(System.Int32)">
<summary>Set number of valid characters (length of the term) in
the termBuffer array. Use this to truncate the termBuffer
or to synchronize with external manipulation of the termBuffer.
Note: to grow the size of the array,
use <see cref="M:Lucene.Net.Analysis.Token.ResizeTermBuffer(System.Int32)"/> first.
</summary>
<param name="length">the truncated length
</param>
</member>
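The TermBuffer, ResizeTermBuffer and SetTermLength members documented above form one reuse-oriented workflow: read the internal array, alter it in place, then record the valid length. A minimal sketch of that workflow, assuming the Lucene.Net 3.0.3 Token API (the UpperCaseInPlace helper is illustrative, not part of the library):

```csharp
// Illustrative sketch against the Lucene.Net 3.0.3 Token API.
using Lucene.Net.Analysis;

static class TermBufferDemo
{
    // Upper-cases a token's term in place, without allocating a string.
    public static void UpperCaseInPlace(Token token)
    {
        char[] buffer = token.TermBuffer();   // direct reference to the internal array
        int length = token.TermLength();      // number of valid characters in the buffer
        for (int i = length - 1; i >= 0; i--)
            buffer[i] = char.ToUpperInvariant(buffer[i]);
        token.SetTermLength(length);          // record the valid length after altering the buffer
    }
}
```

Mutating the buffer directly avoids the per-token string allocation that the Term property would incur.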
<member name="M:Lucene.Net.Analysis.Token.SetOffset(System.Int32,System.Int32)">
<summary>Set the starting and ending offset.
See StartOffset() and EndOffset()
</summary>
</member>
<member name="M:Lucene.Net.Analysis.Token.Clear">
<summary>Resets the term text, payload, flags, position increment,
start offset, end offset, and token type to their defaults.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.Token.Clone(System.Char[],System.Int32,System.Int32,System.Int32,System.Int32)">
<summary>Makes a clone, but replaces the term buffer and
start/end offset in the process. This is more
efficient than doing a full clone (and then calling
setTermBuffer) because it saves a wasted copy of the old
termBuffer.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.Token.Reinit(System.Char[],System.Int32,System.Int32,System.Int32,System.Int32,System.String)">
<summary>Shorthand for calling <see cref="M:Lucene.Net.Analysis.Token.Clear"/>,
<see cref="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.Char[],System.Int32,System.Int32)"/>,
<see cref="P:Lucene.Net.Analysis.Token.StartOffset"/>,
<see cref="P:Lucene.Net.Analysis.Token.EndOffset"/>,
<see cref="P:Lucene.Net.Analysis.Token.Type"/>
</summary>
<returns> this Token instance
</returns>
</member>
<member name="M:Lucene.Net.Analysis.Token.Reinit(System.Char[],System.Int32,System.Int32,System.Int32,System.Int32)">
<summary>Shorthand for calling <see cref="M:Lucene.Net.Analysis.Token.Clear"/>,
<see cref="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.Char[],System.Int32,System.Int32)"/>,
<see cref="P:Lucene.Net.Analysis.Token.StartOffset"/>,
<see cref="P:Lucene.Net.Analysis.Token.EndOffset"/>, and
<see cref="P:Lucene.Net.Analysis.Token.Type"/> with Token.DEFAULT_TYPE
</summary>
<returns> this Token instance
</returns>
</member>
<member name="M:Lucene.Net.Analysis.Token.Reinit(System.String,System.Int32,System.Int32,System.String)">
<summary>Shorthand for calling <see cref="M:Lucene.Net.Analysis.Token.Clear"/>,
<see cref="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.String)"/>,
<see cref="P:Lucene.Net.Analysis.Token.StartOffset"/>,
<see cref="P:Lucene.Net.Analysis.Token.EndOffset"/>, and
<see cref="P:Lucene.Net.Analysis.Token.Type"/>
</summary>
<returns> this Token instance
</returns>
</member>
<member name="M:Lucene.Net.Analysis.Token.Reinit(System.String,System.Int32,System.Int32,System.Int32,System.Int32,System.String)">
<summary>Shorthand for calling <see cref="M:Lucene.Net.Analysis.Token.Clear"/>,
<see cref="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.String,System.Int32,System.Int32)"/>,
<see cref="P:Lucene.Net.Analysis.Token.StartOffset"/>,
<see cref="P:Lucene.Net.Analysis.Token.EndOffset"/>, and
<see cref="P:Lucene.Net.Analysis.Token.Type"/>
</summary>
<returns> this Token instance
</returns>
</member>
<member name="M:Lucene.Net.Analysis.Token.Reinit(System.String,System.Int32,System.Int32)">
<summary>Shorthand for calling <see cref="M:Lucene.Net.Analysis.Token.Clear"/>,
<see cref="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.String)"/>,
<see cref="P:Lucene.Net.Analysis.Token.StartOffset"/>,
<see cref="P:Lucene.Net.Analysis.Token.EndOffset"/>, and
<see cref="P:Lucene.Net.Analysis.Token.Type"/> with Token.DEFAULT_TYPE
</summary>
<returns> this Token instance
</returns>
</member>
<member name="M:Lucene.Net.Analysis.Token.Reinit(System.String,System.Int32,System.Int32,System.Int32,System.Int32)">
<summary>Shorthand for calling <see cref="M:Lucene.Net.Analysis.Token.Clear"/>,
<see cref="M:Lucene.Net.Analysis.Token.SetTermBuffer(System.String,System.Int32,System.Int32)"/>,
<see cref="P:Lucene.Net.Analysis.Token.StartOffset"/>,
<see cref="P:Lucene.Net.Analysis.Token.EndOffset"/>, and
<see cref="P:Lucene.Net.Analysis.Token.Type"/> with Token.DEFAULT_TYPE
</summary>
<returns> this Token instance
</returns>
</member>
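The Reinit overloads above exist so a single Token instance can be recycled across terms rather than allocated anew for each one. A hedged sketch, assuming the Lucene.Net 3.0.3 API (the literal term and offsets are illustrative):

```csharp
// Illustrative sketch: recycling one Token instead of allocating per term.
using Lucene.Net.Analysis;

static class ReinitDemo
{
    public static void Reuse(Token token)
    {
        // One call replaces Clear() followed by SetTermBuffer("lucene")
        // and setting the start/end offsets individually.
        token.Reinit("lucene", 0, 6);
    }
}
```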
<member name="M:Lucene.Net.Analysis.Token.Reinit(Lucene.Net.Analysis.Token)">
<summary> Copy the prototype token's fields into this one. Note: Payloads are shared.</summary>
<param name="prototype">
</param>
</member>
<member name="M:Lucene.Net.Analysis.Token.Reinit(Lucene.Net.Analysis.Token,System.String)">
<summary> Copy the prototype token's fields into this one, with a different term. Note: Payloads are shared.</summary>
<param name="prototype">
</param>
<param name="newTerm">
</param>
</member>
<member name="M:Lucene.Net.Analysis.Token.Reinit(Lucene.Net.Analysis.Token,System.Char[],System.Int32,System.Int32)">
<summary> Copy the prototype token's fields into this one, with a different term. Note: Payloads are shared.</summary>
<param name="prototype">
</param>
<param name="newTermBuffer">
</param>
<param name="offset">
</param>
<param name="length">
</param>
</member>
<member name="F:Lucene.Net.Analysis.Token.TOKEN_ATTRIBUTE_FACTORY">
<summary>
Convenience factory that returns <c>Token</c> as the implementation for the basic
attributes and returns the default implementation (with "Impl" appended) for all other
attributes.
@since 3.0
</summary>
</member>
<member name="P:Lucene.Net.Analysis.Token.PositionIncrement">
<summary>Set the position increment. This determines the position of this token
relative to the previous Token in a <see cref="T:Lucene.Net.Analysis.TokenStream"/>, used in phrase
searching.
<p/>The default value is one.
<p/>Some common uses for this are:<list>
<item>Set it to zero to put multiple terms in the same position. This is
useful if, e.g., a word has multiple stems. Searches for phrases
including either stem will match. In this case, all but the first stem's
increment should be set to zero: the increment of the first instance
should be one. Repeating a token with an increment of zero can also be
used to boost the scores of matches on that token.</item>
<item>Set it to values greater than one to inhibit exact phrase matches.
If, for example, one does not want phrases to match across removed stop
words, then one could build a stop word filter that removes stop words and
also sets the increment to the number of stop words removed before each
non-stop word. Then exact phrase queries will only match when the terms
occur with no intervening stop words.</item>
</list>
</summary>
<value> the distance from the prior term </value>
<seealso cref="T:Lucene.Net.Index.TermPositions">
</seealso>
</member>
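The zero-increment technique described in the first list item above can be sketched as follows, assuming the Lucene.Net 3.0.3 Token API (the MakeSynonymOf helper is illustrative):

```csharp
// Illustrative sketch: stacking a synonym at the same position as the original token.
using Lucene.Net.Analysis;

static class PositionIncrementDemo
{
    public static Token MakeSynonymOf(Token original, string synonym)
    {
        var syn = new Token(synonym, original.StartOffset, original.EndOffset);
        // Increment 0 places the synonym at the same position as the original,
        // so phrase queries match either form.
        syn.PositionIncrement = 0;
        return syn;
    }
}
```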
<member name="P:Lucene.Net.Analysis.Token.Term">
<summary>Returns the Token's term text.
This method has a performance penalty
because the text is stored internally in a char[]. If
possible, use <see cref="M:Lucene.Net.Analysis.Token.TermBuffer"/> and <see cref="M:Lucene.Net.Analysis.Token.TermLength"/>
directly instead. If you really need a
String, use this method, which is nothing more than
a convenience call to <b>new String(token.termBuffer(), 0, token.termLength())</b>
</summary>
</member>
<member name="P:Lucene.Net.Analysis.Token.StartOffset">
<summary>Gets or sets this Token's starting offset, the position of the first character
corresponding to this token in the source text.
Note that the difference between endOffset() and startOffset() may not be
equal to <see cref="M:Lucene.Net.Analysis.Token.TermLength"/>, as the term text may have been altered by a
stemmer or some other filter.
</summary>
</member>
<member name="P:Lucene.Net.Analysis.Token.EndOffset">
<summary>Gets or sets this Token's ending offset, one greater than the position of the
last character corresponding to this token in the source text. The length
of the token in the source text is (endOffset - startOffset).
</summary>
</member>
<member name="P:Lucene.Net.Analysis.Token.Type">
<summary>Returns this Token's lexical type. Defaults to "word". </summary>
</member>
<member name="P:Lucene.Net.Analysis.Token.Flags">
<summary> EXPERIMENTAL: While we think this is here to stay, we may want to change it to be a long.
<p/>
Get the bitset for any bits that have been set. This is completely distinct from <see cref="P:Lucene.Net.Analysis.Token.Type"/>, although they do share similar purposes.
The flags can be used to encode information about the token for use by other <see cref="T:Lucene.Net.Analysis.TokenFilter"/>s.
</summary>
<value> The bits </value>
</member>
<member name="P:Lucene.Net.Analysis.Token.Payload">
<summary> Returns this Token's payload.</summary>
</member>
<member name="T:Lucene.Net.Analysis.Token.TokenAttributeFactory">
<summary>
<b>Expert</b>: Creates an AttributeFactory returning <see cref="T:Lucene.Net.Analysis.Token"/> as the instance for the basic attributes
and for all other attributes calls the given delegate factory.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.Token.TokenAttributeFactory.#ctor(Lucene.Net.Util.AttributeSource.AttributeFactory)">
<summary>
<b>Expert</b>: Creates an AttributeFactory returning <see cref="T:Lucene.Net.Analysis.Token"/> as the instance for the basic attributes
and for all other attributes calls the given delegate factory.
</summary>
</member>
<member name="T:Lucene.Net.Analysis.Tokenattributes.FlagsAttribute">
<summary> This attribute can be used to pass different flags down the tokenizer chain,
e.g. from one TokenFilter to another.
</summary>
</member>
<member name="P:Lucene.Net.Analysis.Tokenattributes.FlagsAttribute.Flags">
<summary> EXPERIMENTAL: While we think this is here to stay, we may want to change it to be a long.
<p/>
Get the bitset for any bits that have been set. This is completely distinct from <see cref="P:Lucene.Net.Analysis.Tokenattributes.ITypeAttribute.Type"/>, although they do share similar purposes.
The flags can be used to encode information about the token for use by other <see cref="T:Lucene.Net.Analysis.TokenFilter"/>s.
</summary>
<value> The bits </value>
</member>
<member name="T:Lucene.Net.Analysis.Tokenattributes.OffsetAttribute">
<summary> The start and end character offset of a Token. </summary>
</member>
<member name="M:Lucene.Net.Analysis.Tokenattributes.OffsetAttribute.SetOffset(System.Int32,System.Int32)">
<summary>Set the starting and ending offset.
See StartOffset() and EndOffset()
</summary>
</member>
<member name="P:Lucene.Net.Analysis.Tokenattributes.OffsetAttribute.StartOffset">
<summary>Returns this Token's starting offset, the position of the first character
corresponding to this token in the source text.
Note that the difference between endOffset() and startOffset() may not be
equal to termText.length(), as the term text may have been altered by a
stemmer or some other filter.
</summary>
</member>
<member name="P:Lucene.Net.Analysis.Tokenattributes.OffsetAttribute.EndOffset">
<summary>Returns this Token's ending offset, one greater than the position of the
last character corresponding to this token in the source text. The length
of the token in the source text is (endOffset - startOffset).
</summary>
</member>
<member name="T:Lucene.Net.Analysis.Tokenattributes.PayloadAttribute">
<summary> The payload of a Token. See also <see cref="P:Lucene.Net.Analysis.Tokenattributes.PayloadAttribute.Payload"/>.</summary>
</member>
<member name="M:Lucene.Net.Analysis.Tokenattributes.PayloadAttribute.#ctor">
<summary> Initialize this attribute with no payload.</summary>
</member>
<member name="M:Lucene.Net.Analysis.Tokenattributes.PayloadAttribute.#ctor(Lucene.Net.Index.Payload)">
<summary> Initialize this attribute with the given payload. </summary>
</member>
<member name="P:Lucene.Net.Analysis.Tokenattributes.PayloadAttribute.Payload">
<summary> Returns this Token's payload.</summary>
</member>
<member name="T:Lucene.Net.Analysis.Tokenattributes.PositionIncrementAttribute">
<summary>The positionIncrement determines the position of this token
relative to the previous Token in a <see cref="T:Lucene.Net.Analysis.TokenStream"/>, used in phrase
searching.
<p/>The default value is one.
<p/>Some common uses for this are:<list>
<item>Set it to zero to put multiple terms in the same position. This is
useful if, e.g., a word has multiple stems. Searches for phrases
including either stem will match. In this case, all but the first stem's
increment should be set to zero: the increment of the first instance
should be one. Repeating a token with an increment of zero can also be
used to boost the scores of matches on that token.</item>
<item>Set it to values greater than one to inhibit exact phrase matches.
If, for example, one does not want phrases to match across removed stop
words, then one could build a stop word filter that removes stop words and
also sets the increment to the number of stop words removed before each
non-stop word. Then exact phrase queries will only match when the terms
occur with no intervening stop words.</item>
</list>
</summary>
</member>
<member name="P:Lucene.Net.Analysis.Tokenattributes.PositionIncrementAttribute.PositionIncrement">
<summary>Set the position increment. The default value is one.
</summary>
<value> the distance from the prior term </value>
</member>
<member name="T:Lucene.Net.Analysis.Tokenattributes.TermAttribute">
<summary> The term text of a Token.</summary>
</member>
<member name="M:Lucene.Net.Analysis.Tokenattributes.TermAttribute.SetTermBuffer(System.Char[],System.Int32,System.Int32)">
<summary>Copies the contents of buffer, starting at offset for
length characters, into the termBuffer array.
</summary>
<param name="buffer">the buffer to copy
</param>
<param name="offset">the index in the buffer of the first character to copy
</param>
<param name="length">the number of characters to copy
</param>
</member>
<member name="M:Lucene.Net.Analysis.Tokenattributes.TermAttribute.SetTermBuffer(System.String)">
<summary>Copies the contents of buffer into the termBuffer array.</summary>
<param name="buffer">the buffer to copy
</param>
</member>
<member name="M:Lucene.Net.Analysis.Tokenattributes.TermAttribute.SetTermBuffer(System.String,System.Int32,System.Int32)">
<summary>Copies the contents of buffer, starting at offset and continuing
for length characters, into the termBuffer array.
</summary>
<param name="buffer">the buffer to copy
</param>
<param name="offset">the index in the buffer of the first character to copy
</param>
<param name="length">the number of characters to copy
</param>
</member>
<member name="M:Lucene.Net.Analysis.Tokenattributes.TermAttribute.TermBuffer">
<summary>Returns the internal termBuffer character array which
you can then directly alter. If the array is too
small for your token, use <see cref="M:Lucene.Net.Analysis.Tokenattributes.TermAttribute.ResizeTermBuffer(System.Int32)"/>
to increase it. After
altering the buffer be sure to call <see cref="M:Lucene.Net.Analysis.Tokenattributes.TermAttribute.SetTermLength(System.Int32)"/>
to record the number of valid
characters that were placed into the termBuffer.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.Tokenattributes.TermAttribute.ResizeTermBuffer(System.Int32)">
<summary>Grows the termBuffer to at least size newSize, preserving the
existing content. Note: If the next operation is to change
the contents of the term buffer use
<see cref="M:Lucene.Net.Analysis.Tokenattributes.TermAttribute.SetTermBuffer(System.Char[],System.Int32,System.Int32)"/>,
<see cref="M:Lucene.Net.Analysis.Tokenattributes.TermAttribute.SetTermBuffer(System.String)"/>, or
<see cref="M:Lucene.Net.Analysis.Tokenattributes.TermAttribute.SetTermBuffer(System.String,System.Int32,System.Int32)"/>
to optimally combine the resize with the setting of the termBuffer.
</summary>
<param name="newSize">minimum size of the new termBuffer
</param>
<returns> newly created termBuffer with length >= newSize
</returns>
</member>
<member name="M:Lucene.Net.Analysis.Tokenattributes.TermAttribute.GrowTermBuffer(System.Int32)">
<summary>Allocates a buffer char[] of at least newSize, without preserving the existing content.
It is always used in places that set the content.
</summary>
<param name="newSize">minimum size of the buffer
</param>
</member>
<member name="M:Lucene.Net.Analysis.Tokenattributes.TermAttribute.TermLength">
<summary>Return number of valid characters (length of the term)
in the termBuffer array.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.Tokenattributes.TermAttribute.SetTermLength(System.Int32)">
<summary>Set number of valid characters (length of the term) in
the termBuffer array. Use this to truncate the termBuffer
or to synchronize with external manipulation of the termBuffer.
Note: to grow the size of the array,
use <see cref="M:Lucene.Net.Analysis.Tokenattributes.TermAttribute.ResizeTermBuffer(System.Int32)"/> first.
</summary>
<param name="length">the truncated length
</param>
</member>
<member name="P:Lucene.Net.Analysis.Tokenattributes.TermAttribute.Term">
<summary>Returns the Token's term text.
This method has a performance penalty
because the text is stored internally in a char[]. If
possible, use <see cref="M:Lucene.Net.Analysis.Tokenattributes.TermAttribute.TermBuffer"/> and
<see cref="M:Lucene.Net.Analysis.Tokenattributes.TermAttribute.TermLength"/> directly instead. If you
really need a String, use this method, which is nothing more than
a convenience call to <b>new String(token.termBuffer(), 0, token.termLength())</b>
</summary>
</member>
<member name="T:Lucene.Net.Analysis.Tokenattributes.TypeAttribute">
<summary> A Token's lexical type. The default value is "word". </summary>
</member>
<member name="P:Lucene.Net.Analysis.Tokenattributes.TypeAttribute.Type">
<summary>Returns this Token's lexical type. Defaults to "word". </summary>
</member>
<member name="T:Lucene.Net.Analysis.WhitespaceAnalyzer">
<summary>An Analyzer that uses <see cref="T:Lucene.Net.Analysis.WhitespaceTokenizer"/>. </summary>
</member>
<member name="T:Lucene.Net.Analysis.WhitespaceTokenizer">
<summary>A WhitespaceTokenizer is a tokenizer that divides text at whitespace.
Adjacent sequences of non-whitespace characters form tokens.
</summary>
</member>
<member name="M:Lucene.Net.Analysis.WhitespaceTokenizer.#ctor(System.IO.TextReader)">
<summary>Construct a new WhitespaceTokenizer. </summary>
</member>
<member name="M:Lucene.Net.Analysis.WhitespaceTokenizer.#ctor(Lucene.Net.Util.AttributeSource,System.IO.TextReader)">
<summary>Construct a new WhitespaceTokenizer using a given <see cref="T:Lucene.Net.Util.AttributeSource"/>. </summary>
</member>
<member name="M:Lucene.Net.Analysis.WhitespaceTokenizer.#ctor(Lucene.Net.Util.AttributeSource.AttributeFactory,System.IO.TextReader)">
<summary>Construct a new WhitespaceTokenizer using a given <see cref="T:Lucene.Net.Util.AttributeSource.AttributeFactory"/>. </summary>
</member>
<member name="M:Lucene.Net.Analysis.WhitespaceTokenizer.IsTokenChar(System.Char)">
<summary>Collects only characters which do not satisfy
<see cref="M:System.Char.IsWhiteSpace(System.Char)"/>.
</summary>
</member>
<member name="T:Lucene.Net.Analysis.WordlistLoader">
<summary> Loader for text files that represent a list of stopwords.</summary>
</member>
<member name="M:Lucene.Net.Analysis.WordlistLoader.GetWordSet(System.IO.FileInfo)">
<summary> Loads a text file and adds every line as an entry to a HashSet (omitting
leading and trailing whitespace). Every line of the file should contain only
one word. The words need to be in lowercase if you make use of an
Analyzer which uses LowerCaseFilter (like StandardAnalyzer).
</summary>
<param name="wordfile">File containing the wordlist</param>
<returns> A HashSet with the file's words</returns>
</member>
<member name="M:Lucene.Net.Analysis.WordlistLoader.GetWordSet(System.IO.FileInfo,System.String)">
<summary> Loads a text file and adds every non-comment line as an entry to a HashSet (omitting
leading and trailing whitespace). Every line of the file should contain only
one word. The words need to be in lowercase if you make use of an
Analyzer which uses LowerCaseFilter (like StandardAnalyzer).
</summary>
<param name="wordfile">File containing the wordlist</param>
<param name="comment">The comment string to ignore</param>
<returns> A HashSet with the file's words</returns>
</member>
<member name="M:Lucene.Net.Analysis.WordlistLoader.GetWordSet(System.IO.TextReader)">
<summary> Reads lines from a Reader and adds every line as an entry to a HashSet (omitting
leading and trailing whitespace). Every line of the Reader should contain only
one word. The words need to be in lowercase if you make use of an
Analyzer which uses LowerCaseFilter (like StandardAnalyzer).
</summary>
<param name="reader">Reader containing the wordlist</param>
<returns>A HashSet with the reader's words</returns>
</member>
<member name="M:Lucene.Net.Analysis.WordlistLoader.GetWordSet(System.IO.TextReader,System.String)">
<summary> Reads lines from a Reader and adds every non-comment line as an entry to a HashSet (omitting
leading and trailing whitespace). Every line of the Reader should contain only
one word. The words need to be in lowercase if you make use of an
Analyzer which uses LowerCaseFilter (like StandardAnalyzer).
</summary>
<param name="reader">Reader containing the wordlist
</param>
<param name="comment">The string representing a comment.
</param>
<returns> A HashSet with the reader's words
</returns>
</member>
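The GetWordSet overloads above can be used to feed an analyzer a custom stop-word list. A hedged sketch, assuming the Lucene.Net 3.0.3 API (the "stopwords.txt" file name and "#" comment marker are illustrative):

```csharp
// Illustrative sketch: loading a stop-word list with WordlistLoader.
using System.IO;
using Lucene.Net.Analysis;

static class WordlistDemo
{
    public static void Load()
    {
        using (var reader = new StreamReader("stopwords.txt"))
        {
            // Lines beginning with "#" are skipped; every other trimmed
            // line becomes one entry in the returned set.
            var stopWords = WordlistLoader.GetWordSet(reader, "#");
            System.Console.WriteLine(stopWords.Count);
        }
    }
}
```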
<member name="M:Lucene.Net.Analysis.WordlistLoader.GetStemDict(System.IO.FileInfo)">
<summary> Reads a stem dictionary. Each line contains:
<c>word<b>\t</b>stem</c>
(i.e. two tab-separated words)
</summary>
<returns> stem dictionary that overrules the stemming algorithm
</returns>
<throws> IOException </throws>
</member>
<member name="T:Lucene.Net.Documents.AbstractField">
<summary>
Abstract base class providing the common implementation shared by the concrete <see cref="T:Lucene.Net.Documents.Field"/> types.
</summary>
</member>
<member name="T:Lucene.Net.Documents.IFieldable">
<summary> Synonymous with <see cref="T:Lucene.Net.Documents.Field"/>.
<p/><b>WARNING</b>: This interface may change within minor versions, despite Lucene's backward compatibility requirements.
This means new methods may be added from version to version. This change only affects the Fieldable API; other backwards
compatibility promises remain intact. For example, Lucene can still
read and write indices created within the same major version.
<p/>
</summary>
</member>
<member name="M:Lucene.Net.Documents.IFieldable.GetBinaryValue">
<summary> Return the raw byte[] for the binary field. Note that
you must also call <see cref="P:Lucene.Net.Documents.IFieldable.BinaryLength"/> and <see cref="P:Lucene.Net.Documents.IFieldable.BinaryOffset"/>
to know which range of bytes in this
returned array belong to the field.
</summary>
<returns> reference to the Field value as byte[]. </returns>
</member>
<member name="M:Lucene.Net.Documents.IFieldable.GetBinaryValue(System.Byte[])">
<summary> Return the raw byte[] for the binary field. Note that
you must also call <see cref="P:Lucene.Net.Documents.IFieldable.BinaryLength"/> and <see cref="P:Lucene.Net.Documents.IFieldable.BinaryOffset"/>
to know which range of bytes in this
returned array belong to the field.<p/>
About reuse: if you pass in the result byte[] and it is
used, likely the underlying implementation will hold
onto this byte[] and return it in future calls to
<see cref="M:Lucene.Net.Documents.IFieldable.GetBinaryValue"/> or <see cref="M:Lucene.Net.Documents.IFieldable.GetBinaryValue(System.Byte[])"/>.
So if you subsequently re-use the same byte[] elsewhere
it will alter this Fieldable's value.
</summary>
<param name="result"> User defined buffer that will be used if
possible. If this is null or not large enough, a new
buffer is allocated
</param>
<returns> reference to the Field value as byte[].
</returns>
</member>
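As the summary above stresses, the returned byte[] may be larger than the field's value, so BinaryOffset and BinaryLength must be honored when copying it out. A hedged sketch, assuming the Lucene.Net 3.0.3 IFieldable API (the CopyBinaryValue helper is illustrative):

```csharp
// Illustrative sketch: extracting exactly the bytes that belong to a binary field.
using System;
using Lucene.Net.Documents;

static class BinaryFieldDemo
{
    public static byte[] CopyBinaryValue(IFieldable field)
    {
        byte[] raw = field.GetBinaryValue();        // reference into a possibly larger array
        var result = new byte[field.BinaryLength];  // only this range belongs to the field
        Array.Copy(raw, field.BinaryOffset, result, 0, field.BinaryLength);
        return result;
    }
}
```

Copying into a fresh array also avoids the aliasing hazard described above, where reusing the returned buffer elsewhere would silently alter the field's value.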
<member name="P:Lucene.Net.Documents.IFieldable.Boost">
<summary>Gets or sets the boost factor for hits for this field. This value will be
multiplied into the score of all hits on this field of this
document.
<p/>The boost is multiplied by <see cref="P:Lucene.Net.Documents.Document.Boost"/> of the document
containing this field. If a document has multiple fields with the same
name, all such values are multiplied together. This product is then
used to compute the norm factor for the field. By
default, in the <see cref="M:Lucene.Net.Search.Similarity.ComputeNorm(System.String,Lucene.Net.Index.FieldInvertState)"/>
method, the boost value is multiplied
by the <see cref="M:Lucene.Net.Search.Similarity.LengthNorm(System.String,System.Int32)"/>
and then rounded by <see cref="M:Lucene.Net.Search.Similarity.EncodeNorm(System.Single)"/> before it is stored in the
index. One should attempt to ensure that this product does not overflow
the range of that encoding.
<p/>The default value is 1.0.
<p/>Note: this value is not stored directly with the document in the index.
Documents returned from <see cref="M:Lucene.Net.Index.IndexReader.Document(System.Int32)"/> and
<see cref="M:Lucene.Net.Search.Searcher.Doc(System.Int32)"/> may thus not have the same value present as when
this field was indexed.
</summary>
<seealso cref="P:Lucene.Net.Documents.Document.Boost">
</seealso>
<seealso cref="M:Lucene.Net.Search.Similarity.ComputeNorm(System.String,Lucene.Net.Index.FieldInvertState)">
</seealso>
<seealso cref="M:Lucene.Net.Search.Similarity.EncodeNorm(System.Single)">
</seealso>
</member>
<member name="P:Lucene.Net.Documents.IFieldable.Name">
<summary>Returns the name of the field as an interned string.
For example "date", "title", "body", ...
</summary>
</member>
<member name="P:Lucene.Net.Documents.IFieldable.StringValue">
<summary>The value of the field as a String, or null.
<p/>
For indexing, if isStored()==true, the stringValue() will be used as the stored field value
unless isBinary()==true, in which case GetBinaryValue() will be used.
If isIndexed()==true and isTokenized()==false, this String value will be indexed as a single token.
If isIndexed()==true and isTokenized()==true, then tokenStreamValue() will be used to generate indexed tokens if not null,
else readerValue() will be used to generate indexed tokens if not null, else stringValue() will be used to generate tokens.
</summary>
</member>
<member name="P:Lucene.Net.Documents.IFieldable.ReaderValue">
<summary>The value of the field as a Reader, which can be used at index time to generate indexed tokens.</summary>
<seealso cref="P:Lucene.Net.Documents.IFieldable.StringValue">
</seealso>
</member>
<member name="P:Lucene.Net.Documents.IFieldable.TokenStreamValue">
<summary>The TokenStream for this field to be used when indexing, or null.</summary>
<seealso cref="P:Lucene.Net.Documents.IFieldable.StringValue">
</seealso>
</member>
<member name="P:Lucene.Net.Documents.IFieldable.IsStored">
<summary>True if the value of the field is to be stored in the index for return
with search hits.
</summary>
</member>
<member name="P:Lucene.Net.Documents.IFieldable.IsIndexed">
<summary>True if the value of the field is to be indexed, so that it may be
searched on.
</summary>
</member>
<member name="P:Lucene.Net.Documents.IFieldable.IsTokenized">
<summary>True if the value of the field should be tokenized as text prior to
indexing. Un-tokenized fields are indexed as a single word and may not be
Reader-valued.
</summary>
</member>
<member name="P:Lucene.Net.Documents.IFieldable.IsTermVectorStored">
<summary>True if the term or terms used to index this field are stored as a term
vector, available from <see cref="M:Lucene.Net.Index.IndexReader.GetTermFreqVector(System.Int32,System.String)"/>.
These methods do not provide access to the original content of the field,
only to terms used to index it. If the original content must be
preserved, use the <c>stored</c> attribute instead.
</summary>
<seealso cref="M:Lucene.Net.Index.IndexReader.GetTermFreqVector(System.Int32,System.String)">
</seealso>
</member>
<member name="P:Lucene.Net.Documents.IFieldable.IsStoreOffsetWithTermVector">
<summary> True if terms are stored as term vector together with their offsets
(start and end position in source text).
</summary>
</member>
<member name="P:Lucene.Net.Documents.IFieldable.IsStorePositionWithTermVector">
<summary> True if terms are stored as term vector together with their token positions.</summary>
</member>
<member name="P:Lucene.Net.Documents.IFieldable.IsBinary">
<summary>True if the value of the field is stored as binary </summary>
</member>
<member name="P:Lucene.Net.Documents.IFieldable.OmitNorms">
<summary>
True if norms are omitted for this indexed field.
<para>
Expert:
If set, omit normalization factors associated with this indexed field.
This effectively disables indexing boosts and length normalization for this field.
</para>
</summary>
</member>
<member name="P:Lucene.Net.Documents.IFieldable.IsLazy">
<summary> Indicates whether a Field is lazy or not. The semantics of lazy loading are such that if a Field is lazily loaded, retrieving
its values via <see cref="P:Lucene.Net.Documents.IFieldable.StringValue"/> or <see cref="M:Lucene.Net.Documents.IFieldable.GetBinaryValue"/> is only valid as long as the <see cref="T:Lucene.Net.Index.IndexReader"/> that
retrieved the <see cref="T:Lucene.Net.Documents.Document"/> is still open.
</summary>
<value> true if this field can be loaded lazily </value>
</member>
<member name="P:Lucene.Net.Documents.IFieldable.BinaryOffset">
<summary> Returns the offset into the byte[] segment that is used as the value;
if the Field is not binary, the returned value is undefined.
</summary>
<value> index of the first character in byte[] segment that represents this Field value </value>
</member>
<member name="P:Lucene.Net.Documents.IFieldable.BinaryLength">
<summary> Returns the length of the byte[] segment that is used as the value;
if the Field is not binary, the returned value is undefined.
</summary>
<value> length of byte[] segment that represents this Field value </value>
</member>
<member name="P:Lucene.Net.Documents.IFieldable.OmitTermFreqAndPositions">
<summary>Expert:
<para>
If set, omit term freq, positions and payloads from
postings for this field.
</para>
<para>
<b>NOTE</b>: While this option reduces storage space
required in the index, it also means any query
requiring positional information, such as
<see cref="T:Lucene.Net.Search.PhraseQuery"/> or
<see cref="T:Lucene.Net.Search.Spans.SpanQuery"/>
subclasses will silently fail to find results.
</para>
</summary>
</member>
<member name="M:Lucene.Net.Documents.AbstractField.GetBinaryValue">
<summary> Return the raw byte[] for the binary field. Note that
you must also call <see cref="P:Lucene.Net.Documents.AbstractField.BinaryLength"/> and <see cref="P:Lucene.Net.Documents.AbstractField.BinaryOffset"/>
to know which range of bytes in this
returned array belong to the field.
</summary>
<returns> reference to the Field value as byte[]. </returns>
</member>
<member name="M:Lucene.Net.Documents.AbstractField.ToString">
<summary>Prints a Field for human consumption. </summary>
</member>
<member name="P:Lucene.Net.Documents.AbstractField.Boost">
<summary>Gets or sets the boost factor for hits for this field.
<p/>The default value is 1.0.
<p/>Note: this value is not stored directly with the document in the index.
Documents returned from <see cref="M:Lucene.Net.Index.IndexReader.Document(System.Int32)"/> and
<see cref="M:Lucene.Net.Search.Searcher.Doc(System.Int32)"/> may thus not have the same value present as when
this field was indexed.
</summary>
</member>
<member name="P:Lucene.Net.Documents.AbstractField.Name">
<summary>Returns the name of the field as an interned string.
For example "date", "title", "body", ...
</summary>
</member>
<member name="P:Lucene.Net.Documents.AbstractField.IsStored">
<summary>True iff the value of the field is to be stored in the index for return
with search hits. It is an error for this to be true if a field is
Reader-valued.
</summary>
</member>
<member name="P:Lucene.Net.Documents.AbstractField.IsIndexed">
<summary>True iff the value of the field is to be indexed, so that it may be
searched on.
</summary>
</member>
<member name="P:Lucene.Net.Documents.AbstractField.IsTokenized">
<summary>True iff the value of the field should be tokenized as text prior to
indexing. Un-tokenized fields are indexed as a single word and may not be
Reader-valued.
</summary>
</member>
<member name="P:Lucene.Net.Documents.AbstractField.IsTermVectorStored">
<summary>True iff the term or terms used to index this field are stored as a term
vector, available from <see cref="M:Lucene.Net.Index.IndexReader.GetTermFreqVector(System.Int32,System.String)"/>.
These methods do not provide access to the original content of the field,
only to terms used to index it. If the original content must be
preserved, use the <c>stored</c> attribute instead.
</summary>
<seealso cref="M:Lucene.Net.Index.IndexReader.GetTermFreqVector(System.Int32,System.String)">
</seealso>
</member>
<member name="P:Lucene.Net.Documents.AbstractField.IsStoreOffsetWithTermVector">
<summary> True iff terms are stored as term vector together with their offsets
(start and end position in source text).
</summary>
</member>
<member name="P:Lucene.Net.Documents.AbstractField.IsStorePositionWithTermVector">
<summary> True iff terms are stored as term vector together with their token positions.</summary>
</member>
<member name="P:Lucene.Net.Documents.AbstractField.IsBinary">
<summary>True iff the value of the field is stored as binary </summary>
</member>
<member name="P:Lucene.Net.Documents.AbstractField.BinaryLength">
<summary> Returns the length of the byte[] segment that is used as the value;
if the Field is not binary, the returned value is undefined.
</summary>
<value> length of byte[] segment that represents this Field value </value>
</member>
<member name="P:Lucene.Net.Documents.AbstractField.BinaryOffset">
<summary> Returns the offset into the byte[] segment that is used as the value;
if the Field is not binary, the returned value is undefined.
</summary>
<value> index of the first character in byte[] segment that represents this Field value </value>
</member>
<member name="P:Lucene.Net.Documents.AbstractField.OmitNorms">
<summary>True if norms are omitted for this indexed field </summary>
</member>
<member name="P:Lucene.Net.Documents.AbstractField.OmitTermFreqAndPositions">
<summary>Expert:
If set, omit term freq, positions and payloads from
postings for this field.
<p/><b>NOTE</b>: While this option reduces storage space
required in the index, it also means any query
requiring positional information, such as <see cref="T:Lucene.Net.Search.PhraseQuery"/>
or <see cref="T:Lucene.Net.Search.Spans.SpanQuery"/> subclasses will
silently fail to find results.
</summary>
</member>
<member name="T:Lucene.Net.Documents.CompressionTools">
<summary>Simple utility class providing static methods to
compress and decompress binary data for stored fields.
This class uses Deflater and Inflater
classes (java.util.zip in the original Java version of Lucene) to compress and decompress.
</summary>
</member>
<member name="M:Lucene.Net.Documents.CompressionTools.Compress(System.Byte[],System.Int32,System.Int32,System.Int32)">
<summary>Compresses the specified byte range using the
specified compressionLevel (constants are defined in
java.util.zip.Deflater).
</summary>
</member>
<member name="M:Lucene.Net.Documents.CompressionTools.Compress(System.Byte[],System.Int32,System.Int32)">
<summary>Compresses the specified byte range, with default BEST_COMPRESSION level </summary>
</member>
<member name="M:Lucene.Net.Documents.CompressionTools.Compress(System.Byte[])">
<summary>Compresses all bytes in the array, with default BEST_COMPRESSION level </summary>
</member>
<member name="M:Lucene.Net.Documents.CompressionTools.CompressString(System.String)">
<summary>Compresses the String value, with default BEST_COMPRESSION level </summary>
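<example>
A minimal round-trip sketch; the field name "body" and the stored text are illustrative only:
<code>
byte[] compressed = CompressionTools.CompressString("some long stored text");
var doc = new Document();
doc.Add(new Field("body", compressed, Field.Store.YES)); // store compressed bytes
// later, after retrieving the binary value from a hit:
string original = CompressionTools.DecompressString(compressed);
</code>
</example>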
</member>
<member name="M:Lucene.Net.Documents.CompressionTools.CompressString(System.String,System.Int32)">
<summary>Compresses the String value using the specified
compressionLevel (constants are defined in
java.util.zip.Deflater).
</summary>
</member>
<member name="M:Lucene.Net.Documents.CompressionTools.Decompress(System.Byte[])">
<summary>Decompress the byte array previously returned by
compress
</summary>
</member>
<member name="M:Lucene.Net.Documents.CompressionTools.DecompressString(System.Byte[])">
<summary>Decompress the byte array previously returned by
compressString back into a String
</summary>
</member>
<member name="T:Lucene.Net.Documents.DateField">
<summary> Provides support for converting dates to strings and vice-versa.
The strings are structured so that lexicographic sorting orders by date,
which makes them suitable for use as field values and search terms.
<p/>Note that this class saves dates with millisecond granularity,
which is bad for <see cref="T:Lucene.Net.Search.TermRangeQuery"/> and <see cref="T:Lucene.Net.Search.PrefixQuery"/>, as those
queries are expanded to a BooleanQuery with a potentially large number
of terms when searching. Thus you might want to use
<see cref="T:Lucene.Net.Documents.DateTools"/> instead.
<p/>
Note: dates before 1970 cannot be used, and therefore cannot be
indexed when using this class. See <see cref="T:Lucene.Net.Documents.DateTools"/> for an
alternative without such a limitation.
<p/>
Another approach is <see cref="T:Lucene.Net.Util.NumericUtils"/>, which provides
a sortable binary representation (prefix encoded) of numeric values, which
includes dates and times.
For indexing a <see cref="T:System.DateTime"/>, convert it to a Unix timestamp as a
<c>long</c> and
index it as a numeric value with <see cref="T:Lucene.Net.Documents.NumericField"/>
and use <see cref="T:Lucene.Net.Search.NumericRangeQuery`1"/> to query it.
</summary>
<deprecated> If you build a new index, use <see cref="T:Lucene.Net.Documents.DateTools"/> or
<see cref="T:Lucene.Net.Documents.NumericField"/> instead.
This class is included for use with existing
indices and will be removed in a future release (possibly Lucene 4.0).
</deprecated>
</member>
<member name="M:Lucene.Net.Documents.DateField.DateToString(System.DateTime)">
<summary> Converts a Date to a string suitable for indexing.</summary>
<throws> RuntimeException if the date specified in the
method argument is before 1970
</throws>
</member>
<member name="M:Lucene.Net.Documents.DateField.TimeToString(System.Int64)">
<summary> Converts a millisecond time to a string suitable for indexing.</summary>
<throws> RuntimeException if the time specified in the
method argument is negative, that is, before 1970
</throws>
</member>
<member name="M:Lucene.Net.Documents.DateField.StringToTime(System.String)">
<summary>Converts a string-encoded date into a millisecond time. </summary>
</member>
<member name="M:Lucene.Net.Documents.DateField.StringToDate(System.String)">
<summary>Converts a string-encoded date into a Date object. </summary>
</member>
<member name="T:Lucene.Net.Documents.DateTools">
<summary> Provides support for converting dates to strings and vice-versa.
The strings are structured so that lexicographic sorting orders
them by date, which makes them suitable for use as field values
and search terms.
<p/>This class also helps you to limit the resolution of your dates. Do not
save dates with a finer resolution than you really need, as then
RangeQuery and PrefixQuery will require more memory and become slower.
<p/>Compared to <see cref="T:Lucene.Net.Documents.DateField"/> the strings generated by the methods
in this class take slightly more space, unless your selected resolution
is set to <c>Resolution.DAY</c> or lower.
<p/>
Another approach is <see cref="T:Lucene.Net.Util.NumericUtils"/>, which provides
a sortable binary representation (prefix encoded) of numeric values, which
includes dates and times.
For indexing a <see cref="T:System.DateTime"/>, convert it to a Unix timestamp as a
<c>long</c> and
index it as a numeric value with <see cref="T:Lucene.Net.Documents.NumericField"/>
and use <see cref="T:Lucene.Net.Search.NumericRangeQuery`1"/> to query it.
</summary>
</member>
<member name="M:Lucene.Net.Documents.DateTools.DateToString(System.DateTime,Lucene.Net.Documents.DateTools.Resolution)">
<summary> Converts a Date to a string suitable for indexing.
</summary>
<param name="date">the date to be converted
</param>
<param name="resolution">the desired resolution, see
<see cref="M:Lucene.Net.Documents.DateTools.Round(System.DateTime,Lucene.Net.Documents.DateTools.Resolution)"/>
</param>
<returns> a string in format <c>yyyyMMddHHmmssSSS</c> or shorter,
depending on <c>resolution</c>; using GMT as timezone
</returns>
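<example>
For instance, with day resolution (the sample date is illustrative; the output follows the documented <c>yyyyMMddHHmmssSSS</c> pattern, truncated to <c>yyyyMMdd</c> and GMT-based):
<code>
string s = DateTools.DateToString(
    new System.DateTime(2004, 9, 21, 13, 50, 11),
    DateTools.Resolution.DAY);
// s is suitable as a field value or as a term in a range query
</code>
</example>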
</member>
<member name="M:Lucene.Net.Documents.DateTools.TimeToString(System.Int64,Lucene.Net.Documents.DateTools.Resolution)">
<summary> Converts a millisecond time to a string suitable for indexing.
</summary>
<param name="time">the date expressed as milliseconds since January 1, 1970, 00:00:00 GMT
</param>
<param name="resolution">the desired resolution, see
<see cref="M:Lucene.Net.Documents.DateTools.Round(System.Int64,Lucene.Net.Documents.DateTools.Resolution)"/>
</param>
<returns> a string in format <c>yyyyMMddHHmmssSSS</c> or shorter,
depending on <c>resolution</c>; using GMT as timezone
</returns>
</member>
<member name="M:Lucene.Net.Documents.DateTools.StringToTime(System.String)">
<summary> Converts a string produced by <c>TimeToString</c> or
<c>DateToString</c> back to a time, represented as the
number of milliseconds since January 1, 1970, 00:00:00 GMT.
</summary>
<param name="dateString">the date string to be converted
</param>
<returns> the number of milliseconds since January 1, 1970, 00:00:00 GMT
</returns>
<throws> ParseException if <c>dateString</c> is not in the
expected format
</throws>
</member>
<member name="M:Lucene.Net.Documents.DateTools.StringToDate(System.String)">
<summary> Converts a string produced by <c>TimeToString</c> or
<c>DateToString</c> back to a time, represented as a
Date object.
</summary>
<param name="dateString">the date string to be converted
</param>
<returns> the parsed time as a Date object
</returns>
<throws> ParseException if <c>dateString</c> is not in the
expected format
</throws>
</member>
<member name="M:Lucene.Net.Documents.DateTools.Round(System.DateTime,Lucene.Net.Documents.DateTools.Resolution)">
<summary> Limit a date's resolution. For example, the date <c>2004-09-21 13:50:11</c>
will be changed to <c>2004-09-01 00:00:00</c> when using
<c>Resolution.MONTH</c>.
</summary>
<param name="date"></param>
<param name="resolution">The desired resolution of the date to be returned
</param>
<returns> the date with all values more precise than <c>resolution</c>
set to 0 or 1
</returns>
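<example>
The example from the summary above, as code:
<code>
System.DateTime d = DateTools.Round(
    new System.DateTime(2004, 9, 21, 13, 50, 11),
    DateTools.Resolution.MONTH);
// d is 2004-09-01 00:00:00
</code>
</example>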
</member>
<member name="M:Lucene.Net.Documents.DateTools.Round(System.Int64,Lucene.Net.Documents.DateTools.Resolution)">
<summary> Limit a date's resolution. For example, the date <c>1095767411000</c>
(which represents 2004-09-21 13:50:11) will be changed to
<c>1093989600000</c> (2004-09-01 00:00:00) when using
<c>Resolution.MONTH</c>.
</summary>
<param name="time">The time in milliseconds (not ticks).</param>
<param name="resolution">The desired resolution of the date to be returned
</param>
<returns> the date with all values more precise than <c>resolution</c>
set to 0 or 1, expressed as milliseconds since January 1, 1970, 00:00:00 GMT
</returns>
</member>
<member name="T:Lucene.Net.Documents.DateTools.Resolution">
<summary>Specifies the time granularity. </summary>
</member>
<member name="T:Lucene.Net.Documents.Document">
<summary>Documents are the unit of indexing and search.
A Document is a set of fields. Each field has a name and a textual value.
A field may be <see cref="P:Lucene.Net.Documents.IFieldable.IsStored">stored</see> with the document, in which
case it is returned with search hits on the document. Thus each document
should typically contain one or more stored fields which uniquely identify
it.
<p/>Note that fields which are <i>not</i> <see cref="P:Lucene.Net.Documents.IFieldable.IsStored">stored</see> are
<i>not</i> available in documents retrieved from the index, e.g. with <see cref="P:Lucene.Net.Search.ScoreDoc.Doc"/>,
<see cref="M:Lucene.Net.Search.Searcher.Doc(System.Int32)"/> or <see cref="M:Lucene.Net.Index.IndexReader.Document(System.Int32)"/>.
</summary>
</member>
<member name="M:Lucene.Net.Documents.Document.#ctor">
<summary>Constructs a new document with no fields. </summary>
</member>
<member name="M:Lucene.Net.Documents.Document.Add(Lucene.Net.Documents.IFieldable)">
<summary> <p/>Adds a field to a document. Several fields may be added with
the same name. In this case, if the fields are indexed, their text is
treated as though appended for the purposes of search.<p/>
<p/> Note that the Add method, like the RemoveField(s) methods, only makes sense
prior to adding a document to an index. These methods cannot
be used to change the content of an existing index! To achieve that,
a document has to be deleted from the index and a new, changed version of that
document has to be added.<p/>
</summary>
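<example>
A typical indexing sketch; the field names and values are illustrative, and <c>writer</c> is assumed to be an open <see cref="T:Lucene.Net.Index.IndexWriter"/>:
<code>
var doc = new Document();
doc.Add(new Field("id", "42", Field.Store.YES, Field.Index.NOT_ANALYZED));
doc.Add(new Field("title", "Hello world", Field.Store.YES, Field.Index.ANALYZED));
writer.AddDocument(doc);
</code>
</example>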
</member>
<member name="M:Lucene.Net.Documents.Document.RemoveField(System.String)">
<summary> <p/>Removes field with the specified name from the document.
If multiple fields exist with this name, this method removes the first field that has been added.
If there is no field with the specified name, the document remains unchanged.<p/>
<p/> Note that the RemoveField(s) methods, like the Add method, only make sense
prior to adding a document to an index. These methods cannot
be used to change the content of an existing index! To achieve that,
a document has to be deleted from the index and a new, changed version of that
document has to be added.<p/>
</summary>
</member>
<member name="M:Lucene.Net.Documents.Document.RemoveFields(System.String)">
<summary> <p/>Removes all fields with the given name from the document.
If there is no field with the specified name, the document remains unchanged.<p/>
<p/> Note that the RemoveField(s) methods, like the Add method, only make sense
prior to adding a document to an index. These methods cannot
be used to change the content of an existing index! To achieve that,
a document has to be deleted from the index and a new, changed version of that
document has to be added.<p/>
</summary>
</member>
<member name="M:Lucene.Net.Documents.Document.GetField(System.String)">
<summary>Returns a field with the given name if any exist in this document, or
null. If multiple fields exist with this name, this method returns the
first value added.
Do not use this method with lazy loaded fields.
</summary>
</member>
<member name="M:Lucene.Net.Documents.Document.GetFieldable(System.String)">
<summary>Returns a field with the given name if any exist in this document, or
null. If multiple fields exist with this name, this method returns the
first value added.
</summary>
</member>
<member name="M:Lucene.Net.Documents.Document.Get(System.String)">
<summary>Returns the string value of the field with the given name if any exist in
this document, or null. If multiple fields exist with this name, this
method returns the first value added. If only binary fields with this name
exist, returns null.
</summary>
</member>
<member name="M:Lucene.Net.Documents.Document.GetFields">
<summary>Returns a List of all the fields in a document.
<p/>Note that fields which are <i>not</i> <see cref="P:Lucene.Net.Documents.IFieldable.IsStored">stored</see> are
<i>not</i> available in documents retrieved from the
index, e.g. <see cref="M:Lucene.Net.Search.Searcher.Doc(System.Int32)"/> or <see cref="M:Lucene.Net.Index.IndexReader.Document(System.Int32)"/>.
</summary>
</member>
<member name="M:Lucene.Net.Documents.Document.GetFields(System.String)">
<summary> Returns an array of <see cref="T:Lucene.Net.Documents.Field"/>s with the given name.
Do not use with lazy loaded fields.
This method returns an empty array when there are no
matching fields. It never returns null.
</summary>
<param name="name">the name of the field
</param>
<returns> a <c>Field[]</c> array
</returns>
</member>
<member name="M:Lucene.Net.Documents.Document.GetFieldables(System.String)">
<summary> Returns an array of <see cref="T:Lucene.Net.Documents.IFieldable"/>s with the given name.
This method returns an empty array when there are no
matching fields. It never returns null.
</summary>
<param name="name">the name of the field
</param>
<returns> a <c>Fieldable[]</c> array
</returns>
</member>
<member name="M:Lucene.Net.Documents.Document.GetValues(System.String)">
<summary> Returns an array of values of the field specified as the method parameter.
This method returns an empty array when there are no
matching fields. It never returns null.
</summary>
<param name="name">the name of the field
</param>
<returns> a <c>String[]</c> of field values
</returns>
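<example>
With several fields added under the same name (the field name and values are illustrative):
<code>
var doc = new Document();
doc.Add(new Field("tag", "lucene", Field.Store.YES, Field.Index.NOT_ANALYZED));
doc.Add(new Field("tag", "search", Field.Store.YES, Field.Index.NOT_ANALYZED));
string[] tags = doc.GetValues("tag"); // both values, in the order added
</code>
</example>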
</member>
<member name="M:Lucene.Net.Documents.Document.GetBinaryValues(System.String)">
<summary> Returns an array of byte arrays for all of the fields that have the name specified
as the method parameter. This method returns an empty
array when there are no matching fields. It never
returns null.
</summary>
<param name="name">the name of the field
</param>
<returns> a <c>byte[][]</c> of binary field values
</returns>
</member>
<member name="M:Lucene.Net.Documents.Document.GetBinaryValue(System.String)">
<summary> Returns an array of bytes for the first (or only) field that has the name
specified as the method parameter. This method will return <c>null</c>
if no binary fields with the specified name are available.
There may be non-binary fields with the same name.
</summary>
<param name="name">the name of the field.
</param>
<returns> a <c>byte[]</c> containing the binary field value or <c>null</c>
</returns>
</member>
<member name="M:Lucene.Net.Documents.Document.ToString">
<summary>Prints the fields of a document for human consumption. </summary>
</member>
<member name="P:Lucene.Net.Documents.Document.Boost">
<summary>Gets or sets, at indexing time, the boost factor.
<para>
The default is 1.0
</para>
<p/>Note that once a document is indexed this value is no longer available
from the index. At search time, for retrieved documents, this method always
returns 1. This however does not mean that the boost value set at indexing
time was ignored - it was just combined with other indexing time factors and
stored elsewhere, for better indexing and search performance. (For more
information see the "norm(t,d)" part of the scoring formula in
<see cref="T:Lucene.Net.Search.Similarity">Similarity</see>.)
</summary>
</member>
<member name="T:Lucene.Net.Documents.Field">
<summary>A field is a section of a Document. Each field has two parts, a name and a
value. Values may be free text, provided as a String or as a Reader, or they
may be atomic keywords, which are not further processed. Such keywords may
be used to represent dates, URLs, etc. Fields are optionally stored in the
index, so that they may be returned with hits on the document.
</summary>
</member>
<member name="M:Lucene.Net.Documents.Field.SetValue(System.String)">
<summary><p/>Expert: change the value of this field. This can
be used during indexing to re-use a single Field
instance to improve indexing speed by avoiding GC cost
of new'ing and reclaiming Field instances. Typically
a single <see cref="T:Lucene.Net.Documents.Document"/> instance is re-used as
well. This helps most on small documents.<p/>
<p/>Each Field instance should only be used once
within a single <see cref="T:Lucene.Net.Documents.Document"/> instance. See <a href="http://wiki.apache.org/lucene-java/ImproveIndexingSpeed">ImproveIndexingSpeed</a>
for details.<p/>
</summary>
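<example>
A sketch of the re-use pattern described above; <c>ids</c> and <c>writer</c> (an open <see cref="T:Lucene.Net.Index.IndexWriter"/>) are assumed to exist:
<code>
var doc = new Document();
var idField = new Field("id", "", Field.Store.YES, Field.Index.NOT_ANALYZED);
doc.Add(idField);
foreach (string id in ids)
{
    idField.SetValue(id);       // re-use the same Field instance
    writer.AddDocument(doc);    // re-use the same Document instance
}
</code>
</example>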
</member>
<member name="M:Lucene.Net.Documents.Field.SetValue(System.IO.TextReader)">
<summary>Expert: change the value of this field. See <see cref="M:Lucene.Net.Documents.Field.SetValue(System.String)"/>. </summary>
</member>
<member name="M:Lucene.Net.Documents.Field.SetValue(System.Byte[])">
<summary>Expert: change the value of this field. See <see cref="M:Lucene.Net.Documents.Field.SetValue(System.String)"/>. </summary>
</member>
<member name="M:Lucene.Net.Documents.Field.SetValue(System.Byte[],System.Int32,System.Int32)">
<summary>Expert: change the value of this field. See <see cref="M:Lucene.Net.Documents.Field.SetValue(System.String)"/>. </summary>
</member>
<member name="M:Lucene.Net.Documents.Field.SetTokenStream(Lucene.Net.Analysis.TokenStream)">
<summary>Expert: sets the token stream to be used for indexing and causes isIndexed() and isTokenized() to return true.
May be combined with stored values from stringValue() or GetBinaryValue()
</summary>
</member>
<member name="M:Lucene.Net.Documents.Field.#ctor(System.String,System.String,Lucene.Net.Documents.Field.Store,Lucene.Net.Documents.Field.Index)">
<summary> Create a field by specifying its name, value and how it will
be saved in the index. Term vectors will not be stored in the index.
</summary>
<param name="name">The name of the field
</param>
<param name="value">The string to process
</param>
<param name="store">Whether <c>value</c> should be stored in the index
</param>
<param name="index">Whether the field should be indexed, and if so, if it should
be tokenized before indexing
</param>
<throws> NullPointerException if name or value is <c>null</c> </throws>
<throws> IllegalArgumentException if the field is neither stored nor indexed </throws>
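<example>
Two common combinations of the <c>store</c> and <c>index</c> arguments (field names and values are illustrative):
<code>
// stored and analyzed: full-text searchable, also returned with hits
var body = new Field("body", "the quick brown fox",
    Field.Store.YES, Field.Index.ANALYZED);
// indexed as a single token but not stored: searchable, not retrievable
var category = new Field("category", "books",
    Field.Store.NO, Field.Index.NOT_ANALYZED);
</code>
</example>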
</member>
<member name="M:Lucene.Net.Documents.Field.#ctor(System.String,System.String,Lucene.Net.Documents.Field.Store,Lucene.Net.Documents.Field.Index,Lucene.Net.Documents.Field.TermVector)">
<summary> Create a field by specifying its name, value and how it will
be saved in the index.
</summary>
<param name="name">The name of the field
</param>
<param name="value">The string to process
</param>
<param name="store">Whether <c>value</c> should be stored in the index
</param>
<param name="index">Whether the field should be indexed, and if so, if it should
be tokenized before indexing
</param>
<param name="termVector">Whether term vector should be stored
</param>
<throws> NullPointerException if name or value is <c>null</c> </throws>
<throws> IllegalArgumentException in any of the following situations:
<list>
<item>the field is neither stored nor indexed</item>
<item>the field is not indexed but termVector is <c>TermVector.YES</c></item>
</list>
</throws>
</member>
<member name="M:Lucene.Net.Documents.Field.#ctor(System.String,System.Boolean,System.String,Lucene.Net.Documents.Field.Store,Lucene.Net.Documents.Field.Index,Lucene.Net.Documents.Field.TermVector)">
<summary> Create a field by specifying its name, value and how it will
be saved in the index.
</summary>
<param name="name">The name of the field
</param>
<param name="internName">Whether to .intern() the name or not
</param>
<param name="value">The string to process
</param>
<param name="store">Whether <c>value</c> should be stored in the index
</param>
<param name="index">Whether the field should be indexed, and if so, if it should
be tokenized before indexing
</param>
<param name="termVector">Whether term vector should be stored
</param>
<throws> NullPointerException if name or value is <c>null</c> </throws>
<throws> IllegalArgumentException in any of the following situations:
<list>
<item>the field is neither stored nor indexed</item>
<item>the field is not indexed but termVector is <c>TermVector.YES</c></item>
</list>
</throws>
</member>
<member name="M:Lucene.Net.Documents.Field.#ctor(System.String,System.IO.TextReader)">
<summary> Create a tokenized and indexed field that is not stored. Term vectors will
not be stored. The Reader is read only when the Document is added to the index,
i.e. you may not close the Reader until <see cref="M:Lucene.Net.Index.IndexWriter.AddDocument(Lucene.Net.Documents.Document)"/>
has been called.
</summary>
<param name="name">The name of the field
</param>
<param name="reader">The reader with the content
</param>
<throws> NullPointerException if name or reader is <c>null</c> </throws>
</member>
<member name="M:Lucene.Net.Documents.Field.#ctor(System.String,System.IO.TextReader,Lucene.Net.Documents.Field.TermVector)">
<summary> Create a tokenized and indexed field that is not stored, optionally with
storing term vectors. The Reader is read only when the Document is added to the index,
i.e. you may not close the Reader until <see cref="M:Lucene.Net.Index.IndexWriter.AddDocument(Lucene.Net.Documents.Document)"/>
has been called.
</summary>
<param name="name">The name of the field
</param>
<param name="reader">The reader with the content
</param>
<param name="termVector">Whether term vector should be stored
</param>
<throws> NullPointerException if name or reader is <c>null</c> </throws>
</member>
<member name="M:Lucene.Net.Documents.Field.#ctor(System.String,Lucene.Net.Analysis.TokenStream)">
<summary> Create a tokenized and indexed field that is not stored. Term vectors will
not be stored. This is useful for pre-analyzed fields.
The TokenStream is read only when the Document is added to the index,
i.e. you may not close the TokenStream until <see cref="M:Lucene.Net.Index.IndexWriter.AddDocument(Lucene.Net.Documents.Document)"/>
has been called.
</summary>
<param name="name">The name of the field
</param>
<param name="tokenStream">The TokenStream with the content
</param>
<throws> NullPointerException if name or tokenStream is <c>null</c> </throws>
</member>
<member name="M:Lucene.Net.Documents.Field.#ctor(System.String,Lucene.Net.Analysis.TokenStream,Lucene.Net.Documents.Field.TermVector)">
<summary> Create a tokenized and indexed field that is not stored, optionally with
storing term vectors. This is useful for pre-analyzed fields.
The TokenStream is read only when the Document is added to the index,
i.e. you may not close the TokenStream until <see cref="M:Lucene.Net.Index.IndexWriter.AddDocument(Lucene.Net.Documents.Document)"/>
has been called.
</summary>
<param name="name">The name of the field
</param>
<param name="tokenStream">The TokenStream with the content
</param>
<param name="termVector">Whether term vector should be stored
</param>
<throws> NullPointerException if name or tokenStream is <c>null</c> </throws>
</member>
<member name="M:Lucene.Net.Documents.Field.#ctor(System.String,System.Byte[],Lucene.Net.Documents.Field.Store)">
<summary> Create a stored field with binary value. Optionally the value may be compressed.
</summary>
<param name="name">The name of the field
</param>
<param name="value_Renamed">The binary value
</param>
<param name="store">How <c>value</c> should be stored (compressed or not)
</param>
<throws> IllegalArgumentException if store is <c>Store.NO</c> </throws>
</member>
<member name="M:Lucene.Net.Documents.Field.#ctor(System.String,System.Byte[],System.Int32,System.Int32,Lucene.Net.Documents.Field.Store)">
<summary> Create a stored field with binary value. Optionally the value may be compressed.
</summary>
<param name="name">The name of the field
</param>
<param name="value_Renamed">The binary value
</param>
<param name="offset">Starting offset in value where this Field's bytes are
</param>
<param name="length">Number of bytes to use for this Field, starting at offset
</param>
<param name="store">How <c>value</c> should be stored (compressed or not)
</param>
<throws> IllegalArgumentException if store is <c>Store.NO</c> </throws>
</member>
<member name="P:Lucene.Net.Documents.Field.StringValue">
<summary>The value of the field as a String, or null. If null, the Reader value or
binary value is used. Exactly one of stringValue(),
readerValue(), and getBinaryValue() must be set.
</summary>
</member>
<member name="P:Lucene.Net.Documents.Field.ReaderValue">
<summary>The value of the field as a Reader, or null. If null, the String value or
binary value is used. Exactly one of stringValue(),
readerValue(), and getBinaryValue() must be set.
</summary>
</member>
<member name="P:Lucene.Net.Documents.Field.TokenStreamValue">
<summary>The TokenStream for this field to be used when indexing, or null. If null, the Reader value
or String value is analyzed to produce the indexed tokens.
</summary>
</member>
<member name="T:Lucene.Net.Documents.Field.Store">
<summary>Specifies whether and how a field should be stored. </summary>
</member>
<member name="F:Lucene.Net.Documents.Field.Store.YES">
<summary>Store the original field value in the index. This is useful for short texts
like a document's title which should be displayed with the results. The
value is stored in its original form, i.e. no analyzer is used before it is
stored.
</summary>
</member>
<member name="F:Lucene.Net.Documents.Field.Store.NO">
<summary>Do not store the field value in the index. </summary>
</member>
<member name="T:Lucene.Net.Documents.Field.Index">
<summary>Specifies whether and how a field should be indexed. </summary>
</member>
<member name="F:Lucene.Net.Documents.Field.Index.NO">
<summary>Do not index the field value. This field can thus not be searched,
but one can still access its contents provided it is
<see cref="T:Lucene.Net.Documents.Field.Store">stored</see>.
</summary>
</member>
<member name="F:Lucene.Net.Documents.Field.Index.ANALYZED">
<summary>Index the tokens produced by running the field's
value through an Analyzer. This is useful for
common text.
</summary>
</member>
<member name="F:Lucene.Net.Documents.Field.Index.NOT_ANALYZED">
<summary>Index the field's value without using an Analyzer, so it can be searched.
As no analyzer is used, the value will be indexed as a single term. This is
useful for unique Ids such as product numbers.
</summary>
</member>
<member name="F:Lucene.Net.Documents.Field.Index.NOT_ANALYZED_NO_NORMS">
<summary>Expert: Index the field's value without an Analyzer,
and also disable the storing of norms. Note that you
can also separately enable/disable norms by setting
<see cref="P:Lucene.Net.Documents.AbstractField.OmitNorms"/>. No norms means that
index-time field and document boosting and field
length normalization are disabled. The benefit is
less memory usage as norms take up one byte of RAM
per indexed field for every document in the index,
during searching. Note that once you index a given
field <i>with</i> norms enabled, disabling norms will
have no effect. In other words, for this to have the
above described effect on a field, all instances of
that field must be indexed with NOT_ANALYZED_NO_NORMS
from the beginning.
</summary>
</member>
<member name="F:Lucene.Net.Documents.Field.Index.ANALYZED_NO_NORMS">
<summary>Expert: Index the tokens produced by running the
field's value through an Analyzer, and also
separately disable the storing of norms. See
<see cref="F:Lucene.Net.Documents.Field.Index.NOT_ANALYZED_NO_NORMS"/> for what norms are
and why you may want to disable them.
</summary>
</member>
<member name="T:Lucene.Net.Documents.Field.TermVector">
<summary>Specifies whether and how a field should have term vectors. </summary>
</member>
<member name="F:Lucene.Net.Documents.Field.TermVector.NO">
<summary>Do not store term vectors. </summary>
</member>
<member name="F:Lucene.Net.Documents.Field.TermVector.YES">
<summary>Store the term vectors of each document. A term vector is a list
of the document's terms and their number of occurrences in that document.
</summary>
</member>
<member name="F:Lucene.Net.Documents.Field.TermVector.WITH_POSITIONS">
<summary> Store the term vector + token position information
</summary>
<seealso cref="F:Lucene.Net.Documents.Field.TermVector.YES">
</seealso>
</member>
<member name="F:Lucene.Net.Documents.Field.TermVector.WITH_OFFSETS">
<summary> Store the term vector + Token offset information
</summary>
<seealso cref="F:Lucene.Net.Documents.Field.TermVector.YES">
</seealso>
</member>
<member name="F:Lucene.Net.Documents.Field.TermVector.WITH_POSITIONS_OFFSETS">
<summary> Store the term vector + Token position and offset information
</summary>
<seealso cref="F:Lucene.Net.Documents.Field.TermVector.YES">
</seealso>
<seealso cref="F:Lucene.Net.Documents.Field.TermVector.WITH_POSITIONS">
</seealso>
<seealso cref="F:Lucene.Net.Documents.Field.TermVector.WITH_OFFSETS">
</seealso>
</member>
<member name="M:Lucene.Net.Documents.FieldExtensions.ToTermVector(System.Boolean,System.Boolean,System.Boolean)">
<summary>
Get the best representation of a TermVector given the flags.
</summary>
</member>
<member name="T:Lucene.Net.Documents.FieldSelector">
<summary> Similar to a <a href="http://download.oracle.com/javase/1.5.0/docs/api/java/io/FileFilter.html">
java.io.FileFilter</a>, the FieldSelector allows one to make decisions about
what Fields get loaded on a <see cref="T:Lucene.Net.Documents.Document"/> by <see cref="M:Lucene.Net.Index.IndexReader.Document(System.Int32,Lucene.Net.Documents.FieldSelector)"/>
</summary>
</member>
<member name="M:Lucene.Net.Documents.FieldSelector.Accept(System.String)">
<summary> </summary>
<param name="fieldName">the field to accept or reject
</param>
<returns> an instance of <see cref="T:Lucene.Net.Documents.FieldSelectorResult"/>
if the <see cref="T:Lucene.Net.Documents.Field"/> named <c>fieldName</c> should be loaded.
</returns>
</member>
<member name="T:Lucene.Net.Documents.FieldSelectorResult">
<summary>Provides information about what should be done with this Field</summary>
</member>
<member name="F:Lucene.Net.Documents.FieldSelectorResult.INVALID">
<summary>
</summary>
</member>
<member name="F:Lucene.Net.Documents.FieldSelectorResult.LOAD">
<summary> Load this <see cref="T:Lucene.Net.Documents.Field"/> every time the <see cref="T:Lucene.Net.Documents.Document"/> is loaded, reading in the data as it is encountered.
<see cref="M:Lucene.Net.Documents.Document.GetField(System.String)"/> and <see cref="M:Lucene.Net.Documents.Document.GetFieldable(System.String)"/> should not return null.
<p/>
<see cref="M:Lucene.Net.Documents.Document.Add(Lucene.Net.Documents.IFieldable)"/> should be called by the Reader.
</summary>
</member>
<member name="F:Lucene.Net.Documents.FieldSelectorResult.LAZY_LOAD">
<summary> Lazily load this <see cref="T:Lucene.Net.Documents.Field"/>. This means the <see cref="T:Lucene.Net.Documents.Field"/> is valid, but it may not actually contain its data until
its value is requested. <see cref="M:Lucene.Net.Documents.Document.GetField(System.String)"/> SHOULD NOT BE USED. <see cref="M:Lucene.Net.Documents.Document.GetFieldable(System.String)"/> is safe to use and should
return a valid instance of a <see cref="T:Lucene.Net.Documents.IFieldable"/>.
<p/>
<see cref="M:Lucene.Net.Documents.Document.Add(Lucene.Net.Documents.IFieldable)"/> should be called by the Reader.
</summary>
</member>
<member name="F:Lucene.Net.Documents.FieldSelectorResult.NO_LOAD">
<summary> Do not load the <see cref="T:Lucene.Net.Documents.Field"/>. <see cref="M:Lucene.Net.Documents.Document.GetField(System.String)"/> and <see cref="M:Lucene.Net.Documents.Document.GetFieldable(System.String)"/> should return null.
<see cref="M:Lucene.Net.Documents.Document.Add(Lucene.Net.Documents.IFieldable)"/> is not called.
<p/>
<see cref="M:Lucene.Net.Documents.Document.Add(Lucene.Net.Documents.IFieldable)"/> should not be called by the Reader.
</summary>
</member>
<member name="F:Lucene.Net.Documents.FieldSelectorResult.LOAD_AND_BREAK">
<summary> Load this field as in the <see cref="F:Lucene.Net.Documents.FieldSelectorResult.LOAD"/> case, but immediately return from <see cref="T:Lucene.Net.Documents.Field"/> loading for the <see cref="T:Lucene.Net.Documents.Document"/>. Thus, the
Document may not have its complete set of Fields. <see cref="M:Lucene.Net.Documents.Document.GetField(System.String)"/> and <see cref="M:Lucene.Net.Documents.Document.GetFieldable(System.String)"/> should
both be valid for this <see cref="T:Lucene.Net.Documents.Field"/>
<p/>
<see cref="M:Lucene.Net.Documents.Document.Add(Lucene.Net.Documents.IFieldable)"/> should be called by the Reader.
</summary>
</member>
<member name="F:Lucene.Net.Documents.FieldSelectorResult.SIZE">
<summary>Expert: Load the size of this <see cref="T:Lucene.Net.Documents.Field"/> rather than its value.
Size is measured as the number of bytes required to store the field: the raw byte count for a binary or compressed value, and 2 * the character count for a String value.
The size is stored as a binary value: an int packed into a byte[], with the higher-order byte first, at index [0].
</summary>
</member>
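The size measurement and byte layout described for the SIZE selector can be sketched as follows (a minimal Python illustration of the rule, not Lucene.Net's actual implementation; the helper name encode_field_size is hypothetical):

```python
def encode_field_size(value):
    """Compute a field's size the way the SIZE selector measures it, and
    pack it into a 4-byte array with the higher-order byte first (at [0])."""
    # Strings count 2 bytes per character (UTF-16 code units);
    # binary values count their raw byte length.
    size = 2 * len(value) if isinstance(value, str) else len(value)
    return bytes([(size >> 24) & 0xFF, (size >> 16) & 0xFF,
                  (size >> 8) & 0xFF, size & 0xFF])
```

For example, the string "abc" measures 6 bytes and is stored as the byte array [0, 0, 0, 6].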
<member name="F:Lucene.Net.Documents.FieldSelectorResult.SIZE_AND_BREAK">
<summary>Expert: Like <see cref="F:Lucene.Net.Documents.FieldSelectorResult.SIZE"/> but immediately break from the field loading loop, i.e., stop loading further fields, after the size is loaded </summary>
</member>
<member name="T:Lucene.Net.Documents.LoadFirstFieldSelector">
<summary> Load the first field and break.
<p/>
See <see cref="F:Lucene.Net.Documents.FieldSelectorResult.LOAD_AND_BREAK"/>
</summary>
</member>
<member name="T:Lucene.Net.Documents.MapFieldSelector">
<summary>A <see cref="T:Lucene.Net.Documents.FieldSelector"/> based on a Map of field names to <see cref="T:Lucene.Net.Documents.FieldSelectorResult"/>s</summary>
</member>
<member name="M:Lucene.Net.Documents.MapFieldSelector.#ctor(System.Collections.Generic.IDictionary{System.String,Lucene.Net.Documents.FieldSelectorResult})">
<summary>Create a MapFieldSelector</summary>
<param name="fieldSelections">maps from field names (String) to <see cref="T:Lucene.Net.Documents.FieldSelectorResult"/>s
</param>
</member>
<member name="M:Lucene.Net.Documents.MapFieldSelector.#ctor(System.Collections.Generic.IList{System.String})">
<summary>Create a MapFieldSelector</summary>
<param name="fields">fields to LOAD. List of Strings. All other fields are NO_LOAD.
</param>
</member>
<member name="M:Lucene.Net.Documents.MapFieldSelector.#ctor(System.String[])">
<summary>Create a MapFieldSelector</summary>
<param name="fields">fields to LOAD. All other fields are NO_LOAD.
</param>
</member>
<member name="M:Lucene.Net.Documents.MapFieldSelector.Accept(System.String)">
<summary>Load field according to its associated value in fieldSelections</summary>
<param name="field">a field name
</param>
<returns> the fieldSelections value that field maps to or NO_LOAD if none.
</returns>
</member>
<member name="T:Lucene.Net.Documents.NumberTools">
<summary> Provides support for converting longs to Strings, and back again. The strings
are structured so that lexicographic sorting order is preserved.
<p/>
That is, if l1 is less than l2 for any two longs l1 and l2, then
NumberTools.longToString(l1) is lexicographically less than
NumberTools.longToString(l2). (Similarly for "greater than" and "equals".)
<p/>
This class handles <b>all</b> long values (unlike
<see cref="T:Lucene.Net.Documents.DateField"/>).
</summary>
<deprecated> For new indexes use <see cref="T:Lucene.Net.Util.NumericUtils"/> instead, which
provides a sortable binary representation (prefix encoded) of numeric
values.
To index and efficiently query numeric values use <see cref="T:Lucene.Net.Documents.NumericField"/>
and <see cref="T:Lucene.Net.Search.NumericRangeQuery`1"/>.
This class is included for use with existing
indices and will be removed in a future release (possibly Lucene 4.0).
</deprecated>
</member>
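The order-preserving property described above can be illustrated with a self-contained sketch (this mirrors the idea of biasing the signed value into the unsigned range and rendering it as a fixed-width base-36 string; it is not NumberTools' exact wire format):

```python
DIGITS = "0123456789abcdefghijklmnopqrstuvwxyz"
RADIX = 36
WIDTH = 13  # 36**13 exceeds 2**64, so 13 digits cover every biased value

def long_to_string(value):
    """Order-preserving encoding of a signed 64-bit value: bias it into
    the unsigned range, then render as fixed-width base-36 (most
    significant digit first, zero-padded)."""
    biased = value + 2**63  # maps the signed range onto 0 .. 2**64 - 1
    out = []
    for _ in range(WIDTH):
        biased, d = divmod(biased, RADIX)
        out.append(DIGITS[d])
    return "".join(reversed(out))
```

Because the width is fixed and the digit alphabet is in ASCII order, ordinary string comparison of two encodings agrees with numeric comparison of the original longs.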
<member name="F:Lucene.Net.Documents.NumberTools.MIN_STRING_VALUE">
<summary> Equivalent to longToString(Long.MIN_VALUE)</summary>
</member>
<member name="F:Lucene.Net.Documents.NumberTools.MAX_STRING_VALUE">
<summary> Equivalent to longToString(Long.MAX_VALUE)</summary>
</member>
<member name="F:Lucene.Net.Documents.NumberTools.STR_SIZE">
<summary> The length of (all) strings returned by <see cref="M:Lucene.Net.Documents.NumberTools.LongToString(System.Int64)"/></summary>
</member>
<member name="M:Lucene.Net.Documents.NumberTools.LongToString(System.Int64)">
<summary> Converts a long to a String suitable for indexing.</summary>
</member>
<member name="M:Lucene.Net.Documents.NumberTools.StringToLong(System.String)">
<summary> Converts a String that was returned by <see cref="M:Lucene.Net.Documents.NumberTools.LongToString(System.Int64)"/> back to a
long.
</summary>
<throws> IllegalArgumentException if the input is <c>null</c> </throws>
<throws> NumberFormatException if the input does not parse (it was not a String
returned by longToString()).
</throws>
</member>
<member name="T:Lucene.Net.Documents.NumericField">
<summary> <p/>This class provides a <see cref="T:Lucene.Net.Documents.Field"/> that enables indexing
of numeric values for efficient range filtering and
sorting. Here's an example usage, adding an int value:
<code>
document.add(new NumericField(name).setIntValue(value));
</code>
For optimal performance, re-use the
<c>NumericField</c> and <see cref="T:Lucene.Net.Documents.Document"/> instance for more than
one document:
<code>
NumericField field = new NumericField(name);
Document document = new Document();
document.add(field);
for (all documents) {
  ...
  field.setIntValue(value);
  writer.addDocument(document);
  ...
}
</code>
<p/>The .Net native types <c>int</c>, <c>long</c>,
<c>float</c> and <c>double</c> are
directly supported. However, any value that can be
converted into these native types can also be indexed.
For example, date/time values represented by a
<see cref="T:System.DateTime"/> can be translated into a long
value using the <c>DateTime.Ticks</c> property. If you
don't need full tick precision, you can quantize the
value, either by dividing the <c>Ticks</c> value or by using the
separate components (<c>Year</c>, <c>Month</c>, etc.) to construct an <c>int</c> or
<c>long</c> value.<p/>
<p/>To perform range querying or filtering against a
<c>NumericField</c>, use <see cref="T:Lucene.Net.Search.NumericRangeQuery`1"/> or
<see cref="T:Lucene.Net.Search.NumericRangeFilter`1"/>. To sort according to a
<c>NumericField</c>, use the normal numeric sort types, e.g.
<see cref="F:Lucene.Net.Search.SortField.INT"/>. <c>NumericField</c> values
can also be loaded directly from <see cref="T:Lucene.Net.Search.FieldCache"/>.<p/>
<p/>By default, a <c>NumericField</c>'s value is not stored but
is indexed for range filtering and sorting. You can use
the <see cref="M:Lucene.Net.Documents.NumericField.#ctor(System.String,Lucene.Net.Documents.Field.Store,System.Boolean)"/>
constructor if you need to change these defaults.<p/>
<p/>You may add the same field name as a <c>NumericField</c> to
the same document more than once. Range querying and
filtering will be the logical OR of all values; so a range query
will hit all documents that have at least one value in
the range. However sort behavior is not defined. If you need to sort,
you should separately index a single-valued <c>NumericField</c>.<p/>
<p/>A <c>NumericField</c> will consume somewhat more disk space
in the index than an ordinary single-valued field.
However, for a typical index that includes substantial
textual content per document, this increase will likely
be in the noise. <p/>
<p/>Within Lucene, each numeric value is indexed as a
<em>trie</em> structure, where each term is logically
assigned to larger and larger pre-defined brackets (which
are simply lower-precision representations of the value).
The step size between each successive bracket is called the
<c>precisionStep</c>, measured in bits. Smaller
<c>precisionStep</c> values result in larger number
of brackets, which consumes more disk space in the index
but may result in faster range search performance. The
default value, 4, was selected for a reasonable tradeoff
of disk space consumption versus performance. You can
use the expert constructor <see cref="M:Lucene.Net.Documents.NumericField.#ctor(System.String,System.Int32,Lucene.Net.Documents.Field.Store,System.Boolean)"/>
if you'd
like to change the value. Note that you must also
specify a congruent value when creating <see cref="T:Lucene.Net.Search.NumericRangeQuery`1"/>
or <see cref="T:Lucene.Net.Search.NumericRangeFilter`1"/>.
For low cardinality fields larger precision steps are good.
If the cardinality is &lt; 100, it is fair
to use <see cref="F:System.Int32.MaxValue"/>, which produces one
term per value.
<p/>For more information on the internals of numeric trie
indexing, including the <a href="../search/NumericRangeQuery.html#precisionStepDesc"><c>precisionStep</c></a>
configuration, see <see cref="T:Lucene.Net.Search.NumericRangeQuery`1"/>. The format of
indexed values is described in <see cref="T:Lucene.Net.Util.NumericUtils"/>.
<p/>If you only need to sort by numeric value, and never
run range querying/filtering, you can index using a
<c>precisionStep</c> of <see cref="F:System.Int32.MaxValue"/>.
This will minimize disk space consumed. <p/>
<p/>More advanced users can instead use <see cref="T:Lucene.Net.Analysis.NumericTokenStream"/>
directly when indexing numbers. This
class is a wrapper around this token stream type for
easier, more intuitive usage.<p/>
<p/><b>NOTE:</b> This class is only used during
indexing. When retrieving the stored field value from a
<see cref="T:Lucene.Net.Documents.Document"/> instance after search, you will get a
conventional <see cref="T:Lucene.Net.Documents.IFieldable"/> instance where the numeric
values are returned as <see cref="T:System.String"/>s (according to
<c>toString(value)</c> of the used data type).
<p/><font color="red"><b>NOTE:</b> This API is
experimental and might change in incompatible ways in the
next release.</font>
</summary>
<since> 2.9
</since>
</member>
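The bracket scheme described above can be sketched in a few lines (a conceptual Python illustration of how a precisionStep produces lower-precision terms, not the actual encoded term format):

```python
def trie_terms(value, precision_step=4, bits=32):
    """List the lower-precision 'brackets' indexed for an unsigned
    integer: the value itself, then the value with precision_step,
    2*precision_step, ... low bits shifted away."""
    return [(shift, value >> shift) for shift in range(0, bits, precision_step)]
```

With bits=32 and the default step of 4 this yields 8 terms per value; a very large precision step yields a single term per value, matching the one-term-per-value behavior described above for low-cardinality fields.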
<member name="M:Lucene.Net.Documents.NumericField.#ctor(System.String)">
<summary> Creates a field for numeric values using the default <c>precisionStep</c>
<see cref="F:Lucene.Net.Util.NumericUtils.PRECISION_STEP_DEFAULT"/> (4). The instance is not yet initialized with
a numeric value; before indexing a document containing this field,
set a value using one of the <c>SetIntValue</c>, <c>SetLongValue</c>, <c>SetFloatValue</c> or <c>SetDoubleValue</c> methods.
This constructor creates an indexed, but not stored field.
</summary>
<param name="name">the field name
</param>
</member>
<member name="M:Lucene.Net.Documents.NumericField.#ctor(System.String,Lucene.Net.Documents.Field.Store,System.Boolean)">
<summary> Creates a field for numeric values using the default <c>precisionStep</c>
<see cref="F:Lucene.Net.Util.NumericUtils.PRECISION_STEP_DEFAULT"/> (4). The instance is not yet initialized with
a numeric value; before indexing a document containing this field,
set a value using one of the <c>SetIntValue</c>, <c>SetLongValue</c>, <c>SetFloatValue</c> or <c>SetDoubleValue</c> methods.
</summary>
<param name="name">the field name
</param>
<param name="store">if the field should be stored in plain text form
(according to <c>toString(value)</c> of the used data type)
</param>
<param name="index">if the field should be indexed using <see cref="T:Lucene.Net.Analysis.NumericTokenStream"/>
</param>
</member>
<member name="M:Lucene.Net.Documents.NumericField.#ctor(System.String,System.Int32)">
<summary> Creates a field for numeric values with the specified
<c>precisionStep</c>. The instance is not yet initialized with
a numeric value; before indexing a document containing this field,
set a value using one of the <c>SetIntValue</c>, <c>SetLongValue</c>, <c>SetFloatValue</c> or <c>SetDoubleValue</c> methods.
This constructor creates an indexed, but not stored field.
</summary>
<param name="name">the field name
</param>
<param name="precisionStep">the used <a href="../search/NumericRangeQuery.html#precisionStepDesc">precision step</a>
</param>
</member>
<member name="M:Lucene.Net.Documents.NumericField.#ctor(System.String,System.Int32,Lucene.Net.Documents.Field.Store,System.Boolean)">
<summary> Creates a field for numeric values with the specified
<c>precisionStep</c>. The instance is not yet initialized with
a numeric value; before indexing a document containing this field,
set a value using one of the <c>SetIntValue</c>, <c>SetLongValue</c>, <c>SetFloatValue</c> or <c>SetDoubleValue</c> methods.
</summary>
<param name="name">the field name
</param>
<param name="precisionStep">the used <a href="../search/NumericRangeQuery.html#precisionStepDesc">precision step</a>
</param>
<param name="store">if the field should be stored in plain text form
(according to <c>toString(value)</c> of the used data type)
</param>
<param name="index">if the field should be indexed using <see cref="T:Lucene.Net.Analysis.NumericTokenStream"/>
</param>
</member>
<member name="M:Lucene.Net.Documents.NumericField.GetBinaryValue(System.Byte[])">
<summary>Always returns <c>null</c> for numeric fields. </summary>
</member>
<member name="M:Lucene.Net.Documents.NumericField.SetLongValue(System.Int64)">
<summary> Initializes the field with the supplied <c>long</c> value.</summary>
<param name="value_Renamed">the numeric value
</param>
<returns> this instance, so that calls can be chained:
<c>document.add(new NumericField(name, precisionStep).SetLongValue(value))</c>
</returns>
</member>
<member name="M:Lucene.Net.Documents.NumericField.SetIntValue(System.Int32)">
<summary> Initializes the field with the supplied <c>int</c> value.</summary>
<param name="value_Renamed">the numeric value
</param>
<returns> this instance, so that calls can be chained:
<c>document.add(new NumericField(name, precisionStep).setIntValue(value))</c>
</returns>
</member>
<member name="M:Lucene.Net.Documents.NumericField.SetDoubleValue(System.Double)">
<summary> Initializes the field with the supplied <c>double</c> value.</summary>
<param name="value_Renamed">the numeric value
</param>
<returns> this instance, so that calls can be chained:
<c>document.add(new NumericField(name, precisionStep).setDoubleValue(value))</c>
</returns>
</member>
<member name="M:Lucene.Net.Documents.NumericField.SetFloatValue(System.Single)">
<summary> Initializes the field with the supplied <c>float</c> value.</summary>
<param name="value_Renamed">the numeric value
</param>
<returns> this instance, so that calls can be chained:
<c>document.add(new NumericField(name, precisionStep).setFloatValue(value))</c>
</returns>
</member>
<member name="P:Lucene.Net.Documents.NumericField.TokenStreamValue">
<summary>Returns a <see cref="T:Lucene.Net.Analysis.NumericTokenStream"/> for indexing the numeric value. </summary>
</member>
<member name="P:Lucene.Net.Documents.NumericField.ReaderValue">
<summary>Always returns <c>null</c> for numeric fields. </summary>
</member>
<member name="P:Lucene.Net.Documents.NumericField.StringValue">
<summary>Returns the numeric value as a string (how it is stored, when <see cref="F:Lucene.Net.Documents.Field.Store.YES"/> is chosen). </summary>
</member>
<member name="P:Lucene.Net.Documents.NumericField.NumericValue">
<summary>Returns the current numeric value as a boxed <c>int</c>, <c>long</c>, <c>float</c> or <c>double</c>, or <c>null</c> if not yet initialized. </summary>
</member>
<member name="T:Lucene.Net.Documents.SetBasedFieldSelector">
<summary> Declare what fields to load normally and what fields to load lazily
</summary>
</member>
<member name="M:Lucene.Net.Documents.SetBasedFieldSelector.#ctor(System.Collections.Generic.ISet{System.String},System.Collections.Generic.ISet{System.String})">
<summary> Pass in the Set of <see cref="T:Lucene.Net.Documents.Field"/> names to load and the Set of <see cref="T:Lucene.Net.Documents.Field"/> names to load lazily. If both are null, the
Document will not have any <see cref="T:Lucene.Net.Documents.Field"/> on it.
</summary>
<param name="fieldsToLoad">A Set of <see cref="T:System.String"/> field names to load. May be empty, but not null
</param>
<param name="lazyFieldsToLoad">A Set of <see cref="T:System.String"/> field names to load lazily. May be empty, but not null
</param>
</member>
<member name="M:Lucene.Net.Documents.SetBasedFieldSelector.Accept(System.String)">
<summary> Indicate whether to load the field with the given name or not. If the <see cref="P:Lucene.Net.Documents.AbstractField.Name"/> is not in either of the
initializing Sets, then <see cref="F:Lucene.Net.Documents.FieldSelectorResult.NO_LOAD"/> is returned. If a Field name
is in both <c>fieldsToLoad</c> and <c>lazyFieldsToLoad</c>, lazy has precedence.
</summary>
<param name="fieldName">The <see cref="T:Lucene.Net.Documents.Field"/> name to check
</param>
<returns> The <see cref="T:Lucene.Net.Documents.FieldSelectorResult"/>
</returns>
</member>
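The precedence rule above (lazy wins over eager, anything in neither set is NO_LOAD) can be sketched as follows (a hypothetical Python rendering of Accept's decision logic, not the C# source):

```python
def accept(field_name, fields_to_load, lazy_fields_to_load):
    """Decide how to load a field: the lazy set takes precedence over
    the eager set; a name in neither set is not loaded at all."""
    if field_name in lazy_fields_to_load:
        return "LAZY_LOAD"
    if field_name in fields_to_load:
        return "LOAD"
    return "NO_LOAD"
```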
<member name="T:Lucene.Net.Index.AbstractAllTermDocs">
<summary>
Base class for enumerating all but deleted docs.
<p/>NOTE: this class is meant only to be used internally
by Lucene; it's only public so it can be shared across
packages. This means the API is freely subject to
change, and, the class could be removed entirely, in any
Lucene release. Use directly at your own risk!
</summary>
</member>
<member name="T:Lucene.Net.Index.TermDocs">
<summary>TermDocs provides an interface for enumerating &lt;document, frequency&gt;
pairs for a term. <p/> The document portion names each document containing
the term. Documents are indicated by number. The frequency portion gives
the number of times the term occurred in each document. <p/> The pairs are
ordered by document number.
</summary>
<seealso cref="M:Lucene.Net.Index.IndexReader.TermDocs"/>
</member>
<member name="M:Lucene.Net.Index.TermDocs.Seek(Lucene.Net.Index.Term)">
<summary>Sets this to the data for a term.
The enumeration is reset to the start of the data for this term.
</summary>
</member>
<member name="M:Lucene.Net.Index.TermDocs.Seek(Lucene.Net.Index.TermEnum)">
<summary>Sets this to the data for the current term in a <see cref="T:Lucene.Net.Index.TermEnum"/>.
This may be optimized in some implementations.
</summary>
</member>
<member name="M:Lucene.Net.Index.TermDocs.Next">
<summary>Moves to the next pair in the enumeration. <p/> Returns true iff there is
such a next pair in the enumeration.
</summary>
</member>
<member name="M:Lucene.Net.Index.TermDocs.Read(System.Int32[],System.Int32[])">
<summary>Attempts to read multiple entries from the enumeration, up to length of
<i>docs</i>. Document numbers are stored in <i>docs</i>, and term
frequencies are stored in <i>freqs</i>. The <i>freqs</i> array must be as
long as the <i>docs</i> array.
<p/>Returns the number of entries read. Zero is only returned when the
stream has been exhausted.
</summary>
</member>
<member name="M:Lucene.Net.Index.TermDocs.SkipTo(System.Int32)">
<summary>Skips entries to the first beyond the current whose document number is
greater than or equal to <i>target</i>. <p/>Returns true iff there is such
an entry. <p/>Behaves as if written: <code>
boolean skipTo(int target) {
  do {
    if (!next())
      return false;
  } while (target > doc());
  return true;
}
</code>
Some implementations are considerably more efficient than that.
</summary>
</member>
<member name="M:Lucene.Net.Index.TermDocs.Close">
<summary>Frees associated resources. </summary>
</member>
<member name="P:Lucene.Net.Index.TermDocs.Doc">
<summary>Returns the current document number. <p/> This is invalid until <see cref="M:Lucene.Net.Index.TermDocs.Next"/>
is called for the first time.
</summary>
</member>
<member name="P:Lucene.Net.Index.TermDocs.Freq">
<summary>Returns the frequency of the term within the current document. <p/> This
is invalid until <see cref="M:Lucene.Net.Index.TermDocs.Next"/> is called for the first time.
</summary>
</member>
<member name="T:Lucene.Net.Index.BufferedDeletes">
<summary>Holds buffered deletes, by docID, term or query. We
hold two instances of this class: one for the deletes
prior to the last flush, the other for deletes after
the last flush. This way, if we need to abort
(discarding all buffered docs), we can also discard the
buffered deletes while keeping the deletes applied to
previously flushed segments.
</summary>
</member>
<member name="T:Lucene.Net.Store.IndexInput">
<summary>Abstract base class for input from a file in a <see cref="T:Lucene.Net.Store.Directory"/>. A
random-access input stream. Used for all Lucene index input operations.
</summary>
<seealso cref="T:Lucene.Net.Store.Directory">
</seealso>
</member>
<member name="M:Lucene.Net.Store.IndexInput.ReadByte">
<summary>Reads and returns a single byte.</summary>
<seealso cref="M:Lucene.Net.Store.IndexOutput.WriteByte(System.Byte)">
</seealso>
</member>
<member name="M:Lucene.Net.Store.IndexInput.ReadBytes(System.Byte[],System.Int32,System.Int32)">
<summary>Reads a specified number of bytes into an array at the specified offset.</summary>
<param name="b">the array to read bytes into
</param>
<param name="offset">the offset in the array to start storing bytes
</param>
<param name="len">the number of bytes to read
</param>
<seealso cref="M:Lucene.Net.Store.IndexOutput.WriteBytes(System.Byte[],System.Int32)">
</seealso>
</member>
<member name="M:Lucene.Net.Store.IndexInput.ReadBytes(System.Byte[],System.Int32,System.Int32,System.Boolean)">
<summary>Reads a specified number of bytes into an array at the
specified offset with control over whether the read
should be buffered (callers who have their own buffer
should pass in "false" for useBuffer). Currently only
<see cref="T:Lucene.Net.Store.BufferedIndexInput"/> respects this parameter.
</summary>
<param name="b">the array to read bytes into
</param>
<param name="offset">the offset in the array to start storing bytes
</param>
<param name="len">the number of bytes to read
</param>
<param name="useBuffer">set to false if the caller will handle
buffering.
</param>
<seealso cref="M:Lucene.Net.Store.IndexOutput.WriteBytes(System.Byte[],System.Int32)">
</seealso>
</member>
<member name="M:Lucene.Net.Store.IndexInput.ReadInt">
<summary>Reads four bytes and returns an int.</summary>
<seealso cref="M:Lucene.Net.Store.IndexOutput.WriteInt(System.Int32)">
</seealso>
</member>
<member name="M:Lucene.Net.Store.IndexInput.ReadVInt">
<summary>Reads an int stored in variable-length format. Reads between one and
five bytes. Smaller values take fewer bytes. Negative numbers are not
supported.
</summary>
<seealso cref="M:Lucene.Net.Store.IndexOutput.WriteVInt(System.Int32)">
</seealso>
</member>
<member name="M:Lucene.Net.Store.IndexInput.ReadLong">
<summary>Reads eight bytes and returns a long.</summary>
<seealso cref="M:Lucene.Net.Store.IndexOutput.WriteLong(System.Int64)">
</seealso>
</member>
<member name="M:Lucene.Net.Store.IndexInput.ReadVLong">
<summary>Reads a long stored in variable-length format. Reads between one and
nine bytes. Smaller values take fewer bytes. Negative numbers are not
supported.
</summary>
</member>
<member name="M:Lucene.Net.Store.IndexInput.SetModifiedUTF8StringsMode">
<summary>Call this if readString should read characters stored
in the old modified UTF8 format (length in java chars
and java's modified UTF8 encoding). This is used for
indices written pre-2.4. See LUCENE-510 for details.
</summary>
</member>
<member name="M:Lucene.Net.Store.IndexInput.ReadString">
<summary>Reads a string.</summary>
<seealso cref="M:Lucene.Net.Store.IndexOutput.WriteString(System.String)">
</seealso>
</member>
<member name="M:Lucene.Net.Store.IndexInput.ReadChars(System.Char[],System.Int32,System.Int32)">
<summary>Reads Lucene's old "modified UTF-8" encoded
characters into an array.
</summary>
<param name="buffer">the array to read characters into
</param>
<param name="start">the offset in the array to start storing characters
</param>
<param name="length">the number of characters to read
</param>
<seealso cref="M:Lucene.Net.Store.IndexOutput.WriteChars(System.String,System.Int32,System.Int32)">
</seealso>
<deprecated> -- please use readString or readBytes
instead, and construct the string
from those utf8 bytes
</deprecated>
</member>
<member name="M:Lucene.Net.Store.IndexInput.SkipChars(System.Int32)">
<summary> Expert: similar to <see cref="M:Lucene.Net.Store.IndexInput.ReadChars(System.Char[],System.Int32,System.Int32)"/>, but performs no conversion on the bytes it reads. It still
has to invoke <see cref="M:Lucene.Net.Store.IndexInput.ReadByte"/> just as <see cref="M:Lucene.Net.Store.IndexInput.ReadChars(System.Char[],System.Int32,System.Int32)"/> does, but it needs no buffer to store anything
and performs none of the bitwise operations, since each byte is only examined to determine
how many more bytes to read.
</summary>
<param name="length">The number of chars to read
</param>
<deprecated> this method operates on old "modified utf8" encoded
strings
</deprecated>
</member>
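<!--
The byte-counting logic described above can be sketched in Python (a hedged illustration of the old "modified UTF-8" skip, not the Lucene.Net implementation): each character's lead byte says how many bytes it occupies, so skipping needs no decoding.

```python
def skip_chars(data: bytes, length: int) -> int:
    # Return the byte offset reached after skipping `length` characters
    # in modified UTF-8: the lead byte's high bits give the width
    # (1 byte for 0xxxxxxx, 2 for 110xxxxx, 3 for 1110xxxx).
    i = 0
    for _ in range(length):
        b = data[i]
        if b & 0x80 == 0:
            i += 1   # single-byte char
        elif b & 0xE0 == 0xC0:
            i += 2   # two-byte char
        else:
            i += 3   # three-byte char
    return i
```

This mirrors why SkipChars only needs ReadByte and no buffer: the payload bits of each byte are never examined.
-->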
<member name="M:Lucene.Net.Store.IndexInput.Dispose">
<summary>Closes the stream to further operations. </summary>
</member>
<member name="M:Lucene.Net.Store.IndexInput.Seek(System.Int64)">
<summary>Sets current position in this file, where the next read will occur.</summary>
<seealso cref="P:Lucene.Net.Store.IndexInput.FilePointer">
</seealso>
</member>
<member name="M:Lucene.Net.Store.IndexInput.Length">
<summary>The number of bytes in the file. </summary>
</member>
<member name="M:Lucene.Net.Store.IndexInput.Clone">
<summary>Returns a clone of this stream.
<p/>Clones of a stream access the same data, and are positioned at the same
point as the stream they were cloned from.
<p/>Expert: Subclasses must ensure that clones may be positioned at
different points in the input from each other and from the stream they
were cloned from.
</summary>
</member>
<member name="P:Lucene.Net.Store.IndexInput.FilePointer">
<summary>Returns the current position in this file, where the next read will
occur.
</summary>
<seealso cref="M:Lucene.Net.Store.IndexInput.Seek(System.Int64)">
</seealso>
</member>
<member name="T:Lucene.Net.Index.ByteSliceWriter">
<summary> Class to write byte streams into slices of shared
byte[]. This is used by DocumentsWriter to hold the
posting list for many terms in RAM.
</summary>
</member>
<member name="M:Lucene.Net.Index.ByteSliceWriter.Init(System.Int32)">
<summary> Set up the writer to write at the given address.</summary>
</member>
<member name="M:Lucene.Net.Index.ByteSliceWriter.WriteByte(System.Byte)">
<summary>Write byte into byte slice stream </summary>
</member>
<member name="T:Lucene.Net.Index.CheckIndex">
<summary> Basic tool and API to check the health of an index and
write a new segments file that removes reference to
problematic segments.
<p/>As this tool checks every byte in the index, on a large
index it can take quite a long time to run.
<p/><b>WARNING</b>: this tool and API are new and
experimental and are subject to sudden change in the
next release. Please make a complete backup of your
index before using this to fix your index!
</summary>
</member>
<member name="M:Lucene.Net.Index.CheckIndex.#ctor(Lucene.Net.Store.Directory)">
<summary>Create a new CheckIndex on the directory. </summary>
</member>
<member name="M:Lucene.Net.Index.CheckIndex.SetInfoStream(System.IO.StreamWriter)">
<summary>Set the infoStream where messages should go. If null, no
messages are printed.
</summary>
</member>
<member name="M:Lucene.Net.Index.CheckIndex.CheckIndex_Renamed_Method">
<summary>Returns a <see cref="T:Lucene.Net.Index.CheckIndex.Status"/> instance detailing
the state of the index.
<p/>As this method checks every byte in the index, on a large
index it can take quite a long time to run.
<p/><b>WARNING</b>: make sure
you only call this when the index is not opened by any
writer.
</summary>
</member>
<member name="M:Lucene.Net.Index.CheckIndex.CheckIndex_Renamed_Method(System.Collections.Generic.List{System.String})">
<summary>Returns a <see cref="T:Lucene.Net.Index.CheckIndex.Status"/> instance detailing
the state of the index.
</summary>
<param name="onlySegments">list of specific segment names to check
<p/>As this method checks every byte in the specified
segments, on a large index it can take quite a long
time to run.
<p/><b>WARNING</b>: make sure
you only call this when the index is not opened by any
writer.
</param>
</member>
<member name="M:Lucene.Net.Index.CheckIndex.TestFieldNorms(System.Collections.Generic.IEnumerable{System.String},Lucene.Net.Index.SegmentReader)">
<summary> Test field norms.</summary>
</member>
<member name="M:Lucene.Net.Index.CheckIndex.TestTermIndex(Lucene.Net.Index.SegmentInfo,Lucene.Net.Index.SegmentReader)">
<summary> Test the term index.</summary>
</member>
<member name="M:Lucene.Net.Index.CheckIndex.TestStoredFields(Lucene.Net.Index.SegmentInfo,Lucene.Net.Index.SegmentReader,System.Globalization.NumberFormatInfo)">
<summary> Test stored fields for a segment.</summary>
</member>
<member name="M:Lucene.Net.Index.CheckIndex.TestTermVectors(Lucene.Net.Index.SegmentInfo,Lucene.Net.Index.SegmentReader,System.Globalization.NumberFormatInfo)">
<summary> Test term vectors for a segment.</summary>
</member>
<member name="M:Lucene.Net.Index.CheckIndex.FixIndex(Lucene.Net.Index.CheckIndex.Status)">
<summary>Repairs the index using previously returned result
from <see cref="T:Lucene.Net.Index.CheckIndex"/>. Note that this does not
remove any of the unreferenced files after it's done;
you must separately open an <see cref="T:Lucene.Net.Index.IndexWriter"/>, which
deletes unreferenced files when it's created.
<p/><b>WARNING</b>: this writes a
new segments file into the index, effectively removing
all documents in broken segments from the index.
BE CAREFUL.
<p/><b>WARNING</b>: Make sure you only call this when the
index is not opened by any writer.
</summary>
</member>
<member name="M:Lucene.Net.Index.CheckIndex.Main(System.String[])">
<summary>Command-line interface to check and fix an index.
<p/>
Run it like this:
<code>
java -ea:Lucene.Net... Lucene.Net.Index.CheckIndex pathToIndex [-fix] [-segment X] [-segment Y]
</code>
<list type="bullet">
<item><c>-fix</c>: actually write a new segments_N file, removing any problematic segments</item>
<item><c>-segment X</c>: only check the specified
segment(s). This can be specified multiple times,
to check more than one segment, eg <c>-segment _2
-segment _a</c>. You can't use this with the -fix
option.</item>
</list>
<p/><b>WARNING</b>: <c>-fix</c> should only be used on an emergency basis as it will cause
documents (perhaps many) to be permanently removed from the index. Always make
a backup copy of your index before running this! Do not run this tool on an index
that is actively being written to. You have been warned!
<p/> Run without -fix, this tool will open the index, report version information
and report any exceptions it hits and what action it would take if -fix were
specified. With -fix, this tool will remove any segments that have issues and
write a new segments_N file. This means all documents contained in the affected
segments will be removed.
<p/>
This tool exits with exit code 1 if the index cannot be opened or has any
corruption, else 0.
</summary>
</member>
<member name="T:Lucene.Net.Index.CheckIndex.Status">
<summary> Returned from <see cref="M:Lucene.Net.Index.CheckIndex.CheckIndex_Renamed_Method"/> detailing the health and status of the index.
<p/><b>WARNING</b>: this API is new and experimental and is
subject to sudden change in the next release.
</summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.clean">
<summary>True if no problems were found with the index. </summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.missingSegments">
<summary>True if we were unable to locate and load the segments_N file. </summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.cantOpenSegments">
<summary>True if we were unable to open the segments_N file. </summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.missingSegmentVersion">
<summary>True if we were unable to read the version number from segments_N file. </summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.segmentsFileName">
<summary>Name of latest segments_N file in the index. </summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.numSegments">
<summary>Number of segments in the index. </summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.segmentFormat">
<summary>String description of the version of the index. </summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.segmentsChecked">
<summary>Empty unless you passed a list of specific segments to check as an optional argument.</summary>
<seealso>
<cref>CheckIndex.CheckIndex_Renamed_Method(System.Collections.IList)</cref>
</seealso>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.toolOutOfDate">
<summary>True if the index was created with a newer version of Lucene than the CheckIndex tool. </summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.segmentInfos">
<summary>List of <see cref="T:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus"/> instances, detailing status of each segment. </summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.dir">
<summary>Directory index is in. </summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.newSegments">
<summary> SegmentInfos instance containing only segments that
had no problems (this is used by the <see cref="M:Lucene.Net.Index.CheckIndex.FixIndex(Lucene.Net.Index.CheckIndex.Status)"/>
method to repair the index).
</summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.totLoseDocCount">
<summary>How many documents will be lost to bad segments. </summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.numBadSegments">
<summary>How many bad segments were found. </summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.partial">
<summary>True if we checked only specific segments (<see cref="M:Lucene.Net.Index.CheckIndex.CheckIndex_Renamed_Method(System.Collections.Generic.List{System.String})"/>
was called with a non-null
argument).
</summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.userData">
<summary>Holds the userData of the last commit in the index. </summary>
</member>
<member name="T:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus">
<summary>Holds the status of each segment in the index.
See <see cref="T:Lucene.Net.Index.SegmentInfos"/>.
<p/><b>WARNING</b>: this API is new and experimental and is
subject to sudden change in the next release.
</summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.name">
<summary>Name of the segment. </summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.docCount">
<summary>Document count (does not take deletions into account). </summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.compound">
<summary>True if segment is compound file format. </summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.numFiles">
<summary>Number of files referenced by this segment. </summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.sizeMB">
<summary>Net size (MB) of the files referenced by this
segment.
</summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.docStoreOffset">
<summary>Doc store offset, if this segment shares the doc
store files (stored fields and term vectors) with
other segments. This is -1 if it does not share.
</summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.docStoreSegment">
<summary>String of the shared doc store segment, or null if
this segment does not share the doc store files.
</summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.docStoreCompoundFile">
<summary>True if the shared doc store files are compound file
format.
</summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.hasDeletions">
<summary>True if this segment has pending deletions. </summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.deletionsFileName">
<summary>Name of the current deletions file. </summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.numDeleted">
<summary>Number of deleted documents. </summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.openReaderPassed">
<summary>True if we were able to open a SegmentReader on this
segment.
</summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.numFields">
<summary>Number of fields in this segment. </summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.hasProx">
<summary>True if at least one field in this segment
does not omit term frequencies and positions.
</summary>
<seealso cref="P:Lucene.Net.Documents.AbstractField.OmitTermFreqAndPositions">
</seealso>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.diagnostics">
<summary>Map&lt;String, String&gt; that includes certain
debugging details that IndexWriter records into
each segment it creates
</summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.fieldNormStatus">
<summary>Status for testing of field norms (null if field norms could not be tested). </summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.termIndexStatus">
<summary>Status for testing of indexed terms (null if indexed terms could not be tested). </summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.storedFieldStatus">
<summary>Status for testing of stored fields (null if stored fields could not be tested). </summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.SegmentInfoStatus.termVectorStatus">
<summary>Status for testing of term vectors (null if term vectors could not be tested). </summary>
</member>
<member name="T:Lucene.Net.Index.CheckIndex.Status.FieldNormStatus">
<summary> Status from testing field norms.</summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.FieldNormStatus.totFields">
<summary>Number of fields successfully tested </summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.FieldNormStatus.error">
<summary>Exception thrown during field norms test (null on success) </summary>
</member>
<member name="T:Lucene.Net.Index.CheckIndex.Status.TermIndexStatus">
<summary> Status from testing term index.</summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.TermIndexStatus.termCount">
<summary>Total term count </summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.TermIndexStatus.totFreq">
<summary>Total frequency across all terms. </summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.TermIndexStatus.totPos">
<summary>Total number of positions. </summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.TermIndexStatus.error">
<summary>Exception thrown during term index test (null on success) </summary>
</member>
<member name="T:Lucene.Net.Index.CheckIndex.Status.StoredFieldStatus">
<summary> Status from testing stored fields.</summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.StoredFieldStatus.docCount">
<summary>Number of documents tested. </summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.StoredFieldStatus.totFields">
<summary>Total number of stored fields tested. </summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.StoredFieldStatus.error">
<summary>Exception thrown during stored fields test (null on success) </summary>
</member>
<member name="T:Lucene.Net.Index.CheckIndex.Status.TermVectorStatus">
<summary> Status from testing term vectors.</summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.TermVectorStatus.docCount">
<summary>Number of documents tested. </summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.TermVectorStatus.totVectors">
<summary>Total number of term vectors tested. </summary>
</member>
<member name="F:Lucene.Net.Index.CheckIndex.Status.TermVectorStatus.error">
<summary>Exception thrown during term vector test (null on success) </summary>
</member>
<member name="M:Lucene.Net.Index.SegmentTermDocs.Read(System.Int32[],System.Int32[])">
<summary>Optimized implementation. </summary>
</member>
<member name="M:Lucene.Net.Index.SegmentTermDocs.SkipProx(System.Int64,System.Int32)">
<summary>Overridden by SegmentTermPositions to skip in prox stream. </summary>
</member>
<member name="M:Lucene.Net.Index.SegmentTermDocs.SkipTo(System.Int32)">
<summary>Optimized implementation. </summary>
</member>
<member name="T:Lucene.Net.Index.CompoundFileReader">
<summary> Class for accessing a compound stream.
This class implements a directory, but is limited to only read operations.
Directory methods that would normally modify data throw an exception.
</summary>
</member>
<member name="T:Lucene.Net.Store.Directory">
<summary>A Directory is a flat list of files. Files may be written once, when they
are created. Once a file is created it may only be opened for read, or
deleted. Random access is permitted both when reading and writing.
<p/> Java's i/o APIs are not used directly; rather, all i/o goes
through this API. This permits things such as: <list>
<item> implementation of RAM-based indices;</item>
<item> implementation of indices stored in a database, via JDBC;</item>
<item> implementation of an index as a single file;</item>
</list>
Directory locking is implemented by an instance of <see cref="P:Lucene.Net.Store.Directory.LockFactory"/>
, and can be changed for each Directory
instance using <see cref="M:Lucene.Net.Store.Directory.SetLockFactory(Lucene.Net.Store.LockFactory)"/>.
</summary>
</member>
<member name="F:Lucene.Net.Store.Directory.interalLockFactory">
<summary>Holds the LockFactory instance (implements locking for
this Directory instance).
</summary>
</member>
<member name="M:Lucene.Net.Store.Directory.ListAll">
<summary>Returns an array of strings, one for each file in the directory.</summary>
<exception cref="T:System.IO.IOException"></exception>
</member>
<member name="M:Lucene.Net.Store.Directory.FileExists(System.String)">
<summary>Returns true iff a file with the given name exists. </summary>
</member>
<member name="M:Lucene.Net.Store.Directory.FileModified(System.String)">
<summary>Returns the time the named file was last modified. </summary>
</member>
<member name="M:Lucene.Net.Store.Directory.TouchFile(System.String)">
<summary>Set the modified time of an existing file to now. </summary>
</member>
<member name="M:Lucene.Net.Store.Directory.DeleteFile(System.String)">
<summary>Removes an existing file in the directory. </summary>
</member>
<member name="M:Lucene.Net.Store.Directory.FileLength(System.String)">
<summary>Returns the length of a file in the directory. </summary>
</member>
<member name="M:Lucene.Net.Store.Directory.CreateOutput(System.String)">
<summary>Creates a new, empty file in the directory with the given name.
Returns a stream writing this file.
</summary>
</member>
<member name="M:Lucene.Net.Store.Directory.Sync(System.String)">
<summary>Ensure that any writes to this file are moved to
stable storage. Lucene uses this to properly commit
changes to the index, to prevent a machine/OS crash
from corrupting the index.
</summary>
</member>
<member name="M:Lucene.Net.Store.Directory.OpenInput(System.String)">
<summary>Returns a stream reading an existing file. </summary>
</member>
<member name="M:Lucene.Net.Store.Directory.OpenInput(System.String,System.Int32)">
<summary>Returns a stream reading an existing file, with the
specified read buffer size. The particular Directory
implementation may ignore the buffer size. Currently
the only Directory implementations that respect this
parameter are <see cref="T:Lucene.Net.Store.FSDirectory"/> and <see cref="T:Lucene.Net.Index.CompoundFileReader"/>.
</summary>
</member>
<member name="M:Lucene.Net.Store.Directory.MakeLock(System.String)">
<summary>Construct a <see cref="T:Lucene.Net.Store.Lock"/>.</summary>
<param name="name">the name of the lock file
</param>
</member>
<member name="M:Lucene.Net.Store.Directory.ClearLock(System.String)">
<summary> Attempt to clear (forcefully unlock and remove) the
specified lock. Only call this at a time when you are
certain this lock is no longer in use.
</summary>
<param name="name">name of the lock to be cleared.
</param>
</member>
<member name="M:Lucene.Net.Store.Directory.Dispose">
<summary>Closes the store. </summary>
</member>
<member name="M:Lucene.Net.Store.Directory.SetLockFactory(Lucene.Net.Store.LockFactory)">
<summary> Set the LockFactory that this Directory instance should
use for its locking implementation. Each instance of
LockFactory should only be used for one directory (ie,
do not share a single instance across multiple
Directories).
</summary>
<param name="lockFactory">instance of <see cref="P:Lucene.Net.Store.Directory.LockFactory"/>.
</param>
</member>
<member name="M:Lucene.Net.Store.Directory.GetLockId">
<summary> Return a string identifier that uniquely differentiates
this Directory instance from other Directory instances.
This ID should be the same if two Directory instances
(even in different processes and/or on different machines)
are considered "the same index". This is how locking
"scopes" to the right index.
</summary>
</member>
<member name="M:Lucene.Net.Store.Directory.Copy(Lucene.Net.Store.Directory,Lucene.Net.Store.Directory,System.Boolean)">
<summary> Copy contents of a directory src to a directory dest.
If a file in src already exists in dest then the
one in dest will be blindly overwritten.
<p/><b>NOTE:</b> the source directory cannot change
while this method is running. Otherwise the results
are undefined and you could easily hit a
FileNotFoundException.
<p/><b>NOTE:</b> this method only copies files that look
like index files (ie, have extensions matching the
known extensions of index files).
</summary>
<param name="src">source directory
</param>
<param name="dest">destination directory
</param>
<param name="closeDirSrc">if <c>true</c>, call the <see cref="M:Lucene.Net.Store.Directory.Close"/> method on the source directory
</param>
<throws> IOException </throws>
</member>
<member name="M:Lucene.Net.Store.Directory.EnsureOpen">
<throws> AlreadyClosedException if this Directory is closed </throws>
</member>
<member name="P:Lucene.Net.Store.Directory.LockFactory">
<summary> Get the LockFactory that this Directory instance is
using for its locking implementation. Note that this
may be null for Directory implementations that provide
their own locking implementation.
</summary>
</member>
<member name="M:Lucene.Net.Index.CompoundFileReader.ListAll">
<summary>Returns an array of strings, one for each file in the directory. </summary>
</member>
<member name="M:Lucene.Net.Index.CompoundFileReader.FileExists(System.String)">
<summary>Returns true iff a file with the given name exists. </summary>
</member>
<member name="M:Lucene.Net.Index.CompoundFileReader.FileModified(System.String)">
<summary>Returns the time the compound file was last modified. </summary>
</member>
<member name="M:Lucene.Net.Index.CompoundFileReader.TouchFile(System.String)">
<summary>Set the modified time of the compound file to now. </summary>
</member>
<member name="M:Lucene.Net.Index.CompoundFileReader.DeleteFile(System.String)">
<summary>Not implemented</summary>
<throws> UnsupportedOperationException </throws>
</member>
<member name="M:Lucene.Net.Index.CompoundFileReader.RenameFile(System.String,System.String)">
<summary>Not implemented</summary>
<throws> UnsupportedOperationException </throws>
</member>
<member name="M:Lucene.Net.Index.CompoundFileReader.FileLength(System.String)">
<summary>Returns the length of a file in the directory.</summary>
<throws> IOException if the file does not exist </throws>
</member>
<member name="M:Lucene.Net.Index.CompoundFileReader.CreateOutput(System.String)">
<summary>Not implemented</summary>
<throws> UnsupportedOperationException </throws>
</member>
<member name="M:Lucene.Net.Index.CompoundFileReader.MakeLock(System.String)">
<summary>Not implemented</summary>
<throws> UnsupportedOperationException </throws>
</member>
<member name="T:Lucene.Net.Index.CompoundFileReader.CSIndexInput">
<summary>Implementation of an IndexInput that reads from a portion of the
compound file. The visibility is left as "package" only because
this helps with testing, since JUnit test cases in a different class
can then access package fields of this class.
</summary>
</member>
<member name="T:Lucene.Net.Store.BufferedIndexInput">
<summary>Base implementation class for buffered <see cref="T:Lucene.Net.Store.IndexInput"/>. </summary>
</member>
<member name="F:Lucene.Net.Store.BufferedIndexInput.BUFFER_SIZE">
<summary>Default buffer size </summary>
</member>
<member name="M:Lucene.Net.Store.BufferedIndexInput.#ctor(System.Int32)">
<summary>Inits BufferedIndexInput with a specific bufferSize </summary>
</member>
<member name="M:Lucene.Net.Store.BufferedIndexInput.SetBufferSize(System.Int32)">
<summary>Change the buffer size used by this IndexInput </summary>
</member>
<member name="M:Lucene.Net.Store.BufferedIndexInput.ReadInternal(System.Byte[],System.Int32,System.Int32)">
<summary>Expert: implements buffer refill. Reads bytes from the current position
in the input.
</summary>
<param name="b">the array to read bytes into
</param>
<param name="offset">the offset in the array to start storing bytes
</param>
<param name="length">the number of bytes to read
</param>
</member>
<member name="M:Lucene.Net.Store.BufferedIndexInput.SeekInternal(System.Int64)">
<summary>Expert: implements seek. Sets current position in this file, where the
next <see cref="M:Lucene.Net.Store.BufferedIndexInput.ReadInternal(System.Byte[],System.Int32,System.Int32)"/> will occur.
</summary>
<seealso cref="M:Lucene.Net.Store.BufferedIndexInput.ReadInternal(System.Byte[],System.Int32,System.Int32)">
</seealso>
</member>
<member name="P:Lucene.Net.Store.BufferedIndexInput.BufferSize">
<seealso cref="M:Lucene.Net.Store.BufferedIndexInput.SetBufferSize(System.Int32)">
</seealso>
</member>
<member name="M:Lucene.Net.Index.CompoundFileReader.CSIndexInput.ReadInternal(System.Byte[],System.Int32,System.Int32)">
<summary>Expert: implements buffer refill. Reads bytes from the current
position in the input.
</summary>
<param name="b">the array to read bytes into
</param>
<param name="offset">the offset in the array to start storing bytes
</param>
<param name="len">the number of bytes to read
</param>
</member>
<member name="M:Lucene.Net.Index.CompoundFileReader.CSIndexInput.SeekInternal(System.Int64)">
<summary>Expert: implements seek. Sets current position in this file, where
the next <see cref="M:Lucene.Net.Index.CompoundFileReader.CSIndexInput.ReadInternal(System.Byte[],System.Int32,System.Int32)"/> will occur.
</summary>
<seealso cref="M:Lucene.Net.Index.CompoundFileReader.CSIndexInput.ReadInternal(System.Byte[],System.Int32,System.Int32)">
</seealso>
</member>
<member name="T:Lucene.Net.Index.CompoundFileWriter">
<summary> Combines multiple files into a single compound file.
The file format:<br/>
<list type="bullet">
<item>VInt fileCount</item>
<item>{Directory}
fileCount entries with the following structure:</item>
<list type="bullet">
<item>long dataOffset</item>
<item>String fileName</item>
</list>
<item>{File Data}
fileCount entries with the raw data of the corresponding file</item>
</list>
The fileCount integer indicates how many files are contained in this compound
file. The {directory} that follows has that many entries. Each directory entry
contains a long pointer to the start of this file's data section, and a String
with that file's name.
</summary>
</member>
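<!--
The layout listed above can be sketched as a hypothetical writer in Python (assuming the VInt and length-prefixed UTF-8 string encodings documented for IndexOutput; the real CompoundFileWriter differs in detail):

```python
import struct

def vint(n: int) -> bytes:
    # 7 bits per byte, low-order first; high bit means more bytes follow.
    out = bytearray()
    while n & ~0x7F:
        out.append((n & 0x7F) | 0x80)
        n >>= 7
    out.append(n)
    return bytes(out)

def lstring(s: str) -> bytes:
    # Length-prefixed string: VInt byte length, then UTF-8 bytes.
    raw = s.encode("utf-8")
    return vint(len(raw)) + raw

def write_compound(files: dict) -> bytes:
    # VInt fileCount, then {Directory} entries of (long dataOffset,
    # String fileName), then the raw data of each file in order.
    names = list(files)
    header = vint(len(names))
    # Directory size is known up front: 8 offset bytes plus each name.
    dir_size = sum(8 + len(lstring(n)) for n in names)
    offset = len(header) + dir_size
    directory = bytearray()
    for n in names:
        directory += struct.pack(">q", offset) + lstring(n)  # big-endian long
        offset += len(files[n])
    return header + bytes(directory) + b"".join(files[n] for n in names)
```

Each dataOffset is the absolute position of that file's data section, matching the "long pointer" in the directory entry description.
-->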
<member name="M:Lucene.Net.Index.CompoundFileWriter.#ctor(Lucene.Net.Store.Directory,System.String)">
<summary>Create the compound stream in the specified file. The file name is the
entire name (no extensions are added).
</summary>
<throws> NullPointerException if <c>dir</c> or <c>name</c> is null </throws>
</member>
<member name="M:Lucene.Net.Index.CompoundFileWriter.AddFile(System.String)">
<summary>Add a source stream. <c>file</c> is the string by which the
sub-stream will be known in the compound stream.
</summary>
<throws> IllegalStateException if this writer is closed </throws>
<throws> NullPointerException if <c>file</c> is null </throws>
<throws> IllegalArgumentException if a file with the same name has already been added </throws>
</member>
<member name="M:Lucene.Net.Index.CompoundFileWriter.Dispose">
<summary>Merge files with the extensions added up to now.
All files with these extensions are combined sequentially into the
compound stream. After successful merge, the source files
are deleted.
</summary>
<throws> IllegalStateException if close() has already been called, or if no file has been added to this object </throws>
</member>
<member name="M:Lucene.Net.Index.CompoundFileWriter.CopyFile(Lucene.Net.Index.CompoundFileWriter.FileEntry,Lucene.Net.Store.IndexOutput,System.Byte[])">
<summary>Copy the contents of the file with specified extension into the
provided output stream. Use the provided buffer for moving data
to reduce memory allocation.
</summary>
</member>
<member name="P:Lucene.Net.Index.CompoundFileWriter.Directory">
<summary>Returns the directory of the compound file. </summary>
</member>
<member name="P:Lucene.Net.Index.CompoundFileWriter.Name">
<summary>Returns the name of the compound file. </summary>
</member>
<member name="F:Lucene.Net.Index.CompoundFileWriter.FileEntry.file">
<summary>source file </summary>
</member>
<member name="F:Lucene.Net.Index.CompoundFileWriter.FileEntry.directoryOffset">
<summary>temporary holder for the start of directory entry for this file </summary>
</member>
<member name="F:Lucene.Net.Index.CompoundFileWriter.FileEntry.dataOffset">
<summary>temporary holder for the start of this file's data section </summary>
</member>
<member name="T:Lucene.Net.Index.ConcurrentMergeScheduler">
<summary>A <see cref="T:Lucene.Net.Index.MergeScheduler"/> that runs each merge using a
separate thread, up to a maximum number of threads
(<see cref="P:Lucene.Net.Index.ConcurrentMergeScheduler.MaxThreadCount"/>). Once that limit is reached and a merge is
needed, the thread(s) that are updating the index will
pause until one or more merges complete. This is a
simple way to use concurrency in the indexing process
without having to create and manage application-level
threads.
</summary>
</member>
<member name="T:Lucene.Net.Index.MergeScheduler">
<summary><p/>Expert: <see cref="T:Lucene.Net.Index.IndexWriter"/> uses an instance
implementing this interface to execute the merges
selected by a <see cref="T:Lucene.Net.Index.MergePolicy"/>. The default
MergeScheduler is <see cref="T:Lucene.Net.Index.ConcurrentMergeScheduler"/>.<p/>
<p/><b>NOTE:</b> This API is new and still experimental
(subject to sudden change in the next release).<p/>
<p/><b>NOTE</b>: This class typically requires access to
package-private APIs (eg, SegmentInfos) to do its job;
if you implement your own MergeScheduler, you'll need to put
it in the Lucene.Net.Index namespace in order to use
these APIs.
</summary>
</member>
<member name="M:Lucene.Net.Index.MergeScheduler.Merge(Lucene.Net.Index.IndexWriter)">
<summary>Run the merges provided by <see cref="M:Lucene.Net.Index.IndexWriter.GetNextMerge"/>. </summary>
</member>
<member name="M:Lucene.Net.Index.MergeScheduler.Dispose(System.Boolean)">
<summary>Close this MergeScheduler. </summary>
</member>
<member name="M:Lucene.Net.Index.ConcurrentMergeScheduler.GetMergeThreadPriority">
<summary>Return the priority that merge threads run at. By
default the priority is 1 plus the priority of (ie,
slightly higher priority than) the first thread that
calls merge.
</summary>
</member>
<member name="M:Lucene.Net.Index.ConcurrentMergeScheduler.SetMergeThreadPriority(System.Int32)">
<summary>Set the priority that merge threads run at. </summary>
</member>
<member name="M:Lucene.Net.Index.ConcurrentMergeScheduler.DoMerge(Lucene.Net.Index.MergePolicy.OneMerge)">
<summary>Does the actual merge, by calling <see cref="M:Lucene.Net.Index.IndexWriter.Merge(Lucene.Net.Index.MergePolicy.OneMerge)"/> </summary>
</member>
<member name="M:Lucene.Net.Index.ConcurrentMergeScheduler.GetMergeThread(Lucene.Net.Index.IndexWriter,Lucene.Net.Index.MergePolicy.OneMerge)">
<summary>Create and return a new MergeThread </summary>
</member>
<member name="M:Lucene.Net.Index.ConcurrentMergeScheduler.HandleMergeException(System.Exception)">
<summary>Called when an exception is hit in a background merge
thread
</summary>
</member>
<member name="M:Lucene.Net.Index.ConcurrentMergeScheduler.AnyUnhandledExceptions">
<summary>Used for testing </summary>
</member>
<member name="M:Lucene.Net.Index.ConcurrentMergeScheduler.AddMyself">
<summary>Used for testing </summary>
</member>
<member name="M:Lucene.Net.Index.ConcurrentMergeScheduler.SetSuppressExceptions">
<summary>Used for testing </summary>
</member>
<member name="M:Lucene.Net.Index.ConcurrentMergeScheduler.ClearSuppressExceptions">
<summary>Used for testing </summary>
</member>
<member name="F:Lucene.Net.Index.ConcurrentMergeScheduler.allInstances">
<summary>Used for testing </summary>
</member>
<member name="P:Lucene.Net.Index.ConcurrentMergeScheduler.MaxThreadCount">
<summary>Gets or sets the maximum number of simultaneous merge threads
that may be running. If a merge is necessary yet we already have
this many threads running, the incoming thread (that
is calling add/updateDocument) will block until
a merge thread has completed.
</summary>
</member>
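As an illustrative sketch of the behavior described above (the `IndexWriter.SetMergeScheduler` call is an assumption based on this API family, not verified against a particular release):

```csharp
// Sketch: cap concurrent merge threads so heavy merging cannot
// starve the machine. `writer` is an existing IndexWriter.
var scheduler = new Lucene.Net.Index.ConcurrentMergeScheduler();
scheduler.MaxThreadCount = 2;   // at most 2 merges run at once

// Once 2 merges are running and a third is needed, the thread
// calling AddDocument/UpdateDocument blocks until one completes.
writer.SetMergeScheduler(scheduler);
```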
<member name="T:Lucene.Net.Support.ThreadClass">
<summary>
Support class used to handle threads
</summary>
</member>
<member name="T:Lucene.Net.Support.IThreadRunnable">
<summary>
This interface should be implemented by any class whose instances are intended
to be executed by a thread.
</summary>
</member>
<member name="M:Lucene.Net.Support.IThreadRunnable.Run">
<summary>
This method has to be implemented so that starting the thread causes the object's
Run method to be called in that separately executing thread.
</summary>
</member>
<member name="F:Lucene.Net.Support.ThreadClass.threadField">
<summary>
The instance of System.Threading.Thread
</summary>
</member>
<member name="M:Lucene.Net.Support.ThreadClass.#ctor">
<summary>
Initializes a new instance of the ThreadClass class
</summary>
</member>
<member name="M:Lucene.Net.Support.ThreadClass.#ctor(System.String)">
<summary>
Initializes a new instance of the ThreadClass class.
</summary>
<param name="Name">The name of the thread</param>
</member>
<member name="M:Lucene.Net.Support.ThreadClass.#ctor(System.Threading.ThreadStart)">
<summary>
Initializes a new instance of the ThreadClass class.
</summary>
<param name="Start">A ThreadStart delegate that references the methods to be invoked when this thread begins executing</param>
</member>
<member name="M:Lucene.Net.Support.ThreadClass.#ctor(System.Threading.ThreadStart,System.String)">
<summary>
Initializes a new instance of the ThreadClass class.
</summary>
<param name="Start">A ThreadStart delegate that references the methods to be invoked when this thread begins executing</param>
<param name="Name">The name of the thread</param>
</member>
<member name="M:Lucene.Net.Support.ThreadClass.Run">
<summary>
This method has no functionality unless it is overridden in a subclass
</summary>
</member>
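A minimal sketch of the intended pattern, based on the members documented here (subclass ThreadClass, override Run, then call Start; the worker class itself is hypothetical):

```csharp
// Hypothetical worker built on Lucene.Net.Support.ThreadClass:
// put the thread's work in Run, then call Start.
public class Worker : Lucene.Net.Support.ThreadClass
{
    public override void Run()
    {
        // executed on the separately running thread
        System.Console.WriteLine("running on " + Name);
    }
}

// usage
var w = new Worker { Name = "background-worker" };
w.Start();
w.Join(); // block until the worker finishes
```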
<member name="M:Lucene.Net.Support.ThreadClass.Start">
<summary>
Causes the operating system to change the state of the current thread instance to ThreadState.Running
</summary>
</member>
<member name="M:Lucene.Net.Support.ThreadClass.Interrupt">
<summary>
Interrupts a thread that is in the WaitSleepJoin thread state
</summary>
</member>
<member name="M:Lucene.Net.Support.ThreadClass.Join">
<summary>
Blocks the calling thread until a thread terminates
</summary>
</member>
<member name="M:Lucene.Net.Support.ThreadClass.Join(System.Int64)">
<summary>
Blocks the calling thread until a thread terminates or the specified time elapses
</summary>
<param name="MiliSeconds">Time of wait in milliseconds</param>
</member>
<member name="M:Lucene.Net.Support.ThreadClass.Join(System.Int64,System.Int32)">
<summary>
Blocks the calling thread until a thread terminates or the specified time elapses
</summary>
<param name="MiliSeconds">Time of wait in milliseconds</param>
<param name="NanoSeconds">Time of wait in nanoseconds</param>
</member>
<member name="M:Lucene.Net.Support.ThreadClass.Resume">
<summary>
Resumes a thread that has been suspended
</summary>
</member>
<member name="M:Lucene.Net.Support.ThreadClass.Abort">
<summary>
Raises a ThreadAbortException in the thread on which it is invoked,
to begin the process of terminating the thread. Calling this method
usually terminates the thread.
</summary>
</member>
<member name="M:Lucene.Net.Support.ThreadClass.Abort(System.Object)">
<summary>
Raises a ThreadAbortException in the thread on which it is invoked,
to begin the process of terminating the thread while also providing
exception information about the thread termination.
Calling this method usually terminates the thread.
</summary>
<param name="stateInfo">An object that contains application-specific information, such as state, which can be used by the thread being aborted</param>
</member>
<member name="M:Lucene.Net.Support.ThreadClass.Suspend">
<summary>
Suspends the thread; if the thread is already suspended, this has no effect
</summary>
</member>
<member name="M:Lucene.Net.Support.ThreadClass.ToString">
<summary>
Obtains a string that represents the current object
</summary>
<returns>A String that represents the current object</returns>
</member>
<member name="M:Lucene.Net.Support.ThreadClass.Current">
<summary>
Gets the currently running thread
</summary>
<returns>The currently running thread</returns>
</member>
<member name="P:Lucene.Net.Support.ThreadClass.Instance">
<summary>
Gets the current thread instance
</summary>
</member>
<member name="P:Lucene.Net.Support.ThreadClass.Name">
<summary>
Gets or sets the name of the thread
</summary>
</member>
<member name="P:Lucene.Net.Support.ThreadClass.Priority">
<summary>
Gets or sets a value indicating the scheduling priority of a thread
</summary>
</member>
<member name="P:Lucene.Net.Support.ThreadClass.IsAlive">
<summary>
Gets a value indicating the execution status of the current thread
</summary>
</member>
<member name="P:Lucene.Net.Support.ThreadClass.IsBackground">
<summary>
Gets or sets a value indicating whether or not a thread is a background thread.
</summary>
</member>
<member name="T:Lucene.Net.Index.CorruptIndexException">
<summary> This exception is thrown when Lucene detects
an inconsistency in the index.
</summary>
</member>
<member name="T:Lucene.Net.Index.DefaultSkipListReader">
<summary> Implements the skip list reader for the default posting list format
that stores positions and payloads.
</summary>
</member>
<member name="T:Lucene.Net.Index.MultiLevelSkipListReader">
<summary> This abstract class reads skip lists with multiple levels.
See <see cref="T:Lucene.Net.Index.MultiLevelSkipListWriter"/> for the information about the encoding
of the multi level skip lists.
Subclasses must implement the abstract method <see cref="M:Lucene.Net.Index.MultiLevelSkipListReader.ReadSkipData(System.Int32,Lucene.Net.Store.IndexInput)"/>
which defines the actual format of the skip data.
</summary>
</member>
<member name="M:Lucene.Net.Index.MultiLevelSkipListReader.GetDoc">
<summary>Returns the id of the doc to which the last call of <see cref="M:Lucene.Net.Index.MultiLevelSkipListReader.SkipTo(System.Int32)"/>
has skipped.
</summary>
</member>
<member name="M:Lucene.Net.Index.MultiLevelSkipListReader.SkipTo(System.Int32)">
<summary>Skips entries to the first beyond the current whose document number is
greater than or equal to <i>target</i>. Returns the current doc count.
</summary>
</member>
<member name="M:Lucene.Net.Index.MultiLevelSkipListReader.SeekChild(System.Int32)">
<summary>Seeks the skip entry on the given level </summary>
</member>
<member name="M:Lucene.Net.Index.MultiLevelSkipListReader.Init(System.Int64,System.Int32)">
<summary>initializes the reader </summary>
</member>
<member name="M:Lucene.Net.Index.MultiLevelSkipListReader.LoadSkipLevels">
<summary>Loads the skip levels </summary>
</member>
<member name="M:Lucene.Net.Index.MultiLevelSkipListReader.ReadSkipData(System.Int32,Lucene.Net.Store.IndexInput)">
<summary> Subclasses must implement the actual skip data encoding in this method.
</summary>
<param name="level">the level skip data shall be read from
</param>
<param name="skipStream">the skip stream to read from
</param>
</member>
<member name="M:Lucene.Net.Index.MultiLevelSkipListReader.SetLastSkipData(System.Int32)">
<summary>Copies the values of the last read skip entry on this level </summary>
</member>
<member name="T:Lucene.Net.Index.MultiLevelSkipListReader.SkipBuffer">
<summary>used to buffer the top skip levels </summary>
</member>
<member name="M:Lucene.Net.Index.DefaultSkipListReader.GetFreqPointer">
<summary>Returns the freq pointer of the doc to which the last call of
<see cref="M:Lucene.Net.Index.MultiLevelSkipListReader.SkipTo(System.Int32)"/> has skipped.
</summary>
</member>
<member name="M:Lucene.Net.Index.DefaultSkipListReader.GetProxPointer">
<summary>Returns the prox pointer of the doc to which the last call of
<see cref="M:Lucene.Net.Index.MultiLevelSkipListReader.SkipTo(System.Int32)"/> has skipped.
</summary>
</member>
<member name="M:Lucene.Net.Index.DefaultSkipListReader.GetPayloadLength">
<summary>Returns the payload length of the payload stored just before
the doc to which the last call of <see cref="M:Lucene.Net.Index.MultiLevelSkipListReader.SkipTo(System.Int32)"/>
has skipped.
</summary>
</member>
<member name="T:Lucene.Net.Index.DefaultSkipListWriter">
<summary> Implements the skip list writer for the default posting list format
that stores positions and payloads.
</summary>
</member>
<member name="T:Lucene.Net.Index.MultiLevelSkipListWriter">
<summary> This abstract class writes skip lists with multiple levels.
Example for skipInterval = 3:
c (skip level 2)
c c c (skip level 1)
x x x x x x x x x x (skip level 0)
d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d d (posting list)
3 6 9 12 15 18 21 24 27 30 (df)
d - document
x - skip data
c - skip data with child pointer
Skip level i contains every skipInterval-th entry from skip level i-1.
Therefore the number of entries on level i is: floor(df / (skipInterval ^ (i + 1))).
Each skip entry on a level i>0 contains a pointer to the corresponding skip entry in list i-1.
This guarantees a logarithmic amount of skips to find the target document.
While this class takes care of writing the different skip levels,
subclasses must define the actual format of the skip data.
</summary>
</member>
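The level-count formula above can be checked numerically; for df = 32 and skipInterval = 3 it reproduces the 10/3/1 entries per level shown in the diagram (a standalone arithmetic sketch, not library code):

```csharp
// Entries on skip level i: floor(df / skipInterval^(i+1)).
// For df = 32, skipInterval = 3 this yields 10, 3, 1 for levels 0..2.
int df = 32, skipInterval = 3;
for (int level = 0; level < 3; level++)
{
    int entries = df / (int)System.Math.Pow(skipInterval, level + 1);
    System.Console.WriteLine("level " + level + ": " + entries + " entries");
}
```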
<member name="M:Lucene.Net.Index.MultiLevelSkipListWriter.WriteSkipData(System.Int32,Lucene.Net.Store.IndexOutput)">
<summary> Subclasses must implement the actual skip data encoding in this method.
</summary>
<param name="level">the level the skip data shall be written for
</param>
<param name="skipBuffer">the skip buffer to write to
</param>
</member>
<member name="M:Lucene.Net.Index.MultiLevelSkipListWriter.BufferSkip(System.Int32)">
<summary> Writes the current skip data to the buffers. The current document frequency determines
the max level the skip data is to be written to.
</summary>
<param name="df">the current document frequency
</param>
<throws> IOException </throws>
</member>
<member name="M:Lucene.Net.Index.MultiLevelSkipListWriter.WriteSkip(Lucene.Net.Store.IndexOutput)">
<summary> Writes the buffered skip lists to the given output.
</summary>
<param name="output">the IndexOutput the skip lists shall be written to
</param>
<returns> the pointer to where the skip list starts
</returns>
</member>
<member name="M:Lucene.Net.Index.DefaultSkipListWriter.SetSkipData(System.Int32,System.Boolean,System.Int32)">
<summary> Sets the values for the current skip data. </summary>
</member>
<member name="T:Lucene.Net.Index.DirectoryReader">
<summary> An IndexReader which reads indexes with multiple segments.</summary>
</member>
<member name="T:Lucene.Net.Index.IndexReader">
<summary>IndexReader is an abstract class, providing an interface for accessing an
index. Search of an index is done entirely through this abstract interface,
so that any subclass which implements it is searchable.
<p/> Concrete subclasses of IndexReader are usually constructed with a call to
one of the static <c>open()</c> methods, e.g. <see cref="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Store.Directory,System.Boolean)"/>
.
<p/> For efficiency, in this API documents are often referred to via
<i>document numbers</i>, non-negative integers which each name a unique
document in the index. These document numbers are ephemeral--they may change
as documents are added to and deleted from an index. Clients should thus not
rely on a given document having the same number between sessions.
<p/> An IndexReader can be opened on a directory for which an IndexWriter is
already opened, but it cannot then be used to delete documents from the index.
<p/>
<b>NOTE</b>: for backwards API compatibility, several methods are not listed
as abstract, but have no useful implementations in this base class and
instead always throw UnsupportedOperationException. Subclasses are
strongly encouraged to override these methods, but in many cases may not
need to.
<p/>
<p/>
<b>NOTE</b>: as of 2.4, it's possible to open a read-only
IndexReader using the static open methods that accept the
boolean readOnly parameter. Such a reader has better
concurrency as it's not necessary to synchronize on the
isDeleted method. You must explicitly specify false
if you want to make changes with the resulting IndexReader.
<p/>
<a name="thread-safety"></a><p/><b>NOTE</b>: <see cref="T:Lucene.Net.Index.IndexReader"/>
instances are completely thread
safe, meaning multiple threads can call any of its methods,
concurrently. If your application requires external
synchronization, you should <b>not</b> synchronize on the
<c>IndexReader</c> instance; use your own
(non-Lucene) objects instead.
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexReader.IncRef">
<summary> Expert: increments the refCount of this IndexReader
instance. RefCounts are used to determine when a
reader can be closed safely, i.e. as soon as there are
no more references. Be sure to always call a
corresponding <see cref="M:Lucene.Net.Index.IndexReader.DecRef"/>, in a finally clause;
otherwise the reader may never be closed. Note that
<see cref="M:Lucene.Net.Index.IndexReader.Close"/> simply calls decRef(), which means that
the IndexReader will not really be closed until <see cref="M:Lucene.Net.Index.IndexReader.DecRef"/>
has been called for all outstanding
references.
</summary>
<seealso cref="M:Lucene.Net.Index.IndexReader.DecRef">
</seealso>
</member>
<member name="M:Lucene.Net.Index.IndexReader.DecRef">
<summary> Expert: decreases the refCount of this IndexReader
instance. If the refCount drops to 0, then pending
changes (if any) are committed to the index and this
reader is closed.
</summary>
<throws> IOException in case an IOException occurs in commit() or doClose() </throws>
<seealso cref="M:Lucene.Net.Index.IndexReader.IncRef">
</seealso>
</member>
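The IncRef/DecRef contract above suggests the following pattern (a sketch; the work done while the reader is pinned is hypothetical):

```csharp
// Pin the reader so another thread's Close cannot free it
// while we are still using it.
reader.IncRef();
try
{
    // ... use the reader: search, enumerate terms, etc.
}
finally
{
    // Always release in a finally clause; the reader is only
    // really closed once every outstanding reference is released.
    reader.DecRef();
}
```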
<member name="M:Lucene.Net.Index.IndexReader.EnsureOpen">
<throws> AlreadyClosedException if this IndexReader is closed </throws>
</member>
<member name="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Store.Directory,System.Boolean)">
<summary>Returns an IndexReader reading the index in the given
Directory. You should pass readOnly=true, since it
gives much better concurrent performance, unless you
intend to do write operations (delete documents or
change norms) with the reader.
</summary>
<param name="directory">the index directory</param>
<param name="readOnly">true if no changes (deletions, norms) will be made with this IndexReader</param>
<exception cref="T:Lucene.Net.Index.CorruptIndexException">CorruptIndexException if the index is corrupt</exception>
<exception cref="T:System.IO.IOException">IOException if there is a low-level IO error</exception>
</member>
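For example (a sketch; the `FSDirectory.Open(string)` overload and the `NumDocs()` call are assumptions based on this API family):

```csharp
// Open a read-only reader for best concurrent search performance.
// Pass readOnly = false only if you intend to delete documents
// or change norms with this reader.
using (var dir = Lucene.Net.Store.FSDirectory.Open("/path/to/index"))
using (var reader = Lucene.Net.Index.IndexReader.Open(dir, true))
{
    System.Console.WriteLine("docs: " + reader.NumDocs());
}
```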
<member name="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Index.IndexCommit,System.Boolean)">
<summary>Expert: returns an IndexReader reading the index in the given
<see cref="P:Lucene.Net.Index.IndexReader.IndexCommit"/>. You should pass readOnly=true, since it
gives much better concurrent performance, unless you
intend to do write operations (delete documents or
change norms) with the reader.
</summary>
<param name="commit">the commit point to open
</param>
<param name="readOnly">true if no changes (deletions, norms) will be made with this IndexReader
</param>
<throws> CorruptIndexException if the index is corrupt </throws>
<exception cref="T:System.IO.IOException">If there is a low-level IO error</exception>
</member>
<member name="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Store.Directory,Lucene.Net.Index.IndexDeletionPolicy,System.Boolean)">
<summary>Expert: returns an IndexReader reading the index in
the given Directory, with a custom <see cref="T:Lucene.Net.Index.IndexDeletionPolicy"/>
. You should pass readOnly=true,
since it gives much better concurrent performance,
unless you intend to do write operations (delete
documents or change norms) with the reader.
</summary>
<param name="directory">the index directory
</param>
<param name="deletionPolicy">a custom deletion policy (only used
if you use this reader to perform deletes or to set
norms); see <see cref="T:Lucene.Net.Index.IndexWriter"/> for details.
</param>
<param name="readOnly">true if no changes (deletions, norms) will be made with this IndexReader
</param>
<throws> CorruptIndexException if the index is corrupt </throws>
<exception cref="T:System.IO.IOException">If there is a low-level IO error</exception>
</member>
<member name="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Store.Directory,Lucene.Net.Index.IndexDeletionPolicy,System.Boolean,System.Int32)">
<summary>Expert: returns an IndexReader reading the index in
the given Directory, with a custom <see cref="T:Lucene.Net.Index.IndexDeletionPolicy"/>
. You should pass readOnly=true,
since it gives much better concurrent performance,
unless you intend to do write operations (delete
documents or change norms) with the reader.
</summary>
<param name="directory">the index directory
</param>
<param name="deletionPolicy">a custom deletion policy (only used
if you use this reader to perform deletes or to set
norms); see <see cref="T:Lucene.Net.Index.IndexWriter"/> for details.
</param>
<param name="readOnly">true if no changes (deletions, norms) will be made with this IndexReader
</param>
<param name="termInfosIndexDivisor">Subsamples which indexed
terms are loaded into RAM. This has the same effect as <see>
<cref>IndexWriter.SetTermIndexInterval</cref>
</see> except that setting
must be done at indexing time while this setting can be
set per reader. When set to N, then one in every
N*termIndexInterval terms in the index is loaded into
memory. By setting this to a value > 1 you can reduce
memory usage, at the expense of higher latency when
loading a TermInfo. The default value is 1. Set this
to -1 to skip loading the terms index entirely.
</param>
<throws> CorruptIndexException if the index is corrupt </throws>
<exception cref="T:System.IO.IOException">If there is a low-level IO error</exception>
</member>
<member name="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Index.IndexCommit,Lucene.Net.Index.IndexDeletionPolicy,System.Boolean)">
<summary>Expert: returns an IndexReader reading the index in
the given Directory, using a specific commit and with
a custom <see cref="T:Lucene.Net.Index.IndexDeletionPolicy"/>. You should pass
readOnly=true, since it gives much better concurrent
performance, unless you intend to do write operations
(delete documents or change norms) with the reader.
</summary>
<param name="commit">the specific <see cref="P:Lucene.Net.Index.IndexReader.IndexCommit"/> to open;
see <see cref="M:Lucene.Net.Index.IndexReader.ListCommits(Lucene.Net.Store.Directory)"/> to list all commits
in a directory
</param>
<param name="deletionPolicy">a custom deletion policy (only used
if you use this reader to perform deletes or to set
norms); see <see cref="T:Lucene.Net.Index.IndexWriter"/> for details.
</param>
<param name="readOnly">true if no changes (deletions, norms) will be made with this IndexReader
</param>
<throws> CorruptIndexException if the index is corrupt </throws>
<exception cref="T:System.IO.IOException">If there is a low-level IO error</exception>
</member>
<member name="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Index.IndexCommit,Lucene.Net.Index.IndexDeletionPolicy,System.Boolean,System.Int32)">
<summary>Expert: returns an IndexReader reading the index in
the given Directory, using a specific commit and with
a custom <see cref="T:Lucene.Net.Index.IndexDeletionPolicy"/>. You should pass
readOnly=true, since it gives much better concurrent
performance, unless you intend to do write operations
(delete documents or change norms) with the reader.
</summary>
<param name="commit">the specific <see cref="P:Lucene.Net.Index.IndexReader.IndexCommit"/> to open;
see <see cref="M:Lucene.Net.Index.IndexReader.ListCommits(Lucene.Net.Store.Directory)"/> to list all commits
in a directory
</param>
<param name="deletionPolicy">a custom deletion policy (only used
if you use this reader to perform deletes or to set
norms); see <see cref="T:Lucene.Net.Index.IndexWriter"/> for details.
</param>
<param name="readOnly">true if no changes (deletions, norms) will be made with this IndexReader
</param>
<param name="termInfosIndexDivisor">Subsamples which indexed
terms are loaded into RAM. This has the same effect as <see>
<cref>IndexWriter.SetTermIndexInterval</cref>
</see> except that setting
must be done at indexing time while this setting can be
set per reader. When set to N, then one in every
N*termIndexInterval terms in the index is loaded into
memory. By setting this to a value > 1 you can reduce
memory usage, at the expense of higher latency when
loading a TermInfo. The default value is 1. Set this
to -1 to skip loading the terms index entirely.
</param>
<throws> CorruptIndexException if the index is corrupt </throws>
<exception cref="T:System.IO.IOException">If there is a low-level IO error</exception>
</member>
<member name="M:Lucene.Net.Index.IndexReader.Reopen">
<summary> Refreshes an IndexReader if the index has changed since this instance
was (re)opened.
<p/>
Opening an IndexReader is an expensive operation. This method can be used
to refresh an existing IndexReader to reduce these costs. This method
tries to only load segments that have changed or were created after the
IndexReader was (re)opened.
<p/>
If the index has not changed since this instance was (re)opened, then this
call is a NOOP and returns this instance. Otherwise, a new instance is
returned. The old instance is <b>not</b> closed and remains usable.<br/>
<p/>
If the reader is reopened, even though they share
resources internally, it's safe to make changes
(deletions, norms) with the new reader. All shared
mutable state obeys "copy on write" semantics to ensure
the changes are not seen by other readers.
<p/>
You can determine whether a reader was actually reopened by comparing the
old instance with the instance returned by this method:
<code>
IndexReader reader = ...
...
IndexReader newReader = reader.Reopen();
if (newReader != reader) {
    ... // reader was reopened
    reader.Close();
}
reader = newReader;
...
</code>
Be sure to synchronize that code so that other threads,
if present, can never use reader after it has been
closed and before it's switched to newReader.
<p/><b>NOTE</b>: If this reader is a near real-time
reader (obtained from <see cref="M:Lucene.Net.Index.IndexWriter.GetReader"/>),
Reopen() will simply call writer.GetReader() again for
you, though this may change in the future.
</summary>
<throws> CorruptIndexException if the index is corrupt </throws>
<exception cref="T:System.IO.IOException">If there is a low-level IO error</exception>
</member>
<member name="M:Lucene.Net.Index.IndexReader.Reopen(System.Boolean)">
<summary>Just like <see cref="M:Lucene.Net.Index.IndexReader.Reopen"/>, except you can change the
readOnly setting of the original reader. If the index is
unchanged but the readOnly setting is different, then a new reader
will be returned.
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexReader.Reopen(Lucene.Net.Index.IndexCommit)">
<summary>Expert: reopen this reader on a specific commit point.
This always returns a readOnly reader. If the
specified commit point matches what this reader is
already on, and this reader is already readOnly, then
this same instance is returned; if it is not already
readOnly, a readOnly clone is returned.
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexReader.Clone">
<summary> Efficiently clones the IndexReader (sharing most
internal state).
<p/>
On cloning a reader with pending changes (deletions,
norms), the original reader transfers its write lock to
the cloned reader. This means only the cloned reader
may make further changes to the index, and commit the
changes to the index on close, but the old reader still
reflects all changes made up until it was cloned.
<p/>
Like <see cref="M:Lucene.Net.Index.IndexReader.Reopen"/>, it's safe to make changes to
either the original or the cloned reader: all shared
mutable state obeys "copy on write" semantics to ensure
the changes are not seen by other readers.
<p/>
</summary>
<throws> CorruptIndexException if the index is corrupt </throws>
<exception cref="T:System.IO.IOException">If there is a low-level IO error</exception>
</member>
<member name="M:Lucene.Net.Index.IndexReader.Clone(System.Boolean)">
<summary> Clones the IndexReader and optionally changes readOnly. A readOnly
reader cannot open a writeable reader.
</summary>
<throws> CorruptIndexException if the index is corrupt </throws>
<exception cref="T:System.IO.IOException">If there is a low-level IO error</exception>
</member>
<member name="M:Lucene.Net.Index.IndexReader.Directory">
<summary> Returns the directory associated with this index. The default
implementation returns the directory specified by subclasses when
delegating to the IndexReader(Directory) constructor, or throws an
UnsupportedOperationException if one was not specified.
</summary>
<throws> UnsupportedOperationException if no directory </throws>
</member>
<member name="M:Lucene.Net.Index.IndexReader.LastModified(Lucene.Net.Store.Directory)">
<summary> Returns the time the index in the named directory was last modified.
Do not use this to check whether the reader is still up-to-date, use
<see cref="M:Lucene.Net.Index.IndexReader.IsCurrent"/> instead.
</summary>
<throws> CorruptIndexException if the index is corrupt </throws>
<exception cref="T:System.IO.IOException">If there is a low-level IO error</exception>
</member>
<member name="M:Lucene.Net.Index.IndexReader.GetCurrentVersion(Lucene.Net.Store.Directory)">
<summary> Reads version number from segments files. The version number is
initialized with a timestamp and then increased by one for each change of
the index.
</summary>
<param name="directory">where the index resides.
</param>
<returns> version number.
</returns>
<throws> CorruptIndexException if the index is corrupt </throws>
<exception cref="T:System.IO.IOException">If there is a low-level IO error</exception>
</member>
<member name="M:Lucene.Net.Index.IndexReader.GetCommitUserData(Lucene.Net.Store.Directory)">
<summary> Reads commitUserData, previously passed to
<see cref="M:Lucene.Net.Index.IndexWriter.Commit(System.Collections.Generic.IDictionary{System.String,System.String})"/>,
from current index segments file. This will return null if
<see cref="M:Lucene.Net.Index.IndexWriter.Commit(System.Collections.Generic.IDictionary{System.String,System.String})"/>
has never been called for this index.
</summary>
<param name="directory">where the index resides.
</param>
<returns> commit userData.
</returns>
<throws> CorruptIndexException if the index is corrupt </throws>
<exception cref="T:System.IO.IOException">If there is a low-level IO error</exception>
<seealso cref="M:Lucene.Net.Index.IndexReader.GetCommitUserData(Lucene.Net.Store.Directory)">
</seealso>
</member>
<member name="M:Lucene.Net.Index.IndexReader.IsCurrent">
<summary> Check whether any new changes have occurred to the index since this
reader was opened.
<p/>
If this reader is based on a Directory (ie, was created by calling
<see>
<cref>Open(Store.Directory)</cref>
</see>, or <see cref="M:Lucene.Net.Index.IndexReader.Reopen"/> on a reader based on a Directory), then
this method checks if any further commits (see <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/>)
have occurred in that directory.
<p/>
<p/>
If instead this reader is a near real-time reader (ie, obtained by a call
to <see cref="M:Lucene.Net.Index.IndexWriter.GetReader"/>, or by calling <see cref="M:Lucene.Net.Index.IndexReader.Reopen"/> on a near
real-time reader), then this method checks if either a new commit has
occurred, or any new uncommitted changes have taken place via the writer.
Note that even if the writer has only performed merging, this method will
still return false.
<p/>
<p/>
In any event, if this returns false, you should call <see cref="M:Lucene.Net.Index.IndexReader.Reopen"/> to
get a new reader that sees the changes.
<p/>
</summary>
<throws> CorruptIndexException if the index is corrupt </throws>
<exception cref="T:System.IO.IOException">If there is a low-level IO error</exception>
<throws> UnsupportedOperationException unless overridden in subclass </throws>
</member>
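Combining IsCurrent with Reopen, a periodic refresh might look like this sketch (the surrounding control flow is hypothetical):

```csharp
// Refresh the reader only when the index has actually changed.
if (!reader.IsCurrent())
{
    var newReader = reader.Reopen();
    if (newReader != reader)
    {
        reader.Close();    // old instance is not closed by Reopen
        reader = newReader;
    }
}
```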
<member name="M:Lucene.Net.Index.IndexReader.IsOptimized">
<summary> Checks if the index is optimized (if it has a single segment and
no deletions). Not implemented in the IndexReader base class.
</summary>
<returns> <c>true</c> if the index is optimized; <c>false</c> otherwise </returns>
<throws> UnsupportedOperationException unless overridden in subclass </throws>
</member>
<member name="M:Lucene.Net.Index.IndexReader.GetTermFreqVectors(System.Int32)">
<summary> Return an array of term frequency vectors for the specified document.
The array contains a vector for each vectorized field in the document.
Each vector contains terms and frequencies for all terms in a given vectorized field.
If no such fields existed, the method returns null. The term vectors that are
returned may either be of type <see cref="T:Lucene.Net.Index.ITermFreqVector"/>
or of type <see cref="T:Lucene.Net.Index.TermPositionVector"/> if
positions or offsets have been stored.
</summary>
<param name="docNumber">document for which term frequency vectors are returned
</param>
<returns> array of term frequency vectors. May be null if no term vectors have been
stored for the specified document.
</returns>
<throws> IOException if index cannot be accessed </throws>
<seealso cref="T:Lucene.Net.Documents.Field.TermVector">
</seealso>
</member>
<member name="M:Lucene.Net.Index.IndexReader.GetTermFreqVector(System.Int32,System.String)">
<summary> Return a term frequency vector for the specified document and field. The
returned vector contains terms and frequencies for the terms in
the specified field of this document, if the field had the storeTermVector
flag set. If termvectors had been stored with positions or offsets, a
<see cref="T:Lucene.Net.Index.TermPositionVector"/> is returned.
</summary>
<param name="docNumber">document for which the term frequency vector is returned
</param>
<param name="field">field for which the term frequency vector is returned.
</param>
<returns> term frequency vector. May be null if the field does not exist in the specified
document or the term vector was not stored.
</returns>
<throws> IOException if index cannot be accessed </throws>
<seealso cref="T:Lucene.Net.Documents.Field.TermVector">
</seealso>
</member>
<member name="M:Lucene.Net.Index.IndexReader.GetTermFreqVector(System.Int32,System.String,Lucene.Net.Index.TermVectorMapper)">
<summary> Load the Term Vector into a user-defined data structure instead of relying on the parallel arrays of
the <see cref="T:Lucene.Net.Index.ITermFreqVector"/>.
</summary>
<param name="docNumber">The number of the document to load the vector for
</param>
<param name="field">The name of the field to load
</param>
<param name="mapper">The <see cref="T:Lucene.Net.Index.TermVectorMapper"/> to process the vector. Must not be null
</param>
<throws> IOException if term vectors cannot be accessed or if they do not exist on the field and doc. specified. </throws>
</member>
<member name="M:Lucene.Net.Index.IndexReader.GetTermFreqVector(System.Int32,Lucene.Net.Index.TermVectorMapper)">
<summary> Map all the term vectors for all fields in a Document</summary>
<param name="docNumber">The number of the document to load the vector for
</param>
<param name="mapper">The <see cref="T:Lucene.Net.Index.TermVectorMapper"/> to process the vector. Must not be null
</param>
<throws> IOException if term vectors cannot be accessed or if they do not exist on the field and doc. specified. </throws>
</member>
<member name="M:Lucene.Net.Index.IndexReader.IndexExists(Lucene.Net.Store.Directory)">
<summary> Returns <c>true</c> if an index exists at the specified directory.
If the directory does not exist or there is no index in it, <c>false</c> is returned.
</summary>
<param name="directory">the directory to check for an index
</param>
<returns> <c>true</c> if an index exists; <c>false</c> otherwise
</returns>
<throws> IOException if there is a problem with accessing the index </throws>
</member>
<member name="M:Lucene.Net.Index.IndexReader.NumDocs">
<summary>Returns the number of documents in this index. </summary>
</member>
<member name="M:Lucene.Net.Index.IndexReader.Document(System.Int32)">
<summary> Returns the stored fields of the <c>n</c><sup>th</sup>
<c>Document</c> in this index.
<p/>
<b>NOTE:</b> for performance reasons, this method does not check if the
requested document is deleted, and therefore asking for a deleted document
may yield unspecified results. Usually this is not required, however you
can call <see cref="M:Lucene.Net.Index.IndexReader.IsDeleted(System.Int32)"/> with the requested document ID to verify
the document is not deleted.
</summary>
<throws> CorruptIndexException if the index is corrupt </throws>
<exception cref="T:System.IO.IOException">If there is a low-level IO error</exception>
</member>
<member name="M:Lucene.Net.Index.IndexReader.Document(System.Int32,Lucene.Net.Documents.FieldSelector)">
<summary> Get the <see cref="T:Lucene.Net.Documents.Document"/> at the <c>n</c>
<sup>th</sup> position. The <see cref="T:Lucene.Net.Documents.FieldSelector"/> may be used to determine
what <see cref="T:Lucene.Net.Documents.Field"/>s to load and how they should
be loaded. <b>NOTE:</b> If this Reader (more specifically, the underlying
<c>FieldsReader</c>) is closed before the lazy
<see cref="T:Lucene.Net.Documents.Field"/> is loaded an exception may be
thrown. If you want the value of a lazy
<see cref="T:Lucene.Net.Documents.Field"/> to be available after closing you
must explicitly load it or fetch the Document again with a new loader.
<p/>
<b>NOTE:</b> for performance reasons, this method does not check if the
requested document is deleted, and therefore asking for a deleted document
may yield unspecified results. Usually this is not required, however you
can call <see cref="M:Lucene.Net.Index.IndexReader.IsDeleted(System.Int32)"/> with the requested document ID to verify
the document is not deleted.
</summary>
<param name="n">Get the document at the <c>n</c><sup>th</sup> position
</param>
<param name="fieldSelector">The <see cref="T:Lucene.Net.Documents.FieldSelector"/> to use to determine what
Fields should be loaded on the Document. May be null, in which case
all Fields will be loaded.
</param>
<returns> The stored fields of the
<see cref="T:Lucene.Net.Documents.Document"/> at the nth position
</returns>
<throws> CorruptIndexException if the index is corrupt </throws>
<exception cref="T:System.IO.IOException">If there is a low-level IO error</exception>
<seealso cref="T:Lucene.Net.Documents.IFieldable">
</seealso>
<seealso cref="T:Lucene.Net.Documents.FieldSelector">
</seealso>
<seealso cref="T:Lucene.Net.Documents.SetBasedFieldSelector">
</seealso>
<seealso cref="T:Lucene.Net.Documents.LoadFirstFieldSelector">
</seealso>
</member>
<member name="M:Lucene.Net.Index.IndexReader.IsDeleted(System.Int32)">
<summary>Returns true if document <i>n</i> has been deleted </summary>
</member>
<member name="M:Lucene.Net.Index.IndexReader.HasNorms(System.String)">
<summary>Returns true if there are norms stored for this field. </summary>
</member>
<member name="M:Lucene.Net.Index.IndexReader.Norms(System.String)">
<summary>
Returns the byte-encoded normalization factor for the named field of
every document. This is used by the search code to score documents.
</summary>
<seealso cref="P:Lucene.Net.Documents.AbstractField.Boost"/>
</member>
<member name="M:Lucene.Net.Index.IndexReader.Norms(System.String,System.Byte[],System.Int32)">
<summary>
Reads the byte-encoded normalization factor for the named field of every
document. This is used by the search code to score documents.
</summary>
<seealso cref="P:Lucene.Net.Documents.AbstractField.Boost"/>
</member>
<member name="M:Lucene.Net.Index.IndexReader.SetNorm(System.Int32,System.String,System.Byte)">
<summary>Expert: Resets the normalization factor for the named field of the named
document. The norm represents the product of the field's <see cref="P:Lucene.Net.Documents.IFieldable.Boost">boost</see>
and its <see cref="M:Lucene.Net.Search.Similarity.LengthNorm(System.String,System.Int32)">length normalization</see>. Thus, to preserve the length normalization
values when resetting this, one should base the new value upon the old.
<b>NOTE:</b> If this field does not store norms, then
this method call will silently do nothing.
</summary>
<seealso cref="M:Lucene.Net.Index.IndexReader.Norms(System.String)"/>
<seealso cref="M:Lucene.Net.Search.Similarity.DecodeNorm(System.Byte)"/>
<exception cref="T:Lucene.Net.Index.StaleReaderException">
If the index has changed since this reader was opened
</exception>
<exception cref="T:Lucene.Net.Index.CorruptIndexException">
If the index is corrupt
</exception>
<exception cref="T:Lucene.Net.Store.LockObtainFailedException">
If another writer has this index open (<c>write.lock</c> could not be obtained)
</exception>
<exception cref="T:System.IO.IOException">
If there is a low-level IO error
</exception>
</member>
<member name="M:Lucene.Net.Index.IndexReader.DoSetNorm(System.Int32,System.String,System.Byte)">
<summary>Implements setNorm in subclass.</summary>
</member>
<member name="M:Lucene.Net.Index.IndexReader.SetNorm(System.Int32,System.String,System.Single)">
<summary>
Expert: Resets the normalization factor for the named field of the named document.
</summary>
<seealso cref="M:Lucene.Net.Index.IndexReader.Norms(System.String)"/>
<seealso cref="M:Lucene.Net.Search.Similarity.DecodeNorm(System.Byte)"/>
<exception cref="T:Lucene.Net.Index.StaleReaderException">
If the index has changed since this reader was opened
</exception>
<exception cref="T:Lucene.Net.Index.CorruptIndexException">
If the index is corrupt
</exception>
<exception cref="T:Lucene.Net.Store.LockObtainFailedException">
If another writer has this index open (<c>write.lock</c> could not be obtained)
</exception>
<exception cref="T:System.IO.IOException">
If there is a low-level IO error
</exception>
</member>
<member name="M:Lucene.Net.Index.IndexReader.Terms">
<summary>Returns an enumeration of all the terms in the index. The
enumeration is ordered by Term.compareTo(). Each term is greater
than all that precede it in the enumeration. Note that after
calling terms(), <see cref="M:Lucene.Net.Index.TermEnum.Next"/> must be called
on the resulting enumeration before calling other methods such as
<see cref="P:Lucene.Net.Index.TermEnum.Term"/>.
</summary>
<exception cref="T:System.IO.IOException">
If there is a low-level IO error
</exception>
</member>
<member name="M:Lucene.Net.Index.IndexReader.Terms(Lucene.Net.Index.Term)">
<summary>Returns an enumeration of all terms starting at a given term. If
the given term does not exist, the enumeration is positioned at the
first term greater than the supplied term. The enumeration is
ordered by Term.compareTo(). Each term is greater than all that
precede it in the enumeration.
</summary>
<exception cref="T:System.IO.IOException">
If there is a low-level IO error
</exception>
</member>
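<!--
The seek contract above (position the enumeration at the first term greater than or
equal to the supplied term, in Term.compareTo() order) can be sketched with a plain
sorted list. This is an illustrative model, not the Lucene.Net API; the function name
is hypothetical and the real enumeration is a lazy TermEnum, not a list slice.

```python
import bisect

def seek_terms(sorted_terms, target):
    """Enumerate terms starting at `target`; if `target` is absent,
    start at the first term greater than it (cf. Terms(Term))."""
    # bisect_left returns the first index whose term compares >= target
    i = bisect.bisect_left(sorted_terms, target)
    return sorted_terms[i:]

terms = ["apple", "banana", "cherry", "date"]
print(seek_terms(terms, "banana"))     # exact match: enumeration starts at it
print(seek_terms(terms, "blueberry"))  # absent: starts at "cherry"
```
-->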
<member name="M:Lucene.Net.Index.IndexReader.DocFreq(Lucene.Net.Index.Term)">
<summary>Returns the number of documents containing the term <c>t</c>.</summary>
<exception cref="T:System.IO.IOException">If there is a low-level IO error</exception>
</member>
<member name="M:Lucene.Net.Index.IndexReader.TermDocs(Lucene.Net.Index.Term)">
<summary>Returns an enumeration of all the documents which contain
<c>term</c>. For each document, the document number and the frequency of
the term in that document are provided, for use in
search scoring. If term is null, then all non-deleted
docs are returned with freq=1.
Thus, this method implements the mapping:
<p/><list>
Term =&gt; &lt;docNum, freq&gt;<sup>*</sup>
</list>
<p/>The enumeration is ordered by document number. Each document number
is greater than all that precede it in the enumeration.
</summary>
<exception cref="T:System.IO.IOException">If there is a low-level IO error</exception>
</member>
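<!--
The Term to (docNum, freq)* mapping above can be sketched as an inverted index
built from tokenized documents. This is a conceptual model only; names are
hypothetical and the real posting lists are read from index files, not built in memory.

```python
from collections import defaultdict

def build_postings(docs):
    """Build the Term => (docNum, freq)* mapping described above,
    with each posting list ordered by document number."""
    postings = defaultdict(dict)
    for doc_num, text in enumerate(docs):
        for term in text.split():
            postings[term][doc_num] = postings[term].get(doc_num, 0) + 1
    return {t: sorted(freqs.items()) for t, freqs in postings.items()}

docs = ["to be or not to be", "be happy"]
print(build_postings(docs)["be"])  # [(0, 2), (1, 1)]
```
-->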
<member name="M:Lucene.Net.Index.IndexReader.TermDocs">
<summary>Returns an unpositioned <see cref="T:Lucene.Net.Index.TermDocs"/> enumerator.</summary>
<exception cref="T:System.IO.IOException">If there is a low-level IO error</exception>
</member>
<member name="M:Lucene.Net.Index.IndexReader.TermPositions(Lucene.Net.Index.Term)">
<summary>Returns an enumeration of all the documents which contain
<c>term</c>. For each document, in addition to the document number
and frequency of the term in that document, a list of all of the ordinal
positions of the term in the document is available. Thus, this method
implements the mapping:
<p/><list>
Term =&gt; &lt;docNum, freq,
&lt;pos<sub>1</sub>, pos<sub>2</sub>, ...
pos<sub>freq-1</sub>&gt;
&gt;<sup>*</sup>
</list>
<p/> This positional information facilitates phrase and proximity searching.
<p/>The enumeration is ordered by document number. Each document number is
greater than all that precede it in the enumeration.
</summary>
<exception cref="T:System.IO.IOException">If there is a low-level IO error</exception>
</member>
<member name="M:Lucene.Net.Index.IndexReader.TermPositions">
<summary>Returns an unpositioned <see cref="T:Lucene.Net.Index.TermPositions"/> enumerator.</summary>
<exception cref="T:System.IO.IOException">If there is a low-level IO error</exception>
</member>
<member name="M:Lucene.Net.Index.IndexReader.DeleteDocument(System.Int32)">
<summary>
Deletes the document numbered <c>docNum</c>. Once a document is
deleted it will not appear in TermDocs or TermPositions enumerations.
Attempts to read its field with the <see cref="M:Lucene.Net.Index.IndexReader.Document(System.Int32)"/>
method will result in an error. The presence of this document may still be
reflected in the <see cref="M:Lucene.Net.Index.IndexReader.DocFreq(Lucene.Net.Index.Term)"/> statistic, though
this will be corrected eventually as the index is further modified.
</summary>
<exception cref="T:Lucene.Net.Index.StaleReaderException">
If the index has changed since this reader was opened
</exception>
<exception cref="T:Lucene.Net.Index.CorruptIndexException">If the index is corrupt</exception>
<exception cref="T:Lucene.Net.Store.LockObtainFailedException">
If another writer has this index open (<c>write.lock</c> could not be obtained)
</exception>
<exception cref="T:System.IO.IOException">If there is a low-level IO error</exception>
</member>
<member name="M:Lucene.Net.Index.IndexReader.DoDelete(System.Int32)">
<summary>Implements deletion of the document numbered <c>docNum</c>.
Applications should call <see cref="M:Lucene.Net.Index.IndexReader.DeleteDocument(System.Int32)"/> or <see cref="M:Lucene.Net.Index.IndexReader.DeleteDocuments(Lucene.Net.Index.Term)"/>.
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexReader.DeleteDocuments(Lucene.Net.Index.Term)">
<summary>
Deletes all documents that have a given <c>term</c> indexed.
This is useful if one uses a document field to hold a unique ID string for
the document. Then to delete such a document, one merely constructs a
term with the appropriate field and the unique ID string as its text and
passes it to this method.
See <see cref="M:Lucene.Net.Index.IndexReader.DeleteDocument(System.Int32)"/> for information about when this deletion will
become effective.
</summary>
<returns>The number of documents deleted</returns>
<exception cref="T:Lucene.Net.Index.StaleReaderException">
If the index has changed since this reader was opened
</exception>
<exception cref="T:Lucene.Net.Index.CorruptIndexException">If the index is corrupt</exception>
<exception cref="T:Lucene.Net.Store.LockObtainFailedException">
If another writer has this index open (<c>write.lock</c> could not be obtained)
</exception>
<exception cref="T:System.IO.IOException">If there is a low-level IO error</exception>
</member>
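<!--
Deleting by a unique-ID term, as the summary above suggests, amounts to walking
that term's posting list and marking each matching document deleted. The sketch
below is a hypothetical in-memory model of that behavior, not the Lucene.Net API.

```python
def delete_documents(postings, deleted, term):
    """Mark every document whose posting list contains `term` as deleted
    and return the number of documents newly deleted (cf. DeleteDocuments)."""
    count = 0
    for doc_num, _freq in postings.get(term, []):
        if doc_num not in deleted:
            deleted.add(doc_num)
            count += 1
    return count

# one unique-ID term per document, as the summary describes
postings = {("id", "doc-42"): [(7, 1)], ("id", "doc-43"): [(8, 1)]}
deleted = set()
print(delete_documents(postings, deleted, ("id", "doc-42")))  # 1
print(delete_documents(postings, deleted, ("id", "doc-42")))  # 0, already deleted
```
-->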
<member name="M:Lucene.Net.Index.IndexReader.UndeleteAll">
<summary>Undeletes all documents currently marked as deleted in this index.
</summary>
<exception cref="T:Lucene.Net.Index.StaleReaderException">
If the index has changed since this reader was opened
</exception>
<exception cref="T:Lucene.Net.Index.CorruptIndexException">If the index is corrupt</exception>
<exception cref="T:Lucene.Net.Store.LockObtainFailedException">
If another writer has this index open (<c>write.lock</c> could not be obtained)
</exception>
<exception cref="T:System.IO.IOException">If there is a low-level IO error</exception>
</member>
<member name="M:Lucene.Net.Index.IndexReader.DoUndeleteAll">
<summary>Implements actual undeleteAll() in subclass. </summary>
</member>
<member name="M:Lucene.Net.Index.IndexReader.AcquireWriteLock">
<summary>
Does nothing by default. Subclasses that require a write lock for
index modifications must implement this method.
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexReader.Flush">
<summary>Writes any pending changes (deletions, undeletions, norm updates) to the index files.</summary>
<exception cref="T:System.IO.IOException"/>
</member>
<member name="M:Lucene.Net.Index.IndexReader.Flush(System.Collections.Generic.IDictionary{System.String,System.String})">
<param name="commitUserData">Opaque Map (String -> String)
that's recorded into the segments file in the index,
and retrievable by <see cref="M:Lucene.Net.Index.IndexReader.GetCommitUserData(Lucene.Net.Store.Directory)"/>
</param>
<exception cref="T:System.IO.IOException"/>
</member>
<member name="M:Lucene.Net.Index.IndexReader.Commit">
<summary> Commit changes resulting from delete, undeleteAll, or
setNorm operations.
If an exception is hit, then either no changes or all
changes will have been committed to the index
(transactional semantics).
</summary>
<exception cref="T:System.IO.IOException">If there is a low-level IO error</exception>
</member>
<member name="M:Lucene.Net.Index.IndexReader.Commit(System.Collections.Generic.IDictionary{System.String,System.String})">
<summary> Commit changes resulting from delete, undeleteAll, or
setNorm operations.
If an exception is hit, then either no changes or all
changes will have been committed to the index
(transactional semantics).
</summary>
<exception cref="T:System.IO.IOException">If there is a low-level IO error</exception>
</member>
<member name="M:Lucene.Net.Index.IndexReader.DoCommit(System.Collections.Generic.IDictionary{System.String,System.String})">
<summary>Implements commit.</summary>
</member>
<member name="M:Lucene.Net.Index.IndexReader.Dispose">
<summary> Closes files associated with this index.
Also saves any new deletions to disk.
No other methods should be called after this has been called.
</summary>
<exception cref="T:System.IO.IOException">If there is a low-level IO error</exception>
</member>
<member name="M:Lucene.Net.Index.IndexReader.DoClose">
<summary>Implements close. </summary>
</member>
<member name="M:Lucene.Net.Index.IndexReader.GetFieldNames(Lucene.Net.Index.IndexReader.FieldOption)">
<summary> Get a list of unique field names that exist in this index and have the specified
field option information.
</summary>
<param name="fldOption">specifies which field option should be available for the returned fields
</param>
<returns> Collection of Strings indicating the names of the fields.
</returns>
<seealso cref="T:Lucene.Net.Index.IndexReader.FieldOption">
</seealso>
</member>
<member name="M:Lucene.Net.Index.IndexReader.Main(System.String[])">
<summary> Prints the filename and size of each file within a given compound file.
Add the -extract flag to extract files to the current working directory.
In order to make the extracted version of the index work, you have to copy
the segments file from the compound index into the directory where the extracted files are stored.
</summary>
<param name="args">Usage: Lucene.Net.Index.IndexReader [-extract] &lt;cfsfile&gt;
</param>
</member>
<member name="M:Lucene.Net.Index.IndexReader.ListCommits(Lucene.Net.Store.Directory)">
<summary>Returns all commit points that exist in the Directory.
Normally, because the default is <see cref="T:Lucene.Net.Index.KeepOnlyLastCommitDeletionPolicy"/>,
there would be only one commit point. But if you're using a
custom <see cref="T:Lucene.Net.Index.IndexDeletionPolicy"/>
then there could be many commits.
Once you have a given commit, you can open a reader on
it by calling <see cref="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Index.IndexCommit,System.Boolean)"/>.
There must be at least one commit in
the Directory, else this method throws <see cref="T:System.IO.IOException"/>.
Note that if a commit is in
progress while this method is running, that commit
may or may not be in the returned array.
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexReader.GetSequentialSubReaders">
<summary>Expert: returns the sequential sub readers that this
reader is logically composed of. For example,
IndexSearcher uses this API to drive searching by one
sub reader at a time. If this reader is not composed
of sequential child readers, it should return null.
If this method returns an empty array, that means this
reader is a null reader (for example a MultiReader
that has no sub readers).
<p/>
NOTE: You should not try using sub-readers returned by
this method to make any changes (setNorm, deleteDocument,
etc.). While this might succeed for one composite reader
(like MultiReader), it will most likely lead to index
corruption for other readers (like DirectoryReader obtained
through <see cref="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Store.Directory,System.Boolean)"/>). Use the parent reader directly.
</summary>
</member>
<member name="P:Lucene.Net.Index.IndexReader.RefCount">
<summary>Expert: returns the current refCount for this reader </summary>
</member>
<member name="P:Lucene.Net.Index.IndexReader.Version">
<summary> Version number when this IndexReader was opened. Not implemented in the
IndexReader base class.
<p/>
If this reader is based on a Directory (ie, was created by calling
<see cref="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Store.Directory,System.Boolean)"/>, or <see cref="M:Lucene.Net.Index.IndexReader.Reopen"/>
on a reader based on a Directory), then
this method returns the version recorded in the commit that the reader
opened. This version is advanced every time <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/> is
called.
<p/>
If instead this reader is a near real-time reader (ie, obtained by a call
to <see cref="M:Lucene.Net.Index.IndexWriter.GetReader"/>, or by calling <see cref="M:Lucene.Net.Index.IndexReader.Reopen"/> on a near
real-time reader), then this method returns the version of the last
commit done by the writer. Note that even as further changes are made
with the writer, the version will not change until a commit is
completed. Thus, you should not rely on this method to determine when a
near real-time reader should be opened. Use <see cref="M:Lucene.Net.Index.IndexReader.IsCurrent"/> instead.
<p/>
</summary>
<throws> UnsupportedOperationException unless overridden in subclass </throws>
</member>
<member name="P:Lucene.Net.Index.IndexReader.CommitUserData">
<summary> Retrieve the String userData optionally passed to
<see cref="M:Lucene.Net.Index.IndexWriter.Commit(System.Collections.Generic.IDictionary{System.String,System.String})"/>.
This will return null if
<see cref="M:Lucene.Net.Index.IndexWriter.Commit(System.Collections.Generic.IDictionary{System.String,System.String})"/>
has never been called for this index.
</summary>
<seealso cref="M:Lucene.Net.Index.IndexReader.GetCommitUserData(Lucene.Net.Store.Directory)">
</seealso>
</member>
<member name="P:Lucene.Net.Index.IndexReader.MaxDoc">
<summary>Returns one greater than the largest possible document number.
This may be used to, e.g., determine how big to allocate an array which
will have an element for every document number in an index.
</summary>
</member>
<member name="P:Lucene.Net.Index.IndexReader.NumDeletedDocs">
<summary>Returns the number of deleted documents. </summary>
</member>
<member name="P:Lucene.Net.Index.IndexReader.Item(System.Int32)">
<summary> Returns the stored fields of the <c>n</c><sup>th</sup>
<c>Document</c> in this index.
<p/>
<b>NOTE:</b> for performance reasons, this method does not check if the
requested document is deleted, and therefore asking for a deleted document
may yield unspecified results. Usually this is not required, however you
can call <see cref="M:Lucene.Net.Index.IndexReader.IsDeleted(System.Int32)"/> with the requested document ID to verify
the document is not deleted.
</summary>
<throws> CorruptIndexException if the index is corrupt </throws>
<exception cref="T:System.IO.IOException">If there is a low-level IO error</exception>
</member>
<member name="P:Lucene.Net.Index.IndexReader.HasDeletions">
<summary>Returns true if any documents have been deleted </summary>
</member>
<member name="P:Lucene.Net.Index.IndexReader.IndexCommit">
<summary> Expert: return the IndexCommit that this reader has
opened. This method is only implemented by those
readers that correspond to a Directory with its own
segments_N file.
<p/><b>WARNING</b>: this API is new and experimental and
may suddenly change.<p/>
</summary>
</member>
<member name="P:Lucene.Net.Index.IndexReader.FieldCacheKey">
<summary>Expert</summary>
</member>
<member name="P:Lucene.Net.Index.IndexReader.UniqueTermCount">
<summary>Returns the number of unique terms (across all fields)
in this reader.
This method returns long, even though internally
Lucene cannot handle more than 2^31 unique terms, for
a possible future when this limitation is removed.
</summary>
<throws> UnsupportedOperationException if this count
cannot be easily determined (e.g. Multi*Readers).
Instead, you should call <see cref="M:Lucene.Net.Index.IndexReader.GetSequentialSubReaders"/>
and ask each sub reader for
its unique term count.
</throws>
</member>
<member name="P:Lucene.Net.Index.IndexReader.TermInfosIndexDivisor">
<summary>
For IndexReader implementations that use
TermInfosReader to read terms, this returns the
current indexDivisor as specified when the reader was
opened.
</summary>
</member>
<member name="T:Lucene.Net.Index.SegmentInfos.FindSegmentsFile">
<summary> Utility class for executing code that needs to do
something with the current segments file. This is
necessary with lock-less commits because from the time
you locate the current segments file name, until you
actually open it, read its contents, or check modified
time, etc., it could have been deleted due to a writer
commit finishing.
</summary>
</member>
<member name="T:Lucene.Net.Index.SegmentInfos">
<summary> A collection of segmentInfo objects with methods for operating on
those segments in relation to the file system.
<p/><b>NOTE:</b> This API is new and still experimental
(subject to change suddenly in the next release)<p/>
</summary>
</member>
<member name="F:Lucene.Net.Index.SegmentInfos.FORMAT">
<summary>The file format version, a negative number. </summary>
</member>
<member name="F:Lucene.Net.Index.SegmentInfos.FORMAT_LOCKLESS">
<summary>This format adds details used for lockless commits. It differs
slightly from the previous format in that file names
are never re-used (write once). Instead, each file is
written to the next generation. For example,
segments_1, segments_2, etc. This allows us to not use
a commit lock. See <a
href="http://lucene.apache.org/java/docs/fileformats.html">file
formats</a> for details.
</summary>
</member>
<member name="F:Lucene.Net.Index.SegmentInfos.FORMAT_SINGLE_NORM_FILE">
<summary>This format adds a "hasSingleNormFile" flag into each segment info.
See <a href="http://issues.apache.org/jira/browse/LUCENE-756">LUCENE-756</a>
for details.
</summary>
</member>
<member name="F:Lucene.Net.Index.SegmentInfos.FORMAT_SHARED_DOC_STORE">
<summary>This format allows multiple segments to share a single
vectors and stored fields file.
</summary>
</member>
<member name="F:Lucene.Net.Index.SegmentInfos.FORMAT_CHECKSUM">
<summary>This format adds a checksum at the end of the file to
ensure all bytes were successfully written.
</summary>
</member>
<member name="F:Lucene.Net.Index.SegmentInfos.FORMAT_DEL_COUNT">
<summary>This format adds the deletion count for each segment.
This way IndexWriter can efficiently report numDocs().
</summary>
</member>
<member name="F:Lucene.Net.Index.SegmentInfos.FORMAT_HAS_PROX">
<summary>This format adds the boolean hasProx to record if any
fields in the segment store prox information (ie, have
omitTermFreqAndPositions==false)
</summary>
</member>
<member name="F:Lucene.Net.Index.SegmentInfos.FORMAT_USER_DATA">
<summary>This format adds optional commit userData (String) storage. </summary>
</member>
<member name="F:Lucene.Net.Index.SegmentInfos.FORMAT_DIAGNOSTICS">
<summary>This format adds optional per-segment String
diagnostics storage, and switches userData to Map
</summary>
</member>
<member name="F:Lucene.Net.Index.SegmentInfos.version">
<summary> Counts how often the index has been changed by adding or deleting docs.
Starting with the current time in milliseconds forces the creation of unique version numbers.
</summary>
</member>
<member name="F:Lucene.Net.Index.SegmentInfos.infoStream">
<summary> If non-null, information about loading segments_N files</summary>
<seealso cref="M:Lucene.Net.Index.SegmentInfos.SetInfoStream(System.IO.StreamWriter)">
</seealso>
</member>
<member name="M:Lucene.Net.Index.SegmentInfos.GetCurrentSegmentGeneration(System.String[])">
<summary> Get the generation (N) of the current segments_N file
from a list of files.
</summary>
<param name="files">array of file names to check
</param>
</member>
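<!--
Finding the current generation from a file listing can be sketched as a scan that
keeps the largest N seen among segments_N names. This is an illustrative model;
the base-36 suffix encoding matches what the Lucene sources use, but treat that
detail (and the names here) as assumptions, not the Lucene.Net API.

```python
def get_current_segment_generation(files):
    """Return the largest generation N among segments_N files in `files`,
    or -1 if no segments file is present (cf. GetCurrentSegmentGeneration)."""
    gen = -1
    for name in files or []:
        if name == "segments":
            gen = max(gen, 0)
        elif name.startswith("segments_"):
            gen = max(gen, int(name[len("segments_"):], 36))
    return gen

print(get_current_segment_generation(["_0.cfs", "segments_1", "segments_2"]))  # 2
```
-->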
<member name="M:Lucene.Net.Index.SegmentInfos.GetCurrentSegmentGeneration(Lucene.Net.Store.Directory)">
<summary> Get the generation (N) of the current segments_N file
in the directory.
</summary>
<param name="directory">directory to search for the latest segments_N file
</param>
</member>
<member name="M:Lucene.Net.Index.SegmentInfos.GetCurrentSegmentFileName(System.String[])">
<summary> Get the filename of the current segments_N file
from a list of files.
</summary>
<param name="files">array of file names to check
</param>
</member>
<member name="M:Lucene.Net.Index.SegmentInfos.GetCurrentSegmentFileName(Lucene.Net.Store.Directory)">
<summary> Get the filename of the current segments_N file
in the directory.
</summary>
<param name="directory">directory to search for the latest segments_N file
</param>
</member>
<member name="M:Lucene.Net.Index.SegmentInfos.GetCurrentSegmentFileName">
<summary> Get the segments_N filename in use by this segment infos.</summary>
</member>
<member name="M:Lucene.Net.Index.SegmentInfos.GenerationFromSegmentsFileName(System.String)">
<summary> Parse the generation off the segments file name and
return it.
</summary>
</member>
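<!--
Parsing the generation off a segments file name can be sketched as follows. The
plain "segments" name is generation 0; in the Lucene sources the _N suffix is
encoded in base 36, though treat that encoding (and these names) as assumptions
for illustration rather than the Lucene.Net API.

```python
SEGMENTS = "segments"

def generation_from_segments_file_name(name):
    """Parse the generation off a segments file name: plain "segments"
    is generation 0; "segments_N" carries N in base 36."""
    if name == SEGMENTS:
        return 0
    if name.startswith(SEGMENTS + "_"):
        return int(name[len(SEGMENTS) + 1:], 36)
    raise ValueError("not a segments file: " + name)

print(generation_from_segments_file_name("segments_a"))  # 10 (base 36)
```
-->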
<member name="M:Lucene.Net.Index.SegmentInfos.GetNextSegmentFileName">
<summary> Get the next segments_N filename that will be written.</summary>
</member>
<member name="M:Lucene.Net.Index.SegmentInfos.Read(Lucene.Net.Store.Directory,System.String)">
<summary> Read a particular segmentFileName. Note that this may
throw an IOException if a commit is in process.
</summary>
<param name="directory">directory containing the segments file
</param>
<param name="segmentFileName">segment file to load
</param>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> IOException if there is a low-level IO error </throws>
</member>
<member name="M:Lucene.Net.Index.SegmentInfos.Read(Lucene.Net.Store.Directory)">
<summary> This version of read uses the retry logic (for lock-less
commits) to find the right segments file to load.
</summary>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> IOException if there is a low-level IO error </throws>
</member>
<member name="M:Lucene.Net.Index.SegmentInfos.Clone">
<summary> Returns a copy of this instance, also copying each
SegmentInfo.
</summary>
</member>
<member name="M:Lucene.Net.Index.SegmentInfos.ReadCurrentVersion(Lucene.Net.Store.Directory)">
<summary> Current version number from segments file.</summary>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> IOException if there is a low-level IO error </throws>
</member>
<member name="M:Lucene.Net.Index.SegmentInfos.ReadCurrentUserData(Lucene.Net.Store.Directory)">
<summary> Returns userData from latest segments file</summary>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> IOException if there is a low-level IO error </throws>
</member>
<member name="M:Lucene.Net.Index.SegmentInfos.SetInfoStream(System.IO.StreamWriter)">
<summary>If non-null, information about retries when loading
the segments file will be printed to this.
</summary>
</member>
<member name="M:Lucene.Net.Index.SegmentInfos.Range(System.Int32,System.Int32)">
<summary> Returns a new SegmentInfos containing the SegmentInfo
instances in the specified range first (inclusive) to
last (exclusive), so the total number of segments returned
is last-first.
</summary>
</member>
<member name="M:Lucene.Net.Index.SegmentInfos.PrepareCommit(Lucene.Net.Store.Directory)">
<summary>Call this to start a commit. This writes the new
segments file, but writes an invalid checksum at the
end, so that it is not visible to readers. Once this
is called you must call <see cref="M:Lucene.Net.Index.SegmentInfos.FinishCommit(Lucene.Net.Store.Directory)"/> to complete
the commit or <see cref="M:Lucene.Net.Index.SegmentInfos.RollbackCommit(Lucene.Net.Store.Directory)"/> to abort it.
</summary>
</member>
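<!--
The invalid-checksum trick described above can be sketched generically: write the
file with a deliberately wrong trailer so verifying readers ignore it, then patch
in the real checksum to make the commit visible. Lucene's actual on-disk checksum
format differs; everything below is a hypothetical model of the idea only.

```python
import os
import tempfile
import zlib

def prepare_commit(path, payload):
    """Write the new segments data followed by a deliberately invalid
    checksum, so readers that verify the trailer ignore the file."""
    crc = zlib.crc32(payload) & 0xFFFFFFFF
    with open(path, "wb") as f:
        f.write(payload)
        f.write(((crc ^ 0xFFFFFFFF) & 0xFFFFFFFF).to_bytes(4, "big"))
    return crc

def finish_commit(path, payload, crc):
    """Overwrite the trailer with the real checksum; only now is the
    commit visible to verifying readers."""
    with open(path, "r+b") as f:
        f.seek(len(payload))
        f.write(crc.to_bytes(4, "big"))

def is_committed(path):
    """Check whether the trailing checksum matches the file contents."""
    with open(path, "rb") as f:
        data = f.read()
    return (zlib.crc32(data[:-4]) & 0xFFFFFFFF) == int.from_bytes(data[-4:], "big")

path = os.path.join(tempfile.mkdtemp(), "segments_1")
crc = prepare_commit(path, b"segment metadata")
print(is_committed(path))  # False until FinishCommit
finish_commit(path, b"segment metadata", crc)
print(is_committed(path))  # True
```
-->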
<member name="M:Lucene.Net.Index.SegmentInfos.Files(Lucene.Net.Store.Directory,System.Boolean)">
<summary>Returns all file names referenced by SegmentInfo
instances matching the provided Directory (ie files
associated with any "external" segments are skipped).
The returned collection is recomputed on each
invocation.
</summary>
</member>
<member name="M:Lucene.Net.Index.SegmentInfos.Commit(Lucene.Net.Store.Directory)">
<summary>Writes and syncs to the Directory dir, taking care to
remove the segments file on exception
</summary>
</member>
<member name="M:Lucene.Net.Index.SegmentInfos.Replace(Lucene.Net.Index.SegmentInfos)">
<summary>Replaces all segments in this instance, but keeps
generation, version, counter so that future commits
remain write once.
</summary>
</member>
<member name="M:Lucene.Net.Index.SegmentInfos.Equals(System.Object)">
<summary>
Simple brute force implementation.
If size is equal, compare items one by one.
</summary>
<param name="obj">SegmentInfos object to check equality for</param>
<returns>true if lists are equal, false otherwise</returns>
</member>
<member name="M:Lucene.Net.Index.SegmentInfos.GetHashCode">
<summary>
Calculate hash code of SegmentInfos
</summary>
<returns>hash code as in java version of ArrayList</returns>
</member>
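The Equals and GetHashCode members above follow the semantics of java.util.ArrayList: a size check followed by element-by-element comparison, and the base-31 hash fold from AbstractList. A minimal self-contained sketch (the class name ListIdentity and its helpers are illustrative, not part of Lucene):

```java
import java.util.Arrays;
import java.util.List;

// Illustrative stand-in for the list semantics the docs above describe:
// equality is a brute-force element-by-element comparison after a size
// check, and the hash code matches java.util.AbstractList's algorithm.
public class ListIdentity {

    // Brute force: if sizes are equal, compare items one by one.
    static boolean listsEqual(List<?> a, List<?> b) {
        if (a.size() != b.size()) return false;
        for (int i = 0; i < a.size(); i++) {
            if (!a.get(i).equals(b.get(i))) return false;
        }
        return true;
    }

    // ArrayList-style hash: start at 1, fold each element in base 31.
    static int listHash(List<?> a) {
        int h = 1;
        for (Object e : a) {
            h = 31 * h + (e == null ? 0 : e.hashCode());
        }
        return h;
    }
}
```

Because the fold is identical to AbstractList's, `listHash` agrees with `ArrayList.hashCode()` for the same elements, which is the property the GetHashCode doc alludes to.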
<member name="P:Lucene.Net.Index.SegmentInfos.Version">
<summary> Version number when this SegmentInfos was generated.</summary>
</member>
<member name="P:Lucene.Net.Index.SegmentInfos.DefaultGenFileRetryCount">
<summary> Advanced: Gets or sets how many times to try loading the
segments.gen file contents to determine current segment
generation. This file is only referenced when the
primary method (listing the directory) fails.
</summary>
</member>
<member name="P:Lucene.Net.Index.SegmentInfos.DefaultGenLookaheadCount">
<summary> Advanced: Gets or sets how many times to try incrementing the
gen when loading the segments file. This only runs if
the primary (listing directory) and secondary (opening
segments.gen file) methods fail to find the segments
file.
</summary>
</member>
<member name="P:Lucene.Net.Index.SegmentInfos.InfoStream">
<seealso cref="M:Lucene.Net.Index.SegmentInfos.SetInfoStream(System.IO.StreamWriter)">
</seealso>
</member>
<member name="T:Lucene.Net.Index.SegmentInfos.FindSegmentsFile">
<summary> Utility class for executing code that needs to do
something with the current segments file. This is
necessary with lock-less commits because from the time
you locate the current segments file name, until you
actually open it, read its contents, or check modified
time, etc., it could have been deleted due to a writer
commit finishing.
</summary>
</member>
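The race FindSegmentsFile guards against can be summarized as: locate the newest segments file, attempt the work, and retry from scratch if a finishing writer commit deleted the file in between. A hedged sketch of that retry pattern (the class RetryOnCommit and its Body interface are illustrative stand-ins, not the Lucene types):

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch of the lock-less-commit retry pattern: the current
// segments file can vanish between the moment its name is located and the
// moment it is opened, so a failed body is simply retried against the
// freshly located file.
public class RetryOnCommit {

    interface Body {
        String doBody(String segmentsFileName) throws Exception;
    }

    // "generation" stands in for locating the newest segments_N file.
    static String run(AtomicLong generation, Body body, int maxRetries) {
        Exception last = null;
        for (int i = 0; i < maxRetries; i++) {
            String name = "segments_" + generation.get(); // locate current file
            try {
                return body.doBody(name);                 // may fail if deleted
            } catch (Exception e) {
                last = e;                                 // a commit raced us; retry
            }
        }
        throw new RuntimeException("giving up after " + maxRetries + " tries", last);
    }
}
```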
<member name="M:Lucene.Net.Index.SegmentInfos.FindSegmentsFile.DoBody(System.String)">
<summary> Subclass must implement this. The assumption is that an
IOException will be thrown if something goes wrong
during the processing that could have been caused by
a writer committing.
</summary>
</member>
<member name="T:Lucene.Net.Index.IndexReader.FieldOption">
<summary> Constants describing field properties, for example used for
<see cref="M:Lucene.Net.Index.IndexReader.GetFieldNames(Lucene.Net.Index.IndexReader.FieldOption)"/>.
</summary>
</member>
<member name="F:Lucene.Net.Index.IndexReader.FieldOption.ALL">
<summary>All fields </summary>
</member>
<member name="F:Lucene.Net.Index.IndexReader.FieldOption.INDEXED">
<summary>All indexed fields </summary>
</member>
<member name="F:Lucene.Net.Index.IndexReader.FieldOption.STORES_PAYLOADS">
<summary>All fields that store payloads </summary>
</member>
<member name="F:Lucene.Net.Index.IndexReader.FieldOption.OMIT_TERM_FREQ_AND_POSITIONS">
<summary>All fields that omit tf </summary>
</member>
<member name="F:Lucene.Net.Index.IndexReader.FieldOption.UNINDEXED">
<summary>All fields which are not indexed </summary>
</member>
<member name="F:Lucene.Net.Index.IndexReader.FieldOption.INDEXED_WITH_TERMVECTOR">
<summary>All fields which are indexed with termvectors enabled </summary>
</member>
<member name="F:Lucene.Net.Index.IndexReader.FieldOption.INDEXED_NO_TERMVECTOR">
<summary>All fields which are indexed but don't have termvectors enabled </summary>
</member>
<member name="F:Lucene.Net.Index.IndexReader.FieldOption.TERMVECTOR">
<summary>All fields with termvectors enabled. Please note that only standard termvector fields are returned </summary>
</member>
<member name="F:Lucene.Net.Index.IndexReader.FieldOption.TERMVECTOR_WITH_POSITION">
<summary>All fields with termvectors with position values enabled </summary>
</member>
<member name="F:Lucene.Net.Index.IndexReader.FieldOption.TERMVECTOR_WITH_OFFSET">
<summary>All fields with termvectors with offset values enabled </summary>
</member>
<member name="F:Lucene.Net.Index.IndexReader.FieldOption.TERMVECTOR_WITH_POSITION_OFFSET">
<summary>All fields with termvectors with offset values and position values enabled </summary>
</member>
<member name="M:Lucene.Net.Index.DirectoryReader.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Index.SegmentInfos,Lucene.Net.Index.IndexDeletionPolicy,System.Boolean,System.Int32)">
<summary>Construct reading the named set of readers. </summary>
</member>
<member name="M:Lucene.Net.Index.DirectoryReader.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Index.SegmentInfos,Lucene.Net.Index.SegmentReader[],System.Int32[],System.Collections.Generic.IEnumerable{System.Collections.Generic.KeyValuePair{System.String,System.Byte[]}},System.Boolean,System.Boolean,System.Int32)">
<summary>This constructor is only used for <see cref="M:Lucene.Net.Index.DirectoryReader.Reopen"/> </summary>
</member>
<member name="M:Lucene.Net.Index.DirectoryReader.IsOptimized">
<summary> Checks if the index is optimized (if it has a single segment and no deletions)</summary>
<returns> <c>true</c> if the index is optimized; <c>false</c> otherwise </returns>
</member>
<member name="M:Lucene.Net.Index.DirectoryReader.AcquireWriteLock">
<summary> Tries to acquire the WriteLock on this directory. This method is only valid if this IndexReader is the directory
owner.
</summary>
<throws> StaleReaderException if the index has changed since this reader was opened </throws>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> Lucene.Net.Store.LockObtainFailedException if another writer has this index open (<c>write.lock</c> could not be
obtained)
</throws>
<throws> IOException if there is a low-level IO error </throws>
</member>
<member name="M:Lucene.Net.Index.DirectoryReader.DoCommit(System.Collections.Generic.IDictionary{System.String,System.String})">
<summary> Commit changes resulting from delete, undeleteAll, or setNorm operations
<p/>
If an exception is hit, then either no changes or all changes will have been committed to the index (transactional
semantics).
</summary>
<throws> IOException if there is a low-level IO error </throws>
</member>
<member name="M:Lucene.Net.Index.DirectoryReader.Directory">
<summary>Returns the directory this index resides in. </summary>
</member>
<member name="M:Lucene.Net.Index.DirectoryReader.ListCommits(Lucene.Net.Store.Directory)">
<seealso cref="M:Lucene.Net.Index.IndexReader.ListCommits(Lucene.Net.Store.Directory)">
</seealso>
</member>
<member name="P:Lucene.Net.Index.DirectoryReader.Version">
<summary>Version number when this IndexReader was opened. </summary>
</member>
<member name="P:Lucene.Net.Index.DirectoryReader.IndexCommit">
<summary> Expert: return the IndexCommit that this reader has opened.
<p/>
<p/><b>WARNING</b>: this API is new and experimental and may suddenly change.<p/>
</summary>
</member>
<member name="T:Lucene.Net.Index.IndexCommit">
<summary> <p/>Expert: represents a single commit into an index as seen by the
<see cref="T:Lucene.Net.Index.IndexDeletionPolicy"/> or <see cref="T:Lucene.Net.Index.IndexReader"/>.<p/>
<p/> Changes to the content of an index are made visible
only after the writer who made that change commits by
writing a new segments file
(<c>segments_N</c>). This point in time, when the
writing of a new segments file to the directory
is completed, is an index commit.<p/>
<p/>Each index commit point has a unique segments file
associated with it. The segments file associated with a
later index commit point would have a larger N.<p/>
<p/><b>WARNING</b>: This API is new and experimental and
may suddenly change. <p/>
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexCommit.Delete">
<summary> Delete this commit point. This only applies when using
the commit point in the context of IndexWriter's
IndexDeletionPolicy.
<p/>
Upon calling this, the writer is notified that this commit
point should be deleted.
<p/>
The decision that a commit point should be deleted is made by the <see cref="T:Lucene.Net.Index.IndexDeletionPolicy"/> in effect
and therefore this should only be called by its <see cref="M:Lucene.Net.Index.IndexDeletionPolicy.OnInit``1(System.Collections.Generic.IList{``0})"/> or
<see cref="M:Lucene.Net.Index.IndexDeletionPolicy.OnCommit``1(System.Collections.Generic.IList{``0})"/> methods.
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexCommit.Equals(System.Object)">
<summary> Two IndexCommits are equal if both their Directory and versions are equal.</summary>
</member>
<member name="P:Lucene.Net.Index.IndexCommit.SegmentsFileName">
<summary> Get the segments file (<c>segments_N</c>) associated
with this commit point.
</summary>
</member>
<member name="P:Lucene.Net.Index.IndexCommit.FileNames">
<summary> Returns all index files referenced by this commit point.</summary>
</member>
<member name="P:Lucene.Net.Index.IndexCommit.Directory">
<summary> Returns the <see cref="T:Lucene.Net.Store.Directory"/> for the index.</summary>
</member>
<member name="P:Lucene.Net.Index.IndexCommit.IsOptimized">
<summary> Returns true if this commit is an optimized index.</summary>
</member>
<member name="P:Lucene.Net.Index.IndexCommit.Version">
<summary>Returns the version for this IndexCommit. This is the
same value that <see cref="P:Lucene.Net.Index.IndexReader.Version"/> would
return if it were opened on this commit.
</summary>
</member>
<member name="P:Lucene.Net.Index.IndexCommit.Generation">
<summary>Returns the generation (the _N in segments_N) for this
IndexCommit.
</summary>
</member>
<member name="P:Lucene.Net.Index.IndexCommit.Timestamp">
<summary>Convenience method that returns the last modified time
of the segments_N file corresponding to this index
commit, equivalent to
getDirectory().fileModified(getSegmentsFileName()).
</summary>
</member>
<member name="P:Lucene.Net.Index.IndexCommit.UserData">
<summary>Returns userData, previously passed to
<see cref="M:Lucene.Net.Index.IndexWriter.Commit(System.Collections.Generic.IDictionary{System.String,System.String})"/>
for this commit. IDictionary is String -> String.
</summary>
</member>
<member name="T:Lucene.Net.Index.TermEnum">
<summary>Abstract class for enumerating terms.
<p/>Term enumerations are always ordered by Term.compareTo(). Each term in
the enumeration is greater than all that precede it.
</summary>
</member>
<member name="M:Lucene.Net.Index.TermEnum.Next">
<summary>Increments the enumeration to the next element. Returns true if one exists.</summary>
</member>
<member name="M:Lucene.Net.Index.TermEnum.DocFreq">
<summary>Returns the docFreq of the current Term in the enumeration.</summary>
</member>
<member name="M:Lucene.Net.Index.TermEnum.Close">
<summary>Closes the enumeration to further activity, freeing resources. </summary>
</member>
<member name="M:Lucene.Net.Index.TermEnum.Dispose">
<summary>Closes the enumeration to further activity, freeing resources. </summary>
</member>
<member name="P:Lucene.Net.Index.TermEnum.Term">
<summary>Returns the current Term in the enumeration.</summary>
</member>
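The TermEnum contract above — terms arrive in sorted order, Next() advances and reports whether a term exists, and Term/DocFreq read the current element — can be illustrated with a toy enumerator over a sorted array. This ToyTermEnum class is a self-contained stand-in for illustration only, not the Lucene class:

```java
// Toy enumerator mirroring the TermEnum contract: next() advances and
// returns whether a term exists; term() and docFreq() read the current
// position, which is invalid before the first next().
public class ToyTermEnum {
    private final String[] terms;    // must already be in sorted order
    private final int[] docFreqs;    // docFreq for each term
    private int pos = -1;

    ToyTermEnum(String[] terms, int[] docFreqs) {
        this.terms = terms;
        this.docFreqs = docFreqs;
    }

    // Advance to the next element; true if one exists.
    boolean next() {
        return ++pos < terms.length;
    }

    // Current term, or null before the first call to next().
    String term() {
        return pos < 0 ? null : terms[pos];
    }

    // docFreq of the current term.
    int docFreq() {
        return docFreqs[pos];
    }
}
```

The usual consumption pattern is a `while (e.next()) { … e.term() … }` loop, stopping as soon as next() returns false.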
<member name="M:Lucene.Net.Index.DirectoryReader.MultiTermDocs.Read(System.Int32[],System.Int32[])">
<summary>Optimized implementation. </summary>
</member>
<member name="T:Lucene.Net.Index.TermPositions">
<summary> TermPositions provides an interface for enumerating the &lt;document,
frequency, &lt;position&gt;*&gt; tuples for a term. <p/> The document and
frequency are the same as for a TermDocs. The positions portion lists the ordinal
positions of each occurrence of a term in a document.
</summary>
<seealso cref="M:Lucene.Net.Index.IndexReader.TermPositions">
</seealso>
</member>
<member name="M:Lucene.Net.Index.TermPositions.NextPosition">
<summary>Returns next position in the current document. It is an error to call
this more than <see cref="P:Lucene.Net.Index.TermDocs.Freq"/> times
without calling <see cref="M:Lucene.Net.Index.TermDocs.Next"/>.<p/> This is
invalid until <see cref="M:Lucene.Net.Index.TermDocs.Next"/> is called for
the first time.
</summary>
</member>
<member name="M:Lucene.Net.Index.TermPositions.GetPayload(System.Byte[],System.Int32)">
<summary> Returns the payload data at the current term position.
This is invalid until <see cref="M:Lucene.Net.Index.TermPositions.NextPosition"/> is called for
the first time.
This method must not be called more than once after each call
of <see cref="M:Lucene.Net.Index.TermPositions.NextPosition"/>. However, payloads are loaded lazily,
so if the payload data for the current position is not needed,
this method may not be called at all for performance reasons.<br/>
</summary>
<param name="data">the array into which the data of this payload is to be
stored, if it is big enough; otherwise, a new byte[] array
is allocated for this purpose.
</param>
<param name="offset">the offset in the array into which the data of this payload
is to be stored.
</param>
<returns> a byte[] array containing the data of this payload
</returns>
<throws> IOException </throws>
</member>
<member name="P:Lucene.Net.Index.TermPositions.PayloadLength">
<summary> Returns the length of the payload at the current term position.
This is invalid until <see cref="M:Lucene.Net.Index.TermPositions.NextPosition"/> is called for
the first time.<br/>
</summary>
<value> length of the current payload in number of bytes </value>
</member>
<member name="P:Lucene.Net.Index.TermPositions.IsPayloadAvailable">
<summary> Checks if a payload can be loaded at this position.
<p/>
Payloads can only be loaded once per call to
<see cref="M:Lucene.Net.Index.TermPositions.NextPosition"/>.
</summary>
<value> true if there is a payload available at this position that can be loaded </value>
</member>
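The payload rules above amount to a small state machine: each NextPosition() makes at most one payload available, GetPayload() may be called at most once per position, and skipping it entirely is fine because payloads load lazily. A hedged, self-contained sketch of that contract (ToyPositions is illustrative, not the Lucene interface):

```java
// In-memory stand-in enforcing the TermPositions payload contract:
// a payload becomes readable after each nextPosition() and may be
// consumed at most once; not reading it at all is allowed.
public class ToyPositions {
    private final int[] positions;    // ordinal positions in the current doc
    private final byte[][] payloads;  // payload per position, or null
    private int i = -1;
    private boolean payloadRead = false;

    ToyPositions(int[] positions, byte[][] payloads) {
        this.positions = positions;
        this.payloads = payloads;
    }

    int freq() {
        return positions.length;
    }

    int nextPosition() {
        payloadRead = false;          // a fresh payload becomes available
        return positions[++i];
    }

    boolean isPayloadAvailable() {
        return !payloadRead && payloads[i] != null;
    }

    byte[] getPayload() {
        if (payloadRead) throw new IllegalStateException("payload already read for this position");
        payloadRead = true;           // only once per nextPosition()
        return payloads[i];
    }
}
```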
<member name="M:Lucene.Net.Index.DocConsumerPerThread.ProcessDocument">
<summary>Process the document. If there is
something for this document to be done in docID order,
you should encapsulate that as a
DocumentsWriter.DocWriter and return it.
DocumentsWriter then calls finish() on this object
when it's its turn.
</summary>
</member>
<member name="M:Lucene.Net.Index.DocFieldConsumer.Flush(System.Collections.Generic.IDictionary{Lucene.Net.Index.DocFieldConsumerPerThread,System.Collections.Generic.ICollection{Lucene.Net.Index.DocFieldConsumerPerField}},Lucene.Net.Index.SegmentWriteState)">
<summary>Called when DocumentsWriter decides to create a new
segment
</summary>
</member>
<member name="M:Lucene.Net.Index.DocFieldConsumer.CloseDocStore(Lucene.Net.Index.SegmentWriteState)">
<summary>Called when DocumentsWriter decides to close the doc
stores
</summary>
</member>
<member name="M:Lucene.Net.Index.DocFieldConsumer.Abort">
<summary>Called when an aborting exception is hit </summary>
</member>
<member name="M:Lucene.Net.Index.DocFieldConsumer.AddThread(Lucene.Net.Index.DocFieldProcessorPerThread)">
<summary>Add a new thread </summary>
</member>
<member name="M:Lucene.Net.Index.DocFieldConsumer.FreeRAM">
<summary>Called when DocumentsWriter is using too much RAM.
The consumer should free RAM, if possible, returning
true if any RAM was in fact freed.
</summary>
</member>
<member name="M:Lucene.Net.Index.DocFieldConsumerPerField.ProcessFields(Lucene.Net.Documents.IFieldable[],System.Int32)">
<summary>Processes all occurrences of a single field </summary>
</member>
<member name="T:Lucene.Net.Index.DocFieldConsumers">
<summary>This is just a "splitter" class: it lets you wrap two
DocFieldConsumer instances as a single consumer.
</summary>
</member>
<member name="T:Lucene.Net.Index.DocumentsWriter.DocWriter">
<summary>Consumer returns this on each doc. This holds any
state that must be flushed synchronized "in docID
order". We gather these and flush them in order.
</summary>
</member>
<member name="T:Lucene.Net.Index.DocumentsWriter">
<summary> This class accepts multiple added documents and directly
writes a single segment file. It does this more
efficiently than creating a single segment per document
(with DocumentWriter) and doing standard merges on those
segments.
Each added document is passed to the <see cref="T:Lucene.Net.Index.DocConsumer"/>,
which in turn processes the document and interacts with
other consumers in the indexing chain. Certain
consumers, like <see cref="T:Lucene.Net.Index.StoredFieldsWriter"/> and
<see cref="T:Lucene.Net.Index.TermVectorsTermsWriter"/>, digest a document and
immediately write bytes to the "doc store" files (ie,
they do not consume RAM per document, except while they
are processing the document).
Other consumers, eg <see cref="T:Lucene.Net.Index.FreqProxTermsWriter"/> and
<see cref="T:Lucene.Net.Index.NormsWriter"/>, buffer bytes in RAM and flush only
when a new segment is produced.
Once we have used our allowed RAM buffer, or the number
of added docs is large enough (in the case we are
flushing by doc count instead of RAM usage), we create a
real segment and flush it to the Directory.
Threads:
Multiple threads are allowed into addDocument at once.
There is an initial synchronized call to getThreadState
which allocates a ThreadState for this thread. The same
thread will get the same ThreadState over time (thread
affinity) so that if there are consistent patterns (for
example each thread is indexing a different content
source) then we make better use of RAM. Then
processDocument is called on that ThreadState without
synchronization (most of the "heavy lifting" is in this
call). Finally the synchronized "finishDocument" is
called to flush changes to the directory.
When flush is called by IndexWriter we forcefully idle
all threads and flush only once they are all idle. This
means you can call flush with a given thread even while
other threads are actively adding/deleting documents.
Exceptions:
Because this class directly updates in-memory posting
lists, and flushes stored fields and term vectors
directly to files in the directory, there are certain
limited times when an exception can corrupt this state.
For example, a disk full while flushing stored fields
leaves this file in a corrupt state. Or, an OOM
exception while appending to the in-memory posting lists
can corrupt that posting list. We call such exceptions
"aborting exceptions". In these cases we must call
abort() to discard all docs added since the last flush.
All other exceptions ("non-aborting exceptions") can
still partially update the index structures. These
updates are consistent, but, they represent only a part
of the document seen up until the exception was hit.
When this happens, we immediately mark the document as
deleted so that the document is always atomically ("all
or none") added to the index.
</summary>
</member>
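The thread-affinity scheme described above — a short synchronized step that hands each thread its ThreadState, after which the heavy per-document work proceeds without the lock — can be sketched in a few lines. The ThreadAffinity and ThreadState names are illustrative stand-ins, not the DocumentsWriter internals:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of DocumentsWriter-style thread affinity: each
// thread is mapped to its own ThreadState, and repeated calls from the
// same thread return the same state so per-thread buffers stay warm.
public class ThreadAffinity {

    static class ThreadState {
        int docsIndexed;              // stand-in for per-thread buffered state
    }

    private final Map<Thread, ThreadState> states = new HashMap<>();

    // Synchronized allocation; the "heavy lifting" (processDocument in the
    // real class) then runs on the returned state without holding this lock.
    synchronized ThreadState getThreadState() {
        return states.computeIfAbsent(Thread.currentThread(), t -> new ThreadState());
    }
}
```

Because states are keyed by thread, consistent patterns (say, each thread indexing one content source) keep reusing the same buffers, which is the RAM benefit the summary describes.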
<member name="M:Lucene.Net.Index.DocumentsWriter.HasProx">
<summary>Returns true if any of the fields in the current
buffered docs have omitTermFreqAndPositions==false
</summary>
</member>
<member name="M:Lucene.Net.Index.DocumentsWriter.SetInfoStream(System.IO.StreamWriter)">
<summary>If non-null, various details of indexing are printed
here.
</summary>
</member>
<member name="M:Lucene.Net.Index.DocumentsWriter.SetRAMBufferSizeMB(System.Double)">
<summary>Set how much RAM we can use before flushing. </summary>
</member>
<member name="M:Lucene.Net.Index.DocumentsWriter.CloseDocStore">
<summary>Closes the currently open doc stores and returns the doc
store segment name. This returns null if there are
no buffered documents.
</summary>
</member>
<member name="M:Lucene.Net.Index.DocumentsWriter.Abort">
<summary>Called if we hit an exception at a bad time (when
updating the index files) and must discard all
currently buffered docs. This resets our state,
discarding any docs added since last flush.
</summary>
</member>
<member name="M:Lucene.Net.Index.DocumentsWriter.DoAfterFlush">
<summary>Reset after a flush </summary>
</member>
<member name="M:Lucene.Net.Index.DocumentsWriter.Flush(System.Boolean)">
<summary>Flush all pending docs to a new segment </summary>
</member>
<member name="M:Lucene.Net.Index.DocumentsWriter.CreateCompoundFile(System.String)">
<summary>Build compound file for the segment we just flushed </summary>
</member>
<member name="M:Lucene.Net.Index.DocumentsWriter.SetFlushPending">
<summary>Set flushPending if it is not already set and returns
whether it was set. This is used by IndexWriter to
trigger a single flush even when multiple threads are
trying to do so.
</summary>
</member>
<member name="M:Lucene.Net.Index.DocumentsWriter.GetThreadState(Lucene.Net.Documents.Document,Lucene.Net.Index.Term)">
<summary>Returns a free (idle) ThreadState that may be used for
indexing this one document. This call also pauses if a
flush is pending. If delTerm is non-null then we
buffer this deleted term after the thread state has
been acquired.
</summary>
</member>
<member name="M:Lucene.Net.Index.DocumentsWriter.AddDocument(Lucene.Net.Documents.Document,Lucene.Net.Analysis.Analyzer)">
<summary>Returns true if the caller (IndexWriter) should now
flush.
</summary>
</member>
<member name="M:Lucene.Net.Index.DocumentsWriter.RemapDeletes(Lucene.Net.Index.SegmentInfos,System.Int32[][],System.Int32[],Lucene.Net.Index.MergePolicy.OneMerge,System.Int32)">
<summary>Called whenever a merge has completed and the merged segments had deletions </summary>
</member>
<member name="M:Lucene.Net.Index.DocumentsWriter.FinishDocument(Lucene.Net.Index.DocumentsWriterThreadState,Lucene.Net.Index.DocumentsWriter.DocWriter)">
<summary>Does the synchronized work to finish/flush the
inverted document.
</summary>
</member>
<member name="P:Lucene.Net.Index.DocumentsWriter.MaxBufferedDocs">
<summary>Gets or sets max buffered docs, which means we will flush by
doc count instead of by RAM usage.
</summary>
</member>
<member name="P:Lucene.Net.Index.DocumentsWriter.Segment">
<summary>Get current segment name we are writing. </summary>
</member>
<member name="P:Lucene.Net.Index.DocumentsWriter.NumDocsInRAM">
<summary>Returns how many docs are currently buffered in RAM. </summary>
</member>
<member name="P:Lucene.Net.Index.DocumentsWriter.DocStoreSegment">
<summary>Returns the current doc store segment we are writing
to.
</summary>
</member>
<member name="P:Lucene.Net.Index.DocumentsWriter.DocStoreOffset">
<summary>Returns the doc offset into the shared doc store for
the current buffered docs.
</summary>
</member>
<member name="T:Lucene.Net.Index.DocumentsWriter.IndexingChain">
<summary> The IndexingChain must define the <see cref="M:Lucene.Net.Index.DocumentsWriter.IndexingChain.GetChain(Lucene.Net.Index.DocumentsWriter)"/> method
which returns the DocConsumer that the DocumentsWriter calls to process the
documents.
</summary>
</member>
<member name="T:Lucene.Net.Index.DocumentsWriter.DocWriter">
<summary>Consumer returns this on each doc. This holds any
state that must be flushed synchronized "in docID
order". We gather these and flush them in order.
</summary>
</member>
<member name="M:Lucene.Net.Store.RAMFile.NewBuffer(System.Int32)">
<summary> Expert: allocate a new buffer.
Subclasses can allocate differently.
</summary>
<param name="size">size of allocated buffer.
</param>
<returns> allocated buffer.
</returns>
</member>
<member name="T:Lucene.Net.Index.DocFieldProcessor">
<summary> This is a DocConsumer that gathers all fields under the
same name, and calls per-field consumers to process field
by field. This class doesn't do any "real" work
of its own: it just forwards the fields to a
DocFieldConsumer.
</summary>
</member>
<member name="T:Lucene.Net.Index.DocFieldProcessorPerField">
<summary> Holds all per thread, per field state.</summary>
</member>
<member name="T:Lucene.Net.Index.DocFieldProcessorPerThread">
<summary> Gathers all Fieldables for a document under the same
name, updates FieldInfos, and calls per-field consumers
to process field by field.
Currently, only a single thread visits the fields,
sequentially, for processing.
</summary>
</member>
<member name="M:Lucene.Net.Index.DocFieldProcessorPerThread.TrimFields(Lucene.Net.Index.SegmentWriteState)">
<summary>If there are fields we've seen but did not see again
in the last run, then free them up.
</summary>
</member>
<member name="T:Lucene.Net.Index.DocInverter">
<summary>This is a DocFieldConsumer that inverts each field,
separately, from a Document, and accepts an
InvertedTermsConsumer to process those terms.
</summary>
</member>
<member name="T:Lucene.Net.Index.DocInverterPerField">
<summary> Holds state for inverting all occurrences of a single
field in the document. This class doesn't do anything
itself; instead, it forwards the tokens produced by
analysis to its own consumer
(InvertedDocConsumerPerField). It also interacts with an
endConsumer (InvertedDocEndConsumerPerField).
</summary>
</member>
<member name="T:Lucene.Net.Index.DocInverterPerThread">
<summary>This is a DocFieldConsumer that inverts each field,
separately, from a Document, and accepts an
InvertedTermsConsumer to process those terms.
</summary>
</member>
<member name="T:Lucene.Net.Index.DocumentsWriterThreadState">
<summary>Used by DocumentsWriter to maintain per-thread state.
We keep a separate Posting hash and other state for each
thread and then merge postings hashes from all threads
when writing the segment.
</summary>
</member>
<member name="T:Lucene.Net.Index.FieldInfos">
<summary>Access to the Fieldable Info file that describes document fields and whether or
not they are indexed. Each segment has a separate Fieldable Info file. Objects
of this class are thread-safe for multiple readers, but only one thread can
be adding documents at a time, with no other reader or writer threads
accessing this object.
</summary>
</member>
<member name="M:Lucene.Net.Index.FieldInfos.#ctor(Lucene.Net.Store.Directory,System.String)">
<summary> Construct a FieldInfos object using the directory and the name of the file
to open the IndexInput from.
</summary>
<param name="d">The directory to open the IndexInput from
</param>
<param name="name">The name of the file to open the IndexInput from in the Directory
</param>
<throws> IOException </throws>
</member>
<member name="M:Lucene.Net.Index.FieldInfos.Clone">
<summary> Returns a deep clone of this FieldInfos instance.</summary>
</member>
<member name="M:Lucene.Net.Index.FieldInfos.Add(Lucene.Net.Documents.Document)">
<summary>Adds field info for a Document. </summary>
</member>
<member name="M:Lucene.Net.Index.FieldInfos.HasProx">
<summary>Returns true if any fields do not omitTermFreqAndPositions </summary>
</member>
<member name="M:Lucene.Net.Index.FieldInfos.AddIndexed(System.Collections.Generic.ICollection{System.String},System.Boolean,System.Boolean,System.Boolean)">
<summary> Add fields that are indexed. Whether they have termvectors has to be specified.
</summary>
<param name="names">The names of the fields
</param>
<param name="storeTermVectors">Whether the fields store term vectors or not
</param>
<param name="storePositionWithTermVector">true if positions should be stored.
</param>
<param name="storeOffsetWithTermVector">true if offsets should be stored
</param>
</member>
<member name="M:Lucene.Net.Index.FieldInfos.Add(System.Collections.Generic.ICollection{System.String},System.Boolean)">
<summary> Assumes the fields are not storing term vectors.
</summary>
<param name="names">The names of the fields
</param>
<param name="isIndexed">Whether the fields are indexed or not
</param>
<seealso cref="M:Lucene.Net.Index.FieldInfos.Add(System.String,System.Boolean)">
</seealso>
</member>
<member name="M:Lucene.Net.Index.FieldInfos.Add(System.String,System.Boolean)">
<summary> Calls the five-parameter Add with false for all TermVector parameters.
</summary>
<param name="name">The name of the Fieldable
</param>
<param name="isIndexed">true if the field is indexed
</param>
<seealso cref="M:Lucene.Net.Index.FieldInfos.Add(System.String,System.Boolean,System.Boolean,System.Boolean,System.Boolean)">
</seealso>
</member>
<member name="M:Lucene.Net.Index.FieldInfos.Add(System.String,System.Boolean,System.Boolean)">
<summary> Calls the five-parameter Add with false for term vector positions and offsets.
</summary>
<param name="name">The name of the field
</param>
<param name="isIndexed"> true if the field is indexed
</param>
<param name="storeTermVector">true if the term vector should be stored
</param>
</member>
<member name="M:Lucene.Net.Index.FieldInfos.Add(System.String,System.Boolean,System.Boolean,System.Boolean,System.Boolean)">
<summary>If the field is not yet known, adds it. If it is known, checks to make
sure that the isIndexed flag is the same as was given previously for this
field. If not, marks it as being indexed. Same goes for the TermVector
parameters.
</summary>
<param name="name">The name of the field
</param>
<param name="isIndexed">true if the field is indexed
</param>
<param name="storeTermVector">true if the term vector should be stored
</param>
<param name="storePositionWithTermVector">true if the term vector with positions should be stored
</param>
<param name="storeOffsetWithTermVector">true if the term vector with offsets should be stored
</param>
</member>
<member name="M:Lucene.Net.Index.FieldInfos.Add(System.String,System.Boolean,System.Boolean,System.Boolean,System.Boolean,System.Boolean)">
<summary>If the field is not yet known, adds it. If it is known, checks to make
sure that the isIndexed flag is the same as was given previously for this
field. If not, marks it as being indexed. Same goes for the TermVector
parameters.
</summary>
<param name="name">The name of the field
</param>
<param name="isIndexed">true if the field is indexed
</param>
<param name="storeTermVector">true if the term vector should be stored
</param>
<param name="storePositionWithTermVector">true if the term vector with positions should be stored
</param>
<param name="storeOffsetWithTermVector">true if the term vector with offsets should be stored
</param>
<param name="omitNorms">true if the norms for the indexed field should be omitted
</param>
</member>
<member name="M:Lucene.Net.Index.FieldInfos.Add(System.String,System.Boolean,System.Boolean,System.Boolean,System.Boolean,System.Boolean,System.Boolean,System.Boolean)">
<summary>If the field is not yet known, adds it. If it is known, checks to make
sure that the isIndexed flag is the same as was given previously for this
field. If not, marks it as being indexed. Same goes for the TermVector
parameters.
</summary>
<param name="name">The name of the field
</param>
<param name="isIndexed">true if the field is indexed
</param>
<param name="storeTermVector">true if the term vector should be stored
</param>
<param name="storePositionWithTermVector">true if the term vector with positions should be stored
</param>
<param name="storeOffsetWithTermVector">true if the term vector with offsets should be stored
</param>
<param name="omitNorms">true if the norms for the indexed field should be omitted
</param>
<param name="storePayloads">true if payloads should be stored for this field
</param>
<param name="omitTermFreqAndPositions">true if term freqs should be omitted for this field
</param>
</member>
<member name="M:Lucene.Net.Index.FieldInfos.FieldName(System.Int32)">
<summary> Return the fieldName identified by its number.
</summary>
<param name="fieldNumber">
</param>
<returns> the fieldName or an empty string when the field
with the given number doesn't exist.
</returns>
</member>
<member name="M:Lucene.Net.Index.FieldInfos.FieldInfo(System.Int32)">
<summary> Return the fieldinfo object referenced by the fieldNumber.</summary>
<param name="fieldNumber">
</param>
<returns> the FieldInfo object or null when the given fieldNumber
doesn't exist.
</returns>
</member>
<member name="T:Lucene.Net.Index.FieldInvertState">
<summary> This class tracks the number and position / offset parameters of terms
being added to the index. The information collected in this class is
also used to calculate the normalization factor for a field.
<p/><b>WARNING</b>: This API is new and experimental, and may suddenly
change.<p/>
</summary>
</member>
<member name="M:Lucene.Net.Index.FieldInvertState.Reset(System.Single)">
<summary> Re-initialize the state, using this boost value.</summary>
<param name="docBoost">boost value to use.
</param>
</member>
<member name="P:Lucene.Net.Index.FieldInvertState.Position">
<summary> Get the last processed term position.</summary>
<value> the position </value>
</member>
<member name="P:Lucene.Net.Index.FieldInvertState.Length">
<summary> Get total number of terms in this field.</summary>
<value> the length </value>
</member>
<member name="P:Lucene.Net.Index.FieldInvertState.NumOverlap">
<summary> Get the number of terms with <c>positionIncrement == 0</c>.</summary>
<value> the numOverlap </value>
</member>
<member name="P:Lucene.Net.Index.FieldInvertState.Offset">
<summary> Get end offset of the last processed term.</summary>
<value> the offset </value>
</member>
<member name="P:Lucene.Net.Index.FieldInvertState.Boost">
<summary> Get boost value. This is the cumulative product of
document boost and field boost for all field instances
sharing the same field name.
</summary>
<value> the boost </value>
</member>
<member name="T:Lucene.Net.Index.FieldReaderException">
<summary>Exception thrown when an error occurs while reading a document's stored fields.
</summary>
</member>
<member name="M:Lucene.Net.Index.FieldReaderException.#ctor">
<summary> Constructs a new runtime exception with <c>null</c> as its
detail message. The inner exception is not initialized; to supply one, use
a constructor overload that accepts an <see cref="P:System.Exception.InnerException"/>.
</summary>
</member>
<member name="M:Lucene.Net.Index.FieldReaderException.#ctor(System.Exception)">
<summary> Constructs a new runtime exception with the specified cause and a
detail message of <c>(cause == null ? null : cause.ToString())</c>
(which typically contains the class and detail message of
<c>cause</c>).
<p/>
This constructor is useful for runtime exceptions
that are little more than wrappers for other throwables.
</summary>
<param name="cause">the cause (which is saved for later retrieval by the
<see cref="P:System.Exception.InnerException"/>). (A <tt>null</tt> value is
permitted, and indicates that the cause is nonexistent or
unknown.)
</param>
</member>
<member name="M:Lucene.Net.Index.FieldReaderException.#ctor(System.String)">
<summary> Constructs a new runtime exception with the specified detail message.
The inner exception is not initialized; to supply one, use a constructor
overload that accepts an <see cref="P:System.Exception.InnerException"/>.
</summary>
<param name="message">the detail message. The detail message is saved for
later retrieval by the <see cref="P:System.Exception.Message"/> method.
</param>
</member>
<member name="M:Lucene.Net.Index.FieldReaderException.#ctor(System.String,System.Exception)">
<summary> Constructs a new runtime exception with the specified detail message and
cause. <p/>Note that the detail message associated with
<c>cause</c> is <i>not</i> automatically incorporated in
this runtime exception's detail message.
</summary>
<param name="message">the detail message (which is saved for later retrieval
by the <see cref="P:System.Exception.Message"/> method).
</param>
<param name="cause"> the cause (which is saved for later retrieval by the
<see cref="P:System.Exception.InnerException"/> method). (A <tt>null</tt> value is
permitted, and indicates that the cause is nonexistent or
unknown.)
</param>
</member>
<member name="T:Lucene.Net.Index.FieldSortedTermVectorMapper">
<summary> For each Field, store a sorted collection of <see cref="T:Lucene.Net.Index.TermVectorEntry"/>s
<p/>
This is not thread-safe.
</summary>
</member>
<member name="T:Lucene.Net.Index.TermVectorMapper">
<summary> The TermVectorMapper can be used to map Term Vectors into your own
structure instead of the parallel array structure used by
<see cref="M:Lucene.Net.Index.IndexReader.GetTermFreqVector(System.Int32,System.String)"/>.
<p/>
It is up to the implementation to make sure it is thread-safe.
</summary>
</member>
<member name="M:Lucene.Net.Index.TermVectorMapper.#ctor(System.Boolean,System.Boolean)">
<summary> </summary>
<param name="ignoringPositions">true if this mapper should tell Lucene to ignore positions even if they are stored
</param>
<param name="ignoringOffsets">similar to ignoringPositions
</param>
</member>
<member name="M:Lucene.Net.Index.TermVectorMapper.SetExpectations(System.String,System.Int32,System.Boolean,System.Boolean)">
<summary> Tell the mapper what to expect with regard to field, number of terms, offset and position storage.
This method will be called once before retrieving the vector for a field.
This method will be called before <see cref="M:Lucene.Net.Index.TermVectorMapper.Map(System.String,System.Int32,Lucene.Net.Index.TermVectorOffsetInfo[],System.Int32[])"/>.
</summary>
<param name="field">The field the vector is for
</param>
<param name="numTerms">The number of terms that need to be mapped
</param>
<param name="storeOffsets">true if the mapper should expect offset information
</param>
<param name="storePositions">true if the mapper should expect positions info
</param>
</member>
<member name="M:Lucene.Net.Index.TermVectorMapper.Map(System.String,System.Int32,Lucene.Net.Index.TermVectorOffsetInfo[],System.Int32[])">
<summary> Map the Term Vector information into your own structure</summary>
<param name="term">The term to add to the vector
</param>
<param name="frequency">The frequency of the term in the document
</param>
<param name="offsets">null if the offset is not specified, otherwise the offset into the field of the term
</param>
<param name="positions">null if the position is not specified, otherwise the position in the field of the term
</param>
</member>
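<!--
The SetExpectations/Map callback contract described above can be sketched with plain types. This is an illustrative Java sketch (the upstream Lucene API is Java): FrequencyMapper is a hypothetical mapper that merely records term frequencies into its own sorted structure; Lucene's TermVectorMapper base class and TermVectorOffsetInfo are replaced by plain parameters or omitted for brevity.

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Hypothetical mapper: collects term -> frequency into a sorted map,
// mirroring the SetExpectations/Map callback sequence described above.
public class FrequencyMapper {
    public final SortedMap<String, Integer> termFreqs = new TreeMap<>();

    // Called once per field, before any map() calls for that field.
    public void setExpectations(String field, int numTerms,
                                boolean storeOffsets, boolean storePositions) {
        termFreqs.clear();
    }

    // positions may be null when they are not stored or are being ignored.
    public void map(String term, int frequency, int[] positions) {
        termFreqs.put(term, frequency);
    }
}
```

A real implementation would extend TermVectorMapper and receive these callbacks from a term vector reader rather than being invoked directly.
-->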
<member name="M:Lucene.Net.Index.TermVectorMapper.SetDocumentNumber(System.Int32)">
<summary> Passes down the index of the document whose term vector is currently being mapped,
once for each top level call to a term vector reader.
<p/>
Default implementation IGNORES the document number. Override if your implementation needs the document number.
<p/>
NOTE: Document numbers are internal to Lucene and subject to change depending on indexing operations.
</summary>
<param name="documentNumber">index of document currently being mapped
</param>
</member>
<member name="P:Lucene.Net.Index.TermVectorMapper.IsIgnoringPositions">
<summary> Indicate to Lucene that even if there are positions stored, this mapper is not interested in them and they
can be skipped over. Derived classes should set this to true if they want to ignore positions. The default
is false, meaning positions will be loaded if they are stored.
</summary>
<value> false </value>
</member>
<member name="P:Lucene.Net.Index.TermVectorMapper.IsIgnoringOffsets">
<summary> </summary>
<seealso cref="P:Lucene.Net.Index.TermVectorMapper.IsIgnoringPositions"> Same principle as <see cref="P:Lucene.Net.Index.TermVectorMapper.IsIgnoringPositions"/>, but applied to offsets. false by default.
</seealso>
<value> false </value>
</member>
<member name="M:Lucene.Net.Index.FieldSortedTermVectorMapper.#ctor(System.Collections.Generic.IComparer{Lucene.Net.Index.TermVectorEntry})">
<summary> </summary>
<param name="comparator">A Comparator for sorting <see cref="T:Lucene.Net.Index.TermVectorEntry"/>s
</param>
</member>
<member name="P:Lucene.Net.Index.FieldSortedTermVectorMapper.FieldToTerms">
<summary> Get the mapping between fields and terms, sorted by the comparator
</summary>
<value> A map from field names to sorted sets of <see cref="T:Lucene.Net.Index.TermVectorEntry"/>s, ordered by the comparator supplied at construction. </value>
</member>
<member name="T:Lucene.Net.Index.FieldsReader">
<summary> Class responsible for access to stored document fields.
<p/>
It uses the &lt;segment&gt;.fdt and &lt;segment&gt;.fdx files.
</summary>
</member>
<member name="M:Lucene.Net.Index.FieldsReader.Clone">
<summary>Returns a cloned FieldsReader that shares open
IndexInputs with the original one. It is the caller's
job not to close the original FieldsReader until all
clones are closed (e.g., currently SegmentReader manages
this logic).
</summary>
</member>
<member name="M:Lucene.Net.Index.FieldsReader.EnsureOpen">
<throws> AlreadyClosedException if this FieldsReader is closed </throws>
</member>
<member name="M:Lucene.Net.Index.FieldsReader.Dispose">
<summary> Closes the underlying <see cref="T:Lucene.Net.Store.IndexInput"/> streams, including any ones associated with a
lazy implementation of a Field. This means that the Fields values will not be accessible.
</summary>
<throws> IOException </throws>
</member>
<member name="M:Lucene.Net.Index.FieldsReader.RawDocs(System.Int32[],System.Int32,System.Int32)">
<summary>Returns the length in bytes of each raw document in a
contiguous range of length numDocs starting with
startDocID. Returns the IndexInput (the fieldStream),
already seeked to the starting point for startDocID.
</summary>
</member>
<member name="M:Lucene.Net.Index.FieldsReader.SkipField(System.Boolean,System.Boolean)">
<summary> Skip the field. We still have to read some of the information about the field, but can skip past the actual content.
This will have the most payoff on large fields.
</summary>
</member>
<member name="T:Lucene.Net.Index.FieldsReader.LazyField">
<summary> A lazy implementation of Fieldable that defers loading of a field's value until it is
requested, instead of loading it when the Document is loaded.
</summary>
</member>
<member name="P:Lucene.Net.Index.FieldsReader.LazyField.ReaderValue">
<summary>The value of the field as a Reader, or null. If null, the String value,
binary value, or TokenStream value is used. Exactly one of StringValue(),
ReaderValue(), GetBinaryValue(), and TokenStreamValue() must be set.
</summary>
</member>
<member name="P:Lucene.Net.Index.FieldsReader.LazyField.TokenStreamValue">
<summary>The value of the field as a TokenStream, or null. If null, the Reader value,
String value, or binary value is used. Exactly one of StringValue(),
ReaderValue(), GetBinaryValue(), and TokenStreamValue() must be set.
</summary>
</member>
<member name="P:Lucene.Net.Index.FieldsReader.LazyField.StringValue">
<summary>The value of the field as a String, or null. If null, the Reader value,
binary value, or TokenStream value is used. Exactly one of StringValue(),
ReaderValue(), GetBinaryValue(), and TokenStreamValue() must be set.
</summary>
</member>
<member name="M:Lucene.Net.Index.FieldsWriter.AddRawDocuments(Lucene.Net.Store.IndexInput,System.Int32[],System.Int32)">
<summary>Bulk write a contiguous series of documents. The
lengths array is the length (in bytes) of each raw
document. The stream IndexInput is the
fieldsStream from which we should bulk-copy all
bytes.
</summary>
</member>
<member name="T:Lucene.Net.Index.FilterIndexReader">
<summary>A <c>FilterIndexReader</c> contains another IndexReader, which it
uses as its basic source of data, possibly transforming the data along the
way or providing additional functionality. The class
<c>FilterIndexReader</c> itself simply implements all abstract methods
of <c>IndexReader</c> with versions that pass all requests to the
contained index reader. Subclasses of <c>FilterIndexReader</c> may
further override some of these methods and may also provide additional
methods and fields.
</summary>
</member>
<member name="M:Lucene.Net.Index.FilterIndexReader.#ctor(Lucene.Net.Index.IndexReader)">
<summary> <p/>Construct a FilterIndexReader based on the specified base reader.
Directory locking for delete, undeleteAll, and setNorm operations is
left to the base reader.<p/>
<p/>Note that base reader is closed if this FilterIndexReader is closed.<p/>
</summary>
<param name="in_Renamed">specified base reader.
</param>
</member>
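<!--
The pass-everything-through pattern described above can be sketched in miniature. This Java sketch uses a hypothetical two-method Reader interface standing in for IndexReader's much larger surface; all names below are illustrative only.

```java
// Hypothetical minimal stand-in for IndexReader's interface.
interface Reader {
    int numDocs();
    String document(int n);
}

// Mirrors FilterIndexReader: forwards every call to the wrapped reader,
// so subclasses only override the methods whose behavior they change.
class FilterReader implements Reader {
    protected final Reader in;
    FilterReader(Reader in) { this.in = in; }
    public int numDocs() { return in.numDocs(); }
    public String document(int n) { return in.document(n); }
}

// Example subclass that transforms data on the way out.
class UppercasingReader extends FilterReader {
    UppercasingReader(Reader in) { super(in); }
    @Override public String document(int n) {
        return in.document(n).toUpperCase();
    }
}
```

As with the real FilterIndexReader, lifecycle matters: closing the filter closes the wrapped base reader.
-->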
<member name="P:Lucene.Net.Index.FilterIndexReader.FieldCacheKey">
<summary>
If the subclass of FilterIndexReader modifies the
contents of the FieldCache, you must override this
property to provide a different key.
</summary>
</member>
<member name="P:Lucene.Net.Index.FilterIndexReader.DeletesCacheKey">
<summary>
If the subclass of FilterIndexReader modifies the
deleted docs, you must override this property to provide
a different key.
</summary>
</member>
<member name="T:Lucene.Net.Index.FilterIndexReader.FilterTermDocs">
<summary>Base class for filtering <see cref="T:Lucene.Net.Index.TermDocs"/> implementations. </summary>
</member>
<member name="T:Lucene.Net.Index.FilterIndexReader.FilterTermPositions">
<summary>Base class for filtering <see cref="M:Lucene.Net.Index.FilterIndexReader.TermPositions"/> implementations. </summary>
</member>
<member name="T:Lucene.Net.Index.FilterIndexReader.FilterTermEnum">
<summary>Base class for filtering <see cref="T:Lucene.Net.Index.TermEnum"/> implementations. </summary>
</member>
<member name="T:Lucene.Net.Index.FormatPostingsDocsConsumer">
<summary> NOTE: this API is experimental and will likely change</summary>
</member>
<member name="M:Lucene.Net.Index.FormatPostingsDocsConsumer.AddDoc(System.Int32,System.Int32)">
<summary>Adds a new doc in this term. If this returns null
then we just skip consuming positions/payloads.
</summary>
</member>
<member name="M:Lucene.Net.Index.FormatPostingsDocsConsumer.Finish">
<summary>Called when we are done adding docs to this term </summary>
</member>
<member name="T:Lucene.Net.Index.FormatPostingsDocsWriter">
<summary>Consumes doc and freq, writing them using the current
index file format
</summary>
</member>
<member name="M:Lucene.Net.Index.FormatPostingsDocsWriter.AddDoc(System.Int32,System.Int32)">
<summary>Adds a new doc in this term. If this returns null
then we just skip consuming positions/payloads.
</summary>
</member>
<member name="M:Lucene.Net.Index.FormatPostingsDocsWriter.Finish">
<summary>Called when we are done adding docs to this term </summary>
</member>
<member name="T:Lucene.Net.Index.FormatPostingsFieldsConsumer">
<summary>Abstract API that consumes terms, doc, freq, prox and
payloads postings. Concrete implementations of this
actually do "something" with the postings (write it into
the index in a specific format).
NOTE: this API is experimental and will likely change
</summary>
</member>
<member name="M:Lucene.Net.Index.FormatPostingsFieldsConsumer.AddField(Lucene.Net.Index.FieldInfo)">
<summary>Add a new field </summary>
</member>
<member name="M:Lucene.Net.Index.FormatPostingsFieldsConsumer.Finish">
<summary>Called when we are done adding everything. </summary>
</member>
<member name="M:Lucene.Net.Index.FormatPostingsFieldsWriter.AddField(Lucene.Net.Index.FieldInfo)">
<summary>Add a new field </summary>
</member>
<member name="M:Lucene.Net.Index.FormatPostingsFieldsWriter.Finish">
<summary>Called when we are done adding everything. </summary>
</member>
<member name="M:Lucene.Net.Index.FormatPostingsPositionsConsumer.AddPosition(System.Int32,System.Byte[],System.Int32,System.Int32)">
<summary>Add a new position and payload. If <c>payloadLength</c> is greater than 0,
you must read those bytes from the IndexInput.
</summary>
</member>
<member name="M:Lucene.Net.Index.FormatPostingsPositionsConsumer.Finish">
<summary>Called when we are done adding positions and payloads </summary>
</member>
<member name="M:Lucene.Net.Index.FormatPostingsPositionsWriter.AddPosition(System.Int32,System.Byte[],System.Int32,System.Int32)">
<summary>Add a new position and payload </summary>
</member>
<member name="M:Lucene.Net.Index.FormatPostingsPositionsWriter.Finish">
<summary>Called when we are done adding positions and payloads </summary>
</member>
<member name="T:Lucene.Net.Index.FormatPostingsTermsConsumer">
<summary> NOTE: this API is experimental and will likely change</summary>
</member>
<member name="M:Lucene.Net.Index.FormatPostingsTermsConsumer.AddTerm(System.Char[],System.Int32)">
<summary>Adds a new term in this field; term ends with U+FFFF
char
</summary>
</member>
<member name="M:Lucene.Net.Index.FormatPostingsTermsConsumer.Finish">
<summary>Called when we are done adding terms to this field </summary>
</member>
<member name="M:Lucene.Net.Index.FormatPostingsTermsWriter.AddTerm(System.Char[],System.Int32)">
<summary>Adds a new term in this field </summary>
</member>
<member name="M:Lucene.Net.Index.FormatPostingsTermsWriter.Finish">
<summary>Called when we are done adding terms to this field </summary>
</member>
<member name="T:Lucene.Net.Index.FreqProxFieldMergeState">
<summary>Used by DocumentsWriter to merge the postings from
multiple ThreadStates when creating a segment
</summary>
</member>
<member name="T:Lucene.Net.Index.RawPostingList">
<summary>This is the base class for an in-memory posting list,
keyed by a Token. <see cref="T:Lucene.Net.Index.TermsHash"/> maintains a hash
table holding one instance of this per unique Token.
Consumers of TermsHash (<see cref="T:Lucene.Net.Index.TermsHashConsumer"/>) must
subclass this class with their own concrete implementation.
FreqProxTermsWriter.PostingList is a private inner class used
for the freq/prox postings, and
TermVectorsTermsWriter.PostingList is a private inner class
used to hold TermVectors postings.
</summary>
</member>
<member name="T:Lucene.Net.Index.TermsHashConsumerPerField">
<summary>Implement this class to plug into the TermsHash
processor, which inverts and stores Tokens into a hash
table and provides an API for writing bytes into
multiple streams for each unique Token.
</summary>
</member>
<member name="T:Lucene.Net.Index.IndexDeletionPolicy">
<summary> <p/>Expert: policy for deletion of stale <see cref="T:Lucene.Net.Index.IndexCommit">index commits</see>.
<p/>Implement this interface, and pass it to one
of the <see cref="T:Lucene.Net.Index.IndexWriter"/> or <see cref="T:Lucene.Net.Index.IndexReader"/>
constructors, to customize when older
<see cref="T:Lucene.Net.Index.IndexCommit">point-in-time commits</see>
are deleted from the index directory. The default deletion policy
is <see cref="T:Lucene.Net.Index.KeepOnlyLastCommitDeletionPolicy"/>, which always
removes old commits as soon as a new commit is done (this
matches the behavior before 2.2).<p/>
<p/>One expected use case for this (and the reason why it
was first created) is to work around problems with an
index directory accessed via filesystems like NFS because
NFS does not provide the "delete on last close" semantics
that Lucene's "point in time" search normally relies on.
By implementing a custom deletion policy, such as "a
commit is only removed once it has been stale for more
than X minutes", you can give your readers time to
refresh to the new commit before <see cref="T:Lucene.Net.Index.IndexWriter"/>
removes the old commits. Note that doing so will
increase the storage requirements of the index. See <a target="top" href="http://issues.apache.org/jira/browse/LUCENE-710">LUCENE-710</a>
for details.<p/>
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexDeletionPolicy.OnInit``1(System.Collections.Generic.IList{``0})">
<summary> <p/>This is called once when a writer is first
instantiated to give the policy a chance to remove old
commit points.<p/>
<p/>The writer locates all index commits present in the
index directory and calls this method. The policy may
choose to delete some of the commit points, doing so by
calling method <see cref="M:Lucene.Net.Index.IndexCommit.Delete"/>
of <see cref="T:Lucene.Net.Index.IndexCommit"/>.<p/>
<p/><u>Note:</u> the last CommitPoint is the most recent one,
i.e. the "front index state". Be careful not to delete it,
unless you know for sure what you are doing, and unless
you can afford to lose the index content while doing that.
</summary>
<param name="commits">List of current
<see cref="T:Lucene.Net.Index.IndexCommit">point-in-time commits</see>,
sorted by age (the 0th one is the oldest commit).
</param>
</member>
<member name="M:Lucene.Net.Index.IndexDeletionPolicy.OnCommit``1(System.Collections.Generic.IList{``0})">
<summary>
<p>This is called each time the writer completes a commit.
This gives the policy a chance to remove old commit points
with each commit.</p>
<p>The policy may now choose to delete old commit points
by calling method <see cref="M:Lucene.Net.Index.IndexCommit.Delete"/>
of <see cref="T:Lucene.Net.Index.IndexCommit"/>.</p>
<p>This method is only called when <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/>
or <see cref="M:Lucene.Net.Index.IndexWriter.Close"/> is called, or possibly not at
all if <see cref="M:Lucene.Net.Index.IndexWriter.Rollback"/> is called.</p>
<p><u>Note:</u> the last CommitPoint is the most recent one,
i.e. the "front index state". Be careful not to delete it,
unless you know for sure what you are doing, and unless
you can afford to lose the index content while doing that.</p>
</summary>
<param name="commits">
List of <see cref="T:Lucene.Net.Index.IndexCommit"/>, sorted by age (the 0th one is the oldest commit).
</param>
</member>
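<!--
The OnInit/OnCommit contract above can be sketched in Java with hypothetical stand-ins: CommitPoint models only the Delete hook of IndexCommit, and KeepLastNPolicy is an invented policy (the real interface is IndexDeletionPolicy). The sketch relies on the documented ordering of the commits list: oldest first.

```java
import java.util.List;

// Hypothetical stand-in for IndexCommit; only the delete hook is modeled.
interface CommitPoint {
    void delete();
}

// Invented policy: keep the newest n commits, delete the rest.
public class KeepLastNPolicy {
    private final int n;
    public KeepLastNPolicy(int n) { this.n = n; }

    // commits are sorted by age, oldest first (index 0), per the contract above
    public void onCommit(List<? extends CommitPoint> commits) {
        int excess = commits.size() - n;
        for (int i = 0; i < excess; i++) {
            commits.get(i).delete();  // never touches the most recent commit while n >= 1
        }
    }
}
```

With n = 1 this behaves like KeepOnlyLastCommitDeletionPolicy; the NFS-friendly policy discussed above would instead compare each commit's age against a threshold before calling delete().
-->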
<member name="T:Lucene.Net.Index.IndexFileDeleter">
<summary>
<para>This class keeps track of each SegmentInfos instance that
is still "live", either because it corresponds to a
segments_N file in the Directory (a "commit", i.e. a
committed SegmentInfos) or because it's an in-memory
SegmentInfos that a writer is actively updating but has
not yet committed. This class uses simple reference
counting to map the live SegmentInfos instances to
individual files in the Directory.</para>
<para>The same directory file may be referenced by more than
one IndexCommit, i.e. more than one SegmentInfos.
Therefore we count how many commits reference each file.
When all the commits referencing a certain file have been
deleted, the refcount for that file becomes zero, and the
file is deleted.</para>
<para>A separate deletion policy interface
(IndexDeletionPolicy) is consulted on creation (onInit)
and once per commit (onCommit), to decide when a commit
should be removed.</para>
<para>It is the business of the IndexDeletionPolicy to choose
when to delete commit points. The actual mechanics of
file deletion, retrying, etc, derived from the deletion
of commit points is the business of the IndexFileDeleter.</para>
<para>The current default deletion policy is
<see cref="T:Lucene.Net.Index.KeepOnlyLastCommitDeletionPolicy"/>, which removes all
prior commits when a new commit has completed. This
matches the behavior before 2.2.</para>
<para>Note that you must hold the write.lock before
instantiating this class. It opens segments_N file(s)
directly with no retry logic.</para>
</summary>
</member>
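<!--
The per-file reference counting described above can be sketched independently of Lucene. SimpleFileDeleter, incRef and decRef below are illustrative stand-ins for IndexFileDeleter's internals (the real class deletes through a Directory and retries failed deletions), written in Java for brevity.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: each live commit increfs the files it references; when a commit
// is deleted its files are decrefed, and a file whose count reaches zero
// is deleted (here, just recorded).
public class SimpleFileDeleter {
    private final Map<String, Integer> refCounts = new HashMap<>();
    public final List<String> deleted = new ArrayList<>();

    public void incRef(String file) {
        refCounts.merge(file, 1, Integer::sum);
    }

    public void decRef(String file) {
        int count = refCounts.merge(file, -1, Integer::sum);
        if (count == 0) {
            refCounts.remove(file);
            deleted.add(file);  // the real deleter removes the file from the Directory
        }
    }
}
```
-->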
<member name="F:Lucene.Net.Index.IndexFileDeleter.deletable">
<summary>Files that we tried to delete but failed to (likely
because they are open and we are running on Windows),
so we will retry deleting them later.
</summary>
</member>
<member name="F:Lucene.Net.Index.IndexFileDeleter.refCounts">
<summary>Counts how many existing commits reference a file.
Maps a file name to a RefCount instance (see the RefCount class below).
</summary>
</member>
<member name="F:Lucene.Net.Index.IndexFileDeleter.commits">
<summary>Holds all commits currently in the index.
This will contain just one commit if you are using the
default deletion policy (KeepOnlyLastCommitDeletionPolicy).
Other policies may leave commit points live for longer,
in which case this list would contain more than one entry.
</summary>
</member>
<member name="F:Lucene.Net.Index.IndexFileDeleter.lastFiles">
<summary>Holds files we had incRef'd from the previous
non-commit checkpoint.
</summary>
</member>
<member name="F:Lucene.Net.Index.IndexFileDeleter.VERBOSE_REF_COUNTS">
<summary>Change to true to see details of reference counts when
infoStream != null
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexFileDeleter.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Index.IndexDeletionPolicy,Lucene.Net.Index.SegmentInfos,System.IO.StreamWriter,Lucene.Net.Index.DocumentsWriter,System.Collections.Generic.HashSet{System.String})">
<summary> Initialize the deleter: find all previous commits in
the Directory, incref the files they reference, call
the policy to let it delete commits. This will remove
any files not referenced by any of the commits.
</summary>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> IOException if there is a low-level IO error </throws>
</member>
<member name="M:Lucene.Net.Index.IndexFileDeleter.DeleteCommits">
<summary> Remove the CommitPoints in the commitsToDelete List by
DecRef'ing all files from each SegmentInfos.
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexFileDeleter.Refresh(System.String)">
<summary> Writer calls this when it has hit an error and had to
roll back, to tell us that there may now be
unreferenced files in the filesystem. So we re-list
the filesystem and delete such files. If segmentName
is non-null, we will only delete files corresponding to
that segment.
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexFileDeleter.Checkpoint(Lucene.Net.Index.SegmentInfos,System.Boolean)">
<summary> For definition of "check point" see IndexWriter comments:
"Clarification: Check Points (and commits)".
Writer calls this when it has made a "consistent
change" to the index, meaning new files are written to
the index and the in-memory SegmentInfos have been
modified to point to those files.
This may or may not be a commit (segments_N may or may
not have been written).
We simply incref the files referenced by the new
SegmentInfos and decref the files we had previously
seen (if any).
If this is a commit, we also call the policy to give it
a chance to remove other commits. If any commits are
removed, we decref their files as well.
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexFileDeleter.DeleteNewFiles(System.Collections.Generic.ICollection{System.String})">
<summary>Deletes the specified files, but only if they are new
(have not yet been incref'd).
</summary>
</member>
<member name="T:Lucene.Net.Index.IndexFileDeleter.RefCount">
<summary> Tracks the reference count for a single index file:</summary>
</member>
<member name="T:Lucene.Net.Index.IndexFileDeleter.CommitPoint">
<summary> Holds details for each commit point. This class is
also passed to the deletion policy. Note: this class
has a natural ordering that is inconsistent with
equals.
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexFileDeleter.CommitPoint.Delete">
<summary> Called only by the deletion policy, to remove this
commit point from the index.
</summary>
</member>
<member name="T:Lucene.Net.Index.IndexFileNameFilter">
<summary>Filename filter that accepts only filenames and extensions created by Lucene. </summary>
</member>
<member name="M:Lucene.Net.Index.IndexFileNameFilter.IsCFSFile(System.String)">
<summary> Returns true if this is a file that would be contained
in a CFS file. This function should only be called on
files that pass the above "accept" check (i.e., that are already
known to be Lucene index files).
</summary>
</member>
<member name="T:Lucene.Net.Index.IndexFileNames">
<summary>Useful constants representing filenames and extensions used by Lucene</summary>
</member>
<member name="F:Lucene.Net.Index.IndexFileNames.SEGMENTS">
<summary>Name of the index segment file </summary>
</member>
<member name="F:Lucene.Net.Index.IndexFileNames.SEGMENTS_GEN">
<summary>Name of the index generation reference file </summary>
</member>
<member name="F:Lucene.Net.Index.IndexFileNames.DELETABLE">
<summary>Name of the index deletable file (only used in
pre-lockless indices)
</summary>
</member>
<member name="F:Lucene.Net.Index.IndexFileNames.NORMS_EXTENSION">
<summary>Extension of norms file </summary>
</member>
<member name="F:Lucene.Net.Index.IndexFileNames.FREQ_EXTENSION">
<summary>Extension of freq postings file </summary>
</member>
<member name="F:Lucene.Net.Index.IndexFileNames.PROX_EXTENSION">
<summary>Extension of prox postings file </summary>
</member>
<member name="F:Lucene.Net.Index.IndexFileNames.TERMS_EXTENSION">
<summary>Extension of terms file </summary>
</member>
<member name="F:Lucene.Net.Index.IndexFileNames.TERMS_INDEX_EXTENSION">
<summary>Extension of terms index file </summary>
</member>
<member name="F:Lucene.Net.Index.IndexFileNames.FIELDS_INDEX_EXTENSION">
<summary>Extension of stored fields index file </summary>
</member>
<member name="F:Lucene.Net.Index.IndexFileNames.FIELDS_EXTENSION">
<summary>Extension of stored fields file </summary>
</member>
<member name="F:Lucene.Net.Index.IndexFileNames.VECTORS_FIELDS_EXTENSION">
<summary>Extension of vectors fields file </summary>
</member>
<member name="F:Lucene.Net.Index.IndexFileNames.VECTORS_DOCUMENTS_EXTENSION">
<summary>Extension of vectors documents file </summary>
</member>
<member name="F:Lucene.Net.Index.IndexFileNames.VECTORS_INDEX_EXTENSION">
<summary>Extension of vectors index file </summary>
</member>
<member name="F:Lucene.Net.Index.IndexFileNames.COMPOUND_FILE_EXTENSION">
<summary>Extension of compound file </summary>
</member>
<member name="F:Lucene.Net.Index.IndexFileNames.COMPOUND_FILE_STORE_EXTENSION">
<summary>Extension of compound file for doc store files</summary>
</member>
<member name="F:Lucene.Net.Index.IndexFileNames.DELETES_EXTENSION">
<summary>Extension of deletes </summary>
</member>
<member name="F:Lucene.Net.Index.IndexFileNames.FIELD_INFOS_EXTENSION">
<summary>Extension of field infos </summary>
</member>
<member name="F:Lucene.Net.Index.IndexFileNames.PLAIN_NORMS_EXTENSION">
<summary>Extension of plain norms </summary>
</member>
<member name="F:Lucene.Net.Index.IndexFileNames.SEPARATE_NORMS_EXTENSION">
<summary>Extension of separate norms </summary>
</member>
<member name="F:Lucene.Net.Index.IndexFileNames.GEN_EXTENSION">
<summary>Extension of gen file </summary>
</member>
<member name="F:Lucene.Net.Index.IndexFileNames.INDEX_EXTENSIONS">
<summary> This array contains all filename extensions used by
Lucene's index files, with two exceptions: the
extensions made up of <c>.f</c> plus a number and
of <c>.s</c> plus a number. Also note that
Lucene's <c>segments_N</c> files do not have any
filename extension.
</summary>
</member>
<member name="F:Lucene.Net.Index.IndexFileNames.INDEX_EXTENSIONS_IN_COMPOUND_FILE">
<summary>File extensions that are added to a compound file
(same as above, minus "del", "gen", "cfs").
</summary>
</member>
<member name="F:Lucene.Net.Index.IndexFileNames.COMPOUND_EXTENSIONS">
<summary>File extensions of old-style index files </summary>
</member>
<member name="F:Lucene.Net.Index.IndexFileNames.VECTOR_EXTENSIONS">
<summary>File extensions for term vector support </summary>
</member>
<member name="M:Lucene.Net.Index.IndexFileNames.FileNameFromGeneration(System.String,System.String,System.Int64)">
<summary> Computes the full file name from base, extension and
generation. If the generation is -1, the file name is
null. If it's 0, the file name is &lt;base&gt;&lt;extension&gt;.
If it's greater than 0, the file name is &lt;base&gt;_&lt;gen&gt;&lt;extension&gt;.
</summary>
<param name="base_Renamed">main part of the file name
</param>
<param name="extension">extension of the filename (including the leading dot)
</param>
<param name="gen">generation
</param>
</member>
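<!--
The generation-to-filename rule above can be sketched as a standalone Java function. GenNames is an illustrative name, and the base-36 encoding of the generation mirrors the Java Lucene original; treat the exact encoding as an assumption of this sketch rather than a statement of the .NET port's API.

```java
// Illustrative re-implementation of the naming rule described above.
public class GenNames {
    public static String fileNameFromGeneration(String base, String extension, long gen) {
        if (gen == -1) {
            return null;                  // no file exists for this generation
        } else if (gen == 0) {
            return base + extension;      // first generation: no generation suffix
        } else {
            // generation encoded in base 36, e.g. generation 10 yields "_a"
            return base + "_" + Long.toString(gen, 36) + extension;
        }
    }
}
```
-->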
<member name="M:Lucene.Net.Index.IndexFileNames.IsDocStoreFile(System.String)">
<summary> Returns true if the provided filename is one of the doc
store files (ends with an extension in
STORE_INDEX_EXTENSIONS).
</summary>
</member>
<member name="T:Lucene.Net.Index.IndexWriter">
<summary>An <c>IndexWriter</c> creates and maintains an index.
<p/>The <c>create</c> argument to the
<see cref="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Analysis.Analyzer,System.Boolean,Lucene.Net.Index.IndexWriter.MaxFieldLength)">constructor</see> determines
whether a new index is created, or whether an existing index is
opened. Note that you can open an index with <c>create=true</c>
even while readers are using the index. The old readers will
continue to search the "point in time" snapshot they had opened,
and won't see the newly created index until they re-open. There are
also <see cref="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Analysis.Analyzer,Lucene.Net.Index.IndexWriter.MaxFieldLength)">constructors</see>
with no <c>create</c> argument which will create a new index
if there is not already an index at the provided path and otherwise
open the existing index.<p/>
<p/>In either case, documents are added with <see cref="M:Lucene.Net.Index.IndexWriter.AddDocument(Lucene.Net.Documents.Document)"/>
and removed with <see cref="M:Lucene.Net.Index.IndexWriter.DeleteDocuments(Lucene.Net.Index.Term)"/> or
<see cref="M:Lucene.Net.Index.IndexWriter.DeleteDocuments(Lucene.Net.Search.Query)"/>. A document can be updated with
<see cref="M:Lucene.Net.Index.IndexWriter.UpdateDocument(Lucene.Net.Index.Term,Lucene.Net.Documents.Document)"/> (which just deletes
and then adds the entire document). When finished adding, deleting
and updating documents, <see cref="M:Lucene.Net.Index.IndexWriter.Close"/> should be called.<p/>
<a name="flush"></a>
<p/>These changes are buffered in memory and periodically
flushed to the <see cref="P:Lucene.Net.Index.IndexWriter.Directory"/> (during the above method
calls). A flush is triggered when there are enough
buffered deletes (see <see cref="M:Lucene.Net.Index.IndexWriter.SetMaxBufferedDeleteTerms(System.Int32)"/>)
or enough added documents since the last flush, whichever
is sooner. For the added documents, flushing is triggered
either by RAM usage of the documents (see
<see cref="M:Lucene.Net.Index.IndexWriter.SetRAMBufferSizeMB(System.Double)"/>) or the number of added documents.
The default is to flush when RAM usage hits 16 MB. For
best indexing speed you should flush by RAM usage with a
large RAM buffer. Note that flushing just moves the
internal buffered state in IndexWriter into the index, but
these changes are not visible to IndexReader until either
<see cref="M:Lucene.Net.Index.IndexWriter.Commit"/> or <see cref="M:Lucene.Net.Index.IndexWriter.Close"/> is called. A flush may
also trigger one or more segment merges which by default
run with a background thread so as not to block the
addDocument calls (see <a href="#mergePolicy">below</a>
for changing the <see cref="P:Lucene.Net.Index.IndexWriter.MergeScheduler"/>).
<p/>
If an index will not have more documents added for a while and optimal search
performance is desired, then either the full <see cref="M:Lucene.Net.Index.IndexWriter.Optimize"/>
method or partial <see cref="M:Lucene.Net.Index.IndexWriter.Optimize(System.Int32)"/> method should be
called before the index is closed.
<p/>
Opening an <c>IndexWriter</c> creates a lock file for the directory in use. Trying to open
another <c>IndexWriter</c> on the same directory will lead to a
<see cref="T:Lucene.Net.Store.LockObtainFailedException"/>. The <see cref="T:Lucene.Net.Store.LockObtainFailedException"/>
is also thrown if an IndexReader on the same directory is used to delete documents
from the index.<p/>
</summary>
<summary><a name="deletionPolicy"></a>
<p/>Expert: <c>IndexWriter</c> allows an optional
<see cref="T:Lucene.Net.Index.IndexDeletionPolicy"/> implementation to be
specified. You can use this to control when prior commits
are deleted from the index. The default policy is <see cref="T:Lucene.Net.Index.KeepOnlyLastCommitDeletionPolicy"/>
which removes all prior
commits as soon as a new commit is done (this matches
behavior before 2.2). Creating your own policy can allow
you to explicitly keep previous "point in time" commits
alive in the index for some time, to allow readers to
refresh to the new commit without having the old commit
deleted out from under them. This is necessary on
filesystems like NFS that do not support "delete on last
close" semantics, which Lucene's "point in time" search
normally relies on. <p/>
<a name="mergePolicy"></a> <p/>Expert:
<c>IndexWriter</c> allows you to separately change
the <see cref="P:Lucene.Net.Index.IndexWriter.MergePolicy"/> and the <see cref="P:Lucene.Net.Index.IndexWriter.MergeScheduler"/>.
The <see cref="P:Lucene.Net.Index.IndexWriter.MergePolicy"/> is invoked whenever there are
changes to the segments in the index. Its role is to
select which merges to do, if any, and return a <see cref="T:Lucene.Net.Index.MergePolicy.MergeSpecification"/>
describing the merges. It
also selects merges to do for optimize(). (The default is
<see cref="T:Lucene.Net.Index.LogByteSizeMergePolicy"/>.) Then, the <see cref="P:Lucene.Net.Index.IndexWriter.MergeScheduler"/>
is invoked with the requested merges and
it decides when and how to run the merges. The default is
<see cref="T:Lucene.Net.Index.ConcurrentMergeScheduler"/>. <p/>
<a name="OOME"></a><p/><b>NOTE</b>: if you hit an
OutOfMemoryError then IndexWriter will quietly record this
fact and block all future segment commits. This is a
defensive measure in case any internal state (buffered
documents and deletions) were corrupted. Any subsequent
calls to <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/> will throw an
IllegalStateException. The only course of action is to
call <see cref="M:Lucene.Net.Index.IndexWriter.Close"/>, which internally will call <see cref="M:Lucene.Net.Index.IndexWriter.Rollback"/>
, to undo any changes to the index since the
last commit. You can also just call <see cref="M:Lucene.Net.Index.IndexWriter.Rollback"/>
directly.<p/>
<a name="thread-safety"></a><p/><b>NOTE</b>:
<see cref="T:Lucene.Net.Index.IndexWriter"/> instances are completely thread
safe, meaning multiple threads can call any of its
methods, concurrently. If your application requires
external synchronization, you should <b>not</b>
synchronize on the <c>IndexWriter</c> instance as
this may cause deadlock; use your own (non-Lucene) objects
instead. <p/>
<b>NOTE:</b> if you call
<c>Thread.Interrupt()</c> on a thread that's within
IndexWriter, IndexWriter will try to catch this (eg, if
it's in a Wait() or Thread.Sleep()), and will then throw
the unchecked exception <see cref="T:System.Threading.ThreadInterruptedException"/>
and <b>clear</b> the interrupt status on the thread.<p/>
</summary>
</member>
<member name="F:Lucene.Net.Index.IndexWriter.WRITE_LOCK_NAME">
<summary> Name of the write lock in the index.</summary>
</member>
<member name="F:Lucene.Net.Index.IndexWriter.DISABLE_AUTO_FLUSH">
<summary> Value to denote a flush trigger is disabled</summary>
</member>
<member name="F:Lucene.Net.Index.IndexWriter.DEFAULT_RAM_BUFFER_SIZE_MB">
<summary> Default value is 16 MB (which means flush when buffered
docs consume 16 MB RAM). Change using <see cref="M:Lucene.Net.Index.IndexWriter.SetRAMBufferSizeMB(System.Double)"/>.
</summary>
</member>
<member name="F:Lucene.Net.Index.IndexWriter.DEFAULT_MAX_FIELD_LENGTH">
<summary> Default value is 10,000. Change using <see cref="M:Lucene.Net.Index.IndexWriter.SetMaxFieldLength(System.Int32)"/>.</summary>
</member>
<member name="F:Lucene.Net.Index.IndexWriter.DEFAULT_TERM_INDEX_INTERVAL">
<summary> Default value is 128. Change using <see cref="P:Lucene.Net.Index.IndexWriter.TermIndexInterval"/>.</summary>
</member>
<member name="F:Lucene.Net.Index.IndexWriter.WRITE_LOCK_TIMEOUT">
<summary> Default value for the write lock timeout (1,000 milliseconds).</summary>
<seealso cref="P:Lucene.Net.Index.IndexWriter.DefaultWriteLockTimeout">
</seealso>
</member>
<member name="F:Lucene.Net.Index.IndexWriter.DEFAULT_MAX_BUFFERED_DOCS">
<summary> Disabled by default (because IndexWriter flushes by RAM usage
by default). Change using <see cref="M:Lucene.Net.Index.IndexWriter.SetMaxBufferedDocs(System.Int32)"/>.
</summary>
</member>
<member name="F:Lucene.Net.Index.IndexWriter.DEFAULT_MAX_BUFFERED_DELETE_TERMS">
<summary> Disabled by default (because IndexWriter flushes by RAM usage
by default). Change using <see cref="M:Lucene.Net.Index.IndexWriter.SetMaxBufferedDeleteTerms(System.Int32)"/>.
</summary>
</member>
<member name="F:Lucene.Net.Index.IndexWriter.MAX_TERM_LENGTH">
<summary> Absolute hard maximum length for a term. If a term
arrives from the analyzer longer than this length, it
is skipped and a message is printed to infoStream, if
set (see <see cref="M:Lucene.Net.Index.IndexWriter.SetInfoStream(System.IO.StreamWriter)"/>).
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.GetReader">
<summary> Expert: returns a readonly reader, covering all committed as well as
un-committed changes to the index. This provides "near real-time"
searching, in that changes made during an IndexWriter session can be
quickly made available for searching without closing the writer nor
calling <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/>.
<p/>
Note that this is functionally equivalent to calling <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/> and then
using <see cref="M:Lucene.Net.Index.IndexReader.Open(Lucene.Net.Store.Directory,System.Boolean)"/> to open a new reader. But the turnaround
time of this method should be faster since it avoids the potentially
costly <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/>.
<p/>
You must close the <see cref="T:Lucene.Net.Index.IndexReader"/> returned by this method once you are done using it.
<p/>
It's <i>near</i> real-time because there is no hard
guarantee on how quickly you can get a new reader after
making changes with IndexWriter. You'll have to
experiment in your situation to determine if it's
fast enough. As this is a new and experimental
feature, please report back on your findings so we can
learn, improve and iterate.<p/>
<p/>The resulting reader supports <see cref="M:Lucene.Net.Index.IndexReader.Reopen"/>,
but that call will simply forward
back to this method (though this may change in the
future).<p/>
<p/>The very first time this method is called, this
writer instance will make every effort to pool the
readers that it opens for doing merges, applying
deletes, etc. This means additional resources (RAM,
file descriptors, CPU time) will be consumed.<p/>
<p/>For lower latency on reopening a reader, you should use
<see cref="P:Lucene.Net.Index.IndexWriter.MergedSegmentWarmer"/> to
pre-warm a newly merged segment before it's committed
to the index. This is important for minimizing index-to-search
delay after a large merge.
<p/>If an addIndexes* call is running in another thread,
then this reader will only search those segments from
the foreign index that have been successfully copied
over, so far.<p/>
<p/><b>NOTE</b>: Once the writer is closed, any
outstanding readers may continue to be used. However,
if you attempt to reopen any of those readers, you'll
hit an <see cref="T:Lucene.Net.Store.AlreadyClosedException"/>.<p/>
<p/><b>NOTE:</b> This API is experimental and might
change in incompatible ways in the next release.<p/>
</summary>
<returns> IndexReader that covers entire index plus all
changes made so far by this IndexWriter instance
</returns>
<throws> IOException </throws>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.GetReader(System.Int32)">
<summary>Expert: like <see cref="M:Lucene.Net.Index.IndexWriter.GetReader"/>, except you can
specify which termInfosIndexDivisor should be used for
any newly opened readers.
</summary>
<param name="termInfosIndexDivisor">Subsamples which indexed
terms are loaded into RAM. This has the same effect as <see cref="P:Lucene.Net.Index.IndexWriter.TermIndexInterval"/>
except that setting
must be done at indexing time while this setting can be
set per reader. When set to N, then one in every
N*termIndexInterval terms in the index is loaded into
memory. By setting this to a value > 1 you can reduce
memory usage, at the expense of higher latency when
loading a TermInfo. The default value is 1. Set this
to -1 to skip loading the terms index entirely.
</param>
</member>
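To build intuition for the divisor: with a divisor of N, roughly one in every N * termIndexInterval indexed terms stays resident in RAM. A back-of-the-envelope sketch of that arithmetic (the numbers below are purely illustrative, not measurements, and this helper is not a Lucene.Net API):

```java
// Rough estimate of how many indexed terms stay resident in RAM for a
// given termIndexInterval and termInfosIndexDivisor (divisor == -1 skips
// loading the terms index entirely). Numbers are illustrative only.
public class TermsIndexEstimate {

    static long termsInRam(long totalTerms, int termIndexInterval, int divisor) {
        if (divisor == -1) {
            return 0; // terms index not loaded at all
        }
        // one in every (termIndexInterval * divisor) terms is loaded
        return totalTerms / ((long) termIndexInterval * divisor);
    }

    public static void main(String[] args) {
        // 10M terms, default interval 128, divisor 1 -> 78125 resident terms
        System.out.println(termsInRam(10_000_000L, 128, 1));
        // doubling the divisor halves memory at the cost of lookup latency
        System.out.println(termsInRam(10_000_000L, 128, 2));
    }
}
```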
<member name="M:Lucene.Net.Index.IndexWriter.NumDeletedDocs(Lucene.Net.Index.SegmentInfo)">
<summary> Obtain the number of deleted docs for a pooled reader.
If the reader isn't being pooled, the segmentInfo's
delCount is returned.
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.EnsureOpen(System.Boolean)">
<summary> Used internally to throw an <see cref="T:Lucene.Net.Store.AlreadyClosedException"/>
if this IndexWriter has been
closed.
</summary>
<throws> AlreadyClosedException if this IndexWriter has been closed </throws>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.Message(System.String)">
<summary> Prints a message to the infoStream (if non-null),
prefixed with the identifying information for this
writer and the thread that's calling it.
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.SetSimilarity(Lucene.Net.Search.Similarity)">
<summary>Expert: Set the Similarity implementation used by this IndexWriter.
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Analysis.Analyzer,System.Boolean,Lucene.Net.Index.IndexWriter.MaxFieldLength)">
<summary> Constructs an IndexWriter for the index in <c>d</c>.
Text will be analyzed with <c>a</c>. If <c>create</c>
is true, then a new, empty index will be created in
<c>d</c>, replacing the index already there, if any.
</summary>
<param name="d">the index directory
</param>
<param name="a">the analyzer to use
</param>
<param name="create"><c>true</c> to create the index or overwrite
the existing one; <c>false</c> to append to the existing
index
</param>
<param name="mfl">Maximum field length in number of terms/tokens: LIMITED, UNLIMITED, or user-specified
via the MaxFieldLength constructor.
</param>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> LockObtainFailedException if another writer </throws>
<summary> has this index open (<c>write.lock</c> could not
be obtained)
</summary>
<throws> IOException if the directory cannot be read/written to, or </throws>
<summary> if it does not exist and <c>create</c> is
<c>false</c> or if there is any other low-level
IO error
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Analysis.Analyzer,Lucene.Net.Index.IndexWriter.MaxFieldLength)">
<summary> Constructs an IndexWriter for the index in
<c>d</c>, first creating it if it does not
already exist.
</summary>
<param name="d">the index directory
</param>
<param name="a">the analyzer to use
</param>
<param name="mfl">Maximum field length in number of terms/tokens: LIMITED, UNLIMITED, or user-specified
via the MaxFieldLength constructor.
</param>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> LockObtainFailedException if another writer </throws>
<summary> has this index open (<c>write.lock</c> could not
be obtained)
</summary>
<throws> IOException if the directory cannot be </throws>
<summary> read/written to or if there is any other low-level
IO error
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Analysis.Analyzer,Lucene.Net.Index.IndexDeletionPolicy,Lucene.Net.Index.IndexWriter.MaxFieldLength)">
<summary> Expert: constructs an IndexWriter with a custom <see cref="T:Lucene.Net.Index.IndexDeletionPolicy"/>
, for the index in <c>d</c>,
first creating it if it does not already exist. Text
will be analyzed with <c>a</c>.
</summary>
<param name="d">the index directory
</param>
<param name="a">the analyzer to use
</param>
<param name="deletionPolicy">see <a href="#deletionPolicy">above</a>
</param>
<param name="mfl">whether or not to limit field lengths
</param>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> LockObtainFailedException if another writer </throws>
<summary> has this index open (<c>write.lock</c> could not
be obtained)
</summary>
<throws> IOException if the directory cannot be </throws>
<summary> read/written to or if there is any other low-level
IO error
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Analysis.Analyzer,System.Boolean,Lucene.Net.Index.IndexDeletionPolicy,Lucene.Net.Index.IndexWriter.MaxFieldLength)">
<summary> Expert: constructs an IndexWriter with a custom <see cref="T:Lucene.Net.Index.IndexDeletionPolicy"/>
, for the index in <c>d</c>.
Text will be analyzed with <c>a</c>. If
<c>create</c> is true, then a new, empty index
will be created in <c>d</c>, replacing the index
already there, if any.
</summary>
<param name="d">the index directory
</param>
<param name="a">the analyzer to use
</param>
<param name="create"><c>true</c> to create the index or overwrite
the existing one; <c>false</c> to append to the existing
index
</param>
<param name="deletionPolicy">see <a href="#deletionPolicy">above</a>
</param>
<param name="mfl"><see cref="T:Lucene.Net.Index.IndexWriter.MaxFieldLength"/>, whether or not to limit field lengths. Value is in number of terms/tokens
</param>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> LockObtainFailedException if another writer </throws>
<summary> has this index open (<c>write.lock</c> could not
be obtained)
</summary>
<throws> IOException if the directory cannot be read/written to, or </throws>
<summary> if it does not exist and <c>create</c> is
<c>false</c> or if there is any other low-level
IO error
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Analysis.Analyzer,System.Boolean,Lucene.Net.Index.IndexDeletionPolicy,Lucene.Net.Index.IndexWriter.MaxFieldLength,Lucene.Net.Index.DocumentsWriter.IndexingChain,Lucene.Net.Index.IndexCommit)">
<summary> Expert: constructs an IndexWriter with a custom <see cref="T:Lucene.Net.Index.IndexDeletionPolicy"/>
and <see cref="T:Lucene.Net.Index.DocumentsWriter.IndexingChain"/>,
for the index in <c>d</c>.
Text will be analyzed with <c>a</c>. If
<c>create</c> is true, then a new, empty index
will be created in <c>d</c>, replacing the index
already there, if any.
</summary>
<param name="d">the index directory
</param>
<param name="a">the analyzer to use
</param>
<param name="create"><c>true</c> to create the index or overwrite
the existing one; <c>false</c> to append to the existing
index
</param>
<param name="deletionPolicy">see <a href="#deletionPolicy">above</a>
</param>
<param name="mfl">whether or not to limit field lengths, value is in number of terms/tokens. See <see cref="T:Lucene.Net.Index.IndexWriter.MaxFieldLength"/>.
</param>
<param name="indexingChain">the <see cref="T:Lucene.Net.Index.DocConsumer"/> chain to be used to
process documents
</param>
<param name="commit">which commit to open
</param>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> LockObtainFailedException if another writer </throws>
<summary> has this index open (<c>write.lock</c> could not
be obtained)
</summary>
<throws> IOException if the directory cannot be read/written to, or </throws>
<summary> if it does not exist and <c>create</c> is
<c>false</c> or if there is any other low-level
IO error
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.#ctor(Lucene.Net.Store.Directory,Lucene.Net.Analysis.Analyzer,Lucene.Net.Index.IndexDeletionPolicy,Lucene.Net.Index.IndexWriter.MaxFieldLength,Lucene.Net.Index.IndexCommit)">
<summary> Expert: constructs an IndexWriter on specific commit
point, with a custom <see cref="T:Lucene.Net.Index.IndexDeletionPolicy"/>, for
the index in <c>d</c>. Text will be analyzed
with <c>a</c>.
<p/> This is only meaningful if you've used a <see cref="T:Lucene.Net.Index.IndexDeletionPolicy"/>
in the past that keeps more than
just the last commit.
<p/>This operation is similar to <see cref="M:Lucene.Net.Index.IndexWriter.Rollback"/>,
except that method can only rollback what's been done
with the current instance of IndexWriter since its last
commit, whereas this method can rollback to an
arbitrary commit point from the past, assuming the
<see cref="T:Lucene.Net.Index.IndexDeletionPolicy"/> has preserved past
commits.
</summary>
<param name="d">the index directory
</param>
<param name="a">the analyzer to use
</param>
<param name="deletionPolicy">see <a href="#deletionPolicy">above</a>
</param>
<param name="mfl">whether or not to limit field lengths, value is in number of terms/tokens. See <see cref="T:Lucene.Net.Index.IndexWriter.MaxFieldLength"/>.
</param>
<param name="commit">which commit to open
</param>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> LockObtainFailedException if another writer </throws>
<summary> has this index open (<c>write.lock</c> could not
be obtained)
</summary>
<throws> IOException if the directory cannot be read/written to, or </throws>
<summary> if it does not exist and <c>create</c> is
<c>false</c> or if there is any other low-level
IO error
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.SetMergePolicy(Lucene.Net.Index.MergePolicy)">
<summary> Expert: set the merge policy used by this writer.</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.SetMergeScheduler(Lucene.Net.Index.MergeScheduler)">
<summary> Expert: set the merge scheduler used by this writer.</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.SetMaxFieldLength(System.Int32)">
<summary> The maximum number of terms that will be indexed for a single field in a
document. This limits the amount of memory required for indexing, so that
collections with very large files will not crash the indexing process by
running out of memory. This setting refers to the number of running terms,
not to the number of different terms.<p/>
<strong>Note:</strong> this silently truncates large documents, excluding from the
index all terms that occur further in the document. If you know your source
documents are large, be sure to set this value high enough to accommodate
the expected size. If you set it to Integer.MAX_VALUE, then the only limit
is your memory, but you should anticipate an OutOfMemoryError.<p/>
By default, no more than <see cref="F:Lucene.Net.Index.IndexWriter.DEFAULT_MAX_FIELD_LENGTH"/> terms
will be indexed for a field.
</summary>
</member>
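The truncation behavior described above can be pictured with a tiny sketch: tokens past the per-field limit are simply dropped. This is illustrative only; the real cutoff happens inside the indexing chain, not via a helper like this:

```java
// Sketch of maxFieldLength truncation: only the first maxFieldLength
// running tokens of a field are kept, the remainder is discarded.
public class MaxFieldLengthSketch {

    static String[] truncate(String[] tokens, int maxFieldLength) {
        int keep = Math.min(tokens.length, maxFieldLength);
        String[] kept = new String[keep];
        System.arraycopy(tokens, 0, kept, 0, keep);
        return kept;
    }

    public static void main(String[] args) {
        String[] tokens = {"to", "be", "or", "not", "to"};
        // note: "to" appears twice; the limit counts running terms,
        // not distinct terms
        System.out.println(truncate(tokens, 3).length); // 3
    }
}
```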
<member name="M:Lucene.Net.Index.IndexWriter.GetMaxFieldLength">
<summary> Returns the maximum number of terms that will be
indexed for a single field in a document.
</summary>
<seealso cref="M:Lucene.Net.Index.IndexWriter.SetMaxFieldLength(System.Int32)">
</seealso>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.SetMaxBufferedDocs(System.Int32)">
<summary>Determines the minimal number of documents required
before the buffered in-memory documents are flushed as
a new Segment. Large values generally give faster
indexing.
<p/>When this is set, the writer will flush every
maxBufferedDocs added documents. Pass in <see cref="F:Lucene.Net.Index.IndexWriter.DISABLE_AUTO_FLUSH"/>
to prevent triggering a flush due
to number of buffered documents. Note that if flushing
by RAM usage is also enabled, then the flush will be
triggered by whichever comes first.<p/>
<p/>Disabled by default (writer flushes by RAM usage).<p/>
</summary>
<throws> IllegalArgumentException if maxBufferedDocs is </throws>
<summary> enabled but smaller than 2, or it disables maxBufferedDocs
when ramBufferSize is already disabled
</summary>
<seealso cref="M:Lucene.Net.Index.IndexWriter.SetRAMBufferSizeMB(System.Double)">
</seealso>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.PushMaxBufferedDocs">
<summary> If we are flushing by doc count (not by RAM usage), and
using LogDocMergePolicy then push maxBufferedDocs down
as its minMergeDocs, to keep backwards compatibility.
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.GetMaxBufferedDocs">
<summary> Returns the number of buffered added documents that will
trigger a flush if enabled.
</summary>
<seealso cref="M:Lucene.Net.Index.IndexWriter.SetMaxBufferedDocs(System.Int32)">
</seealso>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.SetRAMBufferSizeMB(System.Double)">
<summary>Determines the amount of RAM that may be used for
buffering added documents and deletions before they are
flushed to the Directory. Generally for faster
indexing performance it's best to flush by RAM usage
instead of document count and use as large a RAM buffer
as you can.
<p/>When this is set, the writer will flush whenever
buffered documents and deletions use this much RAM.
Pass in <see cref="F:Lucene.Net.Index.IndexWriter.DISABLE_AUTO_FLUSH"/> to prevent
triggering a flush due to RAM usage. Note that if
flushing by document count is also enabled, then the
flush will be triggered by whichever comes first.<p/>
<p/> <b>NOTE</b>: the accounting of RAM usage for pending
deletions is only approximate. Specifically, if you
delete by Query, Lucene currently has no way to measure
the RAM usage of individual Queries, so the accounting
will under-estimate and you should compensate by either
calling commit() periodically yourself, or by using
<see cref="M:Lucene.Net.Index.IndexWriter.SetMaxBufferedDeleteTerms(System.Int32)"/> to flush by count
instead of RAM usage (each buffered delete Query counts
as one).
<p/>
<b>NOTE</b>: because IndexWriter uses <c>int</c>s when managing its
internal storage, the absolute maximum value for this setting is somewhat
less than 2048 MB. The precise limit depends on various factors, such as
how large your documents are, how many fields have norms, etc., so it's
best to set this value comfortably under 2048.
<p/>
<p/> The default value is <see cref="F:Lucene.Net.Index.IndexWriter.DEFAULT_RAM_BUFFER_SIZE_MB"/>.<p/>
</summary>
<throws> IllegalArgumentException if ramBufferSize is </throws>
<summary> enabled but non-positive, or it disables ramBufferSize
when maxBufferedDocs is already disabled
</summary>
</member>
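The two auto-flush triggers (document count and RAM usage) combine as "whichever fires first", and DISABLE_AUTO_FLUSH turns a trigger off entirely. A minimal sketch of that policy as documented above (an illustration, not IndexWriter's internal code):

```java
// Illustrates the "flush by whichever trigger comes first" policy:
// a disabled trigger (DISABLE_AUTO_FLUSH, modeled here as -1) never fires.
public class FlushPolicySketch {

    static final int DISABLE_AUTO_FLUSH = -1;

    static boolean shouldFlush(int bufferedDocs, double ramUsedMB,
                               int maxBufferedDocs, double ramBufferSizeMB) {
        boolean byDocCount = false;
        if (maxBufferedDocs != DISABLE_AUTO_FLUSH) {
            byDocCount = bufferedDocs >= maxBufferedDocs;
        }
        boolean byRam = false;
        if (ramBufferSizeMB != DISABLE_AUTO_FLUSH) {
            byRam = ramUsedMB >= ramBufferSizeMB;
        }
        return byDocCount || byRam; // whichever trigger fires first
    }

    public static void main(String[] args) {
        // doc-count trigger disabled, RAM limit of 16 MB reached
        System.out.println(shouldFlush(10, 16.0, DISABLE_AUTO_FLUSH, 16.0)); // true
    }
}
```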
<member name="M:Lucene.Net.Index.IndexWriter.GetRAMBufferSizeMB">
<summary> Returns the value set by <see cref="M:Lucene.Net.Index.IndexWriter.SetRAMBufferSizeMB(System.Double)"/> if enabled.</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.SetMaxBufferedDeleteTerms(System.Int32)">
<summary> <p/>Determines the minimal number of delete terms required before the buffered
in-memory delete terms are applied and flushed. If there are documents
buffered in memory at the time, they are merged and a new segment is
created.<p/>
<p/>Disabled by default (writer flushes by RAM usage).<p/>
</summary>
<throws> IllegalArgumentException if maxBufferedDeleteTerms </throws>
<summary> is enabled but smaller than 1
</summary>
<seealso cref="M:Lucene.Net.Index.IndexWriter.SetRAMBufferSizeMB(System.Double)">
</seealso>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.GetMaxBufferedDeleteTerms">
<summary> Returns the number of buffered deleted terms that will
trigger a flush if enabled.
</summary>
<seealso cref="M:Lucene.Net.Index.IndexWriter.SetMaxBufferedDeleteTerms(System.Int32)">
</seealso>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.SetInfoStream(System.IO.StreamWriter)">
<summary>If non-null, information about merges, deletes and a
message when maxFieldLength is reached will be printed
to this.
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.Close">
<summary> Commits all changes to an index and closes all
associated files. Note that this may be a costly
operation, so, try to re-use a single writer instead of
closing and opening a new one. See <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/> for
caveats about write caching done by some IO devices.
<p/> If an Exception is hit during close, eg due to disk
full or some other reason, then both the on-disk index
and the internal state of the IndexWriter instance will
be consistent. However, the close will not be complete
even though part of it (flushing buffered documents)
may have succeeded, so the write lock will still be
held.<p/>
<p/> If you can correct the underlying cause (eg free up
some disk space) then you can call close() again.
Failing that, if you want to force the write lock to be
released (dangerous, because you may then lose buffered
docs in the IndexWriter instance) then you can do
something like this:<p/>
<code>
try {
    writer.close();
} finally {
    if (IndexWriter.isLocked(directory)) {
        IndexWriter.unlock(directory);
    }
}
</code>
after which, you must be certain not to use the writer
instance anymore.<p/>
<p/><b>NOTE</b>: if this method hits an OutOfMemoryError
you should immediately close the writer, again. See <a href="#OOME">above</a> for details.<p/>
</summary>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> IOException if there is a low-level IO error </throws>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.Dispose">
<summary> Commits all changes to an index and closes all
associated files. Note that this may be a costly
operation, so, try to re-use a single writer instead of
closing and opening a new one. See <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/> for
caveats about write caching done by some IO devices.
<p/> If an Exception is hit during close, eg due to disk
full or some other reason, then both the on-disk index
and the internal state of the IndexWriter instance will
be consistent. However, the close will not be complete
even though part of it (flushing buffered documents)
may have succeeded, so the write lock will still be
held.<p/>
<p/> If you can correct the underlying cause (eg free up
some disk space) then you can call close() again.
Failing that, if you want to force the write lock to be
released (dangerous, because you may then lose buffered
docs in the IndexWriter instance) then you can do
something like this:<p/>
<code>
try {
    writer.close();
} finally {
    if (IndexWriter.isLocked(directory)) {
        IndexWriter.unlock(directory);
    }
}
</code>
after which, you must be certain not to use the writer
instance anymore.<p/>
<p/><b>NOTE</b>: if this method hits an OutOfMemoryError
you should immediately close the writer, again. See <a href="#OOME">above</a> for details.<p/>
</summary>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> IOException if there is a low-level IO error </throws>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.Dispose(System.Boolean)">
<summary> Closes the index with or without waiting for currently
running merges to finish. This is only meaningful when
using a MergeScheduler that runs merges in background
threads.
<p/><b>NOTE</b>: if this method hits an OutOfMemoryError
you should immediately close the writer, again. See <a
href="#OOME">above</a> for details.<p/>
<p/><b>NOTE</b>: it is dangerous to always call
close(false), especially when IndexWriter is not open
for very long, because this can result in "merge
starvation" whereby long merges will never have a
chance to finish. This will cause too many segments in
your index over time.<p/>
</summary>
<param name="waitForMerges">if true, this call will block
until all merges complete; else, it will ask all
running merges to abort, wait until those merges have
finished (which should be at most a few seconds), and
then return.
</param>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.Close(System.Boolean)">
<summary> Closes the index with or without waiting for currently
running merges to finish. This is only meaningful when
using a MergeScheduler that runs merges in background
threads.
<p/><b>NOTE</b>: if this method hits an OutOfMemoryError
you should immediately close the writer, again. See <a
href="#OOME">above</a> for details.<p/>
<p/><b>NOTE</b>: it is dangerous to always call
close(false), especially when IndexWriter is not open
for very long, because this can result in "merge
starvation" whereby long merges will never have a
chance to finish. This will cause too many segments in
your index over time.<p/>
</summary>
<param name="waitForMerges">if true, this call will block
until all merges complete; else, it will ask all
running merges to abort, wait until those merges have
finished (which should be at most a few seconds), and
then return.
</param>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.FlushDocStores">
<summary>Tells the docWriter to close its currently open shared
doc stores (stored fields &amp; vector files).
The return value specifies whether the new doc store files are compound or not.
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.MaxDoc">
<summary>Returns total number of docs in this index, including
docs not yet flushed (still in the RAM buffer),
not counting deletions.
</summary>
<seealso cref="M:Lucene.Net.Index.IndexWriter.NumDocs">
</seealso>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.NumDocs">
<summary>Returns total number of docs in this index, including
docs not yet flushed (still in the RAM buffer), and
including deletions. <b>NOTE:</b> buffered deletions
are not counted. If you really need these to be
counted you should call <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/> first.
</summary>
<seealso cref="M:Lucene.Net.Index.IndexWriter.MaxDoc">
</seealso>
</member>
<member name="F:Lucene.Net.Index.IndexWriter.maxFieldLength">
<summary> The maximum number of terms that will be indexed for a single field in a
document. This limits the amount of memory required for indexing, so that
collections with very large files will not crash the indexing process by
running out of memory.<p/>
Note that this effectively truncates large documents, excluding from the
index terms that occur further in the document. If you know your source
documents are large, be sure to set this value high enough to accommodate
the expected size. If you set it to <see cref="F:System.Int32.MaxValue"/>, then the only limit
is your memory, but you should anticipate an OutOfMemoryError.<p/>
By default, no more than 10,000 terms will be indexed for a field.
</summary>
<seealso cref="T:Lucene.Net.Index.IndexWriter.MaxFieldLength">
</seealso>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.AddDocument(Lucene.Net.Documents.Document)">
<summary> Adds a document to this index. If the document contains more than
<see cref="M:Lucene.Net.Index.IndexWriter.SetMaxFieldLength(System.Int32)"/> terms for a given field, the remainder are
discarded.
<p/> Note that if an Exception is hit (for example disk full)
then the index will be consistent, but this document
may not have been added. Furthermore, it's possible
the index will have one segment in non-compound format
even when using compound files (when a merge has
partially succeeded).<p/>
<p/> This method periodically flushes pending documents
to the Directory (see <a href="#flush">above</a>), and
also periodically triggers segment merges in the index
according to the <see cref="P:Lucene.Net.Index.IndexWriter.MergePolicy"/> in use.<p/>
<p/>Merges temporarily consume space in the
directory. The amount of space required is up to 1X the
size of all segments being merged, when no
readers/searchers are open against the index, and up to
2X the size of all segments being merged when
readers/searchers are open against the index (see
<see cref="M:Lucene.Net.Index.IndexWriter.Optimize"/> for details). The sequence of
primitive merge operations performed is governed by the
merge policy.
<p/>Note that each term in the document can be no longer
than 16383 characters, otherwise an
IllegalArgumentException will be thrown.<p/>
<p/>Note that it's possible to create an invalid Unicode
string in Java if a UTF-16 surrogate pair is malformed.
In this case, the invalid characters are silently
replaced with the Unicode replacement character
U+FFFD.<p/>
<p/><b>NOTE</b>: if this method hits an OutOfMemoryError
you should immediately close the writer. See <a href="#OOME">above</a> for details.<p/>
</summary>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> IOException if there is a low-level IO error </throws>
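<example>
A minimal sketch of adding a document (the directory path, field names, and analyzer choice are illustrative, not prescribed by this method):
<code>
var dir = FSDirectory.Open(new System.IO.DirectoryInfo("index"));
var analyzer = new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_30);
var writer = new IndexWriter(dir, analyzer, IndexWriter.MaxFieldLength.UNLIMITED);

var doc = new Document();
// NOT_ANALYZED keeps the id as a single exact term; ANALYZED tokenizes the body.
doc.Add(new Field("id", "42", Field.Store.YES, Field.Index.NOT_ANALYZED));
doc.Add(new Field("body", "hello world", Field.Store.NO, Field.Index.ANALYZED));
writer.AddDocument(doc);
writer.Close();
</code>
</example>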
</member>
<member name="M:Lucene.Net.Index.IndexWriter.AddDocument(Lucene.Net.Documents.Document,Lucene.Net.Analysis.Analyzer)">
<summary> Adds a document to this index, using the provided analyzer instead of the
value of <see cref="P:Lucene.Net.Index.IndexWriter.Analyzer"/>. If the document contains more than
<see cref="M:Lucene.Net.Index.IndexWriter.SetMaxFieldLength(System.Int32)"/> terms for a given field, the remainder are
discarded.
<p/>See <see cref="M:Lucene.Net.Index.IndexWriter.AddDocument(Lucene.Net.Documents.Document)"/> for details on
index and IndexWriter state after an Exception, and
flushing/merging temporary free space requirements.<p/>
<p/><b>NOTE</b>: if this method hits an OutOfMemoryError
you should immediately close the writer. See <a href="#OOME">above</a> for details.<p/>
</summary>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> IOException if there is a low-level IO error </throws>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.DeleteDocuments(Lucene.Net.Index.Term)">
<summary> Deletes the document(s) containing <c>term</c>.
<p/><b>NOTE</b>: if this method hits an OutOfMemoryError
you should immediately close the writer. See <a
href="#OOME">above</a> for details.<p/>
</summary>
<param name="term">the term to identify the documents to be deleted
</param>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> IOException if there is a low-level IO error </throws>
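<example>
A sketch, assuming <c>writer</c> is an open IndexWriter and documents were indexed with a NOT_ANALYZED "id" field:
<code>
// Deletes every document whose "id" field contains the exact term "42".
writer.DeleteDocuments(new Term("id", "42"));
writer.Commit(); // deletes become visible to newly opened readers
</code>
</example>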
</member>
<member name="M:Lucene.Net.Index.IndexWriter.DeleteDocuments(Lucene.Net.Index.Term[])">
<summary> Deletes the document(s) containing any of the
terms. All deletes are flushed at the same time.
<p/><b>NOTE</b>: if this method hits an OutOfMemoryError
you should immediately close the writer. See <a
href="#OOME">above</a> for details.<p/>
</summary>
<param name="terms">array of terms to identify the documents
to be deleted
</param>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> IOException if there is a low-level IO error </throws>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.DeleteDocuments(Lucene.Net.Search.Query)">
<summary> Deletes the document(s) matching the provided query.
<p/><b>NOTE</b>: if this method hits an OutOfMemoryError
you should immediately close the writer. See <a
href="#OOME">above</a> for details.<p/>
</summary>
<param name="query">the query to identify the documents to be deleted
</param>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> IOException if there is a low-level IO error </throws>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.DeleteDocuments(Lucene.Net.Search.Query[])">
<summary> Deletes the document(s) matching any of the provided queries.
All deletes are flushed at the same time.
<p/><b>NOTE</b>: if this method hits an OutOfMemoryError
you should immediately close the writer. See <a
href="#OOME">above</a> for details.<p/>
</summary>
<param name="queries">array of queries to identify the documents
to be deleted
</param>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> IOException if there is a low-level IO error </throws>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.UpdateDocument(Lucene.Net.Index.Term,Lucene.Net.Documents.Document)">
<summary> Updates a document by first deleting the document(s)
containing <c>term</c> and then adding the new
document. The delete and then add are atomic as seen
by a reader on the same index (flush may happen only after
the add).
<p/><b>NOTE</b>: if this method hits an OutOfMemoryError
you should immediately close the writer. See <a
href="#OOME">above</a> for details.<p/>
</summary>
<param name="term">the term to identify the document(s) to be
deleted
</param>
<param name="doc">the document to be added
</param>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> IOException if there is a low-level IO error </throws>
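<example>
A sketch of the delete-then-add pattern, assuming <c>writer</c> is an open IndexWriter and "id" is a unique NOT_ANALYZED key field:
<code>
var doc = new Document();
doc.Add(new Field("id", "42", Field.Store.YES, Field.Index.NOT_ANALYZED));
doc.Add(new Field("body", "revised text", Field.Store.NO, Field.Index.ANALYZED));
// Atomically replaces any document(s) whose "id" term is "42".
writer.UpdateDocument(new Term("id", "42"), doc);
</code>
</example>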
</member>
<member name="M:Lucene.Net.Index.IndexWriter.UpdateDocument(Lucene.Net.Index.Term,Lucene.Net.Documents.Document,Lucene.Net.Analysis.Analyzer)">
<summary> Updates a document by first deleting the document(s)
containing <c>term</c> and then adding the new
document. The delete and then add are atomic as seen
by a reader on the same index (flush may happen only after
the add).
<p/><b>NOTE</b>: if this method hits an OutOfMemoryError
you should immediately close the writer. See <a
href="#OOME">above</a> for details.<p/>
</summary>
<param name="term">the term to identify the document(s) to be
deleted
</param>
<param name="doc">the document to be added
</param>
<param name="analyzer">the analyzer to use when analyzing the document
</param>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> IOException if there is a low-level IO error </throws>
</member>
<member name="F:Lucene.Net.Index.IndexWriter.infoStream">
<summary>If non-null, information about merges will be printed to this.</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.Optimize">
<summary> Requests an "optimize" operation on an index, priming the index
for the fastest available search. Traditionally this has meant
merging all segments into a single segment as is done in the
default merge policy, but individual merge policies may implement
optimize in different ways.
<p/>It is recommended that this method be called upon completion of indexing. In
environments with frequent updates, optimize is best done during low volume times, if at all.
<p/>
<p/>See http://www.gossamer-threads.com/lists/lucene/java-dev/47895 for more discussion. <p/>
<p/>Note that optimize requires 2X the index size free
space in your Directory (3X if you're using compound
file format). For example, if your index
size is 10 MB then you need 20 MB free for optimize to
complete (30 MB if you're using compound file format).<p/>
<p/>If some but not all readers re-open while an
optimize is underway, this will cause > 2X temporary
space to be consumed as those new readers will then
hold open the partially optimized segments at that
time. It is best not to re-open readers while optimize
is running.<p/>
<p/>The actual temporary usage could be much less than
these figures (it depends on many factors).<p/>
<p/>In general, once the optimize completes, the total size of the
index will be less than the size of the starting index.
It could be quite a bit smaller (if there were many
pending deletes) or just slightly smaller.<p/>
<p/>If an Exception is hit during optimize(), for example
due to disk full, the index will not be corrupt and no
documents will have been lost. However, it may have
been partially optimized (some segments were merged but
not all), and it's possible that one of the segments in
the index will be in non-compound format even when
using compound file format. This will occur when the
Exception is hit during conversion of the segment into
compound format.<p/>
<p/>This call will optimize those segments present in
the index when the call started. If other threads are
still adding documents and flushing segments, those
newly created segments will not be optimized unless you
call optimize again.<p/>
<p/><b>NOTE</b>: if this method hits an OutOfMemoryError
you should immediately close the writer. See <a href="#OOME">above</a> for details.<p/>
</summary>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> IOException if there is a low-level IO error </throws>
<seealso cref="M:Lucene.Net.Index.LogMergePolicy.FindMergesForOptimize(Lucene.Net.Index.SegmentInfos,System.Int32,System.Collections.Generic.ISet{Lucene.Net.Index.SegmentInfo})">
</seealso>
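<example>
A sketch, assuming <c>writer</c> is an open IndexWriter and indexing is complete:
<code>
writer.Optimize();   // merge down to a single segment; needs 2X-3X free disk space
writer.Commit();     // make the optimized segments visible to readers
</code>
</example>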
</member>
<member name="M:Lucene.Net.Index.IndexWriter.Optimize(System.Int32)">
<summary> Optimize the index down to &lt;= maxNumSegments. If
maxNumSegments==1 then this is the same as <see cref="M:Lucene.Net.Index.IndexWriter.Optimize"/>.
<p/><b>NOTE</b>: if this method hits an OutOfMemoryError
you should immediately close the writer. See <a href="#OOME">above</a> for details.<p/>
</summary>
<param name="maxNumSegments">maximum number of segments left
in the index after optimization finishes
</param>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.Optimize(System.Boolean)">
<summary>Just like <see cref="M:Lucene.Net.Index.IndexWriter.Optimize"/>, except you can specify
whether the call should block until the optimize
completes. This is only meaningful with a
<see cref="P:Lucene.Net.Index.IndexWriter.MergeScheduler"/> that is able to run merges in
background threads.
<p/><b>NOTE</b>: if this method hits an OutOfMemoryError
you should immediately close the writer. See <a href="#OOME">above</a> for details.<p/>
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.Optimize(System.Int32,System.Boolean)">
<summary>Just like <see cref="M:Lucene.Net.Index.IndexWriter.Optimize(System.Int32)"/>, except you can
specify whether the call should block until the
optimize completes. This is only meaningful with a
<see cref="P:Lucene.Net.Index.IndexWriter.MergeScheduler"/> that is able to run merges in
background threads.
<p/><b>NOTE</b>: if this method hits an OutOfMemoryError
you should immediately close the writer. See <a href="#OOME">above</a> for details.<p/>
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.OptimizeMergesPending">
<summary>Returns true if any merges in pendingMerges or
runningMerges are optimization merges.
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.ExpungeDeletes(System.Boolean)">
<summary>Just like <see cref="M:Lucene.Net.Index.IndexWriter.ExpungeDeletes"/>, except you can
specify whether the call should block until the
operation completes. This is only meaningful with a
<see cref="P:Lucene.Net.Index.IndexWriter.MergeScheduler"/> that is able to run merges in
background threads.
<p/><b>NOTE</b>: if this method hits an OutOfMemoryError
you should immediately close the writer. See <a href="#OOME">above</a> for details.<p/>
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.ExpungeDeletes">
<summary>Expunges all deletes from the index. When an index
has many document deletions (or updates to existing
documents), it's best to either call optimize or
expungeDeletes to remove all unused data in the index
associated with the deleted documents. To see how
many deletions you have pending in your index, call
<see cref="P:Lucene.Net.Index.IndexReader.NumDeletedDocs"/>.
This saves disk space and memory usage while
searching. expungeDeletes should be somewhat faster
than optimize since it does not insist on reducing the
index to a single segment (though, this depends on the
<see cref="P:Lucene.Net.Index.IndexWriter.MergePolicy"/>; see <see cref="M:Lucene.Net.Index.MergePolicy.FindMergesToExpungeDeletes(Lucene.Net.Index.SegmentInfos)"/>.). Note that
this call does not first commit any buffered
documents, so you must do so yourself if necessary.
See also <seealso cref="M:Lucene.Net.Index.IndexWriter.ExpungeDeletes(System.Boolean)"/>
<p/><b>NOTE</b>: if this method hits an OutOfMemoryError
you should immediately close the writer. See <a href="#OOME">above</a> for details.<p/>
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.MaybeMerge">
<summary> Expert: asks the mergePolicy whether any merges are
necessary now and, if so, runs the requested merges and
then iterates (testing again whether merges are needed) until no
more merges are returned by the mergePolicy.
Explicit calls to maybeMerge() are usually not
necessary. The most common case is when merge policy
parameters have changed.
<p/><b>NOTE</b>: if this method hits an OutOfMemoryError
you should immediately close the writer. See <a
href="#OOME">above</a> for details.<p/>
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.GetNextMerge">
<summary>Expert: the <see cref="P:Lucene.Net.Index.IndexWriter.MergeScheduler"/> calls this method
to retrieve the next merge requested by the
MergePolicy
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.GetNextExternalMerge">
<summary>Like getNextMerge() except only returns a merge if it's
external.
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.Rollback">
<summary> Close the <c>IndexWriter</c> without committing
any changes that have occurred since the last commit
(or since it was opened, if commit hasn't been called).
This removes any temporary files that had been created,
after which the state of the index will be the same as
it was when commit() was last called or when this
writer was first opened. This also clears a previous
call to <see cref="M:Lucene.Net.Index.IndexWriter.PrepareCommit"/>.
</summary>
<throws> IOException if there is a low-level IO error </throws>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.DeleteAll">
<summary> Delete all documents in the index.
<p/>This method will drop all buffered documents and will
remove all segments from the index. This change will not be
visible until a <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/> has been called. This method
can be rolled back using <see cref="M:Lucene.Net.Index.IndexWriter.Rollback"/>.<p/>
<p/>NOTE: this method is much faster than using deleteDocuments( new MatchAllDocsQuery() ).<p/>
<p/>NOTE: this method will forcefully abort all merges
in progress. If other threads are running <see cref="M:Lucene.Net.Index.IndexWriter.Optimize"/>
or any of the addIndexes methods, they
will receive <see cref="T:Lucene.Net.Index.MergePolicy.MergeAbortedException"/>s.
</summary>
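<example>
A sketch, assuming <c>writer</c> is an open IndexWriter:
<code>
writer.DeleteAll();  // drops buffered documents and all segments
writer.Commit();     // makes the empty index visible; Rollback() would undo it instead
</code>
</example>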
</member>
<member name="M:Lucene.Net.Index.IndexWriter.WaitForMerges">
<summary> Wait for any currently outstanding merges to finish.
<p/>It is guaranteed that any merges started prior to calling this method
will have completed once this method completes.<p/>
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.AddIndexesNoOptimize(Lucene.Net.Store.Directory[])">
<summary> Merges all segments from an array of indexes into this
index.
<p/>This may be used to parallelize batch indexing. A large document
collection can be broken into sub-collections. Each sub-collection can be
indexed in parallel, on a different thread, process or machine. The
complete index can then be created by merging sub-collection indexes
with this method.
<p/><b>NOTE:</b> the index in each Directory must not be
changed (opened by a writer) while this method is
running. This method does not acquire a write lock in
each input Directory, so it is up to the caller to
enforce this.
<p/><b>NOTE:</b> while this is running, any attempts to
add or delete documents (with another thread) will be
paused until this method completes.
<p/>This method is transactional in how Exceptions are
handled: it does not commit a new segments_N file until
all indexes are added. This means if an Exception
occurs (for example disk full), then either no indexes
will have been added or they all will have been.<p/>
<p/>Note that this requires temporary free space in the
Directory up to 2X the sum of all input indexes
(including the starting index). If readers/searchers
are open against the starting index, then temporary
free space required will be higher by the size of the
starting index (see <see cref="M:Lucene.Net.Index.IndexWriter.Optimize"/> for details).
<p/>
<p/>Once this completes, the final size of the index
will be less than the sum of all input index sizes
(including the starting index). It could be quite a
bit smaller (if there were many pending deletes) or
just slightly smaller.<p/>
<p/>
This requires this index not be among those to be added.
<p/><b>NOTE</b>: if this method hits an OutOfMemoryError
you should immediately close the writer. See <a href="#OOME">above</a> for details.<p/>
</summary>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> IOException if there is a low-level IO error </throws>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.AddIndexes(Lucene.Net.Index.IndexReader[])">
<summary>Merges the provided indexes into this index.
<p/>After this completes, the index is optimized. <p/>
<p/>The provided IndexReaders are not closed.<p/>
<p/><b>NOTE:</b> while this is running, any attempts to
add or delete documents (with another thread) will be
paused until this method completes.
<p/>See <see cref="M:Lucene.Net.Index.IndexWriter.AddIndexesNoOptimize(Lucene.Net.Store.Directory[])"/> for
details on transactional semantics, temporary free
space required in the Directory, and non-CFS segments
on an Exception.<p/>
<p/><b>NOTE</b>: if this method hits an OutOfMemoryError
you should immediately close the writer. See <a href="#OOME">above</a> for details.<p/>
</summary>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> IOException if there is a low-level IO error </throws>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.DoAfterFlush">
<summary>
A hook for extending classes to execute operations after pending added and
deleted documents have been flushed to the Directory but before the change
is committed (new segments_N file written).
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.DoBeforeFlush">
<summary>
A hook for extending classes to execute operations before pending added and
deleted documents are flushed to the Directory.
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.PrepareCommit">
<summary>Expert: prepare for commit.
<p/><b>NOTE</b>: if this method hits an OutOfMemoryError
you should immediately close the writer. See <a href="#OOME">above</a> for details.<p/>
</summary>
<seealso cref="M:Lucene.Net.Index.IndexWriter.PrepareCommit(System.Collections.Generic.IDictionary{System.String,System.String})">
</seealso>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.PrepareCommit(System.Collections.Generic.IDictionary{System.String,System.String})">
<summary><p/>Expert: prepare for commit, specifying
commitUserData Map (String -> String). This does the
first phase of 2-phase commit. This method does all steps
necessary to commit changes since this writer was
opened: flushes pending added and deleted docs, syncs
the index files, writes most of next segments_N file.
After calling this you must call either <see cref="M:Lucene.Net.Index.IndexWriter.Commit"/>
to finish the commit, or <see cref="M:Lucene.Net.Index.IndexWriter.Rollback"/>
to revert the commit and undo all changes
done since the writer was opened.<p/>
You can also just call <see cref="M:Lucene.Net.Index.IndexWriter.Commit(System.Collections.Generic.IDictionary{System.String,System.String})"/> directly
without prepareCommit first in which case that method
will internally call prepareCommit.
<p/><b>NOTE</b>: if this method hits an OutOfMemoryError
you should immediately close the writer. See <a href="#OOME">above</a> for details.<p/>
</summary>
<param name="commitUserData">Opaque Map (String->String)
that's recorded into the segments file in the index,
and retrievable by <see cref="M:Lucene.Net.Index.IndexReader.GetCommitUserData(Lucene.Net.Store.Directory)"/>.
Note that when IndexWriter commits itself, during <see cref="M:Lucene.Net.Index.IndexWriter.Close"/>, the
commitUserData is unchanged (just carried over from
the prior commit). If this is null then the previous
commitUserData is kept. Also, the commitUserData will
only "stick" if there are actually changes in the
index to commit.
</param>
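<example>
A sketch of the two-phase pattern, assuming <c>writer</c> is an open IndexWriter; the dictionary keys and values are illustrative:
<code>
var userData = new System.Collections.Generic.Dictionary&lt;string, string&gt;();
userData["source"] = "nightly-batch";
writer.PrepareCommit(userData);  // phase 1: flush, sync, write most of segments_N
try
{
    writer.Commit();             // phase 2: finish the commit
}
catch (System.Exception)
{
    writer.Rollback();           // revert all changes since the writer was opened
}
</code>
</example>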
</member>
<member name="M:Lucene.Net.Index.IndexWriter.Commit">
<summary> <p/>Commits all pending changes (added &amp; deleted
documents, optimizations, segment merges, added
indexes, etc.) to the index, and syncs all referenced
index files, such that a reader will see the changes
and the index updates will survive an OS or machine
crash or power loss. Note that this does not wait for
any running background merges to finish. This may be a
costly operation, so you should test the cost in your
application and do it only when really necessary.<p/>
<p/> Note that this operation calls Directory.sync on
the index files. That call should not return until the
file contents &amp; metadata are on stable storage. For
FSDirectory, this calls the OS's fsync. But, beware:
some hardware devices may in fact cache writes even
during fsync, and return before the bits are actually
on stable storage, to give the appearance of faster
performance. If you have such a device, and it does
not have a battery backup (for example) then on power
loss it may still lose data. Lucene cannot guarantee
consistency on such devices. <p/>
<p/><b>NOTE</b>: if this method hits an OutOfMemoryError
you should immediately close the writer. See <a href="#OOME">above</a> for details.<p/>
</summary>
<seealso cref="M:Lucene.Net.Index.IndexWriter.PrepareCommit">
</seealso>
<seealso cref="M:Lucene.Net.Index.IndexWriter.Commit(System.Collections.Generic.IDictionary{System.String,System.String})">
</seealso>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.Commit(System.Collections.Generic.IDictionary{System.String,System.String})">
<summary>Commits all changes to the index, specifying a
commitUserData Map (String -> String). This just
calls <see cref="M:Lucene.Net.Index.IndexWriter.PrepareCommit(System.Collections.Generic.IDictionary{System.String,System.String})"/> (if you didn't
already call it) and then <see cref="M:Lucene.Net.Index.IndexWriter.FinishCommit"/>.
<p/><b>NOTE</b>: if this method hits an OutOfMemoryError
you should immediately close the writer. See <a href="#OOME">above</a> for details.<p/>
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.Flush(System.Boolean,System.Boolean,System.Boolean)">
<summary> Flush all in-memory buffered updates (adds and deletes)
to the Directory.
</summary>
<param name="triggerMerge">if true, we may merge segments (if
deletes or docs were flushed) if necessary
</param>
<param name="flushDocStores">if false we are allowed to keep
doc stores open to share with the next segment
</param>
<param name="flushDeletes">whether pending deletes should also
be flushed
</param>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.RamSizeInBytes">
<summary>Expert: Return the total size of all index files currently cached in memory.
Useful for size management with flushRamDocs()
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.NumRamDocs">
<summary>Expert: Return the number of documents currently
buffered in RAM.
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.CommitMergedDeletes(Lucene.Net.Index.MergePolicy.OneMerge,Lucene.Net.Index.SegmentReader)">
<summary>Carefully merges deletes for the segments we just
merged. This is tricky because, although merging will
clear all deletes (compacts the documents), new
deletes may have been flushed to the segments since
the merge was started. This method "carries over"
such new deletes onto the newly merged segment, and
saves the resulting deletes file (incrementing the
delete generation for merge.info). If no deletes were
flushed, no new deletes file is saved.
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.Merge(Lucene.Net.Index.MergePolicy.OneMerge)">
<summary> Merges the indicated segments, replacing them in the stack with a
single segment.
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.MergeSuccess(Lucene.Net.Index.MergePolicy.OneMerge)">
<summary>Hook that's called when the specified merge is complete. </summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.RegisterMerge(Lucene.Net.Index.MergePolicy.OneMerge)">
<summary>Checks whether this merge involves any segments
already participating in a merge. If not, this merge
is "registered", meaning we record that its segments
are now participating in a merge, and true is
returned. Else (the merge conflicts) false is
returned.
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.MergeInit(Lucene.Net.Index.MergePolicy.OneMerge)">
<summary>Does initial setup for a merge, which is fast but holds
the synchronized lock on the IndexWriter instance.
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.MergeFinish(Lucene.Net.Index.MergePolicy.OneMerge)">
<summary>Does the finishing work for a merge, which is fast but holds
the synchronized lock on the IndexWriter instance.
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.MergeMiddle(Lucene.Net.Index.MergePolicy.OneMerge)">
<summary>Does the actual (time-consuming) work of the merge,
but without holding the synchronized lock on the IndexWriter
instance
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.WaitForAllSynced(System.Collections.Generic.ICollection{System.String})">
<summary>Blocks until all files in syncing are sync'd </summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.StartCommit(System.Int64,System.Collections.Generic.IDictionary{System.String,System.String})">
<summary>Walk through all files referenced by the current
segmentInfos and ask the Directory to sync each file,
if it wasn't already. If that succeeds, then we
prepare a new segments_N file but do not fully commit
it.
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.IsLocked(Lucene.Net.Store.Directory)">
<summary> Returns <c>true</c> iff the index in the named directory is
currently locked.
</summary>
<param name="directory">the directory to check for a lock
</param>
<throws> IOException if there is a low-level IO error </throws>
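<example>
A sketch of a failure-recovery check (paired with <see cref="M:Lucene.Net.Index.IndexWriter.Unlock(Lucene.Net.Store.Directory)"/>), assuming <c>dir</c> is the index Directory and no other process or thread is using the index:
<code>
if (IndexWriter.IsLocked(dir))
{
    IndexWriter.Unlock(dir);  // forcibly remove a stale write lock
}
</code>
</example>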
</member>
<member name="M:Lucene.Net.Index.IndexWriter.Unlock(Lucene.Net.Store.Directory)">
<summary> Forcibly unlocks the index in the named directory.
<p/>
Caution: this should only be used by failure recovery code,
when it is known that no other process nor thread is in fact
currently accessing this index.
</summary>
</member>
<member name="P:Lucene.Net.Index.IndexWriter.LogMergePolicy">
<summary> Casts current mergePolicy to LogMergePolicy, and throws
an exception if the mergePolicy is not a LogMergePolicy.
</summary>
</member>
<member name="P:Lucene.Net.Index.IndexWriter.UseCompoundFile">
<summary><p/>Gets or sets the current setting of whether newly flushed
segments will use the compound file format. Note that
this just returns the value previously set with
setUseCompoundFile(boolean), or the default value
(true). You cannot use this to query the status of
previously flushed segments.<p/>
<p/>Note that this method is a convenience method: it
just calls mergePolicy.getUseCompoundFile as long as
mergePolicy is an instance of <see cref="P:Lucene.Net.Index.IndexWriter.LogMergePolicy"/>.
Otherwise an IllegalArgumentException is thrown.<p/>
</summary>
</member>
<member name="P:Lucene.Net.Index.IndexWriter.Similarity">
<summary>Expert: Return the Similarity implementation used by this IndexWriter.
<p/>This defaults to the current value of <see cref="P:Lucene.Net.Search.Similarity.Default"/>.
</summary>
</member>
<member name="P:Lucene.Net.Index.IndexWriter.TermIndexInterval">
<summary>Expert: Gets or sets the interval between indexed terms. Large values cause less
memory to be used by IndexReader, but slow random-access to terms. Small
values cause more memory to be used by an IndexReader, and speed
random-access to terms.
This parameter determines the amount of computation required per query
term, regardless of the number of documents that contain that term. In
particular, it is the maximum number of other terms that must be
scanned before a term is located and its frequency and position information
may be processed. In a large index with user-entered query terms, query
processing time is likely to be dominated not by term lookup but rather
by the processing of frequency and positional data. In a small index
or when many uncommon query terms are generated (e.g., by wildcard
queries) term lookup may become a dominant cost.
In particular, <c>numUniqueTerms/interval</c> terms are read into
memory by an IndexReader, and, on average, <c>interval/2</c> terms
must be scanned for each random term access.
</summary>
<seealso cref="F:Lucene.Net.Index.IndexWriter.DEFAULT_TERM_INDEX_INTERVAL">
</seealso>
</member>
<member name="P:Lucene.Net.Index.IndexWriter.MergePolicy">
<summary> Expert: returns the current MergePolicy in use by this writer.</summary>
<seealso cref="M:Lucene.Net.Index.IndexWriter.SetMergePolicy(Lucene.Net.Index.MergePolicy)">
</seealso>
</member>
<member name="P:Lucene.Net.Index.IndexWriter.MergeScheduler">
<summary> Expert: returns the current MergeScheduler in use by this
writer.
</summary>
<seealso cref="M:Lucene.Net.Index.IndexWriter.SetMergeScheduler(Lucene.Net.Index.MergeScheduler)">
</seealso>
</member>
<member name="P:Lucene.Net.Index.IndexWriter.MaxMergeDocs">
<summary> <p/>Gets or sets the largest segment (measured by document
count) that may be merged with other segments.
<p/>
Small values (e.g., less than 10,000) are best for
interactive indexing, as this limits the length of
pauses while indexing to a few seconds. Larger values
are best for batched indexing and speedier
searches.
<p/>
The default value is <see cref="F:System.Int32.MaxValue"/>.
<p/>
Note that this property is a convenience: it
just forwards to mergePolicy.MaxMergeDocs as long as
mergePolicy is an instance of <see cref="T:Lucene.Net.Index.LogMergePolicy"/>.
Otherwise a System.ArgumentException is thrown.<p/>
The default merge policy (<see cref="T:Lucene.Net.Index.LogByteSizeMergePolicy"/>)
also allows you to set this
limit by net size (in MB) of the segment, using
<see cref="P:Lucene.Net.Index.LogByteSizeMergePolicy.MaxMergeMB"/>.<p/>
</summary>
<seealso cref="P:Lucene.Net.Index.LogMergePolicy.MaxMergeDocs">
</seealso>
</member>
<member name="P:Lucene.Net.Index.IndexWriter.ReaderTermsIndexDivisor">
<summary>Gets or sets the termsIndexDivisor passed to any readers that
IndexWriter opens, for example when applying deletes
or creating a near-real-time reader in
<see cref="M:Lucene.Net.Index.IndexWriter.GetReader"/>. Default value is
<see cref="F:Lucene.Net.Index.IndexReader.DEFAULT_TERMS_INDEX_DIVISOR"/>.
</summary>
</member>
<member name="P:Lucene.Net.Index.IndexWriter.MergeFactor">
<summary>Gets or sets the number of segments that are merged at
once and also controls the total number of segments
allowed to accumulate in the index.
<p/>Determines how often segment indices are merged by addDocument(). With
smaller values, less RAM is used while indexing, and searches on
unoptimized indices are faster, but indexing speed is slower. With larger
values, more RAM is used during indexing, and while searches on unoptimized
indices are slower, indexing is faster. Thus larger values (&gt; 10) are best
for batch index creation, and smaller values (&lt; 10) for indices that are
interactively maintained.
<p/>Note that this property is a convenience: it
just forwards to mergePolicy.MergeFactor as long as
mergePolicy is an instance of <see cref="T:Lucene.Net.Index.LogMergePolicy"/>.
Otherwise a System.ArgumentException is thrown.<p/>
<p/>This must never be less than 2. The default value is 10.
</summary>
</member>
<member name="P:Lucene.Net.Index.IndexWriter.DefaultInfoStream">
<summary>Gets or sets the default info stream.
If non-null, this will be the default infoStream used
by a newly instantiated IndexWriter.
</summary>
<seealso cref="M:Lucene.Net.Index.IndexWriter.SetInfoStream(System.IO.StreamWriter)">
</seealso>
</member>
<member name="P:Lucene.Net.Index.IndexWriter.InfoStream">
<summary> Returns the current infoStream in use by this writer.</summary>
<seealso cref="M:Lucene.Net.Index.IndexWriter.SetInfoStream(System.IO.StreamWriter)">
</seealso>
</member>
<member name="P:Lucene.Net.Index.IndexWriter.Verbose">
<summary>Returns true if verbose output is enabled (i.e., infoStream != null). </summary>
</member>
<member name="P:Lucene.Net.Index.IndexWriter.WriteLockTimeout">
<summary>Gets or sets allowed timeout when acquiring the write lock.</summary>
</member>
<member name="P:Lucene.Net.Index.IndexWriter.DefaultWriteLockTimeout">
<summary> Gets or sets the default (for any instance of IndexWriter) maximum time to wait for a write lock (in
milliseconds).
</summary>
</member>
<member name="P:Lucene.Net.Index.IndexWriter.Directory">
<summary>Returns the Directory used by this index. </summary>
</member>
<member name="P:Lucene.Net.Index.IndexWriter.Analyzer">
<summary>Returns the analyzer used by this index. </summary>
</member>
<member name="P:Lucene.Net.Index.IndexWriter.MergedSegmentWarmer">
<summary>Gets or sets the merged segment warmer. See <see cref="T:Lucene.Net.Index.IndexWriter.IndexReaderWarmer"/>
.
</summary>
</member>
<member name="T:Lucene.Net.Index.IndexWriter.ReaderPool">
<summary>Holds shared SegmentReader instances. IndexWriter uses
SegmentReaders for 1) applying deletes, 2) doing
merges, 3) handing out a real-time reader. This pool
reuses instances of the SegmentReaders in all these
places if it is in "near real-time mode" (getReader()
has been called on this instance).
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.ReaderPool.Clear(Lucene.Net.Index.SegmentInfos)">
<summary>Forcefully clear changes for the specified segments,
and remove them from the pool. This is called on a successful merge.
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.ReaderPool.Release(Lucene.Net.Index.SegmentReader)">
<summary> Release the segment reader (i.e. decRef it and close it if there
are no more references).
</summary>
<param name="sr">
</param>
<throws> IOException </throws>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.ReaderPool.Release(Lucene.Net.Index.SegmentReader,System.Boolean)">
<summary> Release the segment reader (i.e. decRef it and close it if there
are no more references).
</summary>
<param name="sr">
</param>
<param name="drop"></param>
<throws> IOException </throws>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.ReaderPool.Dispose">
<summary>Removes all our references to readers, and commits
any pending changes.
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.ReaderPool.Commit">
<summary> Commit all segment readers in the pool.</summary>
<throws> IOException </throws>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.ReaderPool.GetReadOnlyClone(Lucene.Net.Index.SegmentInfo,System.Boolean,System.Int32)">
<summary> Returns a ref to a clone. NOTE: this clone is not
enrolled in the pool, so you should simply close it
when you're done (i.e., do not call Release()).
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.ReaderPool.Get(Lucene.Net.Index.SegmentInfo,System.Boolean)">
<summary> Obtain a SegmentReader from the readerPool. The reader
must be returned by calling <see cref="M:Lucene.Net.Index.IndexWriter.ReaderPool.Release(Lucene.Net.Index.SegmentReader)"/>
</summary>
<seealso cref="M:Lucene.Net.Index.IndexWriter.ReaderPool.Release(Lucene.Net.Index.SegmentReader)">
</seealso>
<param name="info">
</param>
<param name="doOpenStores">
</param>
<throws> IOException </throws>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.ReaderPool.Get(Lucene.Net.Index.SegmentInfo,System.Boolean,System.Int32,System.Int32)">
<summary> Obtain a SegmentReader from the readerPool. The reader
must be returned by calling <see cref="M:Lucene.Net.Index.IndexWriter.ReaderPool.Release(Lucene.Net.Index.SegmentReader)"/>
</summary>
<seealso cref="M:Lucene.Net.Index.IndexWriter.ReaderPool.Release(Lucene.Net.Index.SegmentReader)">
</seealso>
<param name="info">
</param>
<param name="doOpenStores">
</param>
<param name="readBufferSize">
</param>
<param name="termsIndexDivisor">
</param>
<throws> IOException </throws>
</member>
<member name="T:Lucene.Net.Index.IndexWriter.MaxFieldLength">
<summary> Specifies maximum field length (in number of tokens/terms) in <see cref="T:Lucene.Net.Index.IndexWriter"/> constructors.
<see cref="M:Lucene.Net.Index.IndexWriter.SetMaxFieldLength(System.Int32)"/> overrides the value set by
the constructor.
</summary>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.MaxFieldLength.#ctor(System.String,System.Int32)">
<summary> Private type-safe-enum-pattern constructor.
</summary>
<param name="name">instance name
</param>
<param name="limit">maximum field length
</param>
</member>
<member name="M:Lucene.Net.Index.IndexWriter.MaxFieldLength.#ctor(System.Int32)">
<summary> Public constructor to allow users to specify the maximum field size limit.
</summary>
<param name="limit">The maximum field length
</param>
</member>
<member name="F:Lucene.Net.Index.IndexWriter.MaxFieldLength.UNLIMITED">
<summary>Sets the maximum field length to <see cref="F:System.Int32.MaxValue"/>. </summary>
</member>
<member name="F:Lucene.Net.Index.IndexWriter.MaxFieldLength.LIMITED">
<summary> Sets the maximum field length to
<see cref="F:Lucene.Net.Index.IndexWriter.DEFAULT_MAX_FIELD_LENGTH"/>
</summary>
</member>
<member name="T:Lucene.Net.Index.IndexWriter.IndexReaderWarmer">
<summary>If <see cref="M:Lucene.Net.Index.IndexWriter.GetReader"/> has been called (i.e., this writer
is in near real-time mode), then after a merge
completes, this class can be invoked to warm the
reader on the newly merged segment, before the merge
commits. This is not required for near real-time
search, but will reduce search latency on opening a
new near real-time reader after a merge completes.
<p/><b>NOTE:</b> This API is experimental and might
change in incompatible ways in the next release.<p/>
<p/><b>NOTE</b>: warm is called before any deletes have
been carried over to the merged segment.
</summary>
</member>
<member name="M:Lucene.Net.Index.InvertedDocConsumer.AddThread(Lucene.Net.Index.DocInverterPerThread)">
<summary>Add a new thread </summary>
</member>
<member name="M:Lucene.Net.Index.InvertedDocConsumer.Abort">
<summary>Abort (called after hitting AbortException) </summary>
</member>
<member name="M:Lucene.Net.Index.InvertedDocConsumer.Flush(System.Collections.Generic.IDictionary{Lucene.Net.Index.InvertedDocConsumerPerThread,System.Collections.Generic.ICollection{Lucene.Net.Index.InvertedDocConsumerPerField}},Lucene.Net.Index.SegmentWriteState)">
<summary>Flush a new segment </summary>
</member>
<member name="M:Lucene.Net.Index.InvertedDocConsumer.CloseDocStore(Lucene.Net.Index.SegmentWriteState)">
<summary>Close doc stores </summary>
</member>
<member name="M:Lucene.Net.Index.InvertedDocConsumer.FreeRAM">
<summary>Attempt to free RAM, returning true if any RAM was
freed
</summary>
</member>
<member name="T:Lucene.Net.Index.KeepOnlyLastCommitDeletionPolicy">
<summary> This <see cref="T:Lucene.Net.Index.IndexDeletionPolicy"/> implementation
keeps only the most recent commit and immediately removes
all prior commits after a new commit is done. This is
the default deletion policy.
</summary>
</member>
<member name="M:Lucene.Net.Index.KeepOnlyLastCommitDeletionPolicy.OnInit``1(System.Collections.Generic.IList{``0})">
<summary> Deletes all commits except the most recent one.</summary>
</member>
<member name="M:Lucene.Net.Index.KeepOnlyLastCommitDeletionPolicy.OnCommit``1(System.Collections.Generic.IList{``0})">
<summary> Deletes all commits except the most recent one.</summary>
</member>
<member name="T:Lucene.Net.Index.LogByteSizeMergePolicy">
<summary>This is a <see cref="T:Lucene.Net.Index.LogMergePolicy"/> that measures size of a
segment as the total byte size of the segment's files.
</summary>
</member>
<member name="T:Lucene.Net.Index.LogMergePolicy">
<summary><p/>This class implements a <see cref="T:Lucene.Net.Index.MergePolicy"/> that tries
to merge segments into levels of exponentially
increasing size, where each level has fewer segments than
the value of the merge factor. Whenever extra segments
(beyond the merge factor upper bound) are encountered,
all segments within the level are merged. You can get or
set the merge factor using <see cref="P:Lucene.Net.Index.LogMergePolicy.MergeFactor"/>.<p/>
<p/>This class is abstract and requires a subclass to
define the <see cref="M:Lucene.Net.Index.LogMergePolicy.Size(Lucene.Net.Index.SegmentInfo)"/> method which specifies how a
segment's size is determined. <see cref="T:Lucene.Net.Index.LogDocMergePolicy"/>
is one subclass that measures size by document count in
the segment. <see cref="T:Lucene.Net.Index.LogByteSizeMergePolicy"/> is another
subclass that measures size as the total byte size of the
file(s) for the segment.<p/>
</summary>
</member>
<member name="T:Lucene.Net.Index.MergePolicy">
<summary> <p/>Expert: a MergePolicy determines the sequence of
primitive merge operations to be used for overall merge
and optimize operations.<p/>
<p/>Whenever the segments in an index have been altered by
<see cref="T:Lucene.Net.Index.IndexWriter"/>, either the addition of a newly
flushed segment, addition of many segments from
addIndexes* calls, or a previous merge that may now need
to cascade, <see cref="T:Lucene.Net.Index.IndexWriter"/> invokes <see cref="M:Lucene.Net.Index.MergePolicy.FindMerges(Lucene.Net.Index.SegmentInfos)"/>
to give the MergePolicy a chance to pick
merges that are now required. This method returns a
<see cref="T:Lucene.Net.Index.MergePolicy.MergeSpecification"/> instance describing the set of
merges that should be done, or null if no merges are
necessary. When IndexWriter.optimize is called, it calls
<see cref="M:Lucene.Net.Index.MergePolicy.FindMergesForOptimize(Lucene.Net.Index.SegmentInfos,System.Int32,System.Collections.Generic.ISet{Lucene.Net.Index.SegmentInfo})"/> and the MergePolicy should
then return the necessary merges.<p/>
<p/>Note that the policy can return more than one merge at
a time. In this case, if the writer is using <see cref="T:Lucene.Net.Index.SerialMergeScheduler"/>
, the merges will be run
sequentially but if it is using <see cref="T:Lucene.Net.Index.ConcurrentMergeScheduler"/>
they will be run concurrently.<p/>
<p/>The default MergePolicy is <see cref="T:Lucene.Net.Index.LogByteSizeMergePolicy"/>
.<p/>
<p/><b>NOTE:</b> This API is new and still experimental
(subject to change suddenly in the next release)<p/>
<p/><b>NOTE</b>: This class typically requires access to
internal APIs (e.g. <c>SegmentInfos</c>) to do its job;
if you implement your own MergePolicy, you'll need to put
it in the Lucene.Net.Index namespace in order to use
these APIs.
</summary>
</member>
<member name="M:Lucene.Net.Index.MergePolicy.FindMerges(Lucene.Net.Index.SegmentInfos)">
<summary> Determine what set of merge operations are now necessary on the index.
<see cref="T:Lucene.Net.Index.IndexWriter"/> calls this whenever there is a change to the segments.
This call is always synchronized on the <see cref="T:Lucene.Net.Index.IndexWriter"/> instance so
only one thread at a time will call this method.
</summary>
<param name="segmentInfos">the total set of segments in the index
</param>
</member>
<member name="M:Lucene.Net.Index.MergePolicy.FindMergesForOptimize(Lucene.Net.Index.SegmentInfos,System.Int32,System.Collections.Generic.ISet{Lucene.Net.Index.SegmentInfo})">
<summary> Determine what set of merge operations is necessary in order to optimize
the index. <see cref="T:Lucene.Net.Index.IndexWriter"/> calls this when its
<see cref="M:Lucene.Net.Index.IndexWriter.Optimize"/> method is called. This call is always
synchronized on the <see cref="T:Lucene.Net.Index.IndexWriter"/> instance so only one thread at a
time will call this method.
</summary>
<param name="segmentInfos">the total set of segments in the index
</param>
<param name="maxSegmentCount">requested maximum number of segments in the index (currently this
is always 1)
</param>
<param name="segmentsToOptimize">contains the specific SegmentInfo instances that must be merged
away. This may be a subset of all SegmentInfos.
</param>
</member>
<member name="M:Lucene.Net.Index.MergePolicy.FindMergesToExpungeDeletes(Lucene.Net.Index.SegmentInfos)">
<summary> Determine what set of merge operations is necessary in order to expunge all
deletes from the index.
</summary>
<param name="segmentInfos">the total set of segments in the index
</param>
</member>
<member name="M:Lucene.Net.Index.MergePolicy.Close">
<summary> Release all resources for the policy.</summary>
</member>
<member name="M:Lucene.Net.Index.MergePolicy.Dispose">
<summary> Release all resources for the policy.</summary>
</member>
<member name="M:Lucene.Net.Index.MergePolicy.UseCompoundFile(Lucene.Net.Index.SegmentInfos,Lucene.Net.Index.SegmentInfo)">
<summary> Returns true if a newly flushed (not from merge)
segment should use the compound file format.
</summary>
</member>
<member name="M:Lucene.Net.Index.MergePolicy.UseCompoundDocStore(Lucene.Net.Index.SegmentInfos)">
<summary> Returns true if the doc store files should use the
compound file format.
</summary>
</member>
<member name="T:Lucene.Net.Index.MergePolicy.OneMerge">
<summary>OneMerge provides the information necessary to perform
an individual primitive merge operation, resulting in
a single new segment. The merge spec includes the
subset of segments to be merged as well as whether the
new segment should use the compound file format.
</summary>
</member>
<member name="M:Lucene.Net.Index.MergePolicy.OneMerge.SetException(System.Exception)">
<summary>Record that an exception occurred while executing
this merge
</summary>
</member>
<member name="M:Lucene.Net.Index.MergePolicy.OneMerge.GetException">
<summary>Retrieve previous exception set by <see cref="M:Lucene.Net.Index.MergePolicy.OneMerge.SetException(System.Exception)"/>
.
</summary>
</member>
<member name="M:Lucene.Net.Index.MergePolicy.OneMerge.Abort">
<summary>Mark this merge as aborted. If this is called
before the merge is committed then the merge will
not be committed.
</summary>
</member>
<member name="M:Lucene.Net.Index.MergePolicy.OneMerge.IsAborted">
<summary>Returns true if this merge was aborted. </summary>
</member>
<member name="T:Lucene.Net.Index.MergePolicy.MergeSpecification">
<summary> A MergeSpecification instance provides the information
necessary to perform multiple merges. It simply
contains a list of <see cref="T:Lucene.Net.Index.MergePolicy.OneMerge"/> instances.
</summary>
</member>
<member name="F:Lucene.Net.Index.MergePolicy.MergeSpecification.merges">
<summary> The subset of segments to be included in the primitive merge.</summary>
</member>
<member name="T:Lucene.Net.Index.MergePolicy.MergeException">
<summary>Exception thrown if there are any problems while
executing a merge.
</summary>
</member>
<member name="P:Lucene.Net.Index.MergePolicy.MergeException.Directory">
<summary>Returns the <see cref="T:Lucene.Net.Store.Directory"/> of the index that hit
the exception.
</summary>
</member>
<member name="F:Lucene.Net.Index.LogMergePolicy.LEVEL_LOG_SPAN">
<summary>Defines the allowed range of log(size) for each
level. A level is computed by taking the max segment
log size, minus LEVEL_LOG_SPAN, and finding all
segments falling within that range.
</summary>
</member>
<member name="F:Lucene.Net.Index.LogMergePolicy.DEFAULT_MERGE_FACTOR">
<summary>Default merge factor, which is how many segments are
merged at a time
</summary>
</member>
<member name="F:Lucene.Net.Index.LogMergePolicy.DEFAULT_MAX_MERGE_DOCS">
<summary>Default maximum segment size. A segment of this size or larger will never be merged. </summary>
<seealso cref="P:Lucene.Net.Index.LogMergePolicy.MaxMergeDocs">
</seealso>
</member>
<member name="F:Lucene.Net.Index.LogMergePolicy.DEFAULT_NO_CFS_RATIO">
<summary> Default noCFSRatio. If a merge's size is >= 10% of
the index, then we disable compound file for it.
See <see cref="P:Lucene.Net.Index.LogMergePolicy.NoCFSRatio"/>
</summary>
</member>
<member name="M:Lucene.Net.Index.LogMergePolicy.SetUseCompoundFile(System.Boolean)">
<summary>Sets whether compound file format should be used for
newly flushed and newly merged segments.
</summary>
</member>
<member name="M:Lucene.Net.Index.LogMergePolicy.SetUseCompoundDocStore(System.Boolean)">
<summary>Sets whether compound file format should be used for
newly flushed and newly merged doc store
segment files (term vectors and stored fields).
</summary>
</member>
<member name="M:Lucene.Net.Index.LogMergePolicy.GetUseCompoundDocStore">
<summary>Returns true if newly flushed and newly merged doc
store segment files (term vectors and stored fields) should
use the compound file format.
</summary>
<seealso cref="M:Lucene.Net.Index.LogMergePolicy.SetUseCompoundDocStore(System.Boolean)">
</seealso>
</member>
<member name="M:Lucene.Net.Index.LogMergePolicy.IsOptimized(Lucene.Net.Index.SegmentInfo)">
<summary>Returns true if this single info is optimized (has no
pending norms or deletes, is in the same dir as the
writer, and matches the current compound file setting).
</summary>
</member>
<member name="M:Lucene.Net.Index.LogMergePolicy.FindMergesForOptimize(Lucene.Net.Index.SegmentInfos,System.Int32,System.Collections.Generic.ISet{Lucene.Net.Index.SegmentInfo})">
<summary>Returns the merges necessary to optimize the index.
This merge policy defines "optimized" to mean only one
segment in the index, where that segment has no
deletions pending nor separate norms, and it is in
compound file format if the current useCompoundFile
setting is true. This method returns multiple merges
(mergeFactor at a time) so the <see cref="T:Lucene.Net.Index.MergeScheduler"/>
in use may make use of concurrency.
</summary>
</member>
<member name="M:Lucene.Net.Index.LogMergePolicy.FindMergesToExpungeDeletes(Lucene.Net.Index.SegmentInfos)">
<summary> Finds merges necessary to expunge all deletes from the
index. We simply merge adjacent segments that have
deletes, up to mergeFactor at a time.
</summary>
</member>
<member name="M:Lucene.Net.Index.LogMergePolicy.FindMerges(Lucene.Net.Index.SegmentInfos)">
<summary>Checks if any merges are now necessary and returns a
<see cref="T:Lucene.Net.Index.MergePolicy.MergeSpecification"/> if so. A merge
is necessary when there are more than <see cref="P:Lucene.Net.Index.LogMergePolicy.MergeFactor"/>
segments at a given level. When
multiple levels have too many segments, this method
will return multiple merges, allowing the <see cref="T:Lucene.Net.Index.MergeScheduler"/>
to use concurrency.
</summary>
</member>
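The level logic described above can be sketched with a small self-contained program. This is a simplification written for illustration (the class name, the left-to-right level scan, and the merge-emission loop are all approximations, not the shipped LogMergePolicy code); it buckets segments by log(size) using LEVEL_LOG_SPAN and emits one merge per full run of mergeFactor segments within a level:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of level-based merge selection: segments whose
// log(size) lies within LEVEL_LOG_SPAN of the level's maximum form one
// level; each full run of MERGE_FACTOR segments in a level is merged.
class MergeLevelSketch {
    static final double LEVEL_LOG_SPAN = 0.75; // same default as Lucene
    static final int MERGE_FACTOR = 10;        // same default as Lucene

    // Returns [start, end) index ranges of segments that would be merged.
    static List<int[]> findMerges(long[] sizes) {
        double[] logs = new double[sizes.length];
        for (int i = 0; i < sizes.length; i++)
            logs[i] = Math.log(Math.max(1, sizes[i])) / Math.log(MERGE_FACTOR);

        List<int[]> merges = new ArrayList<>();
        int start = 0;
        while (start < sizes.length) {
            // grow the current level: consecutive segments within
            // LEVEL_LOG_SPAN of the level's maximum log size
            double maxLog = logs[start];
            int end = start;
            while (end < sizes.length && logs[end] > maxLog - LEVEL_LOG_SPAN) {
                maxLog = Math.max(maxLog, logs[end]);
                end++;
            }
            // each full run of MERGE_FACTOR segments is one merge
            for (int i = start; i + MERGE_FACTOR <= end; i += MERGE_FACTOR)
                merges.add(new int[] { i, i + MERGE_FACTOR });
            start = end;
        }
        return merges;
    }

    public static void main(String[] args) {
        long[] sizes = new long[11];
        sizes[0] = 1_000_000;                          // one big segment: its own level
        for (int i = 1; i < 11; i++) sizes[i] = 1_000; // ten small ones: one merge
        for (int[] m : findMerges(sizes))
            System.out.println("merge segments [" + m[0] + ", " + m[1] + ")");
    }
}
```

Note how the large segment is never touched: its log size is more than LEVEL_LOG_SPAN above the small segments, so it sits alone in its level, which is how the exponential level structure keeps merge cost amortized.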
<member name="P:Lucene.Net.Index.LogMergePolicy.MergeFactor">
<summary>Gets or sets how often segment indices are merged by
addDocument(). With smaller values, less RAM is used
while indexing, and searches on unoptimized indices are
faster, but indexing speed is slower. With larger
values, more RAM is used during indexing, and while
searches on unoptimized indices are slower, indexing is
faster. Thus larger values (&gt; 10) are best for batch
index creation, and smaller values (&lt; 10) for indices
that are interactively maintained.
</summary>
</member>
<member name="P:Lucene.Net.Index.LogMergePolicy.CalibrateSizeByDeletes">
<summary>Gets or sets whether the segment size should be calibrated by
the number of deletes when choosing segments for merge.
</summary>
</member>
<member name="P:Lucene.Net.Index.LogMergePolicy.MaxMergeDocs">
<summary>
Gets or sets the largest segment (measured by document
count) that may be merged with other segments.
<p/>Small values (e.g., less than 10,000) are best for
interactive indexing, as this limits the length of
pauses while indexing to a few seconds. Larger values
are best for batched indexing and speedier
searches.<p/>
<p/>The default value is <see cref="F:System.Int32.MaxValue"/>.<p/>
<p/>The default merge policy (<see cref="T:Lucene.Net.Index.LogByteSizeMergePolicy"/>)
also allows you to set this
limit by net size (in MB) of the segment, using
<see cref="P:Lucene.Net.Index.LogByteSizeMergePolicy.MaxMergeMB"/>.<p/>
</summary>
</member>
<member name="F:Lucene.Net.Index.LogByteSizeMergePolicy.DEFAULT_MIN_MERGE_MB">
<seealso cref="P:Lucene.Net.Index.LogByteSizeMergePolicy.MinMergeMB">
</seealso>
</member>
<member name="F:Lucene.Net.Index.LogByteSizeMergePolicy.DEFAULT_MAX_MERGE_MB">
<summary>Default maximum segment size. A segment of this size or larger will never be merged. </summary>
<seealso cref="P:Lucene.Net.Index.LogByteSizeMergePolicy.MaxMergeMB">
</seealso>
</member>
<member name="P:Lucene.Net.Index.LogByteSizeMergePolicy.MaxMergeMB">
<summary><p/>Gets or sets the largest segment (measured by total
byte size of the segment's files, in MB) that may be
merged with other segments. Small values (e.g., less
than 50 MB) are best for interactive indexing, as this
limits the length of pauses while indexing to a few
seconds. Larger values are best for batched indexing
and speedier searches.<p/>
<p/>Note that <see cref="P:Lucene.Net.Index.IndexWriter.MaxMergeDocs"/> is also
used to check whether a segment is too large for
merging (whichever limit is hit first applies).<p/>
</summary>
</member>
<member name="P:Lucene.Net.Index.LogByteSizeMergePolicy.MinMergeMB">
<summary>Gets or sets the minimum size for the lowest level segments.
Any segments below this size are considered to be on
the same level (even if they vary drastically in size)
and will be merged whenever there are mergeFactor of
them. This effectively truncates the "long tail" of
small segments that would otherwise be created into a
single level. If you set this too large, it could
greatly increase the merging cost during indexing (if
you flush many small segments).
</summary>
</member>
<member name="T:Lucene.Net.Index.LogDocMergePolicy">
<summary>This is a <see cref="T:Lucene.Net.Index.LogMergePolicy"/> that measures size of a
segment as the number of documents (not taking deletions
into account).
</summary>
</member>
<member name="F:Lucene.Net.Index.LogDocMergePolicy.DEFAULT_MIN_MERGE_DOCS">
<seealso cref="P:Lucene.Net.Index.LogDocMergePolicy.MinMergeDocs">
</seealso>
</member>
<member name="P:Lucene.Net.Index.LogDocMergePolicy.MinMergeDocs">
<summary>Gets or sets the minimum size for the lowest level segments.
Any segments below this size are considered to be on
the same level (even if they vary drastically in size)
and will be merged whenever there are mergeFactor of
them. This effectively truncates the "long tail" of
small segments that would otherwise be created into a
single level. If you set this too large, it could
greatly increase the merging cost during indexing (if
you flush many small segments).
</summary>
</member>
<member name="T:Lucene.Net.Index.MergeDocIDRemapper">
<summary>Remaps docIDs after a merge has completed, where the
merged segments had at least one deletion. This is used
to renumber the buffered deletes in IndexWriter when a
merge of segments with deletions commits.
</summary>
</member>
<member name="T:Lucene.Net.Index.MultipleTermPositions">
<summary> Allows you to iterate over the <see cref="T:Lucene.Net.Index.TermPositions"/> for multiple <see cref="T:Lucene.Net.Index.Term"/>s as
a single <see cref="T:Lucene.Net.Index.TermPositions"/>.
</summary>
</member>
<member name="M:Lucene.Net.Index.MultipleTermPositions.#ctor(Lucene.Net.Index.IndexReader,Lucene.Net.Index.Term[])">
<summary> Creates a new <c>MultipleTermPositions</c> instance.
</summary>
<exception cref="T:System.IO.IOException">
</exception>
</member>
<member name="M:Lucene.Net.Index.MultipleTermPositions.Seek(Lucene.Net.Index.Term)">
<summary> Not implemented.</summary>
<throws> UnsupportedOperationException </throws>
</member>
<member name="M:Lucene.Net.Index.MultipleTermPositions.Seek(Lucene.Net.Index.TermEnum)">
<summary> Not implemented.</summary>
<throws> UnsupportedOperationException </throws>
</member>
<member name="M:Lucene.Net.Index.MultipleTermPositions.Read(System.Int32[],System.Int32[])">
<summary> Not implemented.</summary>
<throws> UnsupportedOperationException </throws>
</member>
<member name="M:Lucene.Net.Index.MultipleTermPositions.GetPayload(System.Byte[],System.Int32)">
<summary> Not implemented.</summary>
<throws> UnsupportedOperationException </throws>
</member>
<member name="P:Lucene.Net.Index.MultipleTermPositions.PayloadLength">
<summary> Not implemented.</summary>
<throws> UnsupportedOperationException </throws>
</member>
<member name="P:Lucene.Net.Index.MultipleTermPositions.IsPayloadAvailable">
<summary> Checks whether a payload is available; always returns false.</summary>
<value> false </value>
</member>
<member name="T:Lucene.Net.Util.PriorityQueue`1">
<summary>A PriorityQueue maintains a partial ordering of its elements such that the
least element can always be found in constant time. Add() and Pop() require
log(size) time.
<p/><b>NOTE</b>: This class pre-allocates a full array of
length <c>maxSize+1</c>, in <see cref="M:Lucene.Net.Util.PriorityQueue`1.Initialize(System.Int32)"/>.
</summary>
</member>
<member name="M:Lucene.Net.Util.PriorityQueue`1.LessThan(`0,`0)">
<summary>Determines the ordering of objects in this priority queue. Subclasses
must define this one method.
</summary>
</member>
<member name="M:Lucene.Net.Util.PriorityQueue`1.Initialize(System.Int32)">
<summary>Subclass constructors must call this. </summary>
</member>
<member name="M:Lucene.Net.Util.PriorityQueue`1.Add(`0)">
<summary>
Adds an Object to a PriorityQueue in log(size) time. If one tries to add
more objects than the maxSize passed to Initialize, an
<see cref="T:System.IndexOutOfRangeException"/> is thrown.
</summary>
<returns> the new 'top' element in the queue.
</returns>
</member>
<member name="M:Lucene.Net.Util.PriorityQueue`1.InsertWithOverflow(`0)">
<summary> Adds an Object to a PriorityQueue in log(size) time.
It returns the object (if any) that was
dropped off the heap because it was full. This can be
the given parameter (in case it is smaller than the
full heap's minimum, and couldn't be added), or another
object that was previously the smallest value in the
heap and now has been replaced by a larger one, or null
if the queue wasn't yet full with maxSize elements.
</summary>
</member>
<member name="M:Lucene.Net.Util.PriorityQueue`1.Top">
<summary>Returns the least element of the PriorityQueue in constant time. </summary>
</member>
<member name="M:Lucene.Net.Util.PriorityQueue`1.Pop">
<summary>
Removes and returns the least element of the
PriorityQueue in log(size) time.
</summary>
</member>
<member name="M:Lucene.Net.Util.PriorityQueue`1.UpdateTop">
<summary> Should be called when the Object at the top changes its value.
Still log(n) worst case, but it's at least twice as fast to do
<code>
pq.Top().Change();
pq.UpdateTop();
</code>
instead of
<code>
o = pq.Pop();
o.Change();
pq.Add(o);
</code>
</summary>
<returns> the new 'top' element.</returns>
</member>
<member name="M:Lucene.Net.Util.PriorityQueue`1.Size">
<summary>Returns the number of elements currently stored in the PriorityQueue. </summary>
</member>
<member name="M:Lucene.Net.Util.PriorityQueue`1.Clear">
<summary>Removes all entries from the PriorityQueue. </summary>
</member>
<member name="P:Lucene.Net.Util.PriorityQueue`1.SentinelObject">
<summary> This method can be overridden by extending classes to return a sentinel
object which will be used by <see cref="M:Lucene.Net.Util.PriorityQueue`1.Initialize(System.Int32)"/> to fill the queue, so
that the code which uses that queue can always assume it's full and only
change the top without attempting to insert any new object.<br/>
Those sentinel values should always compare worse than any non-sentinel
value (i.e., <see cref="M:Lucene.Net.Util.PriorityQueue`1.LessThan(`0,`0)"/> should always favor the
non-sentinel values).<br/>
By default, this property returns null, which means the queue will not be
filled with sentinel values. Otherwise, the value returned will be used to
pre-populate the queue.<br/>
If this method is extended to return a non-null value, then the following
usage pattern is recommended:
<code>
// override SentinelObject to return a non-null value.
PriorityQueue&lt;MyObject&gt; pq = new MyQueue&lt;MyObject&gt;(numHits);
// save the 'top' element, which is guaranteed to not be null.
MyObject pqTop = pq.Top();
&lt;...&gt;
// now in order to add a new element, which is 'better' than top (after
// you've verified it is better), it is as simple as:
pqTop.Change();
pqTop = pq.UpdateTop();
</code>
<b>NOTE:</b> if this method returns a non-null value, it will be called by
<see cref="M:Lucene.Net.Util.PriorityQueue`1.Initialize(System.Int32)"/> <see cref="M:Lucene.Net.Util.PriorityQueue`1.Size"/> times, relying on a new object to
be returned and will not check if it's null again. Therefore you should
ensure any call to this method creates a new instance and behaves
consistently, e.g., it cannot return null if it previously returned
non-null.
</summary>
<value> the sentinel object to use to pre-populate the queue, or null if sentinel objects are not supported. </value>
</member>
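The contract spelled out by the PriorityQueue members above (a subclass-defined LessThan ordering, constant-time Top, log-time Add/Pop, UpdateTop, and the InsertWithOverflow semantics) can be sketched as a small fixed-capacity heap. This is an illustrative Java reimplementation written for this documentation, not the shipped Lucene.Net class:

```java
// Minimal fixed-capacity min-heap mirroring the PriorityQueue contract
// described above: a 1-based array of maxSize+1 slots and a subclass-
// defined lessThan ordering.
abstract class MiniQueue<T> {
    private final Object[] heap;
    private int size;

    MiniQueue(int maxSize) { heap = new Object[maxSize + 1]; }

    // Subclasses define the ordering, as with PriorityQueue.LessThan.
    protected abstract boolean lessThan(T a, T b);

    @SuppressWarnings("unchecked")
    T top() { return (T) heap[1]; }                 // least element, O(1)

    void add(T e) { heap[++size] = e; upHeap(); }   // O(log size)

    // On a full queue: either evict and return the old minimum, or reject e.
    T insertWithOverflow(T e) {
        if (size < heap.length - 1) { add(e); return null; }
        if (lessThan(top(), e)) {
            T old = top();
            heap[1] = e;
            downHeap();
            return old;
        }
        return e; // e is not better than the current minimum
    }

    @SuppressWarnings("unchecked")
    T pop() {                                       // O(log size)
        T result = (T) heap[1];
        heap[1] = heap[size];
        heap[size--] = null;
        downHeap();
        return result;
    }

    T updateTop() { downHeap(); return top(); }     // cheaper than pop + add

    int size() { return size; }

    @SuppressWarnings("unchecked")
    private void upHeap() {
        int i = size;
        Object node = heap[i];
        while (i > 1 && lessThan((T) node, (T) heap[i / 2])) {
            heap[i] = heap[i / 2];
            i /= 2;
        }
        heap[i] = node;
    }

    @SuppressWarnings("unchecked")
    private void downHeap() {
        int i = 1;
        Object node = heap[1];
        while (2 * i <= size) {
            int child = 2 * i;
            if (child < size && lessThan((T) heap[child + 1], (T) heap[child]))
                child++;
            if (!lessThan((T) heap[child], (T) node)) break;
            heap[i] = heap[child];
            i = child;
        }
        heap[i] = node;
    }
}

// Example subclass: keeps the N largest ints seen via insertWithOverflow.
class IntQueue extends MiniQueue<Integer> {
    IntQueue(int maxSize) { super(maxSize); }
    protected boolean lessThan(Integer a, Integer b) { return a < b; }
}
```

A top-N hit collector is the typical use: keep the queue full and call InsertWithOverflow once per candidate, evicting the current minimum only when the candidate beats it.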
<member name="T:Lucene.Net.Index.MultiReader">
<summary>An IndexReader which reads multiple indexes, appending
their content.
</summary>
</member>
<member name="M:Lucene.Net.Index.MultiReader.#ctor(Lucene.Net.Index.IndexReader[])">
<summary> <p/>Construct a MultiReader aggregating the named set of (sub)readers.
Directory locking for delete, undeleteAll, and setNorm operations is
left to the subreaders. <p/>
<p/>Note that all subreaders are closed if this MultiReader is closed.<p/>
</summary>
<param name="subReaders">set of (sub)readers
</param>
<throws> IOException </throws>
</member>
<member name="M:Lucene.Net.Index.MultiReader.#ctor(Lucene.Net.Index.IndexReader[],System.Boolean)">
<summary> <p/>Construct a MultiReader aggregating the named set of (sub)readers.
Directory locking for delete, undeleteAll, and setNorm operations is
left to the subreaders. <p/>
</summary>
<param name="closeSubReaders">indicates whether the subreaders should be closed
when this MultiReader is closed
</param>
<param name="subReaders">set of (sub)readers
</param>
<throws> IOException </throws>
</member>
<member name="M:Lucene.Net.Index.MultiReader.Reopen">
<summary> Tries to reopen the subreaders.
<br/>
If one or more subreaders could be re-opened (i.e., subReader.reopen()
returned a new instance != subReader), then a new MultiReader instance
is returned; otherwise this instance is returned.
<p/>
A re-opened instance might share one or more subreaders with the old
instance. Index modification operations result in undefined behavior
when performed before the old instance is closed.
(see <see cref="M:Lucene.Net.Index.IndexReader.Reopen"/>).
<p/>
If subreaders are shared, then the reference count of those
readers is increased to ensure that the subreaders remain open
until the last referring reader is closed.
</summary>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> IOException if there is a low-level IO error </throws>
</member>
<member name="M:Lucene.Net.Index.MultiReader.Clone">
<summary> Clones the subreaders.
(see <see cref="M:Lucene.Net.Index.IndexReader.Clone"/>).
<br/>
<p/>
If subreaders are shared, then the reference count of those
readers is increased to ensure that the subreaders remain open
until the last referring reader is closed.
</summary>
</member>
<member name="M:Lucene.Net.Index.MultiReader.DoReopen(System.Boolean)">
<summary> If clone is true then we clone each of the subreaders</summary>
<param name="doClone">
</param>
<returns> New IndexReader, or same one (this) if
reopen/clone is not necessary
</returns>
<throws> CorruptIndexException </throws>
<throws> IOException </throws>
</member>
<member name="M:Lucene.Net.Index.MultiReader.IsCurrent">
<summary> Checks recursively if all subreaders are up to date. </summary>
</member>
<member name="P:Lucene.Net.Index.MultiReader.Version">
<summary>Not implemented.</summary>
<throws> UnsupportedOperationException </throws>
</member>
<member name="T:Lucene.Net.Index.NormsWriter">
<summary>Writes norms. Each thread/field pair accumulates the norms
for the doc/fields it saw, then the flush method below
merges all of these together into a single _X.nrm file.
</summary>
</member>
<member name="M:Lucene.Net.Index.NormsWriter.Flush(System.Collections.Generic.IDictionary{Lucene.Net.Index.InvertedDocEndConsumerPerThread,System.Collections.Generic.ICollection{Lucene.Net.Index.InvertedDocEndConsumerPerField}},Lucene.Net.Index.SegmentWriteState)">
<summary>Produce _X.nrm if any document had a field with norms
not disabled
</summary>
</member>
<member name="T:Lucene.Net.Index.NormsWriterPerField">
<summary>Taps into DocInverter, as an InvertedDocEndConsumer,
which is called at the end of inverting each field. We
just look at the length for the field (docState.length)
and record the norm.
</summary>
</member>
<member name="T:Lucene.Net.Index.ParallelReader">
<summary>An IndexReader which reads multiple, parallel indexes. Each index added
must have the same number of documents, but typically each contains
different fields. Each document contains the union of the fields of all
documents with the same document number. When searching, matches for a
query term are from the first index added that has the field.
<p/>This is useful, e.g., with collections that have large fields which
change rarely and small fields that change more frequently. The smaller
fields may be re-indexed in a new index and both indexes may be searched
together.
<p/><strong>Warning:</strong> It is up to you to make sure all indexes
are created and modified the same way. For example, if you add
documents to one index, you need to add the same documents in the
same order to the other indexes. <em>Failure to do so will result in
undefined behavior</em>.
</summary>
</member>
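The lookup rule above (a match comes from the first index added that has the field) can be illustrated with a tiny self-contained sketch in plain Java. `ParallelLookup` and its methods are invented for illustration and are not Lucene.Net API; each sub-map stands in for one parallel index's fields for a given document number.

```java
// Illustrative sketch of ParallelReader's field-union rule: each
// sub-index holds different fields for the same doc numbers, and a
// field lookup consults the first index added that has the field.
import java.util.*;

class ParallelLookup {
    private final List<Map<String, String>> subDocs = new ArrayList<>();

    // Add one parallel index's view of this document's fields.
    void add(Map<String, String> fieldsForDoc) { subDocs.add(fieldsForDoc); }

    // First index added that has the field wins, as described above.
    String get(String field) {
        for (Map<String, String> doc : subDocs) {
            if (doc.containsKey(field)) return doc.get(field);
        }
        return null;
    }
}
```

This mirrors the use case in the summary: large, rarely changing fields live in one index, small frequently re-indexed fields in another, and a document presents the union of both.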
<member name="M:Lucene.Net.Index.ParallelReader.#ctor">
<summary>Construct a ParallelReader.
<p/>Note that all subreaders are closed if this ParallelReader is closed.<p/>
</summary>
</member>
<member name="M:Lucene.Net.Index.ParallelReader.#ctor(System.Boolean)">
<summary>Construct a ParallelReader. </summary>
<param name="closeSubReaders">indicates whether the subreaders should be closed
when this ParallelReader is closed
</param>
</member>
<member name="M:Lucene.Net.Index.ParallelReader.Add(Lucene.Net.Index.IndexReader)">
<summary>Add an IndexReader.</summary>
<throws> IOException if there is a low-level IO error </throws>
</member>
<member name="M:Lucene.Net.Index.ParallelReader.Add(Lucene.Net.Index.IndexReader,System.Boolean)">
<summary>Add an IndexReader whose stored fields will not be returned. This can
accelerate search when stored fields are only needed from a subset of
the IndexReaders.
</summary>
<throws> IllegalArgumentException if not all indexes contain the same number
of documents
</throws>
<throws> IllegalArgumentException if not all indexes have the same value
of <see cref="P:Lucene.Net.Index.IndexReader.MaxDoc"/>
</throws>
<throws> IOException if there is a low-level IO error </throws>
</member>
<member name="M:Lucene.Net.Index.ParallelReader.Reopen">
<summary> Tries to reopen the subreaders.
<br/>
If one or more subreaders could be re-opened (i.e., subReader.reopen()
returned a new instance != subReader), then a new ParallelReader instance
is returned; otherwise this instance is returned.
<p/>
A re-opened instance might share one or more subreaders with the old
instance. Index modification operations result in undefined behavior
when performed before the old instance is closed.
(see <see cref="M:Lucene.Net.Index.IndexReader.Reopen"/>).
<p/>
If subreaders are shared, then the reference count of those
readers is increased to ensure that the subreaders remain open
until the last referring reader is closed.
</summary>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> IOException if there is a low-level IO error </throws>
</member>
<member name="M:Lucene.Net.Index.ParallelReader.IsCurrent">
<summary> Checks recursively if all subreaders are up to date. </summary>
</member>
<member name="M:Lucene.Net.Index.ParallelReader.IsOptimized">
<summary> Checks recursively if all subindexes are optimized </summary>
</member>
<member name="P:Lucene.Net.Index.ParallelReader.Version">
<summary>Not implemented.</summary>
<throws> UnsupportedOperationException </throws>
</member>
<member name="T:Lucene.Net.Index.Payload">
<summary> A Payload is metadata that can be stored together with each occurrence
of a term. This metadata is stored inline in the posting list of the
specific term.
<p/>
To store payloads in the index a <see cref="T:Lucene.Net.Analysis.TokenStream"/> has to be used that
produces payload data.
<p/>
Use <see cref="P:Lucene.Net.Index.TermPositions.PayloadLength"/> and <see cref="M:Lucene.Net.Index.TermPositions.GetPayload(System.Byte[],System.Int32)"/>
to retrieve the payloads from the index.<br/>
</summary>
</member>
<member name="F:Lucene.Net.Index.Payload.data">
<summary>the byte array containing the payload data </summary>
</member>
<member name="F:Lucene.Net.Index.Payload.internalOffset">
<summary>the offset within the byte array </summary>
</member>
<member name="F:Lucene.Net.Index.Payload.internalLength">
<summary>the length of the payload data </summary>
</member>
<member name="M:Lucene.Net.Index.Payload.#ctor">
<summary>Creates an empty payload and does not allocate a byte array. </summary>
</member>
<member name="M:Lucene.Net.Index.Payload.#ctor(System.Byte[])">
<summary> Creates a new payload with the given array as data.
A reference to the passed-in array is held, i.e., no
copy is made.
</summary>
<param name="data">the data of this payload
</param>
</member>
<member name="M:Lucene.Net.Index.Payload.#ctor(System.Byte[],System.Int32,System.Int32)">
<summary> Creates a new payload with the given array as data.
A reference to the passed-in array is held, i.e., no
copy is made.
</summary>
<param name="data">the data of this payload
</param>
<param name="offset">the offset in the data byte array
</param>
<param name="length">the length of the data
</param>
</member>
<member name="M:Lucene.Net.Index.Payload.SetData(System.Byte[],System.Int32,System.Int32)">
<summary> Sets this payload's data.
A reference to the passed-in array is held, i.e., no
copy is made.
</summary>
</member>
<member name="M:Lucene.Net.Index.Payload.SetData(System.Byte[])">
<summary> Sets a reference to the underlying byte array
that holds this payload's data. Data is not copied.
</summary>
</member>
<member name="M:Lucene.Net.Index.Payload.GetData">
<summary> Returns a reference to the underlying byte array
that holds this payload's data. Data is not copied.
</summary>
</member>
<member name="M:Lucene.Net.Index.Payload.ByteAt(System.Int32)">
<summary> Returns the byte at the given index.</summary>
</member>
<member name="M:Lucene.Net.Index.Payload.ToByteArray">
<summary> Allocates a new byte array, copies the payload data into it and returns it. </summary>
</member>
<member name="M:Lucene.Net.Index.Payload.CopyTo(System.Byte[],System.Int32)">
<summary> Copies the payload data to a byte array.
</summary>
<param name="target">the target byte array
</param>
<param name="targetOffset">the offset in the target byte array
</param>
</member>
<member name="M:Lucene.Net.Index.Payload.Clone">
<summary> Clones this payload by creating a copy of the underlying
byte array.
</summary>
</member>
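The reference-versus-copy behavior documented for SetData/GetData versus ToByteArray can be demonstrated with a minimal stand-in class. `PayloadSketch` is invented for this sketch and is not the Lucene.Net type; it only mirrors the documented semantics.

```java
// Minimal stand-in for Payload's documented semantics:
// setData holds a reference (no copy); toByteArray allocates a copy.
class PayloadSketch {
    private byte[] data;

    void setData(byte[] d) {
        data = d;                 // reference held, no copy made
    }

    byte byteAt(int index) {
        return data[index];       // reads through the shared reference
    }

    byte[] toByteArray() {
        return data.clone();      // fresh array, independent of 'data'
    }
}
```

Because SetData holds a reference, later mutations of the caller's array are visible through the payload, while an earlier ToByteArray copy is unaffected.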
<member name="P:Lucene.Net.Index.Payload.Offset">
<summary> Returns the offset in the underlying byte array </summary>
</member>
<member name="P:Lucene.Net.Index.Payload.Length">
<summary> Returns the length of the payload data. </summary>
</member>
<member name="T:Lucene.Net.Index.PositionBasedTermVectorMapper">
<summary> For each Field, store position by position information. It ignores frequency information
<p/>
This is not thread-safe.
</summary>
</member>
<member name="F:Lucene.Net.Index.PositionBasedTermVectorMapper.currentPositions">
<summary> A map from position (Integer) to TVPositionInfo</summary>
</member>
<member name="M:Lucene.Net.Index.PositionBasedTermVectorMapper.Map(System.String,System.Int32,Lucene.Net.Index.TermVectorOffsetInfo[],System.Int32[])">
<summary> Callback for the TermVectorReader. </summary>
<param name="term">
</param>
<param name="frequency">
</param>
<param name="offsets">
</param>
<param name="positions">
</param>
</member>
<member name="M:Lucene.Net.Index.PositionBasedTermVectorMapper.SetExpectations(System.String,System.Int32,System.Boolean,System.Boolean)">
<summary> Callback mechanism used by the TermVectorReader</summary>
<param name="field"> The field being read
</param>
<param name="numTerms">The number of terms in the vector
</param>
<param name="storeOffsets">Whether offsets are available
</param>
<param name="storePositions">Whether positions are available
</param>
</member>
<member name="P:Lucene.Net.Index.PositionBasedTermVectorMapper.IsIgnoringPositions">
<summary> Never ignores positions. This mapper doesn't make much sense unless there are positions</summary>
<value> false </value>
</member>
<member name="P:Lucene.Net.Index.PositionBasedTermVectorMapper.FieldToTerms">
<summary> Get the mapping between fields and terms, sorted by the comparator
</summary>
<value> A map between field names and a Map. The sub-Map key is the position as an integer, and the value is a <see cref="T:Lucene.Net.Index.PositionBasedTermVectorMapper.TVPositionInfo" />. </value>
</member>
<member name="T:Lucene.Net.Index.PositionBasedTermVectorMapper.TVPositionInfo">
<summary> Container for a term at a position</summary>
</member>
<member name="P:Lucene.Net.Index.PositionBasedTermVectorMapper.TVPositionInfo.Position">
<summary> </summary>
<returns> The position of the term
</returns>
</member>
<member name="P:Lucene.Net.Index.PositionBasedTermVectorMapper.TVPositionInfo.Terms">
<summary> Note, there may be multiple terms at the same position</summary>
<returns> A List of Strings
</returns>
</member>
<member name="P:Lucene.Net.Index.PositionBasedTermVectorMapper.TVPositionInfo.Offsets">
<summary> Parallel list (to <see cref="P:Lucene.Net.Index.PositionBasedTermVectorMapper.TVPositionInfo.Terms"/>) of TermVectorOffsetInfo objects.
There may be multiple entries since there may be multiple terms at a position</summary>
<returns> A List of TermVectorOffsetInfo objects, if offsets are stored.
</returns>
</member>
<member name="T:Lucene.Net.Index.SegmentReader">
<summary> <p/><b>NOTE:</b> This API is new and still experimental
(subject to change suddenly in the next release)<p/>
</summary>
</member>
<member name="M:Lucene.Net.Index.SegmentReader.Get(System.Boolean,Lucene.Net.Index.SegmentInfo,System.Int32)">
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> IOException if there is a low-level IO error </throws>
</member>
<member name="M:Lucene.Net.Index.SegmentReader.Get(System.Boolean,Lucene.Net.Store.Directory,Lucene.Net.Index.SegmentInfo,System.Int32,System.Boolean,System.Int32)">
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> IOException if there is a low-level IO error </throws>
</member>
<member name="M:Lucene.Net.Index.SegmentReader.CloneNormBytes(System.Byte[])">
<summary> Clones the norm bytes. May be overridden by subclasses. New and experimental.</summary>
<param name="bytes">Byte array to clone
</param>
<returns> New BitVector
</returns>
</member>
<member name="M:Lucene.Net.Index.SegmentReader.CloneDeletedDocs(Lucene.Net.Util.BitVector)">
<summary> Clones the deleteDocs BitVector. May be overridden by subclasses. New and experimental.</summary>
<param name="bv">BitVector to clone
</param>
<returns> New BitVector
</returns>
</member>
<member name="M:Lucene.Net.Index.SegmentReader.GetFieldNames(Lucene.Net.Index.IndexReader.FieldOption)">
<seealso cref="M:Lucene.Net.Index.IndexReader.GetFieldNames(Lucene.Net.Index.IndexReader.FieldOption)">
</seealso>
</member>
<member name="M:Lucene.Net.Index.SegmentReader.Norms(System.String,System.Byte[],System.Int32)">
<summary>Read norms into a pre-allocated array. </summary>
</member>
<member name="M:Lucene.Net.Index.SegmentReader.GetTermVectorsReader">
<summary> Create a clone from the initial TermVectorsReader and store it in the ThreadLocal.</summary>
<returns> TermVectorsReader
</returns>
</member>
<member name="M:Lucene.Net.Index.SegmentReader.GetTermFreqVector(System.Int32,System.String)">
<summary>Return a term frequency vector for the specified document and field. The
vector returned contains term numbers and frequencies for all terms in
the specified field of this document, if the field had storeTermVector
flag set. If the flag was not set, the method returns null.
</summary>
<throws> IOException </throws>
</member>
<member name="M:Lucene.Net.Index.SegmentReader.GetTermFreqVectors(System.Int32)">
<summary>Return an array of term frequency vectors for the specified document.
The array contains a vector for each vectorized field in the document.
Each vector contains term numbers and frequencies for all terms
in a given vectorized field.
If no such fields existed, the method returns null.
</summary>
<throws> IOException </throws>
</member>
<member name="M:Lucene.Net.Index.SegmentReader.Directory">
<summary>Returns the directory this index resides in. </summary>
</member>
<member name="M:Lucene.Net.Index.SegmentReader.GetOnlySegmentReader(Lucene.Net.Store.Directory)">
<summary> Lotsa tests did hacks like:<br/>
SegmentReader reader = (SegmentReader) IndexReader.open(dir);<br/>
They broke. This method serves as a hack to keep those hacks working.
We do it with R/W access for the tests (BW compatibility).
</summary>
</member>
<member name="P:Lucene.Net.Index.SegmentReader.SegmentName">
<summary> Return the name of the segment this reader is reading.</summary>
</member>
<member name="P:Lucene.Net.Index.SegmentReader.SegmentInfo">
<summary> Return the SegmentInfo of the segment this reader is reading.</summary>
</member>
<member name="T:Lucene.Net.Index.SegmentReader.FieldsReaderLocal">
<summary> Sets the initial value </summary>
</member>
<member name="T:Lucene.Net.Util.CloseableThreadLocal`1">
<summary>Java's builtin ThreadLocal has a serious flaw:
it can take an arbitrarily long amount of time to
dereference the things you had stored in it, even once the
ThreadLocal instance itself is no longer referenced.
This is because there is a single, master map stored for
each thread, which all ThreadLocals share, and that
master map only periodically purges "stale" entries.
While not technically a memory leak, because eventually
the memory will be reclaimed, it can take a long time
and you can easily hit OutOfMemoryError because, from the
GC's standpoint, the stale entries are not reclaimable.
This class works around that by only enrolling
WeakReference values into the ThreadLocal, and
separately holding a hard reference to each stored
value. When you call <see cref="M:Lucene.Net.Util.CloseableThreadLocal`1.Close"/>, these hard
references are cleared, and the GC is then freely able to
reclaim the space used by the objects stored in it.
</summary>
</member>
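The workaround described above, weak references inside the ThreadLocal plus a separately held list of hard references that Close() clears, can be sketched in a few lines of plain Java. This is a simplified illustration, not the Lucene.Net implementation (which also purges per-thread entries and synchronizes differently).

```java
// Sketch of the CloseableThreadLocal idea: the ThreadLocal only holds
// WeakReferences, and the hard references live in a list that close()
// can clear, making the stored values reclaimable immediately.
import java.lang.ref.WeakReference;
import java.util.ArrayList;
import java.util.List;

class CloseableThreadLocalSketch<T> {
    private ThreadLocal<WeakReference<T>> t = new ThreadLocal<>();
    private final List<T> hardRefs = new ArrayList<>();

    public synchronized T get() {
        WeakReference<T> ref = (t == null) ? null : t.get();
        return ref == null ? null : ref.get();
    }

    public synchronized void set(T value) {
        t.set(new WeakReference<>(value));
        hardRefs.add(value);   // keeps the value alive until close()
    }

    public synchronized void close() {
        hardRefs.clear();      // values become weakly reachable
        t = null;              // drop the ThreadLocal itself
    }
}
```

After close(), nothing strong points at the stored values, so the GC can reclaim them without waiting for the ThreadLocal's internal map to purge its stale entries.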
<member name="T:Lucene.Net.Index.SegmentReader.Norm">
<summary> Byte[] referencing is used because a new norm object needs
to be created for each clone, and the byte array is all
that is needed for sharing between cloned readers. The
current norm referencing is for sharing between readers
whereas the byte[] referencing is for copy on write which
is independent of reader references (i.e. incRef, decRef).
</summary>
</member>
<member name="T:Lucene.Net.Index.ReusableStringReader">
<summary>Used by DocumentsWriter to implement a StringReader
that can be reset to a new string; we use this when
tokenizing the string value from a Field.
</summary>
</member>
<member name="T:Lucene.Net.Index.SegmentInfo">
<summary> Information about a segment such as its name, directory, and files related
to the segment.
<p/><b>NOTE:</b> This API is new and still experimental
(subject to change suddenly in the next release)<p/>
</summary>
</member>
<member name="M:Lucene.Net.Index.SegmentInfo.Reset(Lucene.Net.Index.SegmentInfo)">
<summary> Copy everything from src SegmentInfo into our instance.</summary>
</member>
<member name="M:Lucene.Net.Index.SegmentInfo.#ctor(Lucene.Net.Store.Directory,System.Int32,Lucene.Net.Store.IndexInput)">
<summary> Construct a new SegmentInfo instance by reading a
previously saved SegmentInfo from input.
</summary>
<param name="dir">directory to load from
</param>
<param name="format">format of the segments info file
</param>
<param name="input">input handle to read segment info from
</param>
</member>
<member name="M:Lucene.Net.Index.SegmentInfo.SizeInBytes">
<summary>Returns total size in bytes of all of files used by
this segment.
</summary>
</member>
<member name="M:Lucene.Net.Index.SegmentInfo.HasSeparateNorms(System.Int32)">
<summary> Returns true if this field for this segment has saved a separate norms file (_&lt;segment&gt;_N.sX).
</summary>
<param name="fieldNumber">the field index to check
</param>
</member>
<member name="M:Lucene.Net.Index.SegmentInfo.HasSeparateNorms">
<summary> Returns true if any fields in this segment have separate norms.</summary>
</member>
<member name="M:Lucene.Net.Index.SegmentInfo.AdvanceNormGen(System.Int32)">
<summary> Increment the generation count for the norms file for
this field.
</summary>
<param name="fieldIndex">field whose norm file will be rewritten
</param>
</member>
<member name="M:Lucene.Net.Index.SegmentInfo.GetNormFileName(System.Int32)">
<summary> Get the file name for the norms file for this field.
</summary>
<param name="number">field index
</param>
</member>
<member name="M:Lucene.Net.Index.SegmentInfo.SetUseCompoundFile(System.Boolean)">
<summary> Sets whether this segment is stored as a compound
file.
</summary>
</member>
<member name="M:Lucene.Net.Index.SegmentInfo.GetUseCompoundFile">
<summary> Returns true if this segment is stored as a compound
file; else, false.
</summary>
</member>
<member name="M:Lucene.Net.Index.SegmentInfo.Write(Lucene.Net.Store.IndexOutput)">
<summary> Save this segment's info.</summary>
</member>
<member name="M:Lucene.Net.Index.SegmentInfo.SegString(Lucene.Net.Store.Directory)">
<summary>Used for debugging </summary>
</member>
<member name="M:Lucene.Net.Index.SegmentInfo.Equals(System.Object)">
<summary>We consider another SegmentInfo instance equal if it
has the same dir and same name.
</summary>
</member>
<member name="T:Lucene.Net.Index.SegmentMerger">
<summary> The SegmentMerger class combines two or more Segments, each represented by an IndexReader (see <see cref="M:Lucene.Net.Index.SegmentMerger.Add(Lucene.Net.Index.IndexReader)"/>),
into a single Segment. After adding the appropriate readers, call the merge method to combine the
segments.
<p/>
If the compoundFile flag is set, then the segments will be merged into a compound file.
</summary>
<seealso cref="M:Lucene.Net.Index.SegmentMerger.Merge">
</seealso>
<seealso cref="M:Lucene.Net.Index.SegmentMerger.Add(Lucene.Net.Index.IndexReader)">
</seealso>
</member>
<member name="F:Lucene.Net.Index.SegmentMerger.MAX_RAW_MERGE_DOCS">
<summary>Maximum number of contiguous documents to bulk-copy
when merging stored fields
</summary>
</member>
<member name="F:Lucene.Net.Index.SegmentMerger.NORMS_HEADER">
<summary>norms header placeholder </summary>
</member>
<member name="M:Lucene.Net.Index.SegmentMerger.#ctor(Lucene.Net.Store.Directory,System.String)">
<summary>This ctor used only by test code.
</summary>
<param name="dir">The Directory to merge the other segments into
</param>
<param name="name">The name of the new segment
</param>
</member>
<member name="M:Lucene.Net.Index.SegmentMerger.Add(Lucene.Net.Index.IndexReader)">
<summary> Add an IndexReader to the collection of readers that are to be merged</summary>
<param name="reader">
</param>
</member>
<member name="M:Lucene.Net.Index.SegmentMerger.SegmentReader(System.Int32)">
<summary> </summary>
<param name="i">The index of the reader to return
</param>
<returns> The ith reader to be merged
</returns>
</member>
<member name="M:Lucene.Net.Index.SegmentMerger.Merge">
<summary> Merges the readers specified by the <see cref="M:Lucene.Net.Index.SegmentMerger.Add(Lucene.Net.Index.IndexReader)"/> method into the directory passed to the constructor</summary>
<returns> The number of documents that were merged
</returns>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> IOException if there is a low-level IO error </throws>
</member>
<member name="M:Lucene.Net.Index.SegmentMerger.Merge(System.Boolean)">
<summary> Merges the readers specified by the <see cref="M:Lucene.Net.Index.SegmentMerger.Add(Lucene.Net.Index.IndexReader)"/> method
into the directory passed to the constructor.
</summary>
<param name="mergeDocStores">if false, we will not merge the
stored fields nor vectors files
</param>
<returns> The number of documents that were merged
</returns>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> IOException if there is a low-level IO error </throws>
</member>
<member name="M:Lucene.Net.Index.SegmentMerger.CloseReaders">
<summary> Close all IndexReaders that have been added.
Should not be called before merge().
</summary>
<throws> IOException </throws>
</member>
<member name="M:Lucene.Net.Index.SegmentMerger.MergeFields">
<summary> </summary>
<returns> The number of documents in all of the readers
</returns>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> IOException if there is a low-level IO error </throws>
</member>
<member name="M:Lucene.Net.Index.SegmentMerger.MergeVectors">
<summary> Merge the TermVectors from each of the segments into the new one.</summary>
<throws> IOException </throws>
</member>
<member name="M:Lucene.Net.Index.SegmentMerger.AppendPostings(Lucene.Net.Index.FormatPostingsTermsConsumer,Lucene.Net.Index.SegmentMergeInfo[],System.Int32)">
<summary>Process postings from multiple segments all positioned on the
same term. Writes out merged entries into freqOutput and
the proxOutput streams.
</summary>
<param name="smis">array of segments
</param>
<param name="n">number of cells in the array actually occupied
</param>
<returns> number of documents across all segments where this term was found
</returns>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> IOException if there is a low-level IO error </throws>
</member>
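The doc-id remapping that AppendPostings performs can be illustrated with a small stand-alone sketch. This is plain Java, not Lucene code; `PostingsMergeSketch` and its signature are invented. Because segments are concatenated in the merged index, each segment's doc ids for a term are shifted by that segment's starting doc number, and appending the already-sorted per-segment lists yields a sorted merged posting list.

```java
// Illustrative sketch of appending postings for one term across
// segments: each segment's sorted doc ids are shifted by the
// segment's starting doc number ("base") in the merged index.
import java.util.ArrayList;
import java.util.List;

class PostingsMergeSketch {
    // segDocs[i] holds the sorted doc ids for the term in segment i;
    // bases[i] is segment i's starting doc number after concatenation.
    static List<Integer> append(int[][] segDocs, int[] bases) {
        List<Integer> merged = new ArrayList<>();
        for (int i = 0; i < segDocs.length; i++) {
            for (int doc : segDocs[i]) {
                merged.add(doc + bases[i]);  // remap into merged doc space
            }
        }
        return merged;
    }
}
```

For two segments of 4 documents each, a term in docs {0, 3} of segment one and docs {1, 2} of segment two ends up in merged docs {0, 3, 5, 6}.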
<member name="M:Lucene.Net.Index.SegmentMerger.CheckAbort.Work(System.Double)">
<summary> Records the fact that roughly <c>units</c> amount of work
has been done since this method was last called.
When adding time-consuming code into SegmentMerger,
you should test different values for <c>units</c> to ensure
that the time between calls to merge.checkAborted
is up to ~1 second.
</summary>
</member>
<member name="M:Lucene.Net.Index.SegmentTermEnum.Next">
<summary>Increments the enumeration to the next element. True if one exists.</summary>
</member>
<member name="M:Lucene.Net.Index.SegmentTermEnum.ScanTo(Lucene.Net.Index.Term)">
<summary>Optimized scan, without allocating new terms.
Return number of invocations to next().
</summary>
</member>
<member name="M:Lucene.Net.Index.SegmentTermEnum.Prev">
<summary>Returns the previous Term enumerated. Initially null.</summary>
</member>
<member name="M:Lucene.Net.Index.SegmentTermEnum.TermInfo">
<summary>Returns the current TermInfo in the enumeration.
Initially invalid, valid after next() called for the first time.
</summary>
</member>
<member name="M:Lucene.Net.Index.SegmentTermEnum.TermInfo(Lucene.Net.Index.TermInfo)">
<summary>Sets the argument to the current TermInfo in the enumeration.
Initially invalid, valid after next() called for the first time.
</summary>
</member>
<member name="M:Lucene.Net.Index.SegmentTermEnum.DocFreq">
<summary>Returns the docFreq from the current TermInfo in the enumeration.
Initially invalid, valid after next() called for the first time.
</summary>
</member>
<member name="M:Lucene.Net.Index.SegmentTermEnum.Dispose(System.Boolean)">
<summary>Closes the enumeration to further activity, freeing resources. </summary>
</member>
<member name="P:Lucene.Net.Index.SegmentTermEnum.Term">
<summary>Returns the current Term in the enumeration.
Initially invalid, valid after next() called for the first time.
</summary>
</member>
<member name="M:Lucene.Net.Index.SegmentTermPositions.SkipProx(System.Int64,System.Int32)">
<summary>Called by super.skipTo(). </summary>
</member>
<member name="T:Lucene.Net.Index.ITermFreqVector">
<summary>Provides access to stored term vector of
a document field. The vector consists of the name of the field, an array of the terms that occur in the field of the
<see cref="T:Lucene.Net.Documents.Document"/> and a parallel array of frequencies. Thus, getTermFrequencies()[5] corresponds with the
frequency of getTerms()[5], assuming there are at least 6 terms in the Document.
</summary>
</member>
<member name="M:Lucene.Net.Index.ITermFreqVector.GetTerms">
<returns> An Array of term texts in ascending order.
</returns>
</member>
<member name="M:Lucene.Net.Index.ITermFreqVector.GetTermFrequencies">
<summary>Array of term frequencies. Locations of the array correspond one to one
to the terms in the array obtained from <c>getTerms</c>
method. Each location in the array contains the number of times this
term occurs in the document or the document field.
</summary>
</member>
<member name="M:Lucene.Net.Index.ITermFreqVector.IndexOf(System.String)">
<summary>Return an index in the term numbers array returned from
<c>getTerms</c> at which the term with the specified
<c>term</c> appears. If this term does not appear in the array,
return -1.
</summary>
</member>
<member name="M:Lucene.Net.Index.ITermFreqVector.IndexesOf(System.String[],System.Int32,System.Int32)">
<summary>Just like <c>indexOf(int)</c> but searches for a number of terms
at the same time. Returns an array that has the same size as the number
of terms searched for, each slot containing the result of searching for
that term number.
</summary>
<param name="terms">array containing terms to look for
</param>
<param name="start">index in the array where the list of terms starts
</param>
<param name="len">the number of terms in the list
</param>
</member>
<member name="P:Lucene.Net.Index.ITermFreqVector.Field">
<summary> The <see cref="T:Lucene.Net.Documents.IFieldable"/> name. </summary>
<value> The name of the field this vector is associated with. </value>
</member>
<member name="P:Lucene.Net.Index.ITermFreqVector.Size">
<value> The number of terms in the term vector. </value>
</member>
<member name="P:Lucene.Net.Index.SegmentTermVector.Field">
<summary> </summary>
<value> The name of the field this vector is associated with </value>
</member>
<member name="T:Lucene.Net.Index.TermPositionVector">
<summary>Extends <c>TermFreqVector</c> to provide additional information about
positions in which each of the terms is found. A TermPositionVector does not necessarily
contain both positions and offsets, but at least one of these arrays exists.
</summary>
</member>
<member name="M:Lucene.Net.Index.TermPositionVector.GetTermPositions(System.Int32)">
<summary>Returns an array of positions in which the term is found.
Terms are identified by the index at which its number appears in the
term String array obtained from the <c>indexOf</c> method.
May return null if positions have not been stored.
</summary>
</member>
<member name="M:Lucene.Net.Index.TermPositionVector.GetOffsets(System.Int32)">
<summary> Returns an array of TermVectorOffsetInfo in which the term is found.
May return null if offsets have not been stored.
</summary>
<seealso cref="T:Lucene.Net.Analysis.Token">
</seealso>
<param name="index">The position in the array to get the offsets from
</param>
<returns> An array of TermVectorOffsetInfo objects or the empty list
</returns>
</member>
<member name="M:Lucene.Net.Index.SegmentTermPositionVector.GetOffsets(System.Int32)">
<summary> Returns an array of TermVectorOffsetInfo in which the term is found.
</summary>
<param name="index">The position in the array to get the offsets from
</param>
<returns> An array of TermVectorOffsetInfo objects or the empty list
</returns>
<seealso cref="T:Lucene.Net.Analysis.Token">
</seealso>
</member>
<member name="M:Lucene.Net.Index.SegmentTermPositionVector.GetTermPositions(System.Int32)">
<summary> Returns an array of positions in which the term is found.
Terms are identified by the index at which its number appears in the
term String array obtained from the <c>indexOf</c> method.
</summary>
</member>
<member name="T:Lucene.Net.Index.SerialMergeScheduler">
<summary>A <see cref="T:Lucene.Net.Index.MergeScheduler"/> that simply does each merge
sequentially, using the current thread.
</summary>
</member>
<member name="M:Lucene.Net.Index.SerialMergeScheduler.Merge(Lucene.Net.Index.IndexWriter)">
<summary>Just do the merges in sequence. We do this
"synchronized" so that even if the application is using
multiple threads, only one merge may run at a time.
</summary>
</member>
<member name="T:Lucene.Net.Index.SnapshotDeletionPolicy">
<summary>A <see cref="T:Lucene.Net.Index.IndexDeletionPolicy"/> that wraps around any other
<see cref="T:Lucene.Net.Index.IndexDeletionPolicy"/> and adds the ability to hold and
later release a single "snapshot" of an index. While
the snapshot is held, the <see cref="T:Lucene.Net.Index.IndexWriter"/> will not
remove any files associated with it even if the index is
otherwise being actively, arbitrarily changed. Because
we wrap another arbitrary <see cref="T:Lucene.Net.Index.IndexDeletionPolicy"/>, this
gives you the freedom to continue using whatever <see cref="T:Lucene.Net.Index.IndexDeletionPolicy"/>
you would normally want to use with your
index. Note that you can re-use a single instance of
SnapshotDeletionPolicy across multiple writers as long
as they are against the same index Directory. Any
snapshot held when a writer is closed will "survive"
when the next writer is opened.
<p/><b>WARNING</b>: This API is new and experimental and
may suddenly change.<p/>
</summary>
</member>
<member name="M:Lucene.Net.Index.SnapshotDeletionPolicy.Snapshot">
<summary>Take a snapshot of the most recent commit to the
index. You must call Release() to free this snapshot.
Note that while the snapshot is held, the files it
references will not be deleted, which will consume
additional disk space in your index. If you take a
snapshot at a particularly bad time (say just before
you call Optimize()) then in the worst case this could
consume an extra 1X of your total index size, until
you release the snapshot.
</summary>
</member>
<member name="M:Lucene.Net.Index.SnapshotDeletionPolicy.Release">
<summary>Release the currently held snapshot. </summary>
</member>
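A hedged usage sketch of the snapshot/release cycle described above. The exact names (FSDirectory.Open, KeepOnlyLastCommitDeletionPolicy, IndexCommit.FileNames, MaxFieldLength.UNLIMITED) are assumed from the Lucene.Net 3.0.3 API surface and should be checked against the shipped assembly:

```csharp
// Sketch only: assumes Lucene.Net 3.0.3 names (SnapshotDeletionPolicy,
// KeepOnlyLastCommitDeletionPolicy, IndexCommit.FileNames); verify before use.
using System.IO;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Index;
using Lucene.Net.Store;
using Version = Lucene.Net.Util.Version;

var dir = FSDirectory.Open(new DirectoryInfo("index"));
var snapshotter = new SnapshotDeletionPolicy(new KeepOnlyLastCommitDeletionPolicy());

using (var writer = new IndexWriter(dir, new StandardAnalyzer(Version.LUCENE_30),
                                    snapshotter, IndexWriter.MaxFieldLength.UNLIMITED))
{
    IndexCommit commit = snapshotter.Snapshot();  // files in this commit are now pinned
    try
    {
        foreach (string fileName in commit.FileNames)
        {
            // copy fileName out of the index directory for backup...
        }
    }
    finally
    {
        snapshotter.Release();  // unpin so future merges may delete the files
    }
}
```

Releasing promptly keeps the extra disk usage bounded, since every file referenced by the pinned commit survives until Release() is called.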
<member name="T:Lucene.Net.Index.SortedTermVectorMapper">
<summary> Store a sorted collection of <see cref="T:Lucene.Net.Index.TermVectorEntry"/>s. Collects all term information
into a single SortedSet.
<br/>
NOTE: This Mapper ignores all Field information for the Document. This means that if you are using offset/positions you will not
know what Fields they correlate with.
<br/>
This is not thread-safe
</summary>
</member>
<member name="F:Lucene.Net.Index.SortedTermVectorMapper.ALL">
<summary> Stand-in name for the field in <see cref="T:Lucene.Net.Index.TermVectorEntry"/>.</summary>
</member>
<member name="M:Lucene.Net.Index.SortedTermVectorMapper.#ctor(System.Collections.Generic.IComparer{Lucene.Net.Index.TermVectorEntry})">
<summary> </summary>
<param name="comparator">A Comparator for sorting <see cref="T:Lucene.Net.Index.TermVectorEntry"/>s
</param>
</member>
<member name="M:Lucene.Net.Index.SortedTermVectorMapper.Map(System.String,System.Int32,Lucene.Net.Index.TermVectorOffsetInfo[],System.Int32[])">
<summary> </summary>
<param name="term">The term to map
</param>
<param name="frequency">The frequency of the term
</param>
<param name="offsets">Offset information, may be null
</param>
<param name="positions">Position information, may be null
</param>
</member>
<member name="P:Lucene.Net.Index.SortedTermVectorMapper.TermVectorEntrySet">
<summary> The TermVectorEntrySet. A SortedSet of <see cref="T:Lucene.Net.Index.TermVectorEntry"/> objects. Sort is by the comparator passed into the constructor.
<br/>
This set will be empty until after the mapping process takes place.
</summary>
<value> The SortedSet of <see cref="TermVectorEntry" />. </value>
</member>
<member name="T:Lucene.Net.Index.StaleReaderException">
<summary> This exception is thrown when an <see cref="T:Lucene.Net.Index.IndexReader"/>
tries to make changes to the index (via <see cref="M:Lucene.Net.Index.IndexReader.DeleteDocument(System.Int32)"/>
, <see cref="M:Lucene.Net.Index.IndexReader.UndeleteAll"/>
or <see cref="M:Lucene.Net.Index.IndexReader.SetNorm(System.Int32,System.String,System.Single)"/>)
but changes have already been committed to the index
since this reader was instantiated. When this happens
you must open a new reader on the current index to make
the changes.
</summary>
</member>
<member name="T:Lucene.Net.Index.StoredFieldsWriter">
<summary>This is a DocFieldConsumer that writes stored fields. </summary>
</member>
<member name="M:Lucene.Net.Index.StoredFieldsWriter.Fill(System.Int32)">
<summary>Fills in any hole in the docIDs </summary>
</member>
<member name="T:Lucene.Net.Index.Term">
<summary>A Term represents a word from text. This is the unit of search. It is
composed of two elements, the text of the word, as a string, and the name of
the field that the text occurred in, an interned string.
Note that terms may represent more than just words from text fields; they may
also represent things like dates, email addresses, URLs, etc.
</summary>
</member>
<member name="M:Lucene.Net.Index.Term.#ctor(System.String,System.String)">
<summary>Constructs a Term with the given field and text.
<p/>Note that a null field or null text value results in undefined
behavior for most Lucene APIs that accept a Term parameter.
</summary>
</member>
<member name="M:Lucene.Net.Index.Term.#ctor(System.String)">
<summary>Constructs a Term with the given field and empty text.
This serves two purposes: 1) reuse of a Term with the same field;
2) a pattern for a query.
</summary>
<param name="fld">
</param>
</member>
<member name="M:Lucene.Net.Index.Term.CreateTerm(System.String)">
<summary> Optimized construction of new Terms by reusing the same field as this Term;
avoids field.intern() overhead.
</summary>
<param name="text">The text of the new term (field is implicitly same as this Term instance)
</param>
<returns> A new Term
</returns>
</member>
<member name="M:Lucene.Net.Index.Term.CompareTo(Lucene.Net.Index.Term)">
<summary>Compares two terms, returning a negative integer if this
term belongs before the argument, zero if this term is equal to the
argument, and a positive integer if this term belongs after the argument.
The ordering of terms is first by field, then by text.
</summary>
</member>
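The documented ordering (first by field, then by text) can be mirrored with a small stand-alone helper; CompareTerms below is a hypothetical illustration, not a Lucene.Net API member:

```csharp
// Stand-alone illustration of Term.CompareTo's documented ordering:
// compare by field first, then by text. CompareTerms is hypothetical,
// not part of the Lucene.Net API.
static int CompareTerms(string field1, string text1, string field2, string text2)
{
    int byField = string.CompareOrdinal(field1, field2);
    // Fields equal: the term text breaks the tie.
    return byField != 0 ? byField : string.CompareOrdinal(text1, text2);
}
```

For example, "author:smith" sorts before "title:apple" because the fields differ, regardless of the texts.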
<member name="P:Lucene.Net.Index.Term.Field">
<summary>Returns the field of this term, an interned string. The field indicates
the part of a document which this term came from.
</summary>
</member>
<member name="P:Lucene.Net.Index.Term.Text">
<summary>Returns the text of this term. In the case of words, this is simply the
text of the word. In the case of dates and other types, this is an
encoding of the object as a string.
</summary>
</member>
<member name="M:Lucene.Net.Index.TermBuffer.SetPreUTF8Strings">
<summary>Call this if the IndexInput passed to <see cref="M:Lucene.Net.Index.TermBuffer.Read(Lucene.Net.Store.IndexInput,Lucene.Net.Index.FieldInfos)"/>
stores terms in the "modified UTF8" (pre LUCENE-510)
format.
</summary>
</member>
<member name="T:Lucene.Net.Index.TermInfo">
<summary>A TermInfo is the record of information stored for a term.</summary>
</member>
<member name="F:Lucene.Net.Index.TermInfo.docFreq">
<summary>The number of documents which contain the term. </summary>
</member>
<member name="T:Lucene.Net.Index.TermInfosReader">
<summary>This stores a monotonically increasing set of &lt;Term, TermInfo&gt; pairs in a
Directory. Pairs are accessed either by Term or by ordinal position in the
set.
</summary>
</member>
<member name="M:Lucene.Net.Index.TermInfosReader.Size">
<summary>Returns the number of term/value pairs in the set. </summary>
</member>
<member name="M:Lucene.Net.Index.TermInfosReader.GetIndexOffset(Lucene.Net.Index.Term)">
<summary>Returns the offset of the greatest index entry which is less than or equal to term.</summary>
</member>
<member name="M:Lucene.Net.Index.TermInfosReader.Get(Lucene.Net.Index.Term)">
<summary>Returns the TermInfo for a Term in the set, or null. </summary>
</member>
<member name="M:Lucene.Net.Index.TermInfosReader.Get(Lucene.Net.Index.Term,System.Boolean)">
<summary>Returns the TermInfo for a Term in the set, or null. </summary>
</member>
<member name="M:Lucene.Net.Index.TermInfosReader.GetPosition(Lucene.Net.Index.Term)">
<summary>Returns the position of a Term in the set or -1. </summary>
</member>
<member name="M:Lucene.Net.Index.TermInfosReader.Terms">
<summary>Returns an enumeration of all the Terms and TermInfos in the set. </summary>
</member>
<member name="M:Lucene.Net.Index.TermInfosReader.Terms(Lucene.Net.Index.Term)">
<summary>Returns an enumeration of terms starting at or after the named term. </summary>
</member>
<member name="T:Lucene.Net.Index.TermInfosReader.ThreadResources">
<summary> Per-thread resources managed by ThreadLocal</summary>
</member>
<member name="T:Lucene.Net.Index.TermInfosWriter">
<summary>This stores a monotonically increasing set of &lt;Term, TermInfo&gt; pairs in a
Directory. A TermInfos can be written once, in order.
</summary>
</member>
<member name="F:Lucene.Net.Index.TermInfosWriter.FORMAT">
<summary>The file format version, a negative number. </summary>
</member>
<member name="F:Lucene.Net.Index.TermInfosWriter.indexInterval">
<summary>Expert: The fraction of terms in the "dictionary" which should be stored
in RAM. Smaller values use more memory, but make searching slightly
faster, while larger values use less memory and make searching slightly
slower. Searching is typically not dominated by dictionary lookup, so
tweaking this is rarely useful.
</summary>
</member>
<member name="F:Lucene.Net.Index.TermInfosWriter.skipInterval">
<summary>Expert: The fraction of <see cref="T:Lucene.Net.Index.TermDocs"/> entries stored in skip tables,
used to accelerate <see cref="M:Lucene.Net.Index.TermDocs.SkipTo(System.Int32)"/>. Larger values result in
smaller indexes, greater acceleration, but fewer accelerable cases, while
smaller values result in bigger indexes, less acceleration and more
accelerable cases. More detailed experiments would be useful here.
</summary>
</member>
<member name="F:Lucene.Net.Index.TermInfosWriter.maxSkipLevels">
<summary>Expert: The maximum number of skip levels. Smaller values result in
slightly smaller indexes, but slower skipping in big posting lists.
</summary>
</member>
<member name="M:Lucene.Net.Index.TermInfosWriter.Add(System.Int32,System.Byte[],System.Int32,Lucene.Net.Index.TermInfo)">
<summary>Adds a new &lt;&lt;fieldNumber, termBytes&gt;, TermInfo&gt; pair to the set.
Term must be lexicographically greater than all previous Terms added.
TermInfo pointers must be positive and greater than all previous.
</summary>
</member>
<member name="M:Lucene.Net.Index.TermInfosWriter.Dispose">
<summary>Called to complete TermInfos creation. </summary>
</member>
<member name="T:Lucene.Net.Index.TermsHash">
<summary>This class implements <see cref="T:Lucene.Net.Index.InvertedDocConsumer"/>, which
is passed each token produced by the analyzer on each
field. It stores these tokens in a hash table, and
allocates separate byte streams per token. Consumers of
this class, eg <see cref="T:Lucene.Net.Index.FreqProxTermsWriter"/> and <see cref="T:Lucene.Net.Index.TermVectorsTermsWriter"/>
, write their own byte streams
under each term.
</summary>
</member>
<member name="M:Lucene.Net.Index.TermsHashPerField.SortPostings">
<summary>Collapse the hash table &amp; sort in-place. </summary>
</member>
<member name="M:Lucene.Net.Index.TermsHashPerField.ComparePostings(Lucene.Net.Index.RawPostingList,Lucene.Net.Index.RawPostingList)">
<summary>Compares term text for two Posting instances and
returns -1 if p1 &lt; p2; 1 if p1 &gt; p2; else 0.
</summary>
</member>
<member name="M:Lucene.Net.Index.TermsHashPerField.PostingEquals(System.Char[],System.Int32)">
<summary>Test whether the text for current RawPostingList p equals
current tokenText.
</summary>
</member>
<member name="M:Lucene.Net.Index.TermsHashPerField.RehashPostings(System.Int32)">
<summary>Called when the postings hash is too small (&gt; 50%
occupied) or too large (&lt; 20% occupied).
</summary>
</member>
<member name="T:Lucene.Net.Index.TermVectorEntry">
<summary> Convenience class for holding TermVector information.</summary>
</member>
<member name="T:Lucene.Net.Index.TermVectorEntryFreqSortedComparator">
<summary> Compares <see cref="T:Lucene.Net.Index.TermVectorEntry"/>s first by frequency and then by
the term (case-sensitive)
</summary>
</member>
<member name="T:Lucene.Net.Index.TermVectorOffsetInfo">
<summary> The TermVectorOffsetInfo class holds information pertaining to a Term in a <see cref="T:Lucene.Net.Index.TermPositionVector"/>'s
offset information. This offset information is the character offset as set during the Analysis phase (and thus may not be the actual offset in the
original content).
</summary>
</member>
<member name="F:Lucene.Net.Index.TermVectorOffsetInfo.EMPTY_OFFSET_INFO">
<summary> Convenience declaration when creating a <see cref="T:Lucene.Net.Index.TermPositionVector"/> that stores only position information.</summary>
</member>
<member name="P:Lucene.Net.Index.TermVectorOffsetInfo.EndOffset">
<summary> The accessor for the ending offset for the term</summary>
<value> The offset </value>
</member>
<member name="P:Lucene.Net.Index.TermVectorOffsetInfo.StartOffset">
<summary> The accessor for the starting offset of the term.
</summary>
<value> The offset </value>
</member>
<member name="M:Lucene.Net.Index.TermVectorsReader.RawDocs(System.Int32[],System.Int32[],System.Int32,System.Int32)">
<summary>Retrieve the length (in bytes) of the tvd and tvf
entries for the next numDocs starting with
startDocID. This is used for bulk copying when
merging segments, if the field numbers are
congruent. Once this returns, the tvf &amp; tvd streams
are positioned at the startDocID.
</summary>
</member>
<member name="M:Lucene.Net.Index.TermVectorsReader.Size">
<summary> </summary>
<returns> The number of documents in the reader
</returns>
</member>
<member name="M:Lucene.Net.Index.TermVectorsReader.Get(System.Int32,System.String)">
<summary> Retrieve the term vector for the given document and field</summary>
<param name="docNum">The document number to retrieve the vector for
</param>
<param name="field">The field within the document to retrieve
</param>
<returns> The TermFreqVector for the document and field or null if there is no termVector for this field.
</returns>
<throws> IOException if there is an error reading the term vector files </throws>
</member>
<member name="M:Lucene.Net.Index.TermVectorsReader.Get(System.Int32)">
<summary> Return all term vectors stored for this document, or null if they could not be read in.
</summary>
<param name="docNum">The document number to retrieve the vector for
</param>
<returns> All term frequency vectors
</returns>
<throws> IOException if there is an error reading the term vector files </throws>
</member>
<member name="M:Lucene.Net.Index.TermVectorsReader.ReadTermVector(System.String,System.Int64,Lucene.Net.Index.TermVectorMapper)">
<summary> </summary>
<param name="field">The field to read in
</param>
<param name="tvfPointer">The pointer within the tvf file where we should start reading
</param>
<param name="mapper">The mapper used to map the TermVector
</param>
<throws> IOException </throws>
</member>
<member name="T:Lucene.Net.Index.ParallelArrayTermVectorMapper">
<summary> Models the existing parallel array structure</summary>
</member>
<member name="M:Lucene.Net.Index.ParallelArrayTermVectorMapper.MaterializeVector">
<summary> Construct the vector</summary>
<returns> The <see cref="T:Lucene.Net.Index.ITermFreqVector"/> based on the mappings.
</returns>
</member>
<member name="M:Lucene.Net.Index.TermVectorsTermsWriter.Fill(System.Int32)">
<summary>Fills in no-term-vectors for all docs we haven't seen
since the last doc that had term vectors.
</summary>
</member>
<member name="M:Lucene.Net.Index.TermVectorsTermsWriterPerField.Finish">
<summary>Called once per field per document if term vectors
are enabled, to write the vectors to
RAMOutputStream, which is then quickly flushed to
the real term vectors files in the Directory.
</summary>
</member>
<member name="M:Lucene.Net.Index.TermVectorsWriter.AddAllDocVectors(Lucene.Net.Index.ITermFreqVector[])">
<summary> Add a complete document specified by all its term vectors. If the document has no
term vectors, add a value for tvx.
</summary>
<param name="vectors">
</param>
<throws> IOException </throws>
</member>
<member name="M:Lucene.Net.Index.TermVectorsWriter.AddRawDocuments(Lucene.Net.Index.TermVectorsReader,System.Int32[],System.Int32[],System.Int32)">
<summary> Do a bulk copy of numDocs documents from reader to our
streams. This is used to expedite merging, if the
field numbers are congruent.
</summary>
</member>
<member name="M:Lucene.Net.Index.TermVectorsWriter.Dispose">
<summary>Close all streams. </summary>
</member>
<member name="T:Lucene.Net.LucenePackage">
<summary>Lucene's package information, including version.</summary>
</member>
<member name="T:Lucene.Net.Messages.Message">
<summary> Message interface for lazy loading.
For Native Language Support (NLS), system of software internationalization.
</summary>
</member>
<member name="T:Lucene.Net.Messages.MessageImpl">
<summary> Default implementation of Message interface.
For Native Language Support (NLS), system of software internationalization.
</summary>
</member>
<member name="T:Lucene.Net.Messages.NLS">
<summary> MessageBundle classes extend this class to implement a bundle.
For Native Language Support (NLS), system of software internationalization.
This class is similar to the NLS class in eclipse.osgi.util.NLS:
initializeMessages() resets the values of all static strings and should
only be called by classes that extend NLS (see TestMessages.java for
reference); it validates all messages in a bundle at class load time,
and performs per-message validation at runtime (see NLSTest.java for a
usage reference).
</summary>
</member>
<member name="M:Lucene.Net.Messages.NLS.InitializeMessages``1(System.String)">
<summary> Initializes a given class with the message bundle keys. Should be called from
a class that extends NLS, in a static block, at class load time.
</summary>
<param name="bundleName">Property file that contains the message bundle
</param>
<param name="clazz">where constants will reside
</param>
</member>
<member name="M:Lucene.Net.Messages.NLS.LoadfieldValue``1(System.Reflection.FieldInfo,System.Boolean)">
<param name="field"></param>
<param name="isFieldAccessible"></param>
</member>
<member name="M:Lucene.Net.Messages.NLS.ValidateMessage``1(System.String)">
<param name="key">- Message Key
</param>
</member>
<member name="M:Lucene.Net.Messages.NLS.IPriviligedAction.Run">
<summary>
Performs the privileged action.
</summary>
<returns>A value that may represent the result of the action.</returns>
</member>
<member name="T:Lucene.Net.Messages.INLSException">
<summary> Interface that exceptions should implement to support lazy loading of messages.
For Native Language Support (NLS), system of software internationalization.
This Interface should be implemented by all exceptions that require
translation
</summary>
</member>
<member name="P:Lucene.Net.Messages.INLSException.MessageObject">
<value> an instance of a class that implements the Message interface </value>
</member>
<member name="T:Lucene.Net.QueryParsers.ICharStream">
<summary> This interface describes a character stream that maintains line and
column number positions of the characters. It also has the capability
to back up the stream to some extent. An implementation of this
interface is used in the TokenManager implementation generated by
JavaCCParser.
All the methods except backup can be implemented in any fashion. backup
needs to be implemented correctly for the correct operation of the lexer.
The rest of the methods are all used to get information like line number,
column number and the String that constitutes a token and are not used
by the lexer. Hence their implementation won't affect the generated lexer's
operation.
</summary>
</member>
<member name="M:Lucene.Net.QueryParsers.ICharStream.ReadChar">
<summary> Returns the next character from the selected input. The method
of selecting the input is the responsibility of the class
implementing this interface. Can throw any System.IO.IOException.
</summary>
</member>
<member name="M:Lucene.Net.QueryParsers.ICharStream.Backup(System.Int32)">
<summary> Backs up the input stream by 'amount' steps. The lexer calls this method if it
has already read some characters but could not use them to match a
(longer) token. They will be used again as the prefix of the next
token, and it is the implementation's responsibility to do this right.
</summary>
</member>
<member name="M:Lucene.Net.QueryParsers.ICharStream.BeginToken">
<summary> Returns the next character that marks the beginning of the next token.
All characters must remain in the buffer between two successive calls
to this method to implement backup correctly.
</summary>
</member>
<member name="M:Lucene.Net.QueryParsers.ICharStream.GetSuffix(System.Int32)">
<summary> Returns an array of characters that make up the suffix of length 'len' for
the currently matched token. This is used to build up the matched string
for use in actions in the case of MORE. A simple and inefficient
implementation of this is as follows:
{
String t = Image;
return t.Substring(t.Length - len).ToCharArray();
}
</summary>
</member>
<member name="M:Lucene.Net.QueryParsers.ICharStream.Done">
<summary> The lexer calls this function to indicate that it is done with the stream
and hence implementations can free any resources held by this class.
Again, the body of this function can be just empty and it will not
affect the lexer's operation.
</summary>
</member>
<member name="P:Lucene.Net.QueryParsers.ICharStream.Column">
<summary> Returns the column position of the character last read.</summary>
<deprecated>
</deprecated>
<seealso cref="P:Lucene.Net.QueryParsers.ICharStream.EndColumn">
</seealso>
</member>
<member name="P:Lucene.Net.QueryParsers.ICharStream.Line">
<summary> Returns the line number of the character last read.</summary>
<deprecated>
</deprecated>
<seealso cref="P:Lucene.Net.QueryParsers.ICharStream.EndLine">
</seealso>
</member>
<member name="P:Lucene.Net.QueryParsers.ICharStream.EndColumn">
<summary> Returns the column number of the last character for current token (being
matched after the last call to BeginToken).
</summary>
</member>
<member name="P:Lucene.Net.QueryParsers.ICharStream.EndLine">
<summary> Returns the line number of the last character for current token (being
matched after the last call to BeginToken).
</summary>
</member>
<member name="P:Lucene.Net.QueryParsers.ICharStream.BeginColumn">
<summary> Returns the column number of the first character for current token (being
matched after the last call to BeginToken).
</summary>
</member>
<member name="P:Lucene.Net.QueryParsers.ICharStream.BeginLine">
<summary> Returns the line number of the first character for current token (being
matched after the last call to BeginToken).
</summary>
</member>
<member name="P:Lucene.Net.QueryParsers.ICharStream.Image">
<summary> Returns a string made up of characters from the marked token beginning
to the current buffer position. Implementations have the choice of returning
anything that they want to. For example, for efficiency, one might decide
to just return null, which is a valid implementation.
</summary>
</member>
<member name="T:Lucene.Net.QueryParsers.FastCharStream">
<summary>An efficient implementation of JavaCC's CharStream interface. <p/>Note that
this does not do line-number counting, but instead keeps track of the
character position of the token in the input, as required by Lucene's <see cref="T:Lucene.Net.Analysis.Token"/>
API.
</summary>
</member>
<member name="M:Lucene.Net.QueryParsers.FastCharStream.#ctor(System.IO.TextReader)">
<summary>Constructs from a Reader. </summary>
</member>
<member name="T:Lucene.Net.QueryParsers.MultiFieldQueryParser">
<summary> A QueryParser which constructs queries to search multiple fields.
</summary>
<version> $Revision: 829231 $
</version>
</member>
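A hedged sketch of typical MultiFieldQueryParser use; the constructor shape (Version, string[], Analyzer) is assumed from the Lucene.Net 3.0.3 API:

```csharp
// Sketch only: assumes the Lucene.Net 3.0.3 constructor
// MultiFieldQueryParser(Version, string[], Analyzer); verify before use.
using Lucene.Net.Analysis.Standard;
using Lucene.Net.QueryParsers;
using Lucene.Net.Search;
using Version = Lucene.Net.Util.Version;

var analyzer = new StandardAnalyzer(Version.LUCENE_30);
var parser = new MultiFieldQueryParser(Version.LUCENE_30,
                                       new[] { "title", "body" }, analyzer);

// An unqualified term is expanded across all listed fields,
// roughly equivalent to: (title:lucene body:lucene)
Query q = parser.Parse("lucene");
```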
<member name="T:Lucene.Net.QueryParsers.QueryParser">
<summary> This class is generated by JavaCC. The most important method is
<see cref="M:Lucene.Net.QueryParsers.QueryParser.Parse(System.String)"/>.
The syntax for query strings is as follows:
A Query is a series of clauses.
A clause may be prefixed by:
<list type="bullet">
<item> a plus (<c>+</c>) or a minus (<c>-</c>) sign, indicating
that the clause is required or prohibited respectively; or</item>
<item> a term followed by a colon, indicating the field to be searched.
This enables one to construct queries which search multiple fields.</item>
</list>
A clause may be either:
<list type="bullet">
<item> a term, indicating all the documents that contain this term; or</item>
<item> a nested query, enclosed in parentheses. Note that this may be used
with a <c>+</c>/<c>-</c> prefix to require any of a set of
terms.</item>
</list>
Thus, in BNF, the query grammar is:
<code>
Query ::= ( Clause )*
Clause ::= ["+", "-"] [&lt;TERM&gt; ":"] ( &lt;TERM&gt; | "(" Query ")" )
</code>
<p/>
Examples of appropriately formatted queries can be found in the <a href="../../../../../../queryparsersyntax.html">query syntax
documentation</a>.
<p/>
<p/>
In <see cref="T:Lucene.Net.Search.TermRangeQuery"/>s, QueryParser tries to detect date values, e.g.
<tt>date:[6/1/2005 TO 6/4/2005]</tt> produces a range query that searches
for "date" fields between 2005-06-01 and 2005-06-04. Note that the format
of the accepted input depends on the <see cref="P:Lucene.Net.QueryParsers.QueryParser.Locale"/>.
By default a date is converted into a search term using the deprecated
<see cref="T:Lucene.Net.Documents.DateField"/> for compatibility reasons.
To use the new <see cref="T:Lucene.Net.Documents.DateTools"/> to convert dates, a
<see cref="T:Lucene.Net.Documents.DateTools.Resolution"/> has to be set.
<p/>
<p/>
The date resolution that shall be used for RangeQueries can be set
using <see cref="M:Lucene.Net.QueryParsers.QueryParser.SetDateResolution(Lucene.Net.Documents.DateTools.Resolution)"/>
or <see cref="M:Lucene.Net.QueryParsers.QueryParser.SetDateResolution(System.String,Lucene.Net.Documents.DateTools.Resolution)"/>. The former
sets the default date resolution for all fields, whereas the latter can
be used to set field specific date resolutions. Field specific date
resolutions take, if set, precedence over the default date resolution.
<p/>
<p/>
If you use neither <see cref="T:Lucene.Net.Documents.DateField"/> nor <see cref="T:Lucene.Net.Documents.DateTools"/> in your
index, you can create your own
query parser that inherits from QueryParser and overrides
<see cref="M:Lucene.Net.QueryParsers.QueryParser.GetRangeQuery(System.String,System.String,System.String,System.Boolean)"/> to
use a different method for date conversion.
<p/>
<p/>Note that QueryParser is <em>not</em> thread-safe.<p/>
<p/><b>NOTE</b>: there is a new QueryParser in contrib, which matches
the same syntax as this class, but is more modular,
enabling substantial customization to how a query is created.
<b>NOTE</b>: You must specify the required <see cref="T:Lucene.Net.Util.Version"/> compatibility when
creating QueryParser:
<list type="bullet">
<item>As of 2.9, <see cref="P:Lucene.Net.QueryParsers.QueryParser.EnablePositionIncrements"/> is true by default.</item>
</list>
</summary>
</member>
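The clause syntax described above can be exercised with a short sketch; the constructor and Parse signatures are assumed from the Lucene.Net 3.0.3 API and should be verified against the shipped assembly:

```csharp
// Sketch only: assumes Lucene.Net 3.0.3 signatures
// QueryParser(Version, string, Analyzer) and Parse(string).
using Lucene.Net.Analysis.Standard;
using Lucene.Net.QueryParsers;
using Lucene.Net.Search;
using Version = Lucene.Net.Util.Version;

var analyzer = new StandardAnalyzer(Version.LUCENE_30);
var parser = new QueryParser(Version.LUCENE_30, "body", analyzer);

// "+" requires a clause, "-" prohibits one, "field:" targets another
// field, and parentheses nest a sub-query, per the grammar above.
Query q = parser.Parse("+lucene -java title:(analysis OR indexing)");
```

Since QueryParser is not thread-safe, a parser instance like this should not be shared across threads without external synchronization.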
<member name="T:Lucene.Net.QueryParsers.QueryParserConstants">
<summary> Token literal values and constants.
Generated by org.javacc.parser.OtherFilesGen#start()
</summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.EndOfFileToken">
<summary>End of File. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.NumCharToken">
<summary>RegularExpression Id. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.EscapedCharToken">
<summary>RegularExpression Id. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.TermStartCharToken">
<summary>RegularExpression Id. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.TermCharToken">
<summary>RegularExpression Id. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.WhitespaceToken">
<summary>RegularExpression Id. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.QuotedCharToken">
<summary>RegularExpression Id. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.AndToken">
<summary>RegularExpression Id. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.OrToken">
<summary>RegularExpression Id. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.NotToken">
<summary>RegularExpression Id. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.PlusToken">
<summary>RegularExpression Id. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.MinusToken">
<summary>RegularExpression Id. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.LParanToken">
<summary>RegularExpression Id. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.RParenToken">
<summary>RegularExpression Id. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.ColonToken">
<summary>RegularExpression Id. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.StarToken">
<summary>RegularExpression Id. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.CaratToken">
<summary>RegularExpression Id. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.QuotedToken">
<summary>RegularExpression Id. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.TermToken">
<summary>RegularExpression Id. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.FuzzySlopToken">
<summary>RegularExpression Id. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.PrefixTermToken">
<summary>RegularExpression Id. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.WildTermToken">
<summary>RegularExpression Id. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.RangeInStartToken">
<summary>RegularExpression Id. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.RangeExStartToken">
<summary>RegularExpression Id. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.NumberToken">
<summary>RegularExpression Id. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.RangeInToToken">
<summary>RegularExpression Id. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.RangeInEndToken">
<summary>RegularExpression Id. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.RangeInQuotedToken">
<summary>RegularExpression Id. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.RangeInGoopToken">
<summary>RegularExpression Id. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.RangeExToToken">
<summary>RegularExpression Id. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.RangeExEndToken">
<summary>RegularExpression Id. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.RangeExQuotedToken">
<summary>RegularExpression Id. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.RangeExGoopToken">
<summary>RegularExpression Id. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.BoostToken">
<summary>Lexical state. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.RangeExToken">
<summary>Lexical state. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.RangeInToken">
<summary>Lexical state. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.DefaultToken">
<summary>Lexical state. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserConstants.tokenImage">
<summary>Literal token values. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParser.AND_OPERATOR">
<summary>Alternative form of QueryParser.Operator.AND </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParser.OR_OPERATOR">
<summary>Alternative form of QueryParser.Operator.OR </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParser.operator_Renamed">
<summary>The actual operator that the parser uses to combine query terms. </summary>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.Parse(System.String)">
<summary>Parses a query string, returning a {@link Lucene.Net.Search.Query}.</summary>
<param name="query"> the query string to be parsed.
</param>
<throws> ParseException if the parsing fails </throws>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.SetDateResolution(Lucene.Net.Documents.DateTools.Resolution)">
<summary> Sets the default date resolution used by RangeQueries for fields for which no
specific date resolution has been set. Field-specific resolutions can be set
with {@link #SetDateResolution(String, DateTools.Resolution)}.
</summary>
<param name="dateResolution">the default date resolution to set
</param>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.SetDateResolution(System.String,Lucene.Net.Documents.DateTools.Resolution)">
<summary> Sets the date resolution used by RangeQueries for a specific field.
</summary>
<param name="fieldName">field for which the date resolution is to be set
</param>
<param name="dateResolution">date resolution to set
</param>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.getDateResolution(System.String)">
<summary> Returns the date resolution that is used by RangeQueries for the given field.
Returns null, if no default or field specific date resolution has been set
for the given field.
</summary>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.GetFieldQuery(System.String,System.String)">
<exception cref="T:Lucene.Net.QueryParsers.ParseException">may be thrown in an overridden method to disallow this kind of query
</exception>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.GetFieldQuery(System.String,System.String,System.Int32)">
<summary> Base implementation delegates to {@link #GetFieldQuery(String,String)}.
This method may be overridden, for example, to return
a SpanNearQuery instead of a PhraseQuery.
</summary>
<exception cref="T:Lucene.Net.QueryParsers.ParseException">may be thrown in an overridden method to disallow this kind of query
</exception>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.GetRangeQuery(System.String,System.String,System.String,System.Boolean)">
<exception cref="T:Lucene.Net.QueryParsers.ParseException">may be thrown in an overridden method to disallow this kind of query
</exception>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.NewBooleanQuery(System.Boolean)">
<summary> Builds a new BooleanQuery instance</summary>
<param name="disableCoord">disable coord
</param>
<returns> new BooleanQuery instance
</returns>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.NewBooleanClause(Lucene.Net.Search.Query,Lucene.Net.Search.Occur)">
<summary> Builds a new BooleanClause instance</summary>
<param name="q">sub query
</param>
<param name="occur">how this clause should occur when matching documents
</param>
<returns> new BooleanClause instance
</returns>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.NewTermQuery(Lucene.Net.Index.Term)">
<summary> Builds a new TermQuery instance</summary>
<param name="term">term
</param>
<returns> new TermQuery instance
</returns>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.NewPhraseQuery">
<summary> Builds a new PhraseQuery instance</summary>
<returns> new PhraseQuery instance
</returns>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.NewMultiPhraseQuery">
<summary> Builds a new MultiPhraseQuery instance</summary>
<returns> new MultiPhraseQuery instance
</returns>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.NewPrefixQuery(Lucene.Net.Index.Term)">
<summary> Builds a new PrefixQuery instance</summary>
<param name="prefix">Prefix term
</param>
<returns> new PrefixQuery instance
</returns>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.NewFuzzyQuery(Lucene.Net.Index.Term,System.Single,System.Int32)">
<summary> Builds a new FuzzyQuery instance</summary>
<param name="term">Term
</param>
<param name="minimumSimilarity">minimum similarity
</param>
<param name="prefixLength">prefix length
</param>
<returns> new FuzzyQuery instance
</returns>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.NewRangeQuery(System.String,System.String,System.String,System.Boolean)">
<summary> Builds a new TermRangeQuery instance</summary>
<param name="field">Field
</param>
<param name="part1">min
</param>
<param name="part2">max
</param>
<param name="inclusive">true if range is inclusive
</param>
<returns> new TermRangeQuery instance
</returns>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.NewMatchAllDocsQuery">
<summary> Builds a new MatchAllDocsQuery instance</summary>
<returns> new MatchAllDocsQuery instance
</returns>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.NewWildcardQuery(Lucene.Net.Index.Term)">
<summary> Builds a new WildcardQuery instance</summary>
<param name="t">wildcard term
</param>
<returns> new WildcardQuery instance
</returns>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.GetBooleanQuery(System.Collections.Generic.IList{Lucene.Net.Search.BooleanClause})">
<summary> Factory method for generating query, given a set of clauses.
By default creates a boolean query composed of clauses passed in.
Can be overridden by extending classes to modify the query being
returned.
</summary>
<param name="clauses">List that contains {@link BooleanClause} instances
to join.
</param>
<returns> Resulting {@link Query} object.
</returns>
<exception cref="T:Lucene.Net.QueryParsers.ParseException">may be thrown in an overridden method to disallow this kind of query
</exception>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.GetBooleanQuery(System.Collections.Generic.IList{Lucene.Net.Search.BooleanClause},System.Boolean)">
<summary> Factory method for generating query, given a set of clauses.
By default creates a boolean query composed of clauses passed in.
Can be overridden by extending classes to modify the query being
returned.
</summary>
<param name="clauses">List that contains {@link BooleanClause} instances
to join.
</param>
<param name="disableCoord">true if coord scoring should be disabled.
</param>
<returns> Resulting {@link Query} object.
</returns>
<exception cref="T:Lucene.Net.QueryParsers.ParseException">may be thrown in an overridden method to disallow this kind of query
</exception>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.GetWildcardQuery(System.String,System.String)">
<summary> Factory method for generating a query. Called when parser
parses an input term token that contains one or more wildcard
characters (? and *), but is not a prefix term token (one
that has just a single * character at the end)
<p/>
Depending on settings, the term may be lower-cased
automatically. It will not go through the default Analyzer,
however, since normal Analyzers are unlikely to work properly
with wildcard templates.
<p/>
Can be overridden by extending classes, to provide custom handling for
wildcard queries, which may be necessary due to missing analyzer calls.
</summary>
<param name="field">Name of the field query will use.
</param>
<param name="termStr">Term token that contains one or more wild card
characters (? or *), but is not simple prefix term
</param>
<returns> Resulting {@link Query} built for the term
</returns>
<exception cref="T:Lucene.Net.QueryParsers.ParseException">may be thrown in an overridden method to disallow this kind of query
</exception>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.GetPrefixQuery(System.String,System.String)">
<summary> Factory method for generating a query (similar to
{@link #getWildcardQuery}). Called when parser parses an input term
token that uses prefix notation; that is, contains a single '*' wildcard
character as its last character. Since this is a special case
of generic wildcard term, and such a query can be optimized easily,
this usually results in a different query object.
<p/>
Depending on settings, a prefix term may be lower-cased
automatically. It will not go through the default Analyzer,
however, since normal Analyzers are unlikely to work properly
with wildcard templates.
<p/>
Can be overridden by extending classes, to provide custom handling for
wild card queries, which may be necessary due to missing analyzer calls.
</summary>
<param name="field">Name of the field query will use.
</param>
<param name="termStr">Term token to use for building term for the query
(<b>without</b> trailing '*' character!)
</param>
<returns> Resulting {@link Query} built for the term
</returns>
<exception cref="T:Lucene.Net.QueryParsers.ParseException">may be thrown in an overridden method to disallow this kind of query
</exception>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.GetFuzzyQuery(System.String,System.String,System.Single)">
<summary> Factory method for generating a query (similar to
{@link #getWildcardQuery}). Called when parser parses
an input term token that has the fuzzy suffix (~) appended.
</summary>
<param name="field">Name of the field query will use.
</param>
<param name="termStr">Term token to use for building term for the query
</param>
<returns> Resulting {@link Query} built for the term
</returns>
<exception cref="T:Lucene.Net.QueryParsers.ParseException">may be thrown in an overridden method to disallow this kind of query
</exception>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.DiscardEscapeChar(System.String)">
<summary> Returns a String where the escape char has been
removed, or kept only once if there was a double escape.
Supports escaped unicode characters, e.g. translates
<c>\\u0041</c> to <c>A</c>.
</summary>
</member>
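The unescaping contract above can be sketched as follows (an illustrative Python model of the documented behavior, not the actual Lucene.Net implementation):

```python
def discard_escape_char(s):
    # Illustrative sketch: drop a single escape backslash, keep a
    # doubled escape once, and translate \uXXXX to its character.
    out = []
    i = 0
    n = len(s)
    while i != n:
        ch = s[i]
        if ch == "\\" and i + 1 != n:
            nxt = s[i + 1]
            if nxt == "u" and not (i + 6 > n):
                out.append(chr(int(s[i + 2:i + 6], 16)))
                i += 6
                continue
            out.append(nxt)  # a doubled escape is kept only once
            i += 2
            continue
        out.append(ch)
        i += 1
    return "".join(out)
```

For example, the six-character input `\u0041` yields `A`, and a doubled backslash is reduced to a single one.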
<member name="M:Lucene.Net.QueryParsers.QueryParser.HexToInt(System.Char)">
<summary>Returns the numeric value of the hexadecimal character </summary>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.Escape(System.String)">
<summary> Returns a String where those characters that QueryParser
expects to be escaped are escaped by a preceding <c>\</c>.
</summary>
</member>
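As an illustration of the contract above, a minimal Python sketch (the reserved-character set here is an assumption drawn from the documented query syntax, not read from the Lucene.Net source):

```python
# chr(38) is the ampersand, written via chr() only to keep this
# XML file well-formed.
SPECIAL = set('\\+-!():^[]"{}~*?|' + chr(38))

def escape(s):
    # Prefix every reserved character with a backslash.
    return "".join("\\" + ch if ch in SPECIAL else ch for ch in s)
```

For example, `escape("a+b")` yields `a\+b`, while plain terms pass through unchanged.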
<member name="M:Lucene.Net.QueryParsers.QueryParser.Main(System.String[])">
<summary> Command line tool to test QueryParser, using {@link Lucene.Net.Analysis.SimpleAnalyzer}.
Usage:<br/>
<c>Lucene.Net.QueryParsers.QueryParser &lt;input&gt;</c>
</summary>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.#ctor(Lucene.Net.QueryParsers.ICharStream)">
<summary>Constructor with user supplied CharStream. </summary>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.ReInit(Lucene.Net.QueryParsers.ICharStream)">
<summary>Reinitialise. </summary>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.#ctor(Lucene.Net.QueryParsers.QueryParserTokenManager)">
<summary>Constructor with generated Token Manager. </summary>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.ReInit(Lucene.Net.QueryParsers.QueryParserTokenManager)">
<summary>Reinitialise. </summary>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.GetNextToken">
<summary>Get the next Token. </summary>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.getToken(System.Int32)">
<summary>Get the specific Token. </summary>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.GenerateParseException">
<summary>Generate ParseException. </summary>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.Enable_tracing">
<summary>Enable tracing. </summary>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParser.Disable_tracing">
<summary>Disable tracing. </summary>
</member>
<member name="P:Lucene.Net.QueryParsers.QueryParser.Analyzer">
<value> Returns the analyzer. </value>
</member>
<member name="P:Lucene.Net.QueryParsers.QueryParser.Field">
<value> Returns the field. </value>
</member>
<member name="P:Lucene.Net.QueryParsers.QueryParser.FuzzyMinSim">
<summary>
Gets or sets the minimal similarity for fuzzy queries.
Default is 0.5f.
</summary>
</member>
<member name="P:Lucene.Net.QueryParsers.QueryParser.FuzzyPrefixLength">
<summary> Gets or sets the prefix length for fuzzy queries. </summary>
<value> Returns the fuzzyPrefixLength. </value>
</member>
<member name="P:Lucene.Net.QueryParsers.QueryParser.PhraseSlop">
<summary> Gets or sets the default slop for phrases. If zero, then exact phrase matches
are required. Default value is zero.
</summary>
</member>
<member name="P:Lucene.Net.QueryParsers.QueryParser.AllowLeadingWildcard">
<summary> Set to <c>true</c> to allow leading wildcard characters.
<p/>
When set, <c>*</c> or <c>?</c> are allowed as
the first character of a PrefixQuery and WildcardQuery.
Note that this can produce very slow
queries on big indexes.
<p/>
Default: false.
</summary>
</member>
<member name="P:Lucene.Net.QueryParsers.QueryParser.EnablePositionIncrements">
<summary>Set to <c>true</c> to enable position increments in result query.
<p/>
When set, result phrase and multi-phrase queries will
be aware of position increments.
Useful when e.g. a StopFilter increases the position increment of
the token that follows an omitted token.
<p/>
Default: false.
</summary>
</member>
<member name="P:Lucene.Net.QueryParsers.QueryParser.DefaultOperator">
<summary> Gets or sets the boolean operator of the QueryParser.
In default mode (<c>OR_OPERATOR</c>) terms without any modifiers
are considered optional: for example <c>capital of Hungary</c> is equal to
<c>capital OR of OR Hungary</c>.<br/>
In <c>AND_OPERATOR</c> mode terms are considered to be in conjunction: the
above mentioned query is parsed as <c>capital AND of AND Hungary</c>
</summary>
</member>
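The two modes can be illustrated with a small Python sketch of how unmodified terms are combined (illustrative only; the real parser builds a BooleanQuery rather than a string):

```python
def combine(terms, operator="OR"):
    # Join bare terms with the default operator, as described above.
    return (" " + operator + " ").join(terms)
```

`combine(["capital", "of", "Hungary"])` gives `capital OR of OR Hungary`; passing `operator="AND"` gives the conjunctive form.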
<member name="P:Lucene.Net.QueryParsers.QueryParser.LowercaseExpandedTerms">
<summary> Whether terms of wildcard, prefix, fuzzy and range queries are to be automatically
lower-cased or not. Default is <c>true</c>.
</summary>
</member>
<member name="P:Lucene.Net.QueryParsers.QueryParser.MultiTermRewriteMethod">
<summary> By default QueryParser uses <see cref="F:Lucene.Net.Search.MultiTermQuery.CONSTANT_SCORE_AUTO_REWRITE_DEFAULT"/>
when creating a PrefixQuery, WildcardQuery or RangeQuery. This implementation is generally preferable because it
(a) runs faster, (b) does not let the scarcity of terms unduly influence the score, and
(c) avoids any "TooManyBooleanClauses" exception.
However, if your application really needs to use the
old-fashioned BooleanQuery expansion rewriting and the above
points are not relevant, then use this property to change
the rewrite method.
</summary>
</member>
<member name="P:Lucene.Net.QueryParsers.QueryParser.Locale">
<summary>Gets or sets locale used by date range parsing.</summary>
</member>
<member name="P:Lucene.Net.QueryParsers.QueryParser.RangeCollator">
<summary> Gets or sets the collator used to determine index term inclusion in ranges
for RangeQuerys.
<p/>
<strong>WARNING:</strong> Setting the rangeCollator to a non-null
collator using this method will cause every single index Term in the
Field referenced by lowerTerm and/or upperTerm to be examined.
Depending on the number of index Terms in this Field, the operation could
be very slow.
</summary>
<value> the collator to use when constructing RangeQuerys </value>
</member>
<member name="M:Lucene.Net.QueryParsers.MultiFieldQueryParser.#ctor(Lucene.Net.Util.Version,System.String[],Lucene.Net.Analysis.Analyzer,System.Collections.Generic.IDictionary{System.String,System.Single})">
<summary> Creates a MultiFieldQueryParser. Allows passing of a map with term to
Boost, and the boost to apply to each term.
<p/>
It will, when parse(String query) is called, construct a query like this
(assuming the query consists of two terms and you specify the two fields
<c>title</c> and <c>body</c>):
<p/>
<code>
(title:term1 body:term1) (title:term2 body:term2)
</code>
<p/>
When setDefaultOperator(AND_OPERATOR) is set, the result will be:
<p/>
<code>
+(title:term1 body:term1) +(title:term2 body:term2)
</code>
<p/>
When you pass per-field boosts (e.g. title=>5, body=>10), the result will be:
<p/>
<code>
+(title:term1^5.0 body:term1^10.0) +(title:term2^5.0 body:term2^10.0)
</code>
<p/>
In other words, all the query's terms must appear, but it doesn't matter
in what fields they appear.
<p/>
</summary>
</member>
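The query shape described above can be modeled with a short Python sketch (illustrative: the real constructor builds Query objects, not strings, and the helper name is hypothetical):

```python
def multi_field_query(terms, fields, boosts=None, required=False):
    # Build one clause per term, expanded across every field; apply a
    # caret boost where the boosts map supplies one, and a leading "+"
    # when the default operator is AND (required=True).
    boosts = boosts or {}
    clauses = []
    for term in terms:
        parts = []
        for field in fields:
            part = field + ":" + term
            if field in boosts:
                part += "^" + str(boosts[field])
            parts.append(part)
        prefix = "+" if required else ""
        clauses.append(prefix + "(" + " ".join(parts) + ")")
    return " ".join(clauses)
```

With `terms=["term1", "term2"]` and `fields=["title", "body"]` this reproduces the expansions shown above.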
<member name="M:Lucene.Net.QueryParsers.MultiFieldQueryParser.#ctor(Lucene.Net.Util.Version,System.String[],Lucene.Net.Analysis.Analyzer)">
<summary> Creates a MultiFieldQueryParser.
<p/>
It will, when parse(String query) is called, construct a query like this
(assuming the query consists of two terms and you specify the two fields
<c>title</c> and <c>body</c>):
<p/>
<code>
(title:term1 body:term1) (title:term2 body:term2)
</code>
<p/>
When setDefaultOperator(AND_OPERATOR) is set, the result will be:
<p/>
<code>
+(title:term1 body:term1) +(title:term2 body:term2)
</code>
<p/>
In other words, all the query's terms must appear, but it doesn't matter
in what fields they appear.
<p/>
</summary>
</member>
<member name="M:Lucene.Net.QueryParsers.MultiFieldQueryParser.Parse(Lucene.Net.Util.Version,System.String[],System.String[],Lucene.Net.Analysis.Analyzer)">
<summary> Parses a query which searches on the fields specified.
<p/>
If x fields are specified, this effectively constructs:
<code>
(field1:query1) (field2:query2) (field3:query3)...(fieldx:queryx)
</code>
</summary>
<param name="matchVersion">Lucene version to match; this is passed through to
QueryParser.
</param>
<param name="queries">Queries strings to parse
</param>
<param name="fields">Fields to search on
</param>
<param name="analyzer">Analyzer to use
</param>
<throws> ParseException </throws>
<summary> if query parsing fails
</summary>
<throws> IllegalArgumentException </throws>
<summary> if the length of the queries array differs from the length of
the fields array
</summary>
</member>
<member name="M:Lucene.Net.QueryParsers.MultiFieldQueryParser.Parse(Lucene.Net.Util.Version,System.String,System.String[],Lucene.Net.Search.Occur[],Lucene.Net.Analysis.Analyzer)">
<summary> Parses a query, searching on the fields specified. Use this if you need
to specify certain fields as required, and others as prohibited.
<p/>
Usage:
<code>
String[] fields = {"filename", "contents", "description"};
BooleanClause.Occur[] flags = {BooleanClause.Occur.SHOULD,
BooleanClause.Occur.MUST,
BooleanClause.Occur.MUST_NOT};
MultiFieldQueryParser.parse("query", fields, flags, analyzer);
</code>
<p/>
The code above would construct a query:
<code>
(filename:query) +(contents:query) -(description:query)
</code>
</summary>
<param name="matchVersion">Lucene version to match; this is passed through to
QueryParser.
</param>
<param name="query">Query string to parse
</param>
<param name="fields">Fields to search on
</param>
<param name="flags">Flags describing the fields
</param>
<param name="analyzer">Analyzer to use
</param>
<throws> ParseException </throws>
<summary> if query parsing fails
</summary>
<throws> IllegalArgumentException </throws>
<summary> if the length of the fields array differs from the length of
the flags array
</summary>
</member>
<member name="M:Lucene.Net.QueryParsers.MultiFieldQueryParser.Parse(Lucene.Net.Util.Version,System.String[],System.String[],Lucene.Net.Search.Occur[],Lucene.Net.Analysis.Analyzer)">
<summary> Parses a query, searching on the fields specified. Use this if you need
to specify certain fields as required, and others as prohibited.
<p/>
Usage:
<code>
String[] query = {"query1", "query2", "query3"};
String[] fields = {"filename", "contents", "description"};
BooleanClause.Occur[] flags = {BooleanClause.Occur.SHOULD,
BooleanClause.Occur.MUST,
BooleanClause.Occur.MUST_NOT};
MultiFieldQueryParser.parse(query, fields, flags, analyzer);
</code>
<p/>
The code above would construct a query:
<code>
(filename:query1) +(contents:query2) -(description:query3)
</code>
</summary>
<param name="matchVersion">Lucene version to match; this is passed through to
QueryParser.
</param>
<param name="queries">Queries string to parse
</param>
<param name="fields">Fields to search on
</param>
<param name="flags">Flags describing the fields
</param>
<param name="analyzer">Analyzer to use
</param>
<throws> ParseException </throws>
<summary> if query parsing fails
</summary>
<throws> IllegalArgumentException </throws>
<summary> if the length of the queries, fields, and flags array differ
</summary>
</member>
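The flag handling above can be modeled in a few lines of Python (illustrative sketch only; the real overload returns a Query and throws IllegalArgumentException on mismatched lengths):

```python
PREFIX = {"SHOULD": "", "MUST": "+", "MUST_NOT": "-"}

def parse_with_flags(queries, fields, flags):
    # One clause per (query, field) pair, prefixed per its Occur flag.
    if not (len(queries) == len(fields) == len(flags)):
        raise ValueError("queries, fields, and flags must have equal length")
    return " ".join(PREFIX[f] + "(" + fld + ":" + q + ")"
                    for q, fld, f in zip(queries, fields, flags))
```

For the arrays in the example above this produces `(filename:query1) +(contents:query2) -(description:query3)`.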
<member name="T:Lucene.Net.QueryParsers.ParseException">
<summary> This exception is thrown when parse errors are encountered.
You can explicitly create objects of this exception type by
calling the method generateParseException in the generated
parser.
You can modify this class to customize your error reporting
mechanisms so long as you retain the public fields.
</summary>
</member>
<member name="M:Lucene.Net.QueryParsers.ParseException.#ctor(Lucene.Net.QueryParsers.Token,System.Int32[][],System.String[])">
<summary> This constructor is used by the method "generateParseException"
in the generated parser. Calling this constructor generates
a new object of this type with the fields "currentToken",
"expectedTokenSequences", and "tokenImage" set. The boolean
flag "specialConstructor" is also set to true to indicate that
this constructor was used to create this object.
This constructor calls its super class with the empty string
to force the "toString" method of parent class "Throwable" to
print the error message in the form:
ParseException: &lt;result of getMessage&gt;
</summary>
</member>
<member name="M:Lucene.Net.QueryParsers.ParseException.#ctor">
<summary> The following constructors are for use by you for whatever
purpose you can think of. Constructing the exception in this
manner makes the exception behave in the normal way - i.e., as
documented in the class "Throwable". The fields "errorToken",
"expectedTokenSequences", and "tokenImage" do not contain
relevant information. The JavaCC generated code does not use
these constructors.
</summary>
</member>
<member name="M:Lucene.Net.QueryParsers.ParseException.#ctor(System.String)">
<summary>Constructor with message. </summary>
</member>
<member name="M:Lucene.Net.QueryParsers.ParseException.#ctor(System.String,System.Exception)">
<summary>Constructor with message. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.ParseException.specialConstructor">
<summary> This variable determines which constructor was used to create
this object and thereby affects the semantics of the
"getMessage" method (see below).
</summary>
</member>
<member name="F:Lucene.Net.QueryParsers.ParseException.currentToken">
<summary> This is the last token that has been consumed successfully. If
this object has been created due to a parse error, the token
following this token will (therefore) be the first error token.
</summary>
</member>
<member name="F:Lucene.Net.QueryParsers.ParseException.expectedTokenSequences">
<summary> Each entry in this array is an array of integers. Each array
of integers represents a sequence of tokens (by their ordinal
values) that is expected at this point of the parse.
</summary>
</member>
<member name="F:Lucene.Net.QueryParsers.ParseException.tokenImage">
<summary> This is a reference to the "tokenImage" array of the generated
parser within which the parse error occurred. This array is
defined in the generated ...Constants interface.
</summary>
</member>
<member name="F:Lucene.Net.QueryParsers.ParseException.eol">
<summary> The end of line string for this machine.</summary>
</member>
<member name="M:Lucene.Net.QueryParsers.ParseException.Add_escapes(System.String)">
<summary> Used to convert raw characters to their escaped versions
when the raw versions cannot be used as part of an ASCII
string literal.
</summary>
</member>
<member name="P:Lucene.Net.QueryParsers.ParseException.Message">
<summary> This method has the standard behavior when this object has been
created using the standard constructors. Otherwise, it uses
"currentToken" and "expectedTokenSequences" to generate a parse
error message and returns it. If this object has been created
due to a parse error, and you do not catch it (it gets thrown
from the parser), then this method is called during the printing
of the final stack trace, and hence the correct error message
gets displayed.
</summary>
</member>
<member name="T:Lucene.Net.QueryParsers.QueryParserTokenManager">
<summary>Token Manager. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserTokenManager.debugStream">
<summary>Debug output. </summary>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParserTokenManager.SetDebugStream(System.IO.StreamWriter)">
<summary>Set debug output. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserTokenManager.jjstrLiteralImages">
<summary>Token literal values. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserTokenManager.lexStateNames">
<summary>Lexer state names. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.QueryParserTokenManager.jjnewLexState">
<summary>Lex State array. </summary>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParserTokenManager.#ctor(Lucene.Net.QueryParsers.ICharStream)">
<summary>Constructor. </summary>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParserTokenManager.#ctor(Lucene.Net.QueryParsers.ICharStream,System.Int32)">
<summary>Constructor. </summary>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParserTokenManager.ReInit(Lucene.Net.QueryParsers.ICharStream)">
<summary>Reinitialise parser. </summary>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParserTokenManager.ReInit(Lucene.Net.QueryParsers.ICharStream,System.Int32)">
<summary>Reinitialise parser. </summary>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParserTokenManager.SwitchTo(System.Int32)">
<summary>Switch to specified lex state. </summary>
</member>
<member name="M:Lucene.Net.QueryParsers.QueryParserTokenManager.GetNextToken">
<summary>Get the next Token. </summary>
</member>
<member name="T:Lucene.Net.QueryParsers.Token">
<summary> Describes the input token stream.</summary>
</member>
<member name="F:Lucene.Net.QueryParsers.Token.kind">
<summary> An integer that describes the kind of this token. This numbering
system is determined by JavaCCParser, and a table of these numbers is
stored in the file ...Constants.java.
</summary>
</member>
<member name="F:Lucene.Net.QueryParsers.Token.beginLine">
<summary>The line number of the first character of this Token. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.Token.beginColumn">
<summary>The column number of the first character of this Token. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.Token.endLine">
<summary>The line number of the last character of this Token. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.Token.endColumn">
<summary>The column number of the last character of this Token. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.Token.image">
<summary> The string image of the token.</summary>
</member>
<member name="F:Lucene.Net.QueryParsers.Token.next">
<summary> A reference to the next regular (non-special) token from the input
stream. If this is the last token from the input stream, or if the
token manager has not read tokens beyond this one, this field is
set to null. This is true only if this token is also a regular
token. Otherwise, see below for a description of the contents of
this field.
</summary>
</member>
<member name="F:Lucene.Net.QueryParsers.Token.specialToken">
<summary> This field is used to access special tokens that occur prior to this
token, but after the immediately preceding regular (non-special) token.
If there are no such special tokens, this field is set to null.
When there is more than one such special token, this field refers
to the last of these special tokens, which in turn refers to the next
previous special token through its specialToken field, and so on
until the first special token (whose specialToken field is null).
The next fields of special tokens refer to other special tokens that
immediately follow it (without an intervening regular token). If there
is no such token, this field is null.
</summary>
</member>
<member name="M:Lucene.Net.QueryParsers.Token.#ctor">
<summary> No-argument constructor</summary>
</member>
<member name="M:Lucene.Net.QueryParsers.Token.#ctor(System.Int32)">
<summary> Constructs a new token for the specified Image.</summary>
</member>
<member name="M:Lucene.Net.QueryParsers.Token.#ctor(System.Int32,System.String)">
<summary> Constructs a new token for the specified Image and Kind.</summary>
</member>
<member name="M:Lucene.Net.QueryParsers.Token.ToString">
<summary> Returns the image.</summary>
</member>
<member name="M:Lucene.Net.QueryParsers.Token.NewToken(System.Int32,System.String)">
<summary> Returns a new Token object, by default. However, if you want, you
can create and return subclass objects based on the value of ofKind.
Simply add the cases to the switch for all those special cases.
For example, if you have a subclass of Token called IDToken that
you want to create if ofKind is ID, simply add something like :
case MyParserConstants.ID : return new IDToken(ofKind, image);
to the following switch statement. Then you can cast matchedToken
variable to the appropriate type and use it in your lexical actions.
</summary>
</member>
<member name="P:Lucene.Net.QueryParsers.Token.Value">
<summary> An optional attribute value of the Token.
Tokens which are not used as syntactic sugar will often contain
meaningful values that will be used later on by the compiler or
interpreter. This attribute value is often different from the image.
Any subclass of Token that actually wants to return a non-null value can
override this method as appropriate.
</summary>
</member>
<member name="T:Lucene.Net.QueryParsers.TokenMgrError">
<summary>Token Manager Error. </summary>
</member>
<member name="F:Lucene.Net.QueryParsers.TokenMgrError.LEXICAL_ERROR">
<summary> Lexical error occurred.</summary>
</member>
<member name="F:Lucene.Net.QueryParsers.TokenMgrError.STATIC_LEXER_ERROR">
<summary> An attempt was made to create a second instance of a static token manager.</summary>
</member>
<member name="F:Lucene.Net.QueryParsers.TokenMgrError.INVALID_LEXICAL_STATE">
<summary> Tried to change to an invalid lexical state.</summary>
</member>
<member name="F:Lucene.Net.QueryParsers.TokenMgrError.LOOP_DETECTED">
<summary> Detected (and bailed out of) an infinite loop in the token manager.</summary>
</member>
<member name="F:Lucene.Net.QueryParsers.TokenMgrError.errorCode">
<summary> Indicates the reason why the exception is thrown. It will have
one of the four values above.
</summary>
</member>
<member name="M:Lucene.Net.QueryParsers.TokenMgrError.addEscapes(System.String)">
<summary> Replaces unprintable characters by their escaped (or unicode escaped)
equivalents in the given string
</summary>
</member>
<member name="M:Lucene.Net.QueryParsers.TokenMgrError.LexicalError(System.Boolean,System.Int32,System.Int32,System.Int32,System.String,System.Char)">
<summary> Returns a detailed message for the Error when it is thrown by the
token manager to indicate a lexical error.
<list type="bullet">
<item>EOFSeen: indicates whether EOF caused the lexical error</item>
<item>curLexState: lexical state in which this error occurred</item>
<item>errorLine: line number where the error occurred</item>
<item>errorColumn: column number where the error occurred</item>
<item>errorAfter: prefix that was seen before this error occurred</item>
<item>curchar: the offending character</item>
</list>
Note: You can customize the lexical error message by modifying this method.
</summary>
</member>
<member name="M:Lucene.Net.QueryParsers.TokenMgrError.#ctor">
<summary>No arg constructor. </summary>
</member>
<member name="M:Lucene.Net.QueryParsers.TokenMgrError.#ctor(System.String,System.Int32)">
<summary>Constructor with message and reason. </summary>
</member>
<member name="M:Lucene.Net.QueryParsers.TokenMgrError.#ctor(System.Boolean,System.Int32,System.Int32,System.Int32,System.String,System.Char,System.Int32)">
<summary>Full Constructor. </summary>
</member>
<member name="P:Lucene.Net.QueryParsers.TokenMgrError.Message">
<summary> You can also modify the body of this method to customize your error messages.
For example, cases like LOOP_DETECTED and INVALID_LEXICAL_STATE are not
of concern to end-users, so for such cases you can return something like
"Internal Error: Please file a bug report ..."
from this method in the release version of your parser.
</summary>
</member>
<member name="T:Lucene.Net.Search.BooleanClause">
<summary>A clause in a BooleanQuery. </summary>
</member>
<member name="M:Lucene.Net.Search.BooleanClause.#ctor(Lucene.Net.Search.Query,Lucene.Net.Search.Occur)">
<summary>Constructs a BooleanClause.</summary>
</member>
<member name="M:Lucene.Net.Search.BooleanClause.Equals(System.Object)">
<summary>Returns true if <c>o</c> is equal to this. </summary>
</member>
<member name="M:Lucene.Net.Search.BooleanClause.GetHashCode">
<summary>Returns a hash code value for this object.</summary>
</member>
<member name="P:Lucene.Net.Search.BooleanClause.Query">
<summary>The query whose matching documents are combined by the boolean query.</summary>
</member>
<member name="T:Lucene.Net.Search.BooleanQuery">
<summary>A Query that matches documents matching boolean combinations of other
queries, e.g. <see cref="T:Lucene.Net.Search.TermQuery"/>s, <see cref="T:Lucene.Net.Search.PhraseQuery"/>s or other
BooleanQuerys.
</summary>
</member>
<member name="T:Lucene.Net.Search.Query">
<summary>The abstract base class for queries.
<p/>Instantiable subclasses are:
<list type="bullet">
<item> <see cref="T:Lucene.Net.Search.TermQuery"/> </item>
<item> <see cref="T:Lucene.Net.Search.MultiTermQuery"/> </item>
<item> <see cref="T:Lucene.Net.Search.BooleanQuery"/> </item>
<item> <see cref="T:Lucene.Net.Search.WildcardQuery"/> </item>
<item> <see cref="T:Lucene.Net.Search.PhraseQuery"/> </item>
<item> <see cref="T:Lucene.Net.Search.PrefixQuery"/> </item>
<item> <see cref="T:Lucene.Net.Search.MultiPhraseQuery"/> </item>
<item> <see cref="T:Lucene.Net.Search.FuzzyQuery"/> </item>
<item> <see cref="T:Lucene.Net.Search.TermRangeQuery"/> </item>
<item> <see cref="T:Lucene.Net.Search.NumericRangeQuery`1"/> </item>
<item> <see cref="T:Lucene.Net.Search.Spans.SpanQuery"/> </item>
</list>
<p/>A parser for queries is contained in:
<list type="bullet">
<item><see cref="T:Lucene.Net.QueryParsers.QueryParser">QueryParser</see> </item>
</list>
</summary>
</member>
<member name="M:Lucene.Net.Search.Query.ToString(System.String)">
<summary>Prints a query to a string, with <c>field</c> assumed to be the
default field and omitted.
<p/>The representation used is one that is supposed to be readable
by <see cref="T:Lucene.Net.QueryParsers.QueryParser">QueryParser</see>. However,
there are the following limitations:
<list type="bullet">
<item>If the query was created by the parser, the printed
representation may not be exactly what was parsed. For example,
characters that need to be escaped will be represented without
the required backslash.</item>
<item>Some of the more complicated queries (e.g. span queries)
don't have a representation that can be parsed by QueryParser.</item>
</list>
</summary>
</member>
<member name="M:Lucene.Net.Search.Query.ToString">
<summary>Prints a query to a string. </summary>
</member>
<member name="M:Lucene.Net.Search.Query.CreateWeight(Lucene.Net.Search.Searcher)">
<summary> Expert: Constructs an appropriate Weight implementation for this query.
<p/>
Only implemented by primitive queries, which rewrite to themselves.
</summary>
</member>
<member name="M:Lucene.Net.Search.Query.Weight(Lucene.Net.Search.Searcher)">
<summary> Expert: Constructs and initializes a Weight for a top-level query.</summary>
</member>
<member name="M:Lucene.Net.Search.Query.Rewrite(Lucene.Net.Index.IndexReader)">
<summary>Expert: called to rewrite queries into primitive queries. For example,
a PrefixQuery will be rewritten into a BooleanQuery that consists
of TermQuerys.
</summary>
</member>
<member name="M:Lucene.Net.Search.Query.Combine(Lucene.Net.Search.Query[])">
<summary>Expert: called when rewriting queries under MultiSearcher.
Create a single query suitable for use by all subsearchers (in 1-1
correspondence with queries). This is an optimization of the OR of
all queries. We handle the common optimization cases of equal
queries and overlapping clauses of boolean OR queries (as generated
by MultiTermQuery.rewrite()).
Be careful overriding this method as queries[0] determines which
method will be called and is not necessarily of the same type as
the other queries.
</summary>
</member>
<member name="M:Lucene.Net.Search.Query.ExtractTerms(System.Collections.Generic.ISet{Lucene.Net.Index.Term})">
<summary> Expert: adds all terms occurring in this query to the terms set. Only
works if this query is in its <see cref="M:Lucene.Net.Search.Query.Rewrite(Lucene.Net.Index.IndexReader)">rewritten</see> form.
</summary>
<throws> UnsupportedOperationException if this query is not yet rewritten </throws>
</member>
<member name="M:Lucene.Net.Search.Query.MergeBooleanQueries(Lucene.Net.Search.BooleanQuery[])">
<summary>Expert: merges the clauses of a set of BooleanQuerys into a single
BooleanQuery.
<p/>A utility for use by <see cref="M:Lucene.Net.Search.Query.Combine(Lucene.Net.Search.Query[])"/> implementations.
</summary>
</member>
<member name="M:Lucene.Net.Search.Query.GetSimilarity(Lucene.Net.Search.Searcher)">
<summary>Expert: Returns the Similarity implementation to be used for this query.
Subclasses may override this method to specify their own Similarity
implementation, perhaps one that delegates through that of the Searcher.
By default the Searcher's Similarity implementation is returned.
</summary>
</member>
<member name="M:Lucene.Net.Search.Query.Clone">
<summary>Returns a clone of this query. </summary>
</member>
<member name="P:Lucene.Net.Search.Query.Boost">
<summary>Gets or sets the boost for this query clause. Documents
matching this clause will (in addition to the normal weightings) have
their score multiplied by the boost, which is 1.0 by default.
</summary>
</member>
<member name="M:Lucene.Net.Search.BooleanQuery.#ctor">
<summary>Constructs an empty boolean query. </summary>
</member>
<member name="M:Lucene.Net.Search.BooleanQuery.#ctor(System.Boolean)">
<summary>Constructs an empty boolean query.
<see cref="M:Lucene.Net.Search.Similarity.Coord(System.Int32,System.Int32)"/> may be disabled in scoring, as
appropriate. For example, this score factor does not make sense for most
automatically generated queries, like <see cref="T:Lucene.Net.Search.WildcardQuery"/> and <see cref="T:Lucene.Net.Search.FuzzyQuery"/>.
</summary>
<param name="disableCoord">disables <see cref="M:Lucene.Net.Search.Similarity.Coord(System.Int32,System.Int32)"/> in scoring.
</param>
</member>
<member name="M:Lucene.Net.Search.BooleanQuery.IsCoordDisabled">
<summary>Returns true iff <see cref="M:Lucene.Net.Search.Similarity.Coord(System.Int32,System.Int32)"/> is disabled in
scoring for this query instance.
</summary>
<seealso cref="M:Lucene.Net.Search.BooleanQuery.#ctor(System.Boolean)">
</seealso>
</member>
<member name="M:Lucene.Net.Search.BooleanQuery.Add(Lucene.Net.Search.Query,Lucene.Net.Search.Occur)">
<summary>Adds a clause to a boolean query.
</summary>
<throws> TooManyClauses if the new number of clauses exceeds the maximum clause number </throws>
<seealso cref="P:Lucene.Net.Search.BooleanQuery.MaxClauseCount">
</seealso>
</member>
<member name="M:Lucene.Net.Search.BooleanQuery.Add(Lucene.Net.Search.BooleanClause)">
<summary>Adds a clause to a boolean query.</summary>
<throws> TooManyClauses if the new number of clauses exceeds the maximum clause number </throws>
<seealso cref="P:Lucene.Net.Search.BooleanQuery.MaxClauseCount">
</seealso>
</member>
<member name="M:Lucene.Net.Search.BooleanQuery.GetClauses">
<summary>Returns the set of clauses in this query. </summary>
</member>
<member name="M:Lucene.Net.Search.BooleanQuery.GetEnumerator">
<summary>
Returns an iterator over the clauses in this query.
</summary>
<returns>An enumerator over the clauses in this query.</returns>
</member>
<member name="M:Lucene.Net.Search.BooleanQuery.ToString(System.String)">
<summary>Prints a user-readable version of this query. </summary>
</member>
<member name="M:Lucene.Net.Search.BooleanQuery.Equals(System.Object)">
<summary>Returns true iff <c>o</c> is equal to this. </summary>
</member>
<member name="M:Lucene.Net.Search.BooleanQuery.GetHashCode">
<summary>Returns a hash code value for this object.</summary>
</member>
<member name="P:Lucene.Net.Search.BooleanQuery.MaxClauseCount">
<summary>Gets or sets the maximum number of clauses permitted, 1024 by default.
Attempts to add more than the permitted number of clauses cause <see cref="T:Lucene.Net.Search.BooleanQuery.TooManyClauses"/>
to be thrown.
</summary>
</member>
<member name="P:Lucene.Net.Search.BooleanQuery.MinimumNumberShouldMatch">
<summary>
Specifies a minimum number of the optional BooleanClauses
which must be satisfied.
<para>
By default no optional clauses are necessary for a match
(unless there are no required clauses). If this method is used,
then the specified number of clauses is required.
</para>
<para>
Use of this method is totally independent of specifying that
any specific clauses are required (or prohibited). This number will
only be compared against the number of matching optional clauses.
</para>
</summary>
</member>
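The rule above (no optional clause is required when there are required clauses, and MinimumNumberShouldMatch raises that floor) can be sketched as follows. The names and the term-set model are illustrative assumptions for this sketch, not the Lucene.Net API.

```python
def matches(doc_terms, must, must_not, should, min_should_match=0):
    """Illustrative sketch of BooleanQuery clause semantics for one
    document, reduced to set membership: documents are term sets and
    each clause is a single term."""
    if not must and not should:
        return False  # a purely prohibitive query matches nothing
    # Every MUST clause has to match.
    if any(t not in doc_terms for t in must):
        return False
    # No MUST_NOT clause may match.
    if any(t in doc_terms for t in must_not):
        return False
    # Count the matching optional (SHOULD) clauses.
    optional = sum(1 for t in should if t in doc_terms)
    # By default no optional clause is necessary unless there are no
    # required clauses; MinimumNumberShouldMatch raises that floor.
    if min_should_match:
        needed = min_should_match
    elif must:
        needed = 0
    else:
        needed = 1
    return optional >= needed
```

Note that the minimum is compared only against the count of matching optional clauses, independently of any required or prohibited clauses, as the summary states.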
<member name="P:Lucene.Net.Search.BooleanQuery.Clauses">
<summary>Returns the list of clauses in this query. </summary>
</member>
<member name="T:Lucene.Net.Search.SimilarityDelegator">
<summary>Expert: Delegating scoring implementation. Useful in <see cref="M:Lucene.Net.Search.Query.GetSimilarity(Lucene.Net.Search.Searcher)"/>
implementations, to override only certain
methods of a Searcher's Similarity implementation.
</summary>
</member>
<member name="T:Lucene.Net.Search.Similarity">
<summary>Expert: Scoring API.
<p/>Subclasses implement search scoring.
<p/>The score of query <c>q</c> for document <c>d</c> correlates to the
cosine-distance or dot-product between document and query vectors in a
<a href="http://en.wikipedia.org/wiki/Vector_Space_Model">
Vector Space Model (VSM) of Information Retrieval</a>.
A document whose vector is closer to the query vector in that model is scored higher.
The score is computed as follows:
<p/>
<table cellpadding="1" cellspacing="0" border="1" align="center">
<tr><td>
<table cellpadding="1" cellspacing="0" border="0" align="center">
<tr>
<td valign="middle" align="right" rowspan="1">
score(q,d) =
<A HREF="#formula_coord">coord(q,d)</A> ·
<A HREF="#formula_queryNorm">queryNorm(q)</A> ·
</td>
<td valign="bottom" align="center" rowspan="1">
<big><big><big>∑</big></big></big>
</td>
<td valign="middle" align="right" rowspan="1">
<big><big>(</big></big>
<A HREF="#formula_tf">tf(t in d)</A> ·
<A HREF="#formula_idf">idf(t)</A><sup>2</sup> ·
<A HREF="#formula_termBoost">t.Boost</A> ·
<A HREF="#formula_norm">norm(t,d)</A>
<big><big>)</big></big>
</td>
</tr>
<tr valign="top">
<td></td>
<td align="center"><small>t in q</small></td>
<td></td>
</tr>
</table>
</td></tr>
</table>
<p/> where
<list type="bullet">
<item>
<A NAME="formula_tf"></A>
<b>tf(t in d)</b>
correlates to the term's <i>frequency</i>,
defined as the number of times term <i>t</i> appears in the currently scored document <i>d</i>.
Documents that have more occurrences of a given term receive a higher score.
The default computation for <i>tf(t in d)</i> in
<see cref="M:Lucene.Net.Search.DefaultSimilarity.Tf(System.Single)">DefaultSimilarity</see> is:
<br/> <br/>
<table cellpadding="2" cellspacing="2" border="0" align="center">
<tr>
<td valign="middle" align="right" rowspan="1">
<see cref="M:Lucene.Net.Search.DefaultSimilarity.Tf(System.Single)">tf(t in d)</see> =
</td>
<td valign="top" align="center" rowspan="1">
frequency<sup><big>½</big></sup>
</td>
</tr>
</table>
<br/> <br/>
</item>
<item>
<A NAME="formula_idf"></A>
<b>idf(t)</b> stands for Inverse Document Frequency. This value
correlates to the inverse of <i>docFreq</i>
(the number of documents in which the term <i>t</i> appears).
This means rarer terms contribute more to the total score.
The default computation for <i>idf(t)</i> in
<see cref="M:Lucene.Net.Search.DefaultSimilarity.Idf(System.Int32,System.Int32)">DefaultSimilarity</see> is:
<br/> <br/>
<table cellpadding="2" cellspacing="2" border="0" align="center">
<tr>
<td valign="middle" align="right">
<see cref="M:Lucene.Net.Search.DefaultSimilarity.Idf(System.Int32,System.Int32)">idf(t)</see> =
</td>
<td valign="middle" align="center">
1 + log <big>(</big>
</td>
<td valign="middle" align="center">
<table>
<tr><td align="center"><small>numDocs</small></td></tr>
<tr><td align="center">–––––––––</td></tr>
<tr><td align="center"><small>docFreq+1</small></td></tr>
</table>
</td>
<td valign="middle" align="center">
<big>)</big>
</td>
</tr>
</table>
<br/> <br/>
</item>
<item>
<A NAME="formula_coord"></A>
<b>coord(q,d)</b>
is a score factor based on how many of the query terms are found in the specified document.
Typically, a document that contains more of the query's terms will receive a higher score
than another document with fewer query terms.
This is a search time factor computed in
<see cref="M:Lucene.Net.Search.Similarity.Coord(System.Int32,System.Int32)">coord(q,d)</see>
by the Similarity in effect at search time.
<br/> <br/>
</item>
<item><b>
<A NAME="formula_queryNorm"></A>
queryNorm(q)
</b>
is a normalizing factor used to make scores between queries comparable.
This factor does not affect document ranking (since all ranked documents are multiplied by the same factor),
but rather just attempts to make scores from different queries (or even different indexes) comparable.
This is a search time factor computed by the Similarity in effect at search time.
The default computation in
<see cref="M:Lucene.Net.Search.DefaultSimilarity.QueryNorm(System.Single)">DefaultSimilarity</see>
is:
<br/> <br/>
<table cellpadding="1" cellspacing="0" border="0" align="center">
<tr>
<td valign="middle" align="right" rowspan="1">
queryNorm(q) =
<see cref="M:Lucene.Net.Search.DefaultSimilarity.QueryNorm(System.Single)">queryNorm(sumOfSquaredWeights)</see>
=
</td>
<td valign="middle" align="center" rowspan="1">
<table>
<tr><td align="center"><big>1</big></td></tr>
<tr><td align="center"><big>
––––––––––––––
</big></td></tr>
<tr><td align="center">sumOfSquaredWeights<sup><big>½</big></sup></td></tr>
</table>
</td>
</tr>
</table>
<br/> <br/>
The sum of squared weights (of the query terms) is
computed by the query <see cref="T:Lucene.Net.Search.Weight"/> object.
For example, a <see cref="T:Lucene.Net.Search.BooleanQuery">boolean query</see>
computes this value as:
<br/> <br/>
<table cellpadding="1" cellspacing="0" border="0" align="center">
<tr>
<td valign="middle" align="right" rowspan="1">
<see cref="M:Lucene.Net.Search.Weight.GetSumOfSquaredWeights">GetSumOfSquaredWeights</see> =
<see cref="P:Lucene.Net.Search.Query.Boost">q.Boost</see> <sup><big>2</big></sup>
·
</td>
<td valign="bottom" align="center" rowspan="1">
<big><big><big>∑</big></big></big>
</td>
<td valign="middle" align="right" rowspan="1">
<big><big>(</big></big>
<A HREF="#formula_idf">idf(t)</A> ·
<A HREF="#formula_termBoost">t.Boost</A>
<big><big>) <sup>2</sup> </big></big>
</td>
</tr>
<tr valign="top">
<td></td>
<td align="center"><small>t in q</small></td>
<td></td>
</tr>
</table>
<br/> <br/>
</item>
<item>
<A NAME="formula_termBoost"></A>
<b>t.Boost</b>
is a search time boost of term <i>t</i> in the query <i>q</i> as
specified in the query text
(see <A HREF="../../../../../../queryparsersyntax.html#Boosting a Term">query syntax</A>),
or as set by application calls to
<see cref="P:Lucene.Net.Search.Query.Boost"/>.
Notice that there is really no direct API for accessing the boost of one term in a multi-term query;
rather, multiple terms are represented in a query as multiple
<see cref="T:Lucene.Net.Search.TermQuery">TermQuery</see> objects,
and so the boost of a term in the query is accessible by calling the sub-query's
<see cref="P:Lucene.Net.Search.Query.Boost"/>.
<br/> <br/>
</item>
<item>
<A NAME="formula_norm"></A>
<b>norm(t,d)</b> encapsulates a few (indexing time) boost and length factors:
<list type="bullet">
<item><b>Document boost</b> - set by calling
<see cref="P:Lucene.Net.Documents.Document.Boost">doc.Boost</see>
before adding the document to the index.
</item>
<item><b>Field boost</b> - set by calling
<see cref="P:Lucene.Net.Documents.IFieldable.Boost">field.Boost</see>
before adding the field to a document.
</item>
<item><see cref="M:Lucene.Net.Search.Similarity.LengthNorm(System.String,System.Int32)">LengthNorm(field)</see> - computed
when the document is added to the index in accordance with the number of tokens
of this field in the document, so that shorter fields contribute more to the score.
LengthNorm is computed by the Similarity class in effect at indexing.
</item>
</list>
<p/>
When a document is added to the index, all the above factors are multiplied.
If the document has multiple fields with the same name, all their boosts are multiplied together:
<br/> <br/>
<table cellpadding="1" cellspacing="0" border="0" align="center">
<tr>
<td valign="middle" align="right" rowspan="1">
norm(t,d) =
<see cref="P:Lucene.Net.Documents.Document.Boost">doc.Boost</see>
·
<see cref="M:Lucene.Net.Search.Similarity.LengthNorm(System.String,System.Int32)">LengthNorm(field)</see>
·
</td>
<td valign="bottom" align="center" rowspan="1">
<big><big><big>∏</big></big></big>
</td>
<td valign="middle" align="right" rowspan="1">
<see cref="P:Lucene.Net.Documents.IFieldable.Boost">field.Boost</see>
</td>
</tr>
<tr valign="top">
<td></td>
<td align="center"><small>field <i><b>f</b></i> in <i>d</i> named as <i><b>t</b></i></small></td>
<td></td>
</tr>
</table>
<br/> <br/>
However, the resulting <i>norm</i> value is <see cref="M:Lucene.Net.Search.Similarity.EncodeNorm(System.Single)">encoded</see> as a single byte
before being stored.
At search time, the norm byte value is read from the index
<see cref="T:Lucene.Net.Store.Directory">directory</see> and
<see cref="M:Lucene.Net.Search.Similarity.DecodeNorm(System.Byte)">decoded</see> back to a float <i>norm</i> value.
This encoding/decoding, while reducing index size, comes with the price of
precision loss - it is not guaranteed that decode(encode(x)) = x.
For instance, decode(encode(0.89)) = 0.75.
Also notice that search time is too late to modify this <i>norm</i> part of scoring, e.g. by
using a different <see cref="T:Lucene.Net.Search.Similarity"/> for search.
<br/> <br/>
</item>
</list>
</summary>
<seealso cref="P:Lucene.Net.Search.Similarity.Default">
</seealso>
<seealso cref="P:Lucene.Net.Index.IndexWriter.Similarity">
</seealso>
<seealso cref="P:Lucene.Net.Search.Searcher.Similarity">
</seealso>
</member>
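Numerically, the scoring formula above can be sketched with the default computations quoted in this summary (square-root tf, log-based idf, coord, and queryNorm). This is an illustration only, not the Lucene.Net API: the function and parameter names are assumptions, and term boosts and norms are collapsed to simple defaults.

```python
import math

def tf(freq):                 # tf(t in d) = frequency^(1/2)
    return math.sqrt(freq)

def idf(doc_freq, num_docs):  # idf(t) = 1 + log(numDocs / (docFreq + 1))
    return 1.0 + math.log(num_docs / (doc_freq + 1))

def coord(overlap, max_overlap):
    return overlap / max_overlap

def query_norm(sum_of_squared_weights):
    return 1.0 / math.sqrt(sum_of_squared_weights)

def score(query_terms, doc_freqs, doc_term_freqs, num_docs,
          term_boost=1.0, norm=1.0):
    """score(q,d) per the formula above, for one document whose term
    frequencies are given in doc_term_freqs."""
    matched = [t for t in query_terms if doc_term_freqs.get(t, 0) > 0]
    # sumOfSquaredWeights as a boolean query computes it: sum of (idf * boost)^2
    sum_sq = sum((idf(doc_freqs[t], num_docs) * term_boost) ** 2
                 for t in query_terms)
    qn = query_norm(sum_sq)
    # Sum over matching terms of tf * idf^2 * boost * norm.
    s = sum(tf(doc_term_freqs[t]) * idf(doc_freqs[t], num_docs) ** 2
            * term_boost * norm
            for t in matched)
    return coord(len(matched), len(query_terms)) * qn * s
```

A document matching every query term scores higher than one matching only some of them, via both the coord factor and the per-term sum.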
<member name="F:Lucene.Net.Search.Similarity.defaultImpl">
<summary>The Similarity implementation used by default.</summary>
</member>
<member name="F:Lucene.Net.Search.Similarity.NORM_TABLE">
<summary>Cache of decoded bytes. </summary>
</member>
<member name="M:Lucene.Net.Search.Similarity.DecodeNorm(System.Byte)">
<summary>Decodes a normalization factor stored in an index.</summary>
<seealso cref="M:Lucene.Net.Search.Similarity.EncodeNorm(System.Single)">
</seealso>
</member>
<member name="M:Lucene.Net.Search.Similarity.GetNormDecoder">
<summary>Returns a table for decoding normalization bytes.</summary>
<seealso cref="M:Lucene.Net.Search.Similarity.EncodeNorm(System.Single)">
</seealso>
</member>
<member name="M:Lucene.Net.Search.Similarity.ComputeNorm(System.String,Lucene.Net.Index.FieldInvertState)">
<summary> Compute the normalization value for a field, given the accumulated
state of term processing for this field (see <see cref="T:Lucene.Net.Index.FieldInvertState"/>).
<p/>Implementations should calculate a float value based on the field
state and then return that value.
<p/>For backward compatibility this method by default calls
<see cref="M:Lucene.Net.Search.Similarity.LengthNorm(System.String,System.Int32)"/> passing
<see cref="P:Lucene.Net.Index.FieldInvertState.Length"/> as the second argument, and
then multiplies this value by <see cref="P:Lucene.Net.Index.FieldInvertState.Boost"/>.<p/>
<p/><b>WARNING</b>: This API is new and experimental and may
suddenly change.<p/>
</summary>
<param name="field">field name
</param>
<param name="state">current processing state for this field
</param>
<returns> the calculated float norm
</returns>
</member>
<member name="M:Lucene.Net.Search.Similarity.LengthNorm(System.String,System.Int32)">
<summary>Computes the normalization value for a field given the total number of
terms contained in a field. These values, together with field boosts, are
stored in an index and multiplied into scores for hits on each field by the
search code.
<p/>Matches in longer fields are less precise, so implementations of this
method usually return smaller values when <c>numTokens</c> is large,
and larger values when <c>numTokens</c> is small.
<p/>Note that the return values are computed under
<see cref="M:Lucene.Net.Index.IndexWriter.AddDocument(Lucene.Net.Documents.Document)"/>
and then stored using
<see cref="M:Lucene.Net.Search.Similarity.EncodeNorm(System.Single)"/>.
Thus they have limited precision, and documents
must be re-indexed if this method is altered.
</summary>
<param name="fieldName">the name of the field
</param>
<param name="numTokens">the total number of tokens contained in fields named
<i>fieldName</i> of <i>doc</i>.
</param>
<returns> a normalization factor for hits on this field of this document
</returns>
<seealso cref="P:Lucene.Net.Documents.AbstractField.Boost"/>
</member>
<member name="M:Lucene.Net.Search.Similarity.QueryNorm(System.Single)">
<summary>Computes the normalization value for a query given the sum of the squared
weights of each of the query terms. This value is then multiplied into the
weight of each query term.
<p/>This does not affect ranking, but rather just attempts to make scores
from different queries comparable.
</summary>
<param name="sumOfSquaredWeights">the sum of the squares of query term weights
</param>
<returns> a normalization factor for query weights
</returns>
</member>
<member name="M:Lucene.Net.Search.Similarity.EncodeNorm(System.Single)">
<summary>Encodes a normalization factor for storage in an index.
<p/>The encoding uses a three-bit mantissa, a five-bit exponent, and
the zero-exponent point at 15, thus
representing values from around 7x10^9 to 2x10^-9 with about one
significant decimal digit of accuracy. Zero is also represented.
Negative numbers are rounded up to zero. Values too large to represent
are rounded down to the largest representable value. Positive values too
small to represent are rounded up to the smallest positive representable
value.
</summary>
<seealso cref="P:Lucene.Net.Documents.AbstractField.Boost"/>
<seealso cref="T:Lucene.Net.Util.SmallFloat"/>
</member>
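The byte layout described above (three-bit mantissa, five-bit exponent, zero-exponent point at 15) can be sketched as follows. This mirrors the idea of Lucene.Net.Util.SmallFloat but is a simplified illustration, not guaranteed to be bit-for-bit identical to the shipped encoder.

```python
import math

def encode_norm(f):
    """Encode a float into one byte: 5-bit exponent (offset 15) in the
    high bits, 3-bit truncated mantissa fraction in the low bits."""
    if not f > 0.0:
        return 0                     # zero and negative values encode as 0
    e = math.floor(math.log2(f))     # f = m * 2^e with m in [1, 2)
    if e > 16:
        return 255                   # too large: largest representable value
    if 0 > e + 15:
        return 1                     # too small: smallest positive value
    m = f / 2.0 ** e
    mbits = min(7, int((m - 1.0) * 8))   # 3 mantissa bits, truncated
    return (e + 15) * 8 + mbits

def decode_norm(b):
    if b == 0:
        return 0.0
    ebits, mbits = divmod(b, 8)
    return (1.0 + mbits / 8.0) * 2.0 ** (ebits - 15)
```

Because the mantissa is truncated, decoding never yields a value larger than the one encoded, which is the precision loss the norm documentation warns about.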
<member name="M:Lucene.Net.Search.Similarity.Tf(System.Int32)">
<summary>Computes a score factor based on a term or phrase's frequency in a
document. This value is multiplied by the <see cref="M:Lucene.Net.Search.Similarity.Idf(System.Int32,System.Int32)"/>
factor for each term in the query and these products are then summed to
form the initial score for a document.
<p/>Terms and phrases repeated in a document indicate the topic of the
document, so implementations of this method usually return larger values
when <c>freq</c> is large, and smaller values when <c>freq</c>
is small.
<p/>The default implementation calls <see cref="M:Lucene.Net.Search.Similarity.Tf(System.Single)"/>.
</summary>
<param name="freq">the frequency of a term within a document
</param>
<returns> a score factor based on a term's within-document frequency
</returns>
</member>
<member name="M:Lucene.Net.Search.Similarity.SloppyFreq(System.Int32)">
<summary>Computes the amount of a sloppy phrase match, based on an edit distance.
This value is summed for each sloppy phrase match in a document to form
the frequency that is passed to <see cref="M:Lucene.Net.Search.Similarity.Tf(System.Single)"/>.
<p/>A phrase match with a small edit distance to a document passage more
closely matches the document, so implementations of this method usually
return larger values when the edit distance is small and smaller values
when it is large.
</summary>
<seealso cref="P:Lucene.Net.Search.PhraseQuery.Slop"/>
<param name="distance">the edit distance of this sloppy phrase match </param>
<returns> the frequency increment for this match </returns>
</member>
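As a concrete shape for this, the default implementation in DefaultSimilarity computes the increment as 1/(distance+1); the sketch below assumes that formula, which is quoted here as an illustration rather than stated in the summary above.

```python
def sloppy_freq(distance):
    # Assumed default shape: the frequency increment decays as the
    # edit distance of the sloppy phrase match grows, so exact
    # (distance 0) matches contribute the most.
    return 1.0 / (distance + 1)
```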
<member name="M:Lucene.Net.Search.Similarity.Tf(System.Single)">
<summary>Computes a score factor based on a term or phrase's frequency in a
document. This value is multiplied by the <see cref="M:Lucene.Net.Search.Similarity.Idf(System.Int32,System.Int32)"/>
factor for each term in the query and these products are then summed to
form the initial score for a document.
<p/>Terms and phrases repeated in a document indicate the topic of the
document, so implementations of this method usually return larger values
when <c>freq</c> is large, and smaller values when <c>freq</c>
is small.
</summary>
<param name="freq">the frequency of a term within a document
</param>
<returns> a score factor based on a term's within-document frequency
</returns>
</member>
<member name="M:Lucene.Net.Search.Similarity.IdfExplain(Lucene.Net.Index.Term,Lucene.Net.Search.Searcher)">
<summary> Computes a score factor for a simple term and returns an explanation
for that score factor.
<p/>
The default implementation uses:
<code>
Idf(searcher.DocFreq(term), searcher.MaxDoc);
</code>
Note that <see cref="P:Lucene.Net.Search.Searcher.MaxDoc"/> is used instead of
<see cref="M:Lucene.Net.Index.IndexReader.NumDocs"/> because it is
proportional to <see cref="M:Lucene.Net.Search.Searcher.DocFreq(Lucene.Net.Index.Term)"/>, i.e., when one is
inaccurate, so is the other, and in the same direction.
</summary>
<param name="term">the term in question
</param>
<param name="searcher">the document collection being searched
</param>
<returns> an IDFExplain object that includes both an idf score factor
and an explanation for the term.
</returns>
<throws> IOException </throws>
</member>
<member name="M:Lucene.Net.Search.Similarity.IdfExplain(System.Collections.Generic.ICollection{Lucene.Net.Index.Term},Lucene.Net.Search.Searcher)">
<summary> Computes a score factor for a phrase.
<p/>
The default implementation sums the idf factor for
each term in the phrase.
</summary>
<param name="terms">the terms in the phrase
</param>
<param name="searcher">the document collection being searched
</param>
<returns> an IDFExplain object that includes both an idf
score factor for the phrase and an explanation
for each term.
</returns>
<throws> IOException </throws>
</member>
<member name="M:Lucene.Net.Search.Similarity.Idf(System.Int32,System.Int32)">
<summary>Computes a score factor based on a term's document frequency (the number
of documents which contain the term). This value is multiplied by the
<see cref="M:Lucene.Net.Search.Similarity.Tf(System.Int32)"/> factor for each term in the query and these products are
then summed to form the initial score for a document.
<p/>Terms that occur in fewer documents are better indicators of topic, so
implementations of this method usually return larger values for rare terms,
and smaller values for common terms.
</summary>
<param name="docFreq">the number of documents which contain the term
</param>
<param name="numDocs">the total number of documents in the collection
</param>
<returns> a score factor based on the term's document frequency
</returns>
</member>
<member name="M:Lucene.Net.Search.Similarity.Coord(System.Int32,System.Int32)">
<summary>Computes a score factor based on the fraction of all query terms that a
document contains. This value is multiplied into scores.
<p/>The presence of a large portion of the query terms indicates a better
match with the query, so implementations of this method usually return
larger values when the ratio between these parameters is large and smaller
values when the ratio between them is small.
</summary>
<param name="overlap">the number of query terms matched in the document
</param>
<param name="maxOverlap">the total number of terms in the query
</param>
<returns> a score factor based on term overlap with the query
</returns>
</member>
<member name="M:Lucene.Net.Search.Similarity.ScorePayload(System.Int32,System.String,System.Int32,System.Int32,System.Byte[],System.Int32,System.Int32)">
<summary> Calculate a scoring factor based on the data in the payload. Overriding implementations
are responsible for interpreting what is in the payload. Lucene makes no assumptions about
what is in the byte array.
<p/>
The default implementation returns 1.
</summary>
<param name="docId">The docId currently being scored. If this value is <see cref="F:Lucene.Net.Search.Similarity.NO_DOC_ID_PROVIDED"/>, then it should be assumed that the PayloadQuery implementation does not provide document information
</param>
<param name="fieldName">The fieldName of the term this payload belongs to
</param>
<param name="start">The start position of the payload
</param>
<param name="end">The end position of the payload
</param>
<param name="payload">The payload byte array to be scored
</param>
<param name="offset">The offset into the payload array
</param>
<param name="length">The length of the payload within the array
</param>
<returns> An implementation dependent float to be used as a scoring factor
</returns>
</member>
<member name="P:Lucene.Net.Search.Similarity.Default">
<summary>Gets or sets the default Similarity implementation
used by indexing and search code.
<p/>This is initially an instance of <see cref="T:Lucene.Net.Search.DefaultSimilarity"/>.
</summary>
<seealso cref="P:Lucene.Net.Search.Searcher.Similarity">
</seealso>
<seealso cref="M:Lucene.Net.Index.IndexWriter.SetSimilarity(Lucene.Net.Search.Similarity)">
</seealso>
</member>
<member name="T:Lucene.Net.Search.Explanation.IDFExplanation">
<summary> Small Util class used to pass both an idf factor as well as an
explanation for that factor.
This class will likely be held on a <see cref="T:Lucene.Net.Search.Weight"/>, so be aware
before storing any large or un-serializable fields.
</summary>
</member>
<member name="T:Lucene.Net.Search.Explanation">
<summary>Expert: Describes the score computation for document and query. </summary>
</member>
<member name="M:Lucene.Net.Search.Explanation.GetDetails">
<summary>The sub-nodes of this explanation node. </summary>
</member>
<member name="M:Lucene.Net.Search.Explanation.AddDetail(Lucene.Net.Search.Explanation)">
<summary>Adds a sub-node to this explanation node. </summary>
</member>
<member name="M:Lucene.Net.Search.Explanation.ToString">
<summary>Render an explanation as text. </summary>
</member>
<member name="M:Lucene.Net.Search.Explanation.ToHtml">
<summary>Render an explanation as HTML. </summary>
</member>
<member name="P:Lucene.Net.Search.Explanation.IsMatch">
<summary> Indicates whether or not this Explanation models a good match.
<p/>
By default, an Explanation represents a "match" if the value is positive.
<p/>
</summary>
<seealso cref="P:Lucene.Net.Search.Explanation.Value">
</seealso>
</member>
<member name="P:Lucene.Net.Search.Explanation.Value">
<summary>The value assigned to this explanation node. </summary>
</member>
<member name="P:Lucene.Net.Search.Explanation.Description">
<summary>A description of this explanation node. </summary>
</member>
<member name="P:Lucene.Net.Search.Explanation.Summary">
<summary> A short, one-line summary that should contain all the high-level
information about this Explanation, without the "Details".
</summary>
</member>
<member name="T:Lucene.Net.Search.Explanation.IDFExplanation">
<summary> Small utility class used to pass both an idf factor and the
explanation for that factor.
This class will likely be held on a <see cref="T:Lucene.Net.Search.Weight"/>, so be cautious
about storing any large or non-serializable fields.
</summary>
</member>
<member name="M:Lucene.Net.Search.Explanation.IDFExplanation.Explain">
<summary> This should be calculated lazily if possible.
</summary>
<returns> the explanation for the idf factor.
</returns>
</member>
<member name="P:Lucene.Net.Search.Explanation.IDFExplanation.Idf">
<value> the idf factor </value>
</member>
<member name="M:Lucene.Net.Search.SimilarityDelegator.#ctor(Lucene.Net.Search.Similarity)">
<summary>Construct a <see cref="T:Lucene.Net.Search.Similarity"/> that delegates all methods to another.</summary>
<param name="delegee">the Similarity implementation to delegate to</param>
</member>
<member name="T:Lucene.Net.Search.BooleanQuery.TooManyClauses">
<summary>Thrown when an attempt is made to add more than <see cref="P:Lucene.Net.Search.BooleanQuery.MaxClauseCount"/>
clauses. This typically happens if
a PrefixQuery, FuzzyQuery, WildcardQuery, or TermRangeQuery
is expanded to many terms during search.
</summary>
</member>
<member name="T:Lucene.Net.Search.BooleanQuery.BooleanWeight">
<summary> Expert: the Weight for BooleanQuery, used to
normalize, score and explain these queries.
<p/>NOTE: this API and implementation are subject to
change suddenly in the next release.<p/>
</summary>
</member>
<member name="T:Lucene.Net.Search.Weight">
<summary> Expert: Calculate query weights and build query scorers.
<p/>
The purpose of <see cref="T:Lucene.Net.Search.Weight"/> is to ensure searching does not
modify a <see cref="P:Lucene.Net.Search.Weight.Query"/>, so that a <see cref="P:Lucene.Net.Search.Weight.Query"/> instance can be reused. <br/>
<see cref="T:Lucene.Net.Search.Searcher"/> dependent state of the query should reside in the
<see cref="T:Lucene.Net.Search.Weight"/>. <br/>
<see cref="T:Lucene.Net.Index.IndexReader"/> dependent state should reside in the <see cref="M:Lucene.Net.Search.Weight.Scorer(Lucene.Net.Index.IndexReader,System.Boolean,System.Boolean)"/>.
<p/>
A <c>Weight</c> is used in the following way:
<list type="bullet">
<item>A <c>Weight</c> is constructed by a top-level query, given a
<c>Searcher</c> (<see cref="M:Lucene.Net.Search.Query.CreateWeight(Lucene.Net.Search.Searcher)"/>).</item>
<item>The <see cref="M:Lucene.Net.Search.Weight.GetSumOfSquaredWeights"/> method is called on the
<c>Weight</c> to compute the query normalization factor
<see cref="M:Lucene.Net.Search.Similarity.QueryNorm(System.Single)"/> of the query clauses contained in the
query.</item>
<item>The query normalization factor is passed to <see cref="M:Lucene.Net.Search.Weight.Normalize(System.Single)"/>. At
this point the weighting is complete.</item>
<item>A <c>Scorer</c> is constructed by <see cref="M:Lucene.Net.Search.Weight.Scorer(Lucene.Net.Index.IndexReader,System.Boolean,System.Boolean)"/>.</item>
</list>
</summary>
<since> 2.9
</since>
</member>
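The four lifecycle steps above can be traced with a small self-contained sketch. All names below (SimpleWeight, query_weight) are illustrative stand-ins mirroring how a single-term weight combines idf, boost, and the query norm under the default Similarity; this is not the Lucene.Net API itself.

```python
import math

class SimpleWeight:
    """Illustrative stand-in for a single-term Weight (not the real API)."""

    def __init__(self, boost, idf):
        # step 1: construction, given the query's boost and the term's idf
        self.query_weight = boost * idf
        self.idf = idf
        self.value = 0.0

    def sum_of_squared_weights(self):
        # step 2: this weight's contribution to the query normalization factor
        return self.query_weight ** 2

    def normalize(self, query_norm):
        # step 3: fold the query norm in; weighting is now complete
        self.query_weight *= query_norm
        self.value = self.query_weight * self.idf

# The top-level query derives the norm from the sum of squared weights,
# mirroring the default QueryNorm of 1/sqrt(sumOfSquaredWeights).
w = SimpleWeight(boost=2.0, idf=1.5)
norm = 1.0 / math.sqrt(w.sum_of_squared_weights())
w.normalize(norm)
```

Step 4 (constructing a Scorer from the completed weight) is omitted; it depends on index data.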
<member name="M:Lucene.Net.Search.Weight.Explain(Lucene.Net.Index.IndexReader,System.Int32)">
<summary> An explanation of the score computation for the named document.
</summary>
<param name="reader">sub-reader containing the given doc
</param>
<param name="doc">the id of the document, relative to the given sub-reader
</param>
<returns> an Explanation for the score
</returns>
<throws> IOException </throws>
</member>
<member name="M:Lucene.Net.Search.Weight.Normalize(System.Single)">
<summary>Assigns the query normalization factor to this. </summary>
</member>
<member name="M:Lucene.Net.Search.Weight.Scorer(Lucene.Net.Index.IndexReader,System.Boolean,System.Boolean)">
<summary> Returns a <see cref="M:Lucene.Net.Search.Weight.Scorer(Lucene.Net.Index.IndexReader,System.Boolean,System.Boolean)"/> which scores documents in/out-of order according
to <c>scoreDocsInOrder</c>.
<p/>
<b>NOTE:</b> even if <c>scoreDocsInOrder</c> is false, it is
recommended to check whether the returned <c>Scorer</c> indeed scores
documents out of order (i.e., call <see cref="M:Lucene.Net.Search.Weight.GetScoresDocsOutOfOrder"/>), as
some <c>Scorer</c> implementations will always return documents
in-order.<br/>
<b>NOTE:</b> null can be returned if no documents will be scored by this
query.
</summary>
<param name="reader">
the <see cref="T:Lucene.Net.Index.IndexReader"/> for which to return the <see cref="T:Lucene.Net.Search.Scorer"/>.
</param>
<param name="scoreDocsInOrder">specifies whether in-order scoring of documents is required. Note
that if set to false (i.e., out-of-order scoring is required),
this method can return whatever scoring mode it supports, as every
in-order scorer is also an out-of-order one. However, an
out-of-order scorer may not support <see cref="M:Lucene.Net.Search.DocIdSetIterator.NextDoc"/>
and/or <see cref="M:Lucene.Net.Search.DocIdSetIterator.Advance(System.Int32)"/>, therefore it is recommended to
request an in-order scorer if use of these methods is required.
</param>
<param name="topScorer">
if true, <see cref="M:Lucene.Net.Search.Scorer.Score(Lucene.Net.Search.Collector)"/> will be called; if false,
<see cref="M:Lucene.Net.Search.DocIdSetIterator.NextDoc"/> and/or <see cref="M:Lucene.Net.Search.DocIdSetIterator.Advance(System.Int32)"/> will
be called.
</param>
<returns> a <see cref="M:Lucene.Net.Search.Weight.Scorer(Lucene.Net.Index.IndexReader,System.Boolean,System.Boolean)"/> which scores documents in/out-of order.
</returns>
<throws> IOException </throws>
</member>
<member name="M:Lucene.Net.Search.Weight.GetSumOfSquaredWeights">
<summary>The sum of squared weights of contained query clauses. </summary>
</member>
<member name="M:Lucene.Net.Search.Weight.GetScoresDocsOutOfOrder">
<summary> Returns true iff this implementation scores docs only out of order. This
method is used in conjunction with <see cref="T:Lucene.Net.Search.Collector"/>'s
<see cref="P:Lucene.Net.Search.Collector.AcceptsDocsOutOfOrder">AcceptsDocsOutOfOrder</see> and
<see cref="M:Lucene.Net.Search.Weight.Scorer(Lucene.Net.Index.IndexReader,System.Boolean,System.Boolean)"/> to
create a matching <see cref="M:Lucene.Net.Search.Weight.Scorer(Lucene.Net.Index.IndexReader,System.Boolean,System.Boolean)"/> instance for a given <see cref="T:Lucene.Net.Search.Collector"/>, or
vice versa.
<p/>
<b>NOTE:</b> the default implementation returns <c>false</c>, i.e.
the <c>Scorer</c> scores documents in-order.
</summary>
</member>
<member name="P:Lucene.Net.Search.Weight.Query">
<summary>The query that this concerns. </summary>
</member>
<member name="P:Lucene.Net.Search.Weight.Value">
<summary>The weight for this query. </summary>
</member>
<member name="F:Lucene.Net.Search.BooleanQuery.BooleanWeight.similarity">
<summary>The Similarity implementation. </summary>
</member>
<member name="T:Lucene.Net.Search.Scorer">
<summary> Expert: Common scoring functionality for different types of queries.
<p/>
A <c>Scorer</c> iterates over documents matching a
query in increasing order of doc Id.
<p/>
Document scores are computed using a given <c>Similarity</c>
implementation.
<p/>
<b>NOTE</b>: The values float.NaN,
float.NegativeInfinity and float.PositiveInfinity are
not valid scores. Certain collectors (eg <see cref="T:Lucene.Net.Search.TopScoreDocCollector"/>
) will not properly collect hits
with these scores.
</summary>
</member>
<member name="T:Lucene.Net.Search.DocIdSetIterator">
<summary> This abstract class defines methods to iterate over a set of non-decreasing
doc ids. Note that this class assumes it iterates on doc Ids, and therefore
<see cref="F:Lucene.Net.Search.DocIdSetIterator.NO_MORE_DOCS"/> is set to Int32.MaxValue in order to be used as
a sentinel object. Implementations of this class are expected to consider
<see cref="F:System.Int32.MaxValue"/> as an invalid value.
</summary>
</member>
<member name="F:Lucene.Net.Search.DocIdSetIterator.NO_MORE_DOCS">
<summary> When returned by <see cref="M:Lucene.Net.Search.DocIdSetIterator.NextDoc"/>, <see cref="M:Lucene.Net.Search.DocIdSetIterator.Advance(System.Int32)"/> and
<see cref="M:Lucene.Net.Search.DocIdSetIterator.DocID"/> it means there are no more docs in the iterator.
</summary>
</member>
<member name="M:Lucene.Net.Search.DocIdSetIterator.DocID">
<summary> Returns the following:
<list type="bullet">
<item>-1 or <see cref="F:Lucene.Net.Search.DocIdSetIterator.NO_MORE_DOCS"/> if <see cref="M:Lucene.Net.Search.DocIdSetIterator.NextDoc"/> or
<see cref="M:Lucene.Net.Search.DocIdSetIterator.Advance(System.Int32)"/> were not called yet.</item>
<item><see cref="F:Lucene.Net.Search.DocIdSetIterator.NO_MORE_DOCS"/> if the iterator has exhausted.</item>
<item>Otherwise it should return the doc ID it is currently on.</item>
</list>
<p/>
</summary>
</member>
<member name="M:Lucene.Net.Search.DocIdSetIterator.NextDoc">
<summary> Advances to the next document in the set and returns the doc it is
currently on, or <see cref="F:Lucene.Net.Search.DocIdSetIterator.NO_MORE_DOCS"/> if there are no more docs in the
set.<br/>
<b>NOTE:</b> after the iterator has exhausted you should not call this
method, as it may result in unpredictable behavior.
</summary>
</member>
<member name="M:Lucene.Net.Search.DocIdSetIterator.Advance(System.Int32)">
<summary> Advances to the first beyond the current whose document number is greater
than or equal to <i>target</i>. Returns the current document number or
<see cref="F:Lucene.Net.Search.DocIdSetIterator.NO_MORE_DOCS"/> if there are no more docs in the set.
<p/>
Behaves as if written:
<code>
int advance(int target) {
  int doc;
  while ((doc = nextDoc()) < target) {
  }
  return doc;
}
</code>
Some implementations are considerably more efficient than that.
<p/>
<b>NOTE:</b> certain implementations may return a different value (each
time) if called several times in a row with the same target.
<p/>
<b>NOTE:</b> this method may be called with <see cref="F:Lucene.Net.Search.DocIdSetIterator.NO_MORE_DOCS"/> for
efficiency by some Scorers. If your implementation cannot efficiently
determine that it should exhaust, it is recommended that you check for that
value in each call to this method.
<p/>
<b>NOTE:</b> after the iterator has exhausted you should not call this
method, as it may result in unpredictable behavior.
<p/>
</summary>
<since>2.9</since>
</member>
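The contract above (DocID before iteration, NextDoc, Advance, and the NO_MORE_DOCS sentinel) can be sketched with a minimal, self-contained iterator over a sorted list. The class below is illustrative only, not the Lucene.Net type:

```python
NO_MORE_DOCS = 2**31 - 1  # Int32.MaxValue, used as a sentinel

class ListDocIdSetIterator:
    """Iterates non-decreasing doc ids taken from a sorted list."""

    def __init__(self, doc_ids):
        self.doc_ids = sorted(doc_ids)
        self.index = -1

    def doc_id(self):
        if self.index < 0:
            return -1            # next_doc/advance not called yet
        if self.index >= len(self.doc_ids):
            return NO_MORE_DOCS  # iterator exhausted
        return self.doc_ids[self.index]

    def next_doc(self):
        self.index += 1
        return self.doc_id()

    def advance(self, target):
        # the reference behavior: call next_doc until the doc is >= target
        doc = self.next_doc()
        while doc < target:
            doc = self.next_doc()
        return doc

it = ListDocIdSetIterator([3, 9, 12])
```

Because NO_MORE_DOCS is Int32.MaxValue, the `advance` loop terminates naturally once the iterator is exhausted, with no extra bounds check.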
<member name="M:Lucene.Net.Search.Scorer.#ctor(Lucene.Net.Search.Similarity)">
<summary>Constructs a Scorer.</summary>
<param name="similarity">The <c>Similarity</c> implementation used by this scorer.
</param>
</member>
<member name="M:Lucene.Net.Search.Scorer.Score(Lucene.Net.Search.Collector)">
<summary>Scores and collects all matching documents.</summary>
<param name="collector">The collector to which all matching documents are passed.
</param>
</member>
<member name="M:Lucene.Net.Search.Scorer.Score(Lucene.Net.Search.Collector,System.Int32,System.Int32)">
<summary> Expert: Collects matching documents in a range. Hook for optimization.
Note, <paramref name="firstDocID"/> is added to ensure that <see cref="M:Lucene.Net.Search.DocIdSetIterator.NextDoc"/>
was called before this method.
</summary>
<param name="collector">The collector to which all matching documents are passed.
</param>
<param name="max">Do not score documents past this.
</param>
<param name="firstDocID">
The first document ID (ensures <see cref="M:Lucene.Net.Search.DocIdSetIterator.NextDoc"/> is called before
this method).
</param>
<returns> true if more matching documents may remain.
</returns>
</member>
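The range hook can be sketched as the simple loop the contract implies: collect from firstDocID up to (but not including) max, then report whether the scorer may still have matches. The names below are illustrative, not the actual implementation:

```python
NO_MORE_DOCS = 2**31 - 1

class ListScorer:
    """Toy scorer that matches the given sorted doc ids."""

    def __init__(self, docs):
        self.docs = list(docs)
        self.i = -1

    def next_doc(self):
        self.i += 1
        return self.docs[self.i] if self.i < len(self.docs) else NO_MORE_DOCS

def score_range(scorer, collector, max_doc, first_doc_id):
    # first_doc_id must come from a prior next_doc() call
    doc = first_doc_id
    while doc < max_doc:
        collector.collect(doc)
        doc = scorer.next_doc()
    return doc != NO_MORE_DOCS  # True if more matches may remain

collected = []

class ListCollector:
    def collect(self, doc):
        collected.append(doc)

s = ListScorer([1, 4, 7, 11])
more = score_range(s, ListCollector(), 10, s.next_doc())
```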
<member name="M:Lucene.Net.Search.Scorer.Score">
<summary>Returns the score of the current document matching the query.
Initially invalid, until <see cref="M:Lucene.Net.Search.DocIdSetIterator.NextDoc"/> or <see cref="M:Lucene.Net.Search.DocIdSetIterator.Advance(System.Int32)"/>
is called the first time, or when called from within
<see cref="M:Lucene.Net.Search.Collector.Collect(System.Int32)"/>.
</summary>
</member>
<member name="P:Lucene.Net.Search.Scorer.Similarity">
<summary>Returns the Similarity implementation used by this scorer. </summary>
</member>
<member name="T:Lucene.Net.Search.Collector">
<summary> <p/>Expert: Collectors are primarily meant to be used to
gather raw results from a search, and implement sorting
or custom result filtering, collation, etc. <p/>
<p/>Lucene's core collectors are derived from Collector.
Likely your application can use one of these classes, or
subclass <see cref="T:Lucene.Net.Search.TopDocsCollector`1"/>, instead of
implementing Collector directly:
<list type="bullet">
<item><see cref="T:Lucene.Net.Search.TopDocsCollector`1"/> is an abstract base class
that assumes you will retrieve the top N docs,
according to some criteria, after collection is
done. </item>
<item><see cref="T:Lucene.Net.Search.TopScoreDocCollector"/> is a concrete subclass of
<see cref="T:Lucene.Net.Search.TopDocsCollector`1"/> and sorts according to score +
docID. This is used internally by the <see cref="T:Lucene.Net.Search.IndexSearcher"/>
search methods that do not take an
explicit <see cref="T:Lucene.Net.Search.Sort"/>. It is likely the most frequently
used collector.</item>
<item><see cref="T:Lucene.Net.Search.TopFieldCollector"/> subclasses <see cref="T:Lucene.Net.Search.TopDocsCollector`1"/>
and sorts according to a specified
<see cref="T:Lucene.Net.Search.Sort"/> object (sort by field). This is used
internally by the <see cref="T:Lucene.Net.Search.IndexSearcher"/> search methods
that take an explicit <see cref="T:Lucene.Net.Search.Sort"/>.</item>
<item><see cref="T:Lucene.Net.Search.TimeLimitingCollector"/>, which wraps any other
Collector and aborts the search if it's taken too much
time.</item>
<item><see cref="T:Lucene.Net.Search.PositiveScoresOnlyCollector"/> wraps any other
Collector and prevents collection of hits whose score
is <= 0.0</item>
</list>
<p/>Collector decouples the score from the collected doc:
the score computation is skipped entirely if it's not
needed. Collectors that do need the score should
implement the <see cref="M:Lucene.Net.Search.Collector.SetScorer(Lucene.Net.Search.Scorer)"/> method, to hold onto the
passed <see cref="T:Lucene.Net.Search.Scorer"/> instance, and call <see cref="M:Lucene.Net.Search.Scorer.Score"/>
within the collect method to compute the
current hit's score. If your collector may request the
score for a single hit multiple times, you should use
<see cref="T:Lucene.Net.Search.ScoreCachingWrappingScorer"/>. <p/>
<p/><b>NOTE:</b> The doc that is passed to the collect
method is relative to the current reader. If your
collector needs to resolve this to the docID space of the
Multi*Reader, you must re-base it by recording the
docBase from the most recent SetNextReader call. Here's
a simple example showing how to collect docIDs into a
BitArray:<p/>
<code>
// Collects docIDs into a BitArray, re-basing each segment-relative
// doc id with the docBase recorded in SetNextReader.
public class BitArrayCollector : Collector
{
    private readonly System.Collections.BitArray bits;
    private int docBase;

    public BitArrayCollector(System.Collections.BitArray bits) { this.bits = bits; }

    // ignore scorer
    public override void SetScorer(Scorer scorer) { }

    // accept docs out of order (for a BitArray it doesn't matter)
    public override bool AcceptsDocsOutOfOrder { get { return true; } }

    public override void Collect(int doc) { bits.Set(doc + docBase, true); }

    public override void SetNextReader(IndexReader reader, int docBase) { this.docBase = docBase; }
}

Searcher searcher = new IndexSearcher(indexReader);
var bits = new System.Collections.BitArray(indexReader.MaxDoc);
searcher.Search(query, new BitArrayCollector(bits));
</code>
</code>
<p/>Not all collectors will need to rebase the docID. For
example, a collector that simply counts the total number
of hits would skip it.<p/>
<p/><b>NOTE:</b> Prior to 2.9, Lucene silently filtered
out hits with score <= 0. As of 2.9, the core Collectors
no longer do that. It's very unusual to have such hits
(a negative query boost, or function query returning
negative custom scores, could cause it to happen). If
you need that behavior, use <see cref="T:Lucene.Net.Search.PositiveScoresOnlyCollector"/>
.<p/>
<p/><b>NOTE:</b> This API is experimental and might change
in incompatible ways in the next release.<p/>
</summary>
<since> 2.9
</since>
</member>
<member name="M:Lucene.Net.Search.Collector.SetScorer(Lucene.Net.Search.Scorer)">
<summary> Called before successive calls to <see cref="M:Lucene.Net.Search.Collector.Collect(System.Int32)"/>. Implementations
that need the score of the current document (passed-in to
<see cref="M:Lucene.Net.Search.Collector.Collect(System.Int32)"/>), should save the passed-in Scorer and call
<see cref="M:Lucene.Net.Search.Scorer.Score"/> when needed.
</summary>
</member>
<member name="M:Lucene.Net.Search.Collector.Collect(System.Int32)">
<summary> Called once for every document matching a query, with the unbased document
number.
<p/>
Note: This is called in an inner search loop. For good search performance,
implementations of this method should not call <see cref="M:Lucene.Net.Search.Searcher.Doc(System.Int32)"/> or
<see cref="M:Lucene.Net.Index.IndexReader.Document(System.Int32)"/> on every hit.
Doing so can slow searches by an order of magnitude or more.
</summary>
</member>
<member name="M:Lucene.Net.Search.Collector.SetNextReader(Lucene.Net.Index.IndexReader,System.Int32)">
<summary> Called before collecting from each IndexReader. All doc ids in
<see cref="M:Lucene.Net.Search.Collector.Collect(System.Int32)"/> will correspond to reader.
Add docBase to the current IndexReader's internal document id to re-base ids
in <see cref="M:Lucene.Net.Search.Collector.Collect(System.Int32)"/>.
</summary>
<param name="reader">next IndexReader
</param>
<param name="docBase">base doc id of <paramref name="reader"/>; add it to ids passed to <see cref="M:Lucene.Net.Search.Collector.Collect(System.Int32)"/> to re-base them
</param>
</member>
<member name="P:Lucene.Net.Search.Collector.AcceptsDocsOutOfOrder">
<summary>
Return <c>true</c> if this collector does not
require the matching docIDs to be delivered in int sort
order (smallest to largest) to <see cref="M:Lucene.Net.Search.Collector.Collect(System.Int32)"/>.
<p/> Most Lucene Query implementations will visit
matching docIDs in order. However, some queries
(currently limited to certain cases of <see cref="T:Lucene.Net.Search.BooleanQuery"/>)
can achieve faster searching if the
<c>Collector</c> allows them to deliver the
docIDs out of order.
<p/> Many collectors don't mind getting docIDs out of
order, so it's important to return <c>true</c>
here.
</summary>
<value> </value>
</member>
<member name="T:Lucene.Net.Search.BooleanScorer.BucketTable">
<summary>A simple hash table of document scores within a range. </summary>
</member>
<member name="T:Lucene.Net.Search.BooleanScorer2">
<summary>An alternative to BooleanScorer that also allows a minimum number
of optional scorers that should match.
<br/>Implements skipTo(), and has no limitations on the number of added scorers.
<br/>Uses ConjunctionScorer, DisjunctionScorer, ReqOptScorer and ReqExclScorer.
</summary>
</member>
<member name="F:Lucene.Net.Search.BooleanScorer2.countingSumScorer">
<summary>The scorer to which all scoring will be delegated,
except for computing and using the coordination factor.
</summary>
</member>
<member name="F:Lucene.Net.Search.BooleanScorer2.minNrShouldMatch">
<summary>The number of optionalScorers that need to match (if there are any) </summary>
</member>
<member name="M:Lucene.Net.Search.BooleanScorer2.#ctor(Lucene.Net.Search.Similarity,System.Int32,System.Collections.Generic.List{Lucene.Net.Search.Scorer},System.Collections.Generic.List{Lucene.Net.Search.Scorer},System.Collections.Generic.List{Lucene.Net.Search.Scorer})">
<summary> Creates a <see cref="T:Lucene.Net.Search.Scorer"/> with the given similarity and lists of required,
prohibited and optional scorers. If no required scorers are added, at least
one of the optional scorers will have to match during the search.
</summary>
<param name="similarity">The similarity to be used.
</param>
<param name="minNrShouldMatch">The minimum number of optional added scorers that should match
during the search. In case no required scorers are added, at least
one of the optional scorers will have to match during the search.
</param>
<param name="required">the list of required scorers.
</param>
<param name="prohibited">the list of prohibited scorers.
</param>
<param name="optional">the list of optional scorers.
</param>
</member>
<member name="M:Lucene.Net.Search.BooleanScorer2.MakeCountingSumScorer">
<summary>Returns the scorer to be used for match counting and score summing.
Uses requiredScorers, optionalScorers and prohibitedScorers.
</summary>
</member>
<member name="M:Lucene.Net.Search.BooleanScorer2.AddProhibitedScorers(Lucene.Net.Search.Scorer)">
<summary>Returns the scorer to be used for match counting and score summing.
Uses the given required scorer and the prohibitedScorers.
</summary>
<param name="requiredCountingSumScorer">A required scorer already built.
</param>
</member>
<member name="M:Lucene.Net.Search.BooleanScorer2.Score(Lucene.Net.Search.Collector)">
<summary>Scores and collects all matching documents.</summary>
<param name="collector">The collector to which all matching documents are passed through.
</param>
</member>
<member name="T:Lucene.Net.Search.DisjunctionSumScorer">
<summary>A Scorer for OR like queries, counterpart of <c>ConjunctionScorer</c>.
This Scorer implements <see cref="M:Lucene.Net.Search.DocIdSetIterator.Advance(System.Int32)"/> and uses skipTo() on the given Scorers.
</summary>
</member>
<member name="F:Lucene.Net.Search.DisjunctionSumScorer.nrScorers">
<summary>The number of subscorers. </summary>
</member>
<member name="F:Lucene.Net.Search.DisjunctionSumScorer.subScorers">
<summary>The subscorers. </summary>
</member>
<member name="F:Lucene.Net.Search.DisjunctionSumScorer.minimumNrMatchers">
<summary>The minimum number of scorers that should match. </summary>
</member>
<member name="F:Lucene.Net.Search.DisjunctionSumScorer.scorerDocQueue">
<summary>The scorerDocQueue contains all subscorers ordered by their current doc(),
with the minimum at the top.
<br/>The scorerDocQueue is initialized the first time next() or skipTo() is called.
<br/>An exhausted scorer is immediately removed from the scorerDocQueue.
<br/>If fewer than minimumNrMatchers scorers
remain in the scorerDocQueue, next() and skipTo() return false.
<p/>
After each call to next() or skipTo()
<c>currentSumScore</c> is the total score of the current matching doc,
<c>nrMatchers</c> is the number of matching scorers,
and all scorers are after the matching doc, or are exhausted.
</summary>
</member>
<member name="F:Lucene.Net.Search.DisjunctionSumScorer.currentDoc">
<summary>The document number of the current match. </summary>
</member>
<member name="F:Lucene.Net.Search.DisjunctionSumScorer.nrMatchers">
<summary>The number of subscorers that provide the current match. </summary>
</member>
<member name="M:Lucene.Net.Search.DisjunctionSumScorer.#ctor(System.Collections.Generic.IList{Lucene.Net.Search.Scorer},System.Int32)">
<summary>Construct a <c>DisjunctionScorer</c>.</summary>
<param name="subScorers">A collection of at least two subscorers.
</param>
<param name="minimumNrMatchers">The positive minimum number of subscorers that should
match to match this query.
<br/>When <c>minimumNrMatchers</c> is bigger than
the number of <c>subScorers</c>,
no matches will be produced.
<br/>When minimumNrMatchers equals the number of subScorers,
it is more efficient to use <c>ConjunctionScorer</c>.
</param>
</member>
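The minimumNrMatchers contract can be sketched by counting, per document, how many sub-scorers match; the real DisjunctionSumScorer keeps its sub-scorers in a queue ordered by current doc, but the observable result is the same. The names below are illustrative:

```python
from collections import Counter

def disjunction_matches(sub_scorer_docs, minimum_nr_matchers=1):
    """Return (doc, nr_matchers) pairs for docs matched by at least
    minimum_nr_matchers of the given sub-scorers' doc id lists."""
    counts = Counter()
    for docs in sub_scorer_docs:
        counts.update(set(docs))
    return sorted((doc, n) for doc, n in counts.items()
                  if n >= minimum_nr_matchers)

# Three sub-scorers; require at least two of them to match.
matches = disjunction_matches([[1, 3, 5], [3, 5, 8], [5, 9]],
                              minimum_nr_matchers=2)
```

As the documentation notes, asking for more matchers than there are sub-scorers produces no matches at all.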
<member name="M:Lucene.Net.Search.DisjunctionSumScorer.#ctor(System.Collections.Generic.IList{Lucene.Net.Search.Scorer})">
<summary>Construct a <c>DisjunctionScorer</c>, using one as the minimum number
of matching subscorers.
</summary>
</member>
<member name="M:Lucene.Net.Search.DisjunctionSumScorer.InitScorerDocQueue">
<summary>Called the first time next() or skipTo() is called to
initialize <c>scorerDocQueue</c>.
</summary>
</member>
<member name="M:Lucene.Net.Search.DisjunctionSumScorer.Score(Lucene.Net.Search.Collector)">
<summary>Scores and collects all matching documents.</summary>
<param name="collector">The collector to which all matching documents are passed through.</param>
</member>
<member name="M:Lucene.Net.Search.DisjunctionSumScorer.Score(Lucene.Net.Search.Collector,System.Int32,System.Int32)">
<summary>Expert: Collects matching documents in a range. Hook for optimization.
Note that <see cref="M:Lucene.Net.Search.DisjunctionSumScorer.NextDoc"/> must be called once before this method is called
for the first time.
</summary>
<param name="collector">The collector to which all matching documents are passed through.
</param>
<param name="max">Do not score documents past this.
</param>
<param name="firstDocID"></param>
<returns> true if more matching documents may remain.
</returns>
</member>
<member name="M:Lucene.Net.Search.DisjunctionSumScorer.AdvanceAfterCurrent">
<summary>Advance all subscorers after the current document determined by the
top of the <c>scorerDocQueue</c>.
Repeat until at least the minimum number of subscorers match on the same
document and all subscorers are after that document or are exhausted.
<br/>On entry the <c>scorerDocQueue</c> has at least <c>minimumNrMatchers</c>
available. At least the scorer with the minimum document number will be advanced.
</summary>
<returns> true iff there is a match.
<br/>In case there is a match, <c>currentDoc</c>, <c>currentSumScore</c>,
and <c>nrMatchers</c> describe the match.
TODO: Investigate whether it is possible to use skipTo() when
the minimum number of matchers is bigger than one, i.e. try to use the
character of ConjunctionScorer for the minimum number of matchers.
Also delay calling score() on the sub scorers until the minimum number of
matchers is reached.
<br/>For this, a Scorer array with minimumNrMatchers elements might
hold Scorers at currentDoc that are temporarily popped from scorerQueue.
</returns>
</member>
<member name="M:Lucene.Net.Search.DisjunctionSumScorer.Score">
<summary>Returns the score of the current document matching the query.
Initially invalid, until <see cref="M:Lucene.Net.Search.DisjunctionSumScorer.NextDoc"/> is called the first time.
</summary>
</member>
<member name="M:Lucene.Net.Search.DisjunctionSumScorer.NrMatchers">
<summary>Returns the number of subscorers matching the current document.
Initially invalid, until <see cref="M:Lucene.Net.Search.DisjunctionSumScorer.NextDoc"/> is called the first time.
</summary>
</member>
<member name="M:Lucene.Net.Search.DisjunctionSumScorer.Advance(System.Int32)">
<summary> Advances to the first match beyond the current whose document number is
greater than or equal to a given target. <br/>
The implementation uses the skipTo() method on the subscorers.
</summary>
<param name="target">The target document number.
</param>
<returns> the document whose number is greater than or equal to the given
target, or -1 if none exist.
</returns>
</member>
<member name="T:Lucene.Net.Search.ConjunctionScorer">
<summary>Scorer for conjunctions, sets of queries, all of which are required. </summary>
</member>
<member name="T:Lucene.Net.Search.BooleanScorer2.SingleMatchScorer">
<summary>Count a scorer as a single match. </summary>
</member>
<member name="T:Lucene.Net.Search.CachingSpanFilter">
<summary> Wraps another SpanFilter's result and caches it. The purpose is to allow
filters to simply filter, and then wrap with this class to add caching.
</summary>
</member>
<member name="T:Lucene.Net.Search.SpanFilter">
<summary>Abstract base class providing a mechanism to restrict searches to a subset
of an index and also maintains and returns position information.
This is useful if you want to compare the positions from a SpanQuery with the positions of items in
a filter. For instance, if you had a SpanFilter that marked all the occurrences of the word "foo" in documents,
and then you issued a new SpanQuery containing "bar", you could not only filter by the word "foo", but you could
then compare position information for post-processing.
</summary>
</member>
<member name="T:Lucene.Net.Search.Filter">
<summary>Abstract base class for restricting which documents may be returned during searching.</summary>
</member>
<member name="M:Lucene.Net.Search.Filter.GetDocIdSet(Lucene.Net.Index.IndexReader)">
<summary>
<para>Creates a <see cref="T:Lucene.Net.Search.DocIdSet"/> enumerating the documents that should be
permitted in search results. <b>NOTE:</b> null can be
returned if no documents are accepted by this Filter.</para>
<p/>
<para>Note: This method will be called once per segment in
the index during searching. The returned <see cref="T:Lucene.Net.Search.DocIdSet"/>
must refer to document IDs for that segment, not for
the top-level reader.</para>
</summary>
<returns> a DocIdSet that provides the documents which should be permitted or
prohibited in search results. <b>NOTE:</b> null can be returned if
no documents will be accepted by this Filter.
</returns>
<param name="reader">
A <see cref="T:Lucene.Net.Index.IndexReader"/> instance opened on the index currently
searched on. Note, it is likely that the provided reader does not
represent the whole underlying index, i.e. if the index has more than
one segment the given reader only represents a single segment.
</param>
<seealso cref="T:Lucene.Net.Util.DocIdBitSet">
</seealso>
</member>
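The per-segment note can be illustrated by re-basing segment-local ids into the top-level id space. The helper below is illustrative only; in Lucene.Net the doc base is what <see cref="M:Lucene.Net.Search.Collector.SetNextReader(Lucene.Net.Index.IndexReader,System.Int32)"/> receives:

```python
def rebase_segment_hits(segments):
    """segments: list of (max_doc, segment-local matching doc ids).
    Returns the matching ids in the top-level reader's id space."""
    top_level, doc_base = [], 0
    for max_doc, local_docs in segments:
        top_level.extend(doc + doc_base for doc in local_docs)
        doc_base += max_doc  # the next segment starts after this one
    return top_level

# Two segments of 5 and 4 docs; the second segment's hits shift by 5.
hits = rebase_segment_hits([(5, [0, 2]), (4, [1, 3])])
```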
<member name="M:Lucene.Net.Search.SpanFilter.BitSpans(Lucene.Net.Index.IndexReader)">
<summary>Returns a SpanFilterResult with true for documents which should be permitted in
search results, false for those that should not, and Spans for where the permitted docs match.
</summary>
<param name="reader">The <see cref="T:Lucene.Net.Index.IndexReader"/> to load position and DocIdSet information from
</param>
<returns> A <see cref="T:Lucene.Net.Search.SpanFilterResult"/>
</returns>
<throws> IOException if there was an issue accessing the necessary information </throws>
</member>
<member name="F:Lucene.Net.Search.CachingSpanFilter.cache">
<summary> A transient Filter cache (internal because of test)</summary>
</member>
<member name="M:Lucene.Net.Search.CachingSpanFilter.#ctor(Lucene.Net.Search.SpanFilter)">
<summary>
New deletions always result in a cache miss, by default
(<see cref="F:Lucene.Net.Search.CachingWrapperFilter.DeletesMode.RECACHE"/>).
</summary>
<param name="filter">Filter to cache results of
</param>
</member>
<member name="M:Lucene.Net.Search.CachingSpanFilter.#ctor(Lucene.Net.Search.SpanFilter,Lucene.Net.Search.CachingWrapperFilter.DeletesMode)">
<summary>Constructs a CachingSpanFilter with the specified <paramref name="deletesMode"/>, which controls how new deletions are handled.</summary>
<param name="filter">Filter to cache results of</param>
<param name="deletesMode">See <see cref="T:Lucene.Net.Search.CachingWrapperFilter.DeletesMode"/></param>
</member>
<member name="T:Lucene.Net.Search.CachingWrapperFilter">
<summary> Wraps another filter's result and caches it. The purpose is to allow
filters to simply filter, and then wrap with this class to add caching.
</summary>
</member>
<member name="M:Lucene.Net.Search.CachingWrapperFilter.#ctor(Lucene.Net.Search.Filter)">
<summary>
New deletes are ignored by default, which gives higher
cache hit rate on reopened readers. Most of the time
this is safe, because the filter will be AND'd with a
Query that fully enforces deletions. If instead you
need this filter to always enforce deletions, pass
either <see cref="F:Lucene.Net.Search.CachingWrapperFilter.DeletesMode.RECACHE"/> or
<see cref="F:Lucene.Net.Search.CachingWrapperFilter.DeletesMode.DYNAMIC"/>.
</summary>
<param name="filter">Filter to cache results of</param>
</member>
<member name="M:Lucene.Net.Search.CachingWrapperFilter.#ctor(Lucene.Net.Search.Filter,Lucene.Net.Search.CachingWrapperFilter.DeletesMode)">
<summary>
Expert: by default, the cached filter will be shared
across reopened segments that only had changes to their
deletions.
</summary>
<param name="filter">Filter to cache results of</param>
<param name="deletesMode">See <see cref="T:Lucene.Net.Search.CachingWrapperFilter.DeletesMode"/></param>
</member>
<member name="M:Lucene.Net.Search.CachingWrapperFilter.DocIdSetToCache(Lucene.Net.Search.DocIdSet,Lucene.Net.Index.IndexReader)">
<summary>Provide the DocIdSet to be cached, using the DocIdSet provided
by the wrapped Filter.
This implementation returns the given DocIdSet.
</summary>
</member>
<member name="T:Lucene.Net.Search.CachingWrapperFilter.DeletesMode">
<summary>Expert: Specifies how new deletions against a reopened
reader should be handled.
<para>The default is IGNORE, which means the cache entry
will be re-used for a given segment, even when that
segment has been reopened due to changes in deletions.
This is a big performance gain, especially with
near-real-time readers, since you don't hit a cache
miss on every reopened reader for prior segments.</para>
<para>However, in some cases this can cause invalid query
results, allowing deleted documents to be returned.
This only happens if the main query does not rule out
deleted documents on its own, such as a top-level
ConstantScoreQuery. To fix this, use RECACHE to
re-create the cached filter (at a higher per-reopen
cost, but with faster subsequent search performance), or
use DYNAMIC to dynamically intersect deleted docs (fast
reopen time but some hit to search performance).</para></summary>
</member>
<member name="T:Lucene.Net.Search.FilteredDocIdSet">
<summary> Abstract decorator class for a DocIdSet implementation
that provides on-demand filtering/validation
mechanism on a given DocIdSet.
<p/>
Technically, this same functionality could be achieved
with ChainedFilter (under contrib/misc), however the
benefit of this class is it never materializes the full
bitset for the filter. Instead, the <see cref="M:Lucene.Net.Search.FilteredDocIdSet.Match(System.Int32)"/>
method is invoked on-demand, per docID visited during
searching. If you know few docIDs will be visited, and
the logic behind <see cref="M:Lucene.Net.Search.FilteredDocIdSet.Match(System.Int32)"/> is relatively costly,
this may be a better way to filter than ChainedFilter.
</summary>
<seealso cref="T:Lucene.Net.Search.DocIdSet">
</seealso>
</member>
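The on-demand principle behind FilteredDocIdSet can be sketched in a few lines: rather than materializing a full bitset, a predicate is consulted per docID as it is visited. This Python sketch uses assumed stand-in names (`inner_iter`, `match`), not Lucene APIs.

```python
def filtered_doc_ids(inner_iter, match):
    """Lazily filter doc IDs from an underlying iterator.

    Mirrors the FilteredDocIdSet idea: no full bitset is built;
    the match() predicate is invoked on demand, per docID visited.
    """
    for doc_id in inner_iter:
        if match(doc_id):  # potentially costly check, run only when visited
            yield doc_id

# Example: keep only even doc IDs out of those the inner set produces
hits = list(filtered_doc_ids(iter([1, 2, 3, 8, 11]), lambda d: d % 2 == 0))
# hits == [2, 8]
```

If few docIDs are ever visited, only those few predicate calls are paid for, which is the stated advantage over a ChainedFilter-style materialized bitset.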
<member name="T:Lucene.Net.Search.DocIdSet">
<summary> A DocIdSet contains a set of doc ids. Implementing classes must
only implement <see cref="M:Lucene.Net.Search.DocIdSet.Iterator"/> to provide access to the set.
</summary>
</member>
<member name="F:Lucene.Net.Search.DocIdSet.EMPTY_DOCIDSET">
<summary>An empty <see cref="T:Lucene.Net.Search.DocIdSet"/> instance for easy use, e.g. in Filters that hit no documents. </summary>
</member>
<member name="M:Lucene.Net.Search.DocIdSet.Iterator">
<summary>Provides a <see cref="T:Lucene.Net.Search.DocIdSetIterator"/> to access the set.
This implementation can return <c>null</c> or
<c>EMPTY_DOCIDSET.Iterator()</c> if there
are no docs that match.
</summary>
</member>
<member name="P:Lucene.Net.Search.DocIdSet.IsCacheable">
<summary>This method is a hint for <see cref="T:Lucene.Net.Search.CachingWrapperFilter"/>, if this <c>DocIdSet</c>
should be cached without copying it into a BitSet. The default is to return
<c>false</c>. If you have your own <c>DocIdSet</c> implementation
that iterates very efficiently without doing disk I/O,
override this method and return <c>true</c>.
</summary>
</member>
<member name="M:Lucene.Net.Search.FilteredDocIdSet.#ctor(Lucene.Net.Search.DocIdSet)">
<summary> Constructor.</summary>
<param name="innerSet">Underlying DocIdSet
</param>
</member>
<member name="M:Lucene.Net.Search.FilteredDocIdSet.Match(System.Int32)">
<summary> Validation method to determine whether a docid should be in the result set.</summary>
<param name="docid">docid to be tested
</param>
<returns> true if input docid should be in the result set, false otherwise.
</returns>
</member>
<member name="M:Lucene.Net.Search.FilteredDocIdSet.Iterator">
<summary> Implementation of the contract to build a DocIdSetIterator.</summary>
<seealso cref="T:Lucene.Net.Search.DocIdSetIterator">
</seealso>
<seealso cref="T:Lucene.Net.Search.FilteredDocIdSetIterator">
</seealso>
</member>
<member name="P:Lucene.Net.Search.FilteredDocIdSet.IsCacheable">
<summary>This DocIdSet implementation is cacheable if the inner set is cacheable. </summary>
</member>
<member name="T:Lucene.Net.Search.FilteredDocIdSetIterator">
<summary> Abstract decorator class of a DocIdSetIterator
implementation that provides on-demand filter/validation
mechanism on an underlying DocIdSetIterator. See <see cref="T:Lucene.Net.Search.FilteredDocIdSet"/>.
</summary>
</member>
<member name="M:Lucene.Net.Search.FilteredDocIdSetIterator.#ctor(Lucene.Net.Search.DocIdSetIterator)">
<summary> Constructor.</summary>
<param name="innerIter">Underlying DocIdSetIterator.
</param>
</member>
<member name="M:Lucene.Net.Search.FilteredDocIdSetIterator.Match(System.Int32)">
<summary> Validation method to determine whether a docid should be in the result set.</summary>
<param name="doc">docid to be tested
</param>
<returns> true if input docid should be in the result set, false otherwise.
</returns>
<seealso cref="M:Lucene.Net.Search.FilteredDocIdSetIterator.#ctor(Lucene.Net.Search.DocIdSetIterator)">
</seealso>
</member>
<member name="T:Lucene.Net.Search.ComplexExplanation">
<summary>Expert: Describes the score computation for document and query, and
can distinguish a match independent of a positive value.
</summary>
</member>
<member name="P:Lucene.Net.Search.ComplexExplanation.Match">
<summary> The match status of this explanation node.</summary>
<value> May be null if match status is unknown
</value>
</member>
<member name="P:Lucene.Net.Search.ComplexExplanation.IsMatch">
<summary> Indicates whether or not this Explanation models a good match.
<p/>
If the match status is explicitly set (i.e.: not null) this method
uses it; otherwise it defers to the superclass.
<p/>
</summary>
</member>
<member name="T:Lucene.Net.Search.ConstantScoreQuery">
<summary> A query that wraps a filter and simply returns a constant score equal to the
query boost for every document in the filter.
</summary>
</member>
<member name="M:Lucene.Net.Search.ConstantScoreQuery.ToString(System.String)">
<summary>Prints a user-readable version of this query. </summary>
</member>
<member name="M:Lucene.Net.Search.ConstantScoreQuery.Equals(System.Object)">
<summary>Returns true if <c>o</c> is equal to this. </summary>
</member>
<member name="M:Lucene.Net.Search.ConstantScoreQuery.GetHashCode">
<summary>Returns a hash code value for this object. </summary>
</member>
<member name="P:Lucene.Net.Search.ConstantScoreQuery.Filter">
<summary>Returns the encapsulated filter </summary>
</member>
<member name="T:Lucene.Net.Search.DefaultSimilarity">
<summary>Expert: Default scoring implementation. </summary>
</member>
<member name="M:Lucene.Net.Search.DefaultSimilarity.ComputeNorm(System.String,Lucene.Net.Index.FieldInvertState)">
<summary>Implemented as
<c>state.Boost*LengthNorm(numTerms)</c>, where
<c>numTerms</c> is <see cref="P:Lucene.Net.Index.FieldInvertState.Length"/> if <see cref="P:Lucene.Net.Search.DefaultSimilarity.DiscountOverlaps"/>
is false; otherwise it is <see cref="P:Lucene.Net.Index.FieldInvertState.Length"/>
minus <see cref="P:Lucene.Net.Index.FieldInvertState.NumOverlap"/>.
<p/><b>WARNING</b>: This API is new and experimental, and may suddenly
change.<p/>
</summary>
</member>
<member name="M:Lucene.Net.Search.DefaultSimilarity.LengthNorm(System.String,System.Int32)">
<summary>Implemented as <c>1/sqrt(numTerms)</c>. </summary>
</member>
<member name="M:Lucene.Net.Search.DefaultSimilarity.QueryNorm(System.Single)">
<summary>Implemented as <c>1/sqrt(sumOfSquaredWeights)</c>. </summary>
</member>
<member name="M:Lucene.Net.Search.DefaultSimilarity.Tf(System.Single)">
<summary>Implemented as <c>sqrt(freq)</c>. </summary>
</member>
<member name="M:Lucene.Net.Search.DefaultSimilarity.SloppyFreq(System.Int32)">
<summary>Implemented as <c>1 / (distance + 1)</c>. </summary>
</member>
<member name="M:Lucene.Net.Search.DefaultSimilarity.Idf(System.Int32,System.Int32)">
<summary>Implemented as <c>log(numDocs/(docFreq+1)) + 1</c>. </summary>
</member>
<member name="M:Lucene.Net.Search.DefaultSimilarity.Coord(System.Int32,System.Int32)">
<summary>Implemented as <c>overlap / maxOverlap</c>. </summary>
</member>
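The scoring formulas documented for the members above are plain arithmetic, so they can be transcribed directly. The following is an illustrative Python transcription of those formulas, not the Lucene.Net implementation itself.

```python
import math

def length_norm(num_terms):
    """LengthNorm: 1/sqrt(numTerms)."""
    return 1.0 / math.sqrt(num_terms)

def query_norm(sum_of_squared_weights):
    """QueryNorm: 1/sqrt(sumOfSquaredWeights)."""
    return 1.0 / math.sqrt(sum_of_squared_weights)

def tf(freq):
    """Tf: sqrt(freq)."""
    return math.sqrt(freq)

def sloppy_freq(distance):
    """SloppyFreq: 1 / (distance + 1)."""
    return 1.0 / (distance + 1)

def idf(num_docs, doc_freq):
    """Idf: log(numDocs/(docFreq+1)) + 1."""
    return math.log(num_docs / (doc_freq + 1)) + 1.0

def coord(overlap, max_overlap):
    """Coord: overlap / maxOverlap."""
    return overlap / max_overlap
```

For example, a term occurring 4 times contributes `tf(4) == 2.0`, and a 4-term field gets `length_norm(4) == 0.5`, so longer fields are damped rather than penalized linearly.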
<member name="F:Lucene.Net.Search.DefaultSimilarity.internalDiscountOverlaps">
<seealso cref="P:Lucene.Net.Search.DefaultSimilarity.DiscountOverlaps">
</seealso>
</member>
<member name="P:Lucene.Net.Search.DefaultSimilarity.DiscountOverlaps">
<summary>Determines whether overlap tokens (Tokens with
0 position increment) are ignored when computing
norm. By default this is false, meaning overlap
tokens are counted just like non-overlap tokens.
<p/><b>WARNING</b>: This API is new and experimental, and may suddenly
change.<p/>
</summary>
<seealso cref="M:Lucene.Net.Search.DefaultSimilarity.ComputeNorm(System.String,Lucene.Net.Index.FieldInvertState)">
</seealso>
</member>
<member name="T:Lucene.Net.Search.DisjunctionMaxQuery">
<summary> A query that generates the union of documents produced by its subqueries, and that scores each document with the maximum
score for that document as produced by any subquery, plus a tie breaking increment for any additional matching subqueries.
This is useful when searching for a word in multiple fields with different boost factors (so that the fields cannot be
combined equivalently into a single search field). We want the primary score to be the one associated with the highest boost,
not the sum of the field scores (as BooleanQuery would give).
If the query is "albino elephant" this ensures that "albino" matching one field and "elephant" matching
another gets a higher score than "albino" matching both fields.
To get this result, use both BooleanQuery and DisjunctionMaxQuery: for each term a DisjunctionMaxQuery searches for it in
each field, while the set of these DisjunctionMaxQuery's is combined into a BooleanQuery.
The tie breaker capability allows results that include the same term in multiple fields to be judged better than results that
include this term in only the best of those multiple fields, without confusing this with the better case of two different terms
in the multiple fields.
</summary>
</member>
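The per-document scoring rule described above (maximum subquery score, plus a tie-breaker increment for the other matching subqueries) can be sketched as follows. This is an illustrative Python sketch of the formula, not the Lucene.Net scorer.

```python
def disjunction_max_score(field_scores, tie_breaker=0.0):
    """Score one document over several per-field subquery scores.

    Per the DisjunctionMaxQuery description: take the maximum score,
    plus tie_breaker times each of the other matching scores.
    """
    if not field_scores:
        return 0.0
    best = max(field_scores)
    return best + tie_breaker * (sum(field_scores) - best)

# "albino elephant": one strong field match should beat weak matches
# spread across fields, unlike a plain BooleanQuery sum.
strong_one_field = disjunction_max_score([0.9, 0.2], tie_breaker=0.1)  # 0.92
weak_both_fields = disjunction_max_score([0.4, 0.4], tie_breaker=0.1)  # 0.44
```

With `tie_breaker = 0` this is a pure max; with `tie_breaker = 1` it degenerates into a BooleanQuery-style sum, which is why small values around 0.1 are suggested.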
<member name="M:Lucene.Net.Search.DisjunctionMaxQuery.#ctor(System.Single)">
<summary>Creates a new empty DisjunctionMaxQuery. Use Add() to add the subqueries.</summary>
<param name="tieBreakerMultiplier">the score of each non-maximum disjunct for a document is multiplied by this weight
and added into the final score. If non-zero, the value should be small, on the order of 0.1, which says that
10 occurrences of a word in a lower-scored field that is also in a higher-scored field is just as good as a unique
word in the lower-scored field (i.e., one that is not in any higher-scored field).
</param>
</member>
<member name="M:Lucene.Net.Search.DisjunctionMaxQuery.#ctor(System.Collections.Generic.ICollection{Lucene.Net.Search.Query},System.Single)">
<summary> Creates a new DisjunctionMaxQuery</summary>
<param name="disjuncts">a Collection&lt;Query&gt; of all the disjuncts to add
</param>
<param name="tieBreakerMultiplier"> the weight to give to each matching non-maximum disjunct
</param>
</member>
<member name="M:Lucene.Net.Search.DisjunctionMaxQuery.Add(Lucene.Net.Search.Query)">
<summary>Add a subquery to this disjunction</summary>
<param name="query">the disjunct added
</param>
</member>
<member name="M:Lucene.Net.Search.DisjunctionMaxQuery.Add(System.Collections.Generic.ICollection{Lucene.Net.Search.Query})">
<summary>Add a collection of disjuncts to this disjunction.
</summary>
</member>
<member name="M:Lucene.Net.Search.DisjunctionMaxQuery.GetEnumerator">
<summary>An IEnumerator&lt;Query&gt; over the disjuncts </summary>
</member>
<member name="M:Lucene.Net.Search.DisjunctionMaxQuery.Rewrite(Lucene.Net.Index.IndexReader)">
<summary>Optimize our representation and our subqueries representations</summary>
<param name="reader">the IndexReader we query
</param>
<returns> an optimized copy of us (which may not be a copy if there is nothing to optimize)
</returns>
</member>
<member name="M:Lucene.Net.Search.DisjunctionMaxQuery.Clone">
<summary>Create a shallow copy of us -- used in rewriting if necessary</summary>
<returns> a copy of us (but reuse, don't copy, our subqueries)
</returns>
</member>
<member name="M:Lucene.Net.Search.DisjunctionMaxQuery.ToString(System.String)">
<summary>Prettyprint us.</summary>
<param name="field">the field to which we are applied
</param>
<returns> a string that shows what we do, of the form "(disjunct1 | disjunct2 | ... | disjunctn)^boost"
</returns>
</member>
<member name="M:Lucene.Net.Search.DisjunctionMaxQuery.Equals(System.Object)">
<summary>Return true iff we represent the same query as o</summary>
<param name="o">another object
</param>
<returns> true iff o is a DisjunctionMaxQuery with the same boost and the same subqueries, in the same order, as us
</returns>
</member>
<member name="M:Lucene.Net.Search.DisjunctionMaxQuery.GetHashCode">
<summary>Compute a hash code for hashing us</summary>
<returns> the hash code
</returns>
</member>
<member name="T:Lucene.Net.Search.DisjunctionMaxQuery.DisjunctionMaxWeight">
<summary> Expert: the Weight for DisjunctionMaxQuery, used to
normalize, score and explain these queries.
<p/>NOTE: this API and implementation are subject to
change suddenly in the next release.<p/>
</summary>
</member>
<member name="F:Lucene.Net.Search.DisjunctionMaxQuery.DisjunctionMaxWeight.similarity">
<summary>The Similarity implementation. </summary>
</member>
<member name="F:Lucene.Net.Search.DisjunctionMaxQuery.DisjunctionMaxWeight.weights">
<summary>The Weights for our subqueries, in 1-1 correspondence with disjuncts </summary>
</member>
<member name="T:Lucene.Net.Search.DisjunctionMaxScorer">
<summary> The Scorer for DisjunctionMaxQuery. The union of all documents generated by the subquery scorers
is generated in document number order. The score for each document is the maximum of the scores computed
by the subquery scorers that generate that document, plus tieBreakerMultiplier times the sum of the scores
for the other subqueries that generate the document.
</summary>
</member>
<member name="M:Lucene.Net.Search.DisjunctionMaxScorer.#ctor(System.Single,Lucene.Net.Search.Similarity,Lucene.Net.Search.Scorer[],System.Int32)">
<summary> Creates a new instance of DisjunctionMaxScorer
</summary>
<param name="tieBreakerMultiplier">Multiplier applied to non-maximum-scoring subqueries for a
document as they are summed into the result.
</param>
<param name="similarity">-- not used since our definition involves neither coord nor terms
directly
</param>
<param name="subScorers">The sub scorers this Scorer should iterate on
</param>
<param name="numScorers">The actual number of scorers to iterate on. Note that the array's
length may be larger than the actual number of scorers.
</param>
</member>
<member name="M:Lucene.Net.Search.DisjunctionMaxScorer.Score">
<summary>Determine the current document score. Initially invalid, until <see cref="M:Lucene.Net.Search.DisjunctionMaxScorer.NextDoc"/> is called the first time.</summary>
<returns> the score of the current generated document
</returns>
</member>
<member name="T:Lucene.Net.Search.PhraseScorer">
<summary>Expert: Scoring functionality for phrase queries.
<br/>A document is considered matching if it contains the phrase-query terms
at "valid" positions. What "valid positions" are
depends on the type of the phrase query: for an exact phrase query terms are required
to appear in adjacent locations, while for a sloppy phrase query some distance between
the terms is allowed. The abstract method <see cref="M:Lucene.Net.Search.PhraseScorer.PhraseFreq"/> of extending classes
is invoked for each document containing all the phrase query terms, in order to
compute the frequency of the phrase query in that document. A non-zero frequency
means a match.
</summary>
</member>
<member name="M:Lucene.Net.Search.PhraseScorer.CurrentFreq">
<summary>
Phrase frequency in the current doc as computed by PhraseFreq().
</summary>
<returns>The phrase frequency for the current document.</returns>
</member>
<member name="M:Lucene.Net.Search.PhraseScorer.PhraseFreq">
<summary> For a document containing all the phrase query terms, compute the
frequency of the phrase in that document.
A non-zero frequency means a match.
<br/>Note that containing all phrase terms does not guarantee a match - they have to be found in matching locations.
</summary>
<returns> frequency of the phrase in current doc, 0 if not found.
</returns>
</member>
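The PhraseFreq() contract above (all terms present, yet the frequency, and hence the match, can still be zero) is easiest to see for the exact-phrase case, where terms must appear in adjacent positions. This is an illustrative Python sketch of that contract, not the Lucene.Net scorer; the position-list input format is an assumption.

```python
def exact_phrase_freq(term_positions):
    """Count exact-phrase occurrences in one document.

    term_positions[i] is the list of positions at which the i-th
    phrase term occurs. An occurrence starts at position p when
    term i appears at p + i for every i.
    """
    first, rest = term_positions[0], term_positions[1:]
    sets = [set(p) for p in rest]
    return sum(
        1
        for start in first
        if all(start + i + 1 in s for i, s in enumerate(sets))
    )

# "quick brown": adjacent at positions 0,1 -> freq 1
exact_phrase_freq([[0, 5], [1, 9]])
# both terms present but never adjacent -> freq 0, i.e. no match
exact_phrase_freq([[0], [5]])
```

A sloppy phrase query relaxes the `start + i` adjacency requirement to allow some distance, which is the distinction the summary above draws between the two query types.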
<member name="T:Lucene.Net.Search.CreationPlaceholder">
<summary> Expert: Maintains caches of term values.
<p/>Created: May 19, 2004 11:13:14 AM
</summary>
<since> lucene 1.4
</since>
<version> $Id: FieldCache.java 807841 2009-08-25 22:27:31Z markrmiller $
</version>
<seealso cref="T:Lucene.Net.Util.FieldCacheSanityChecker">
</seealso>
</member>
<member name="T:Lucene.Net.Search.StringIndex">
<summary>Expert: Stores term text values and document ordering data. </summary>
</member>
<member name="F:Lucene.Net.Search.StringIndex.lookup">
<summary>All the term values, in natural order. </summary>
</member>
<member name="F:Lucene.Net.Search.StringIndex.order">
<summary>For each document, an index into the lookup array. </summary>
</member>
<member name="M:Lucene.Net.Search.StringIndex.#ctor(System.Int32[],System.String[])">
<summary>Creates one of these objects </summary>
</member>
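The relationship between the two StringIndex arrays (`lookup` holds the distinct term values in natural order, `order` maps each document to an index into `lookup`) can be sketched directly. This Python sketch is illustrative only; the input shape (one string value per document) is an assumption.

```python
def build_string_index(doc_values):
    """Build lookup/order arrays in the spirit of StringIndex.

    lookup: all distinct term values, in natural (sorted) order.
    order:  for each document, an index into the lookup array.
    """
    lookup = sorted(set(doc_values))
    position = {term: i for i, term in enumerate(lookup)}
    order = [position[v] for v in doc_values]
    return lookup, order

lookup, order = build_string_index(["banana", "apple", "banana"])
# lookup == ["apple", "banana"], order == [1, 0, 1]
```

Because `lookup` is sorted, comparing two documents' field values reduces to comparing two small integers in `order`, which is what makes this layout useful for sorting.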
<member name="T:Lucene.Net.Search.CacheEntry">
<summary> EXPERT: A unique Identifier/Description for each item in the FieldCache.
Can be useful for logging/debugging.
<p/>
<b>EXPERIMENTAL API:</b> This API is considered extremely advanced
and experimental. It may be removed or altered without warning in future
releases of Lucene.
<p/>
</summary>
</member>
<member name="M:Lucene.Net.Search.CacheEntry.EstimateSize">
<seealso cref="M:Lucene.Net.Search.CacheEntry.EstimateSize(Lucene.Net.Util.RamUsageEstimator)">
</seealso>
</member>
<member name="M:Lucene.Net.Search.CacheEntry.EstimateSize(Lucene.Net.Util.RamUsageEstimator)">
<summary> Computes (and stores) the estimated size of the cache Value </summary>
<seealso cref="P:Lucene.Net.Search.CacheEntry.EstimatedSize">
</seealso>
</member>
<member name="P:Lucene.Net.Search.CacheEntry.EstimatedSize">
<summary> The most recently estimated size of the value, null unless
estimateSize has been called.
</summary>
</member>
<member name="F:Lucene.Net.Search.FieldCache_Fields.STRING_INDEX">
<summary>Indicator for StringIndex values in the cache. </summary>
</member>
<member name="F:Lucene.Net.Search.FieldCache_Fields.DEFAULT">
<summary>Expert: The cache used internally by sorting and range query classes. </summary>
</member>
<member name="F:Lucene.Net.Search.FieldCache_Fields.DEFAULT_BYTE_PARSER">
<summary>The default parser for byte values, which are encoded by <see cref="M:System.Byte.ToString"/> </summary>
</member>
<member name="F:Lucene.Net.Search.FieldCache_Fields.DEFAULT_SHORT_PARSER">
<summary>The default parser for short values, which are encoded by <see cref="M:System.Int16.ToString"/> </summary>
</member>
<member name="F:Lucene.Net.Search.FieldCache_Fields.DEFAULT_INT_PARSER">
<summary>The default parser for int values, which are encoded by <see cref="M:System.Int32.ToString"/> </summary>
</member>
<member name="F:Lucene.Net.Search.FieldCache_Fields.DEFAULT_FLOAT_PARSER">
<summary>The default parser for float values, which are encoded by <see cref="M:System.Single.ToString"/> </summary>
</member>
<member name="F:Lucene.Net.Search.FieldCache_Fields.DEFAULT_LONG_PARSER">
<summary>The default parser for long values, which are encoded by <see cref="M:System.Int64.ToString"/> </summary>
</member>
<member name="F:Lucene.Net.Search.FieldCache_Fields.DEFAULT_DOUBLE_PARSER">
<summary>The default parser for double values, which are encoded by <see cref="M:System.Double.ToString"/> </summary>
</member>
<member name="F:Lucene.Net.Search.FieldCache_Fields.NUMERIC_UTILS_INT_PARSER">
<summary> A parser instance for int values encoded by <see cref="M:Lucene.Net.Util.NumericUtils.IntToPrefixCoded(System.Int32)"/>, e.g. when indexed
via <see cref="T:Lucene.Net.Documents.NumericField"/>/<see cref="T:Lucene.Net.Analysis.NumericTokenStream"/>.
</summary>
</member>
<member name="F:Lucene.Net.Search.FieldCache_Fields.NUMERIC_UTILS_FLOAT_PARSER">
<summary> A parser instance for float values encoded with <see cref="T:Lucene.Net.Util.NumericUtils"/>, e.g. when indexed
via <see cref="T:Lucene.Net.Documents.NumericField"/>/<see cref="T:Lucene.Net.Analysis.NumericTokenStream"/>.
</summary>
</member>
<member name="F:Lucene.Net.Search.FieldCache_Fields.NUMERIC_UTILS_LONG_PARSER">
<summary> A parser instance for long values encoded by <see cref="M:Lucene.Net.Util.NumericUtils.LongToPrefixCoded(System.Int64)"/>, e.g. when indexed
via <see cref="T:Lucene.Net.Documents.NumericField"/>/<see cref="T:Lucene.Net.Analysis.NumericTokenStream"/>.
</summary>
</member>
<member name="F:Lucene.Net.Search.FieldCache_Fields.NUMERIC_UTILS_DOUBLE_PARSER">
<summary> A parser instance for double values encoded with <see cref="T:Lucene.Net.Util.NumericUtils"/>, e.g. when indexed
via <see cref="T:Lucene.Net.Documents.NumericField"/>/<see cref="T:Lucene.Net.Analysis.NumericTokenStream"/>.
</summary>
</member>
<member name="T:Lucene.Net.Search.ByteParser">
<summary>Interface to parse bytes from document fields.</summary>
<seealso cref="M:Lucene.Net.Search.FieldCache.GetBytes(Lucene.Net.Index.IndexReader,System.String,Lucene.Net.Search.ByteParser)">
</seealso>
</member>
<member name="T:Lucene.Net.Search.Parser">
<summary> Marker interface as super-interface to all parsers. It
is used to specify a custom parser to <see cref="M:Lucene.Net.Search.SortField.#ctor(System.String,Lucene.Net.Search.Parser)"/>.
</summary>
</member>
<member name="M:Lucene.Net.Search.ByteParser.ParseByte(System.String)">
<summary>Return a single Byte representation of this field's value. </summary>
</member>
<member name="T:Lucene.Net.Search.ShortParser">
<summary>Interface to parse shorts from document fields.</summary>
<seealso cref="M:Lucene.Net.Search.FieldCache.GetShorts(Lucene.Net.Index.IndexReader,System.String,Lucene.Net.Search.ShortParser)">
</seealso>
</member>
<member name="M:Lucene.Net.Search.ShortParser.ParseShort(System.String)">
<summary>Return a short representation of this field's value. </summary>
</member>
<member name="T:Lucene.Net.Search.IntParser">
<summary>Interface to parse ints from document fields.</summary>
<seealso cref="M:Lucene.Net.Search.FieldCache.GetInts(Lucene.Net.Index.IndexReader,System.String,Lucene.Net.Search.IntParser)">
</seealso>
</member>
<member name="M:Lucene.Net.Search.IntParser.ParseInt(System.String)">
<summary>Return an integer representation of this field's value. </summary>
</member>
<member name="T:Lucene.Net.Search.FloatParser">
<summary>Interface to parse floats from document fields.</summary>
<seealso cref="M:Lucene.Net.Search.FieldCache.GetFloats(Lucene.Net.Index.IndexReader,System.String,Lucene.Net.Search.FloatParser)">
</seealso>
</member>
<member name="M:Lucene.Net.Search.FloatParser.ParseFloat(System.String)">
<summary>Return a float representation of this field's value. </summary>
</member>
<member name="T:Lucene.Net.Search.LongParser">
<summary>Interface to parse long from document fields.</summary>
<seealso cref="M:Lucene.Net.Search.FieldCache.GetLongs(Lucene.Net.Index.IndexReader,System.String,Lucene.Net.Search.LongParser)">
</seealso>
</member>
<member name="M:Lucene.Net.Search.LongParser.ParseLong(System.String)">
<summary>Return a long representation of this field's value. </summary>
</member>
<member name="T:Lucene.Net.Search.DoubleParser">
<summary>Interface to parse doubles from document fields.</summary>
<seealso cref="M:Lucene.Net.Search.FieldCache.GetDoubles(Lucene.Net.Index.IndexReader,System.String,Lucene.Net.Search.DoubleParser)">
</seealso>
</member>
<member name="M:Lucene.Net.Search.DoubleParser.ParseDouble(System.String)">
<summary>Return a double representation of this field's value. </summary>
</member>
<member name="M:Lucene.Net.Search.FieldCache.GetBytes(Lucene.Net.Index.IndexReader,System.String)">
<summary>Checks the internal cache for an appropriate entry, and if none is
found, reads the terms in <c>field</c> as a single byte and returns an array
of size <c>reader.MaxDoc</c> of the value each document
has in the given field.
</summary>
<param name="reader"> Used to get field values.
</param>
<param name="field"> Which field contains the single byte values.
</param>
<returns> The values in the given field for each document.
</returns>
<throws> IOException If any error occurs. </throws>
</member>
<member name="M:Lucene.Net.Search.FieldCache.GetBytes(Lucene.Net.Index.IndexReader,System.String,Lucene.Net.Search.ByteParser)">
<summary>Checks the internal cache for an appropriate entry, and if none is found,
reads the terms in <c>field</c> as bytes and returns an array of
size <c>reader.MaxDoc</c> of the value each document has in the
given field.
</summary>
<param name="reader"> Used to get field values.
</param>
<param name="field"> Which field contains the bytes.
</param>
<param name="parser"> Computes byte for string values.
</param>
<returns> The values in the given field for each document.
</returns>
<throws> IOException If any error occurs. </throws>
</member>
<member name="M:Lucene.Net.Search.FieldCache.GetShorts(Lucene.Net.Index.IndexReader,System.String)">
<summary>Checks the internal cache for an appropriate entry, and if none is
found, reads the terms in <c>field</c> as shorts and returns an array
of size <c>reader.MaxDoc</c> of the value each document
has in the given field.
</summary>
<param name="reader"> Used to get field values.
</param>
<param name="field"> Which field contains the shorts.
</param>
<returns> The values in the given field for each document.
</returns>
<throws> IOException If any error occurs. </throws>
</member>
<member name="M:Lucene.Net.Search.FieldCache.GetShorts(Lucene.Net.Index.IndexReader,System.String,Lucene.Net.Search.ShortParser)">
<summary>Checks the internal cache for an appropriate entry, and if none is found,
reads the terms in <c>field</c> as shorts and returns an array of
size <c>reader.MaxDoc</c> of the value each document has in the
given field.
</summary>
<param name="reader"> Used to get field values.
</param>
<param name="field"> Which field contains the shorts.
</param>
<param name="parser"> Computes short for string values.
</param>
<returns> The values in the given field for each document.
</returns>
<throws> IOException If any error occurs. </throws>
</member>
<member name="M:Lucene.Net.Search.FieldCache.GetInts(Lucene.Net.Index.IndexReader,System.String)">
<summary>Checks the internal cache for an appropriate entry, and if none is
found, reads the terms in <c>field</c> as integers and returns an array
of size <c>reader.MaxDoc</c> of the value each document
has in the given field.
</summary>
<param name="reader"> Used to get field values.
</param>
<param name="field"> Which field contains the integers.
</param>
<returns> The values in the given field for each document.
</returns>
<throws> IOException If any error occurs. </throws>
</member>
<member name="M:Lucene.Net.Search.FieldCache.GetInts(Lucene.Net.Index.IndexReader,System.String,Lucene.Net.Search.IntParser)">
<summary>Checks the internal cache for an appropriate entry, and if none is found,
reads the terms in <c>field</c> as integers and returns an array of
size <c>reader.MaxDoc</c> of the value each document has in the
given field.
</summary>
<param name="reader"> Used to get field values.
</param>
<param name="field"> Which field contains the integers.
</param>
<param name="parser"> Computes integer for string values.
</param>
<returns> The values in the given field for each document.
</returns>
<throws> IOException If any error occurs. </throws>
</member>
<member name="M:Lucene.Net.Search.FieldCache.GetFloats(Lucene.Net.Index.IndexReader,System.String)">
<summary>Checks the internal cache for an appropriate entry, and if
none is found, reads the terms in <c>field</c> as floats and returns an array
of size <c>reader.MaxDoc</c> of the value each document
has in the given field.
</summary>
<param name="reader"> Used to get field values.
</param>
<param name="field"> Which field contains the floats.
</param>
<returns> The values in the given field for each document.
</returns>
<throws> IOException If any error occurs. </throws>
</member>
<member name="M:Lucene.Net.Search.FieldCache.GetFloats(Lucene.Net.Index.IndexReader,System.String,Lucene.Net.Search.FloatParser)">
<summary>Checks the internal cache for an appropriate entry, and if
none is found, reads the terms in <c>field</c> as floats and returns an array
of size <c>reader.MaxDoc</c> of the value each document
has in the given field.
</summary>
<param name="reader"> Used to get field values.
</param>
<param name="field"> Which field contains the floats.
</param>
<param name="parser"> Computes float for string values.
</param>
<returns> The values in the given field for each document.
</returns>
<throws> IOException If any error occurs. </throws>
</member>
<member name="M:Lucene.Net.Search.FieldCache.GetLongs(Lucene.Net.Index.IndexReader,System.String)">
<summary> Checks the internal cache for an appropriate entry, and if none is
found, reads the terms in <c>field</c> as longs and returns an array
of size <c>reader.MaxDoc</c> of the value each document
has in the given field.
</summary>
<param name="reader">Used to get field values.
</param>
<param name="field"> Which field contains the longs.
</param>
<returns> The values in the given field for each document.
</returns>
<throws> IOException If any error occurs. </throws>
</member>
<member name="M:Lucene.Net.Search.FieldCache.GetLongs(Lucene.Net.Index.IndexReader,System.String,Lucene.Net.Search.LongParser)">
<summary> Checks the internal cache for an appropriate entry, and if none is found,
reads the terms in <c>field</c> as longs and returns an array of
size <c>reader.MaxDoc</c> of the value each document has in the
given field.
</summary>
<param name="reader">Used to get field values.
</param>
<param name="field"> Which field contains the longs.
</param>
<param name="parser">Computes long for string values.
</param>
<returns> The values in the given field for each document.
</returns>
<throws> IOException If any error occurs. </throws>
</member>
<member name="M:Lucene.Net.Search.FieldCache.GetDoubles(Lucene.Net.Index.IndexReader,System.String)">
<summary> Checks the internal cache for an appropriate entry, and if none is
found, reads the terms in <c>field</c> as doubles and returns an array
of size <c>reader.MaxDoc</c> of the value each document
has in the given field.
</summary>
<param name="reader">Used to get field values.
</param>
<param name="field"> Which field contains the doubles.
</param>
<returns> The values in the given field for each document.
</returns>
<throws> IOException If any error occurs. </throws>
</member>
<member name="M:Lucene.Net.Search.FieldCache.GetDoubles(Lucene.Net.Index.IndexReader,System.String,Lucene.Net.Search.DoubleParser)">
<summary> Checks the internal cache for an appropriate entry, and if none is found,
reads the terms in <c>field</c> as doubles and returns an array of
size <c>reader.MaxDoc</c> of the value each document has in the
given field.
</summary>
<param name="reader">Used to get field values.
</param>
<param name="field"> Which field contains the doubles.
</param>
<param name="parser">Computes double for string values.
</param>
<returns> The values in the given field for each document.
</returns>
<throws> IOException If any error occurs. </throws>
</member>
<member name="M:Lucene.Net.Search.FieldCache.GetStrings(Lucene.Net.Index.IndexReader,System.String)">
<summary>Checks the internal cache for an appropriate entry, and if none
is found, reads the term values in <c>field</c> and returns an array
of size <c>reader.MaxDoc</c> containing the value each document
has in the given field.
</summary>
<param name="reader"> Used to get field values.
</param>
<param name="field"> Which field contains the strings.
</param>
<returns> The values in the given field for each document.
</returns>
<throws> IOException If any error occurs. </throws>
</member>
<member name="M:Lucene.Net.Search.FieldCache.GetStringIndex(Lucene.Net.Index.IndexReader,System.String)">
<summary>Checks the internal cache for an appropriate entry, and if none
is found reads the term values in <c>field</c> and returns
an array of them in natural order, along with an array telling
which element in the term array each document uses.
</summary>
<param name="reader"> Used to get field values.
</param>
<param name="field"> Which field contains the strings.
</param>
<returns> Array of terms and index into the array for each document.
</returns>
<throws> IOException If any error occurs. </throws>
</member>
<member name="M:Lucene.Net.Search.FieldCache.GetCacheEntries">
<summary> EXPERT: Generates an array of CacheEntry objects representing all items
currently in the FieldCache.
<p/>
NOTE: These CacheEntry objects maintain a strong reference to the
cached values. Maintaining references to a CacheEntry after the
IndexReader associated with it has been garbage collected will prevent
the value itself from being garbage collected when the cache drops the
WeakReference.
<p/>
<p/>
<b>EXPERIMENTAL API:</b> This API is considered extremely advanced
and experimental. It may be removed or altered without warning
in future releases of Lucene.
<p/>
</summary>
</member>
<member name="M:Lucene.Net.Search.FieldCache.PurgeAllCaches">
<summary> <p/>
EXPERT: Instructs the FieldCache to forcibly expunge all entries
from the underlying caches. This is intended only to be used for
test methods as a way to ensure a known base state of the Cache
(without needing to rely on GC to free WeakReferences).
It should not be relied on for "Cache maintenance" in general
application code.
<p/>
<p/>
<b>EXPERIMENTAL API:</b> This API is considered extremely advanced
and experimental. It may be removed or altered without warning
in future releases of Lucene.
<p/>
</summary>
</member>
<member name="M:Lucene.Net.Search.FieldCache.Purge(Lucene.Net.Index.IndexReader)">
<summary>
Expert: drops all cache entries associated with this
reader. NOTE: this reader must precisely match the
reader that the cache entry is keyed on. If you pass a
top-level reader, it usually will have no effect as
Lucene now caches at the segment reader level.
</summary>
</member>
<member name="P:Lucene.Net.Search.FieldCache.InfoStream">
<summary> Gets or sets the InfoStream for this FieldCache.
<para>If non-null, FieldCacheImpl will warn whenever
entries are created that are not sane according to
<see cref="T:Lucene.Net.Util.FieldCacheSanityChecker"/>.
</para>
</summary>
</member>
<member name="T:Lucene.Net.Search.FieldCacheImpl">
<summary> Expert: The default cache implementation, storing all values in memory.
A WeakDictionary is used for storage.
<p/>Created: May 19, 2004 4:40:36 PM
</summary>
<since> lucene 1.4
</since>
</member>
<member name="T:Lucene.Net.Search.FieldCacheImpl.StopFillCacheException">
<summary> Hack: When thrown from a Parser (NUMERIC_UTILS_* ones), this stops
processing terms and returns the current FieldCache
array.
</summary>
</member>
<member name="T:Lucene.Net.Search.FieldCacheImpl.Cache">
<summary>Expert: Internal cache. </summary>
</member>
<member name="T:Lucene.Net.Search.FieldCacheImpl.Entry">
<summary>Expert: Every composite-key in the internal cache is of this type. </summary>
</member>
<member name="M:Lucene.Net.Search.FieldCacheImpl.Entry.#ctor(System.String,System.Object)">
<summary>Creates one of these objects for a custom comparator/parser. </summary>
</member>
<member name="M:Lucene.Net.Search.FieldCacheImpl.Entry.Equals(System.Object)">
<summary>Two of these are equal iff they reference the same field and type. </summary>
</member>
<member name="M:Lucene.Net.Search.FieldCacheImpl.Entry.GetHashCode">
<summary>Composes a hashcode based on the field and type. </summary>
</member>
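The Entry contract documented above — equal iff the field and the custom comparator/parser match, hashcode composed from both — can be sketched as a plain value key. The class below mirrors that description only; it is not the internal Lucene.Net type:

```java
import java.util.Objects;

// Sketch of a FieldCache-style composite cache key: equality and hashing combine
// the field name with the custom comparator/parser object (which may be null).
public class EntryKeySketch {
    final String field;
    final Object custom;

    EntryKeySketch(String field, Object custom) {
        this.field = field;
        this.custom = custom;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof EntryKeySketch)) return false;
        EntryKeySketch other = (EntryKeySketch) o;
        return field.equals(other.field) && Objects.equals(custom, other.custom);
    }

    @Override
    public int hashCode() {
        return field.hashCode() ^ Objects.hashCode(custom);  // compose field and type
    }
}
```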
<member name="T:Lucene.Net.Search.FieldCacheRangeFilter">
<summary> A range filter built on top of a cached single term field (in <see cref="T:Lucene.Net.Search.FieldCache"/>).
<p/><see cref="T:Lucene.Net.Search.FieldCacheRangeFilter"/> builds a single cache for the field the first time it is used.
Each subsequent <see cref="T:Lucene.Net.Search.FieldCacheRangeFilter"/> on the same field then reuses this cache,
even if the range itself changes.
<p/>This means that <see cref="T:Lucene.Net.Search.FieldCacheRangeFilter"/> is much faster (sometimes more than 100x as fast)
than building a <see cref="T:Lucene.Net.Search.TermRangeFilter"/>, if using a <see cref="M:Lucene.Net.Search.FieldCacheRangeFilter.NewStringRange(System.String,System.String,System.String,System.Boolean,System.Boolean)"/>. However, if the range never changes it
is slower (around 2x as slow) than building a CachingWrapperFilter on top of a single <see cref="T:Lucene.Net.Search.TermRangeFilter"/>.
For numeric data types, this filter may be significantly faster than <see cref="T:Lucene.Net.Search.NumericRangeFilter`1"/>.
Furthermore, it does not need the numeric values encoded by <see cref="T:Lucene.Net.Documents.NumericField"/>. But
it only works with exactly one value per document (see below).
<p/>As with all <see cref="T:Lucene.Net.Search.FieldCache"/> based functionality, <see cref="T:Lucene.Net.Search.FieldCacheRangeFilter"/> is only valid for
fields which contain exactly one term for each document (except for <see cref="M:Lucene.Net.Search.FieldCacheRangeFilter.NewStringRange(System.String,System.String,System.String,System.Boolean,System.Boolean)"/>,
where zero terms are also allowed). Due to a restriction of <see cref="T:Lucene.Net.Search.FieldCache"/>, for numeric ranges
a value of 0 is assumed for all documents that do not have a numeric value in the field.
<p/>Thus it works on dates, prices and other single-value fields but will not work on
regular text fields. It is preferable to use a <c>NOT_ANALYZED</c> field to ensure that
there is only a single term.
<p/>This class does not have a constructor; use one of the static factory methods,
which create a correct instance for the different data types supported by <see cref="T:Lucene.Net.Search.FieldCache"/>.
</summary>
</member>
<member name="M:Lucene.Net.Search.FieldCacheRangeFilter.NewStringRange(System.String,System.String,System.String,System.Boolean,System.Boolean)">
<summary> Creates a string range filter using <see cref="M:Lucene.Net.Search.FieldCache.GetStringIndex(Lucene.Net.Index.IndexReader,System.String)"/>. This works with all
fields containing zero or one term in the field. The range can be half-open by setting one
of the values to <c>null</c>.
</summary>
</member>
<member name="M:Lucene.Net.Search.FieldCacheRangeFilter.NewByteRange(System.String,System.Nullable{System.SByte},System.Nullable{System.SByte},System.Boolean,System.Boolean)">
<summary> Creates a numeric range filter using <see cref="M:Lucene.Net.Search.FieldCache.GetBytes(Lucene.Net.Index.IndexReader,System.String)"/>. This works with all
byte fields containing exactly one numeric term in the field. The range can be half-open by setting one
of the values to <c>null</c>.
</summary>
</member>
<member name="M:Lucene.Net.Search.FieldCacheRangeFilter.NewByteRange(System.String,Lucene.Net.Search.ByteParser,System.Nullable{System.SByte},System.Nullable{System.SByte},System.Boolean,System.Boolean)">
<summary> Creates a numeric range filter using <see cref="M:Lucene.Net.Search.FieldCache.GetBytes(Lucene.Net.Index.IndexReader,System.String,Lucene.Net.Search.ByteParser)"/>. This works with all
byte fields containing exactly one numeric term in the field. The range can be half-open by setting one
of the values to <c>null</c>.
</summary>
</member>
<member name="M:Lucene.Net.Search.FieldCacheRangeFilter.NewShortRange(System.String,System.Nullable{System.Int16},System.Nullable{System.Int16},System.Boolean,System.Boolean)">
<summary> Creates a numeric range filter using <see cref="M:Lucene.Net.Search.FieldCache.GetShorts(Lucene.Net.Index.IndexReader,System.String)"/>. This works with all
short fields containing exactly one numeric term in the field. The range can be half-open by setting one
of the values to <c>null</c>.
</summary>
</member>
<member name="M:Lucene.Net.Search.FieldCacheRangeFilter.NewShortRange(System.String,Lucene.Net.Search.ShortParser,System.Nullable{System.Int16},System.Nullable{System.Int16},System.Boolean,System.Boolean)">
<summary> Creates a numeric range filter using <see cref="M:Lucene.Net.Search.FieldCache.GetShorts(Lucene.Net.Index.IndexReader,System.String,Lucene.Net.Search.ShortParser)"/>. This works with all
short fields containing exactly one numeric term in the field. The range can be half-open by setting one
of the values to <c>null</c>.
</summary>
</member>
<member name="M:Lucene.Net.Search.FieldCacheRangeFilter.NewIntRange(System.String,System.Nullable{System.Int32},System.Nullable{System.Int32},System.Boolean,System.Boolean)">
<summary> Creates a numeric range filter using <see cref="M:Lucene.Net.Search.FieldCache.GetInts(Lucene.Net.Index.IndexReader,System.String)"/>. This works with all
int fields containing exactly one numeric term in the field. The range can be half-open by setting one
of the values to <c>null</c>.
</summary>
</member>
<member name="M:Lucene.Net.Search.FieldCacheRangeFilter.NewIntRange(System.String,Lucene.Net.Search.IntParser,System.Nullable{System.Int32},System.Nullable{System.Int32},System.Boolean,System.Boolean)">
<summary> Creates a numeric range filter using <see cref="M:Lucene.Net.Search.FieldCache.GetInts(Lucene.Net.Index.IndexReader,System.String,Lucene.Net.Search.IntParser)"/>. This works with all
int fields containing exactly one numeric term in the field. The range can be half-open by setting one
of the values to <c>null</c>.
</summary>
</member>
<member name="M:Lucene.Net.Search.FieldCacheRangeFilter.NewLongRange(System.String,System.Nullable{System.Int64},System.Nullable{System.Int64},System.Boolean,System.Boolean)">
<summary> Creates a numeric range filter using <see cref="M:Lucene.Net.Search.FieldCache.GetLongs(Lucene.Net.Index.IndexReader,System.String)"/>. This works with all
long fields containing exactly one numeric term in the field. The range can be half-open by setting one
of the values to <c>null</c>.
</summary>
</member>
<member name="M:Lucene.Net.Search.FieldCacheRangeFilter.NewLongRange(System.String,Lucene.Net.Search.LongParser,System.Nullable{System.Int64},System.Nullable{System.Int64},System.Boolean,System.Boolean)">
<summary> Creates a numeric range filter using <see cref="M:Lucene.Net.Search.FieldCache.GetLongs(Lucene.Net.Index.IndexReader,System.String,Lucene.Net.Search.LongParser)"/>. This works with all
long fields containing exactly one numeric term in the field. The range can be half-open by setting one
of the values to <c>null</c>.
</summary>
</member>
<member name="M:Lucene.Net.Search.FieldCacheRangeFilter.NewFloatRange(System.String,System.Nullable{System.Single},System.Nullable{System.Single},System.Boolean,System.Boolean)">
<summary> Creates a numeric range filter using <see cref="M:Lucene.Net.Search.FieldCache.GetFloats(Lucene.Net.Index.IndexReader,System.String)"/>. This works with all
float fields containing exactly one numeric term in the field. The range can be half-open by setting one
of the values to <c>null</c>.
</summary>
</member>
<member name="M:Lucene.Net.Search.FieldCacheRangeFilter.NewFloatRange(System.String,Lucene.Net.Search.FloatParser,System.Nullable{System.Single},System.Nullable{System.Single},System.Boolean,System.Boolean)">
<summary> Creates a numeric range filter using <see cref="M:Lucene.Net.Search.FieldCache.GetFloats(Lucene.Net.Index.IndexReader,System.String,Lucene.Net.Search.FloatParser)"/>. This works with all
float fields containing exactly one numeric term in the field. The range can be half-open by setting one
of the values to <c>null</c>.
</summary>
</member>
<member name="M:Lucene.Net.Search.FieldCacheRangeFilter.NewDoubleRange(System.String,System.Nullable{System.Double},System.Nullable{System.Double},System.Boolean,System.Boolean)">
<summary> Creates a numeric range filter using <see cref="M:Lucene.Net.Search.FieldCache.GetDoubles(Lucene.Net.Index.IndexReader,System.String)"/>. This works with all
double fields containing exactly one numeric term in the field. The range can be half-open by setting one
of the values to <c>null</c>.
</summary>
</member>
<member name="M:Lucene.Net.Search.FieldCacheRangeFilter.NewDoubleRange(System.String,Lucene.Net.Search.DoubleParser,System.Nullable{System.Double},System.Nullable{System.Double},System.Boolean,System.Boolean)">
<summary> Creates a numeric range filter using <see cref="M:Lucene.Net.Search.FieldCache.GetDoubles(Lucene.Net.Index.IndexReader,System.String,Lucene.Net.Search.DoubleParser)"/>. This works with all
double fields containing exactly one numeric term in the field. The range can be half-open by setting one
of the values to <c>null</c>.
</summary>
</member>
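All of the factory methods above share one acceptance rule: a <c>null</c> endpoint leaves that side of the range open, and the two booleans control whether an endpoint itself matches. A minimal sketch of that rule (method and parameter names are assumptions, not the library's internals):

```java
// Sketch of the half-open range rule shared by the New*Range factories above:
// a null endpoint imposes no bound on that side.
public class RangeRuleSketch {
    static boolean accepts(long value, Long lower, Long upper,
                           boolean includeLower, boolean includeUpper) {
        if (lower != null) {
            if (value < lower) return false;                 // below the lower bound
            if (value == lower && !includeLower) return false; // exclusive lower endpoint
        }
        if (upper != null) {
            if (value > upper) return false;                 // above the upper bound
            if (value == upper && !includeUpper) return false; // exclusive upper endpoint
        }
        return true;
    }
}
```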
<member name="M:Lucene.Net.Search.FieldCacheRangeFilter`1.GetDocIdSet(Lucene.Net.Index.IndexReader)">
<summary>This method is implemented for each data type </summary>
</member>
<member name="P:Lucene.Net.Search.FieldCacheRangeFilter`1.GetField">
<summary>
Returns the field name for this filter
</summary>
</member>
<member name="P:Lucene.Net.Search.FieldCacheRangeFilter`1.IncludesLower">
<summary>
Returns <c>true</c> if the lower endpoint is inclusive
</summary>
</member>
<member name="P:Lucene.Net.Search.FieldCacheRangeFilter`1.IncludesUpper">
<summary>
Returns <c>true</c> if the upper endpoint is inclusive
</summary>
</member>
<member name="P:Lucene.Net.Search.FieldCacheRangeFilter`1.LowerValue">
<summary>
Returns the lower value of the range filter
</summary>
</member>
<member name="P:Lucene.Net.Search.FieldCacheRangeFilter`1.UpperValue">
<summary>
Returns the upper value of this range filter
</summary>
</member>
<member name="M:Lucene.Net.Search.FieldCacheRangeFilter`1.FieldCacheDocIdSet.MatchDoc(System.Int32)">
<summary>Checks whether the given doc is a hit; implementations should throw an ArrayIndexOutOfBoundsException when the position is invalid. </summary>
</member>
<member name="P:Lucene.Net.Search.FieldCacheRangeFilter`1.FieldCacheDocIdSet.IsCacheable">
<summary>This DocIdSet is cacheable if it works solely with the FieldCache and does not use TermDocs. </summary>
</member>
<member name="T:Lucene.Net.Search.FieldCacheTermsFilter">
<summary> A <see cref="T:Lucene.Net.Search.Filter"/> that only accepts documents whose single
term value in the specified field is contained in the
provided set of allowed terms.
<p/>
This is the same functionality as TermsFilter (from
contrib/queries), except this filter requires that the
field contains only a single term for all documents.
Because of drastically different implementations, they
also have different performance characteristics, as
described below.
<p/>
The first invocation of this filter on a given field will
be slower, since a <see cref="T:Lucene.Net.Search.StringIndex"/> must be
created. Subsequent invocations using the same field
will re-use this cache. However, as with all
functionality based on <see cref="P:Lucene.Net.Search.FieldCacheTermsFilter.FieldCache"/>, persistent RAM
is consumed to hold the cache, and is not freed until the
<see cref="T:Lucene.Net.Index.IndexReader"/> is closed. In contrast, TermsFilter
has no persistent RAM consumption.
<p/>
With each search, this filter translates the specified
set of Terms into a private <see cref="T:Lucene.Net.Util.OpenBitSet"/> keyed by
term number per unique <see cref="T:Lucene.Net.Index.IndexReader"/> (normally one
reader per segment). Then, during matching, the term
number for each docID is retrieved from the cache and
then checked for inclusion using the <see cref="T:Lucene.Net.Util.OpenBitSet"/>.
Since all testing is done using RAM resident data
structures, performance should be very fast, most likely
fast enough to not require further caching of the
DocIdSet for each possible combination of terms.
However, because docIDs are simply scanned linearly, an
index with a great many small documents may find this
linear scan too costly.
<p/>
In contrast, TermsFilter builds up an <see cref="T:Lucene.Net.Util.OpenBitSet"/>,
keyed by docID, every time it's created, by enumerating
through all matching docs using <see cref="T:Lucene.Net.Index.TermDocs"/> to seek
and scan through each term's docID list. While there is
no linear scan of all docIDs, besides the allocation of
the underlying array in the <see cref="T:Lucene.Net.Util.OpenBitSet"/>, this
approach requires a number of "disk seeks" in proportion
to the number of terms, which can be exceptionally costly
when there are cache misses in the OS's IO cache.
<p/>
Generally, this filter will be slower on the first
invocation for a given field, but subsequent invocations,
even if you change the allowed set of Terms, should be
faster than TermsFilter, especially as the number of
Terms being matched increases. If you are matching only
a very small number of terms, and those terms in turn
match a very small number of documents, TermsFilter may
perform faster.
<p/>
Which filter is best is very application dependent.
</summary>
</member>
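The matching strategy described above — translate the allowed terms into a bitset keyed by term ordinal, then test each document's ordinal during the linear docID scan — can be sketched with plain arrays. The <c>lookup</c> and <c>docToOrd</c> arrays below are an assumed stand-in for the cached StringIndex, not the actual structures:

```java
import java.util.BitSet;
import java.util.List;

// Sketch of FieldCacheTermsFilter matching: allowed terms become a BitSet over
// term ordinals; each document's cached ordinal is then tested against it.
public class TermsFilterSketch {
    static BitSet matchingDocs(String[] lookup, int[] docToOrd, List<String> allowed) {
        BitSet allowedOrds = new BitSet(lookup.length);
        for (int ord = 0; ord < lookup.length; ord++) {
            if (allowed.contains(lookup[ord])) allowedOrds.set(ord);
        }
        BitSet result = new BitSet(docToOrd.length);
        for (int doc = 0; doc < docToOrd.length; doc++) {  // the linear docID scan
            if (allowedOrds.get(docToOrd[doc])) result.set(doc);
        }
        return result;
    }
}
```

The per-term work happens once up front; the per-document test is a single bit lookup, which is why cost grows with the docID count rather than with the number of allowed terms.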
<member name="P:Lucene.Net.Search.FieldCacheTermsFilter.FieldCacheTermsFilterDocIdSet.IsCacheable">
<summary>This DocIdSet implementation is cacheable. </summary>
</member>
<member name="T:Lucene.Net.Search.FieldComparator">
<summary> Expert: a FieldComparator compares hits so as to determine their
sort order when collecting the top results with <see cref="T:Lucene.Net.Search.TopFieldCollector"/>
. The concrete public FieldComparator
classes here correspond to the SortField types.
<p/>This API is designed to achieve high performance
sorting, by exposing a tight interaction with <see cref="T:Lucene.Net.Search.FieldValueHitQueue"/>
as it visits hits. Whenever a hit is
competitive, it's enrolled into a virtual slot, which is
an int ranging from 0 to numHits-1. The <see cref="T:Lucene.Net.Search.FieldComparator"/>
is made aware of segment transitions
during searching in case any internal state it's tracking
needs to be recomputed during these transitions.<p/>
<p/>A comparator must define these functions:<p/>
<list type="bullet">
<item> <see cref="M:Lucene.Net.Search.FieldComparator.Compare(System.Int32,System.Int32)"/> Compare a hit at 'slot a'
with hit 'slot b'.</item>
<item> <see cref="M:Lucene.Net.Search.FieldComparator.SetBottom(System.Int32)"/> This method is called by
<see cref="T:Lucene.Net.Search.FieldValueHitQueue"/> to notify the
FieldComparator of the current weakest ("bottom")
slot. Note that this slot may not hold the weakest
value according to your comparator, in cases where
your comparator is not the primary one (ie, is only
used to break ties from the comparators before it).</item>
<item> <see cref="M:Lucene.Net.Search.FieldComparator.CompareBottom(System.Int32)"/> Compare a new hit (docID)
against the "weakest" (bottom) entry in the queue.</item>
<item> <see cref="M:Lucene.Net.Search.FieldComparator.Copy(System.Int32,System.Int32)"/> Installs a new hit into the
priority queue. The <see cref="T:Lucene.Net.Search.FieldValueHitQueue"/>
calls this method when a new hit is competitive.</item>
<item> <see cref="M:Lucene.Net.Search.FieldComparator.SetNextReader(Lucene.Net.Index.IndexReader,System.Int32)"/> Invoked
when the search is switching to the next segment.
You may need to update internal state of the
comparator, for example retrieving new values from
the <see cref="T:Lucene.Net.Search.FieldCache"/>.</item>
<item> <see cref="P:Lucene.Net.Search.FieldComparator.Item(System.Int32)"/> Return the sort value stored in
the specified slot. This is only called at the end
of the search, in order to populate <see cref="F:Lucene.Net.Search.FieldDoc.fields"/>
when returning the top results.</item>
</list>
<b>NOTE:</b> This API is experimental and might change in
incompatible ways in the next release.
</summary>
</member>
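The four operations listed above can be sketched as a minimal slot-based comparator for int values. To keep the sketch self-contained, compareBottom and copy take the hit's value directly, whereas the real API takes a docID and reads the value from the current segment; the class is an illustration of the contract, not the Lucene.Net base class:

```java
// Minimal sketch of the FieldComparator slot contract for int values: hits are
// copied into numbered slots, and later comparisons only touch those slots.
public class IntSlotComparatorSketch {
    final int[] slots;
    int bottomSlot = -1;

    IntSlotComparatorSketch(int numHits) { slots = new int[numHits]; }

    // Compare(slot1, slot2): order two already-collected hits.
    int compare(int slot1, int slot2) {
        return Integer.compare(slots[slot1], slots[slot2]);
    }

    // SetBottom: remember the current weakest slot in the queue.
    void setBottom(int slot) { bottomSlot = slot; }

    // CompareBottom: test a new hit's value against the weakest entry.
    int compareBottom(int value) {
        return Integer.compare(slots[bottomSlot], value);
    }

    // Copy: install a competitive hit's value into a slot.
    void copy(int slot, int value) { slots[slot] = value; }
}
```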
<member name="M:Lucene.Net.Search.FieldComparator.Compare(System.Int32,System.Int32)">
<summary> Compare hit at slot1 with hit at slot2.
</summary>
<param name="slot1">first slot to compare
</param>
<param name="slot2">second slot to compare
</param>
<returns> any N &lt; 0 if slot2's value is sorted after
slot1, any N &gt; 0 if slot2's value is sorted before
slot1, and 0 if they are equal
</returns>
</member>
<member name="M:Lucene.Net.Search.FieldComparator.SetBottom(System.Int32)">
<summary> Set the bottom slot, ie the "weakest" (sorted last)
entry in the queue. When <see cref="M:Lucene.Net.Search.FieldComparator.CompareBottom(System.Int32)"/> is
called, you should compare against this slot. This
will always be called before <see cref="M:Lucene.Net.Search.FieldComparator.CompareBottom(System.Int32)"/>.
</summary>
<param name="slot">the currently weakest (sorted last) slot in the queue
</param>
</member>
<member name="M:Lucene.Net.Search.FieldComparator.CompareBottom(System.Int32)">
<summary> Compare the bottom of the queue with doc. This will
only be invoked after SetBottom has been called. This
should return the same result as <see cref="M:Lucene.Net.Search.FieldComparator.Compare(System.Int32,System.Int32)"/>
as if bottom were slot1 and the new
document were slot2.
<p/>For a search that hits many results, this method
will be the hotspot (invoked by far the most
frequently).<p/>
</summary>
<param name="doc">that was hit
</param>
<returns> any N &lt; 0 if the doc's value is sorted after
the bottom entry (not competitive), any N &gt; 0 if the
doc's value is sorted before the bottom entry, and 0 if
they are equal.
</returns>
</member>
<member name="M:Lucene.Net.Search.FieldComparator.Copy(System.Int32,System.Int32)">
<summary> This method is called when a new hit is competitive.
You should copy any state associated with this document
that will be required for future comparisons, into the
specified slot.
</summary>
<param name="slot">which slot to copy the hit to
</param>
<param name="doc">docID relative to current reader
</param>
</member>
<member name="M:Lucene.Net.Search.FieldComparator.SetNextReader(Lucene.Net.Index.IndexReader,System.Int32)">
<summary> Set a new reader. All subsequent docIDs are relative to the current reader.
</summary>
<param name="reader">current reader
</param>
<param name="docBase">docBase of this reader
</param>
<throws> IOException </throws>
</member>
<member name="M:Lucene.Net.Search.FieldComparator.SetScorer(Lucene.Net.Search.Scorer)">
<summary>Sets the Scorer to use in case a document's score is
needed.
</summary>
<param name="scorer">Scorer instance that you should use to
obtain the current hit's score, if necessary.
</param>
</member>
<member name="P:Lucene.Net.Search.FieldComparator.Item(System.Int32)">
<summary> Return the actual value in the slot.
</summary>
<param name="slot">the slot whose value to return
</param>
<returns> value in this slot upgraded to Comparable
</returns>
</member>
<member name="T:Lucene.Net.Search.FieldComparator.ByteComparator">
<summary>Parses field's values as byte (using <see cref="M:Lucene.Net.Search.FieldCache.GetBytes(Lucene.Net.Index.IndexReader,System.String)"/>)
and sorts by ascending value
</summary>
</member>
<member name="T:Lucene.Net.Search.FieldComparator.DocComparator">
<summary>Sorts by ascending docID </summary>
</member>
<member name="T:Lucene.Net.Search.FieldComparator.DoubleComparator">
<summary>Parses field's values as double (using <see cref="M:Lucene.Net.Search.FieldCache.GetDoubles(Lucene.Net.Index.IndexReader,System.String)"/>)
and sorts by ascending value
</summary>
</member>
<member name="T:Lucene.Net.Search.FieldComparator.FloatComparator">
<summary>Parses field's values as float (using <see cref="M:Lucene.Net.Search.FieldCache.GetFloats(Lucene.Net.Index.IndexReader,System.String)"/>)
and sorts by ascending value
</summary>
</member>
<member name="T:Lucene.Net.Search.FieldComparator.IntComparator">
<summary>Parses field's values as int (using <see cref="M:Lucene.Net.Search.FieldCache.GetInts(Lucene.Net.Index.IndexReader,System.String)"/>)
and sorts by ascending value
</summary>
</member>
<member name="T:Lucene.Net.Search.FieldComparator.LongComparator">
<summary>Parses field's values as long (using <see cref="M:Lucene.Net.Search.FieldCache.GetLongs(Lucene.Net.Index.IndexReader,System.String)"/>)
and sorts by ascending value
</summary>
</member>
<member name="T:Lucene.Net.Search.FieldComparator.RelevanceComparator">
<summary>Sorts by descending relevance. NOTE: if you are
sorting only by descending relevance and then
secondarily by ascending docID, performance is faster
using <see cref="T:Lucene.Net.Search.TopScoreDocCollector"/> directly (which <see cref="M:Lucene.Net.Search.Searcher.Search(Lucene.Net.Search.Query,System.Int32)"/>
uses when no <see cref="T:Lucene.Net.Search.Sort"/> is
specified).
</summary>
</member>
<member name="T:Lucene.Net.Search.FieldComparator.ShortComparator">
<summary>Parses field's values as short (using <see cref="M:Lucene.Net.Search.FieldCache.GetShorts(Lucene.Net.Index.IndexReader,System.String)"/>)
and sorts by ascending value
</summary>
</member>
<member name="T:Lucene.Net.Search.FieldComparator.StringComparatorLocale">
<summary>Sorts by a field's value using the Collator for a
given Locale.
</summary>
</member>
<member name="T:Lucene.Net.Search.FieldComparator.StringOrdValComparator">
<summary>Sorts by field's natural String sort order, using
ordinals. This is functionally equivalent to <see cref="T:Lucene.Net.Search.FieldComparator.StringValComparator"/>
, but it first resolves the strings
to their relative ordinal positions (using the index
returned by <see cref="M:Lucene.Net.Search.FieldCache.GetStringIndex(Lucene.Net.Index.IndexReader,System.String)"/>), and
does most comparisons using the ordinals. For medium
to large results, this comparator will be much faster
than <see cref="T:Lucene.Net.Search.FieldComparator.StringValComparator"/>. For very small
result sets it may be slower.
</summary>
</member>
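The speedup described above comes from resolving every document's term to its ordinal in the sorted term list once, and then comparing plain ints instead of strings. A self-contained sketch of that resolution step (the arrays are an assumed analogue of the GetStringIndex output, not the real structure):

```java
import java.util.Arrays;
import java.util.TreeSet;

// Sketch of the StringOrdValComparator idea: map each document's term to its
// position (ordinal) in the sorted unique-term list, then compare ordinals.
public class OrdResolveSketch {
    static int[] toOrds(String[] docValues) {
        // Sorted unique terms, as a StringIndex-like lookup would hold them.
        String[] sorted = new TreeSet<>(Arrays.asList(docValues)).toArray(new String[0]);
        int[] ords = new int[docValues.length];
        for (int doc = 0; doc < docValues.length; doc++) {
            ords[doc] = Arrays.binarySearch(sorted, docValues[doc]);
        }
        return ords;
    }
}
```

Ordinal comparisons agree with String ordering by construction, which is why most comparisons can skip the strings entirely.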
<member name="T:Lucene.Net.Search.FieldComparator.StringValComparator">
<summary>Sorts by field's natural String sort order. All
comparisons are done using String.CompareTo, which is
slow for medium to large result sets but possibly
very fast for very small results sets.
</summary>
</member>
<member name="T:Lucene.Net.Search.FieldComparatorSource">
<summary> Provides a <see cref="T:Lucene.Net.Search.FieldComparator"/> for custom field sorting.
<b>NOTE:</b> This API is experimental and might change in
incompatible ways in the next release.
</summary>
</member>
<member name="M:Lucene.Net.Search.FieldComparatorSource.NewComparator(System.String,System.Int32,System.Int32,System.Boolean)">
<summary> Creates a comparator for the field in the given index.
</summary>
<param name="fieldname">Name of the field to create comparator for.
</param>
<returns> FieldComparator.
</returns>
<throws> IOException If an error occurs reading the index. </throws>
</member>
<member name="T:Lucene.Net.Search.FieldDoc">
<summary> Expert: A ScoreDoc which also contains information about
how to sort the referenced document. In addition to the
document number and score, this object contains an array
of values for the document from the field(s) used to sort.
For example, if the sort criteria was to sort by fields
"a", "b" then "c", the <c>fields</c> object array
will have three elements, corresponding respectively to
the term values for the document in fields "a", "b" and "c".
The class of each element in the array will be either
Integer, Float or String depending on the type of values
in the terms of each field.
<p/>Created: Feb 11, 2004 1:23:38 PM
</summary>
<seealso cref="T:Lucene.Net.Search.ScoreDoc"></seealso>
<seealso cref="T:Lucene.Net.Search.TopFieldDocs"></seealso>
</member>
<member name="T:Lucene.Net.Search.ScoreDoc">
<summary>Expert: Returned by low-level search implementations.</summary>
<seealso cref="T:Lucene.Net.Search.TopDocs">
</seealso>
</member>
<member name="M:Lucene.Net.Search.ScoreDoc.#ctor(System.Int32,System.Single)">
<summary>Expert: Constructs a ScoreDoc. </summary>
</member>
<member name="P:Lucene.Net.Search.ScoreDoc.Score">
<summary>Expert: The score of this document for the query. </summary>
</member>
<member name="P:Lucene.Net.Search.ScoreDoc.Doc">
<summary>Expert: A hit document's number.</summary>
<seealso cref="M:Lucene.Net.Search.Searcher.Doc(System.Int32)">
</seealso>
</member>
<member name="F:Lucene.Net.Search.FieldDoc.fields">
<summary>Expert: The values which are used to sort the referenced document.
The order of these will match the original sort criteria given by a
Sort object. Each Object will be either an Integer, Float or String,
depending on the type of values in the terms of the original field.
</summary>
<seealso cref="T:Lucene.Net.Search.Sort">
</seealso>
<seealso cref="M:Lucene.Net.Search.Searcher.Search(Lucene.Net.Search.Query,Lucene.Net.Search.Filter,System.Int32,Lucene.Net.Search.Sort)">
</seealso>
</member>
<member name="M:Lucene.Net.Search.FieldDoc.#ctor(System.Int32,System.Single)">
<summary>Expert: Creates one of these objects with empty sort information. </summary>
</member>
<member name="M:Lucene.Net.Search.FieldDoc.#ctor(System.Int32,System.Single,System.IComparable[])">
<summary>Expert: Creates one of these objects with the given sort information. </summary>
</member>
<member name="T:Lucene.Net.Search.FieldDocSortedHitQueue">
<summary> Expert: Collects sorted results from Searchables and collates them.
The elements put into this queue must be of type FieldDoc.
<p/>Created: Feb 11, 2004 2:04:21 PM
</summary>
<since> lucene 1.4
</since>
</member>
<member name="M:Lucene.Net.Search.FieldDocSortedHitQueue.#ctor(System.Int32)">
<summary> Creates a hit queue sorted by the given list of fields.</summary>
<param name="size">The number of hits to retain. Must be greater than zero.</param>
</member>
<member name="M:Lucene.Net.Search.FieldDocSortedHitQueue.SetFields(Lucene.Net.Search.SortField[])">
<summary> Allows redefinition of sort fields if they are <c>null</c>.
This is to handle the case using ParallelMultiSearcher where the
original list contains AUTO and we don't know the actual sort
type until the values come back. The fields can only be set once.
This method is thread safe.
</summary>
<param name="fields"></param>
</member>
<member name="M:Lucene.Net.Search.FieldDocSortedHitQueue.GetFields">
<summary>Returns the fields being used to sort. </summary>
</member>
<member name="M:Lucene.Net.Search.FieldDocSortedHitQueue.HasCollators(Lucene.Net.Search.SortField[])">
<summary>Returns an array of collators, possibly <c>null</c>. The collators
correspond to any SortFields which were given a specific locale.
</summary>
<param name="fields">Array of sort fields.</param>
<returns> Array, possibly <c>null</c>.</returns>
</member>
<member name="M:Lucene.Net.Search.FieldDocSortedHitQueue.LessThan(Lucene.Net.Search.FieldDoc,Lucene.Net.Search.FieldDoc)">
<summary> Returns whether <c>docA</c> is less relevant than <c>docB</c>.</summary>
<param name="docA">ScoreDoc</param>
<param name="docB">ScoreDoc</param>
<returns><c>true</c> if document <c>docA</c> should be sorted after document <c>docB</c>.</returns>
</member>
<member name="T:Lucene.Net.Search.FieldValueHitQueue">
<summary> Expert: A hit queue for sorting hits by terms in more than one field.
Uses <c>FieldCache.DEFAULT</c> for maintaining
internal term lookup tables.
<b>NOTE:</b> This API is experimental and might change in
incompatible ways in the next release.
</summary>
<seealso cref="M:Lucene.Net.Search.Searcher.Search(Lucene.Net.Search.Query,Lucene.Net.Search.Filter,System.Int32,Lucene.Net.Search.Sort)"></seealso>
<seealso cref="T:Lucene.Net.Search.FieldCache"></seealso>
</member>
<member name="M:Lucene.Net.Search.FieldValueHitQueue.Create(Lucene.Net.Search.SortField[],System.Int32)">
<summary> Creates a hit queue sorted by the given list of fields.
<p/><b>NOTE</b>: The instances returned by this method
pre-allocate a full array of length <c>numHits</c>.
</summary>
<param name="fields">SortField array we are sorting by in priority order (highest
priority first); cannot be <c>null</c> or empty
</param>
<param name="size">The number of hits to retain. Must be greater than zero.
</param>
<throws> IOException </throws>
</member>
<member name="F:Lucene.Net.Search.FieldValueHitQueue.fields">
<summary>Stores the sort criteria being used. </summary>
</member>
<member name="M:Lucene.Net.Search.FieldValueHitQueue.FillFields(Lucene.Net.Search.FieldValueHitQueue.Entry)">
<summary> Given a queue Entry, creates a corresponding FieldDoc
that contains the values used to sort the given document.
These values are not the raw values out of the index, but the internal
representation of them. This is so the given search hit can be collated by
a MultiSearcher with other search hits.
</summary>
<param name="entry">The Entry used to create a FieldDoc
</param>
<returns> The newly created FieldDoc
</returns>
<seealso cref="M:Lucene.Net.Search.Searchable.Search(Lucene.Net.Search.Weight,Lucene.Net.Search.Filter,System.Int32,Lucene.Net.Search.Sort)">
</seealso>
</member>
<member name="M:Lucene.Net.Search.FieldValueHitQueue.GetFields">
<summary>Returns the SortFields being used by this hit queue. </summary>
</member>
<member name="T:Lucene.Net.Search.FieldValueHitQueue.OneComparatorFieldValueHitQueue">
<summary> An implementation of <see cref="T:Lucene.Net.Search.FieldValueHitQueue"/> which is optimized in case
there is just one comparator.
</summary>
</member>
<member name="M:Lucene.Net.Search.FieldValueHitQueue.OneComparatorFieldValueHitQueue.LessThan(Lucene.Net.Search.FieldValueHitQueue.Entry,Lucene.Net.Search.FieldValueHitQueue.Entry)">
<summary> Returns whether <c>hitA</c> is less relevant than <c>hitB</c>.</summary>
<param name="hitA">ScoreDoc</param>
<param name="hitB">ScoreDoc</param>
<returns><c>true</c> if document <c>hitA</c> should be sorted after document <c>hitB</c>.</returns>
</member>
<member name="T:Lucene.Net.Search.FieldValueHitQueue.MultiComparatorsFieldValueHitQueue">
<summary> An implementation of <see cref="T:Lucene.Net.Search.FieldValueHitQueue"/> which is optimized in case
there is more than one comparator.
</summary>
</member>
<member name="T:Lucene.Net.Search.FilteredQuery">
<summary> A query that applies a filter to the results of another query.
<p/>Note: the bits are retrieved from the filter each time this
query is used in a search - use a CachingWrapperFilter to avoid
regenerating the bits every time.
<p/>Created: Apr 20, 2004 8:58:29 AM
</summary>
<since>1.4</since>
<seealso cref="T:Lucene.Net.Search.CachingWrapperFilter"/>
</member>
<member name="M:Lucene.Net.Search.FilteredQuery.#ctor(Lucene.Net.Search.Query,Lucene.Net.Search.Filter)">
<summary> Constructs a new query which applies a filter to the results of the original query.
Filter.getDocIdSet() will be called every time this query is used in a search.
</summary>
<param name="query"> Query to be filtered, cannot be <c>null</c>.
</param>
<param name="filter">Filter to apply to query results, cannot be <c>null</c>.
</param>
</member>
<member name="M:Lucene.Net.Search.FilteredQuery.CreateWeight(Lucene.Net.Search.Searcher)">
<summary> Returns a Weight that applies the filter to the enclosed query's Weight.
This is accomplished by overriding the Scorer returned by the Weight.
</summary>
</member>
<member name="M:Lucene.Net.Search.FilteredQuery.Rewrite(Lucene.Net.Index.IndexReader)">
<summary>Rewrites the wrapped query. </summary>
</member>
<member name="M:Lucene.Net.Search.FilteredQuery.ToString(System.String)">
<summary>Prints a user-readable version of this query. </summary>
</member>
<member name="M:Lucene.Net.Search.FilteredQuery.Equals(System.Object)">
<summary>Returns true iff <c>o</c> is equal to this. </summary>
</member>
<member name="M:Lucene.Net.Search.FilteredQuery.GetHashCode">
<summary>Returns a hash code value for this object. </summary>
</member>
<member name="T:Lucene.Net.Search.FilteredTermEnum">
<summary>Abstract class for enumerating a subset of all terms.
<p/>Term enumerations are always ordered by Term.compareTo(). Each term in
the enumeration is greater than all that precede it.
</summary>
</member>
<member name="F:Lucene.Net.Search.FilteredTermEnum.currentTerm">
<summary>the current term </summary>
</member>
<member name="F:Lucene.Net.Search.FilteredTermEnum.actualEnum">
<summary>the delegate enum - to set this member use <see cref="M:Lucene.Net.Search.FilteredTermEnum.SetEnum(Lucene.Net.Index.TermEnum)"/> </summary>
</member>
<member name="M:Lucene.Net.Search.FilteredTermEnum.TermCompare(Lucene.Net.Index.Term)">
<summary>Equality compare on the term </summary>
</member>
<member name="M:Lucene.Net.Search.FilteredTermEnum.Difference">
<summary>Equality measure on the term </summary>
</member>
<member name="M:Lucene.Net.Search.FilteredTermEnum.EndEnum">
<summary>Indicates the end of the enumeration has been reached </summary>
</member>
<member name="M:Lucene.Net.Search.FilteredTermEnum.SetEnum(Lucene.Net.Index.TermEnum)">
<summary> Use this method to set the actual TermEnum (e.g. in the ctor);
it will be automatically positioned on the first matching term.
</summary>
</member>
<member name="M:Lucene.Net.Search.FilteredTermEnum.DocFreq">
<summary> Returns the docFreq of the current Term in the enumeration.
Returns -1 if no Term matches or all terms have been enumerated.
</summary>
</member>
<member name="M:Lucene.Net.Search.FilteredTermEnum.Next">
<summary>Increments the enumeration to the next element. True if one exists. </summary>
</member>
<member name="P:Lucene.Net.Search.FilteredTermEnum.Term">
<summary>Returns the current Term in the enumeration.
Returns null if no Term matches or all terms have been enumerated.
</summary>
</member>
<member name="T:Lucene.Net.Search.FilterManager">
<summary> Filter caching singleton. It can be used
to save filters locally for reuse.
This class makes it possible to cache Filters even when using RMI, as it
keeps the cache on the searcher side of the RMI connection.
It can also be used as persistent storage for any filter, as long as the
filter provides a proper hashCode(), since that is used as the key in the cache.
The cache is periodically cleaned up from a separate thread to ensure the
cache doesn't exceed the maximum size.
</summary>
</member>
<member name="F:Lucene.Net.Search.FilterManager.DEFAULT_CACHE_CLEAN_SIZE">
<summary>The default maximum number of Filters in the cache </summary>
</member>
<member name="F:Lucene.Net.Search.FilterManager.DEFAULT_CACHE_SLEEP_TIME">
<summary>The default frequency of cache cleanup </summary>
</member>
<member name="F:Lucene.Net.Search.FilterManager.cache">
<summary>The cache itself </summary>
</member>
<member name="F:Lucene.Net.Search.FilterManager.cacheCleanSize">
<summary>Maximum allowed cache size </summary>
</member>
<member name="F:Lucene.Net.Search.FilterManager.cleanSleepTime">
<summary>Cache cleaning frequency </summary>
</member>
<member name="F:Lucene.Net.Search.FilterManager.internalFilterCleaner">
<summary>Cache cleaner that runs in a separate thread </summary>
</member>
<member name="M:Lucene.Net.Search.FilterManager.#ctor">
<summary> Sets up the FilterManager singleton.</summary>
</member>
<member name="M:Lucene.Net.Search.FilterManager.SetCacheSize(System.Int32)">
<summary> Sets the max size that cache should reach before it is cleaned up</summary>
<param name="value"> maximum allowed cache size </param>
</member>
<member name="M:Lucene.Net.Search.FilterManager.SetCleanThreadSleepTime(System.Int64)">
<summary> Sets the cache cleaning frequency in milliseconds.</summary>
<param name="value"> cleaning frequency in milliseconds </param>
</member>
<member name="M:Lucene.Net.Search.FilterManager.GetFilter(Lucene.Net.Search.Filter)">
<summary> Returns the cached version of the filter. Allows the caller to pass up
a small filter but this will keep a persistent version around and allow
the caching filter to do its job.
</summary>
<param name="filter">The input filter
</param>
<returns> The cached version of the filter
</returns>
</member>
<member name="T:Lucene.Net.Search.FilterManager.FilterItem">
<summary> Holds the filter and the last time the filter was used, to make LRU-based
cache cleaning possible.
TODO: Clean this up when we switch to Java 1.5
</summary>
</member>
<member name="T:Lucene.Net.Search.FilterManager.FilterCleaner">
<summary> Keeps the cache from getting too big.
If we were using Java 1.5, we could use LinkedHashMap and we would not need this thread
to clean out the cache.
The SortedSet sortedFilterItems is used only to sort the items from the cache,
so when it's time to clean up we have the TreeSet sort the FilterItems by
timestamp.
Removes 1.5 * the number of items needed to make the cache smaller.
For example:
If the cache clean size is 10 and the cache is at 15, we would remove (15 - 10) * 1.5 = 7.5, rounded up to 8.
This way we clean the cache a bit more than strictly necessary, and avoid having the cache cleaner do it frequently.
</summary>
</member>
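The FilterCleaner sizing rule above (remove 1.5 times the overshoot, rounded up) can be sketched as plain arithmetic. FilterCleanerMath and itemsToRemove are hypothetical names used only for illustration, not part of the Lucene.Net API.

```java
// Sketch of the FilterCleaner sizing rule: when the cache exceeds
// cacheCleanSize, remove 1.5 times the overshoot, rounded up.
public class FilterCleanerMath {
    static int itemsToRemove(int cacheSize, int cacheCleanSize) {
        int over = cacheSize - cacheCleanSize;
        if (over > 0) {
            return (int) Math.ceil(over * 1.5);
        }
        return 0; // cache is within bounds, nothing to remove
    }
}
```

For the example in the summary, itemsToRemove(15, 10) yields 8.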
<member name="T:Lucene.Net.Search.Function.ByteFieldSource">
<summary> Expert: obtains single byte field values from the
<see cref="T:Lucene.Net.Search.FieldCache">FieldCache</see>
using <c>getBytes()</c> and makes those values
available as other numeric types, casting as needed.
<p/><font color="#FF0000">
WARNING: The status of the <b>Search.Function</b> package is experimental.
The APIs introduced here might change in the future and will not be
supported anymore in such a case.</font>
</summary>
<seealso cref="T:Lucene.Net.Search.Function.FieldCacheSource"> for requirements
on the field.
<p/><b>NOTE</b>: with the switch in 2.9 to segment-based
searching, if <see cref="M:Lucene.Net.Search.Function.FieldCacheSource.GetValues(Lucene.Net.Index.IndexReader)"/> is invoked with a
composite (multi-segment) reader, this can easily cause
double RAM usage for the values in the FieldCache. It's
best to switch your application to pass only atomic
(single segment) readers to this API.<p/>
</seealso>
</member>
<member name="T:Lucene.Net.Search.Function.FieldCacheSource">
<summary> Expert: A base class for ValueSource implementations that retrieve values for
a single field from the <see cref="T:Lucene.Net.Search.FieldCache">FieldCache</see>.
<p/>
Fields used herein must be indexed (it doesn't matter whether these fields are stored or not).
<p/>
It is assumed that each such indexed field is untokenized, or at least has a single token in a document.
For documents with multiple tokens of the same field, behavior is undefined (it is likely that current
code would use the value of one of these tokens, but this is not guaranteed).
<p/>
Documents with no tokens in this field are assigned the <c>Zero</c> value.
<p/><font color="#FF0000">
WARNING: The status of the <b>Search.Function</b> package is experimental.
The APIs introduced here might change in the future and will not be
supported anymore in such a case.</font>
<p/><b>NOTE</b>: with the switch in 2.9 to segment-based
searching, if <see cref="M:Lucene.Net.Search.Function.FieldCacheSource.GetValues(Lucene.Net.Index.IndexReader)"/> is invoked with a
composite (multi-segment) reader, this can easily cause
double RAM usage for the values in the FieldCache. It's
best to switch your application to pass only atomic
(single segment) readers to this API.<p/>
</summary>
</member>
<member name="T:Lucene.Net.Search.Function.ValueSource">
<summary> Expert: source of values for basic function queries.
<p/>At its default/simplest form, values - one per doc - are used as the score of that doc.
<p/>Values are instantiated as
<see cref="T:Lucene.Net.Search.Function.DocValues">DocValues</see> for a particular reader.
<p/>ValueSource implementations differ in RAM requirements: usage is always proportional
to the number of documents, but for each document the number of bytes can be 1, 2, 4, or 8.
<p/><font color="#FF0000">
WARNING: The status of the <b>Search.Function</b> package is experimental.
The APIs introduced here might change in the future and will not be
supported anymore in such a case.</font>
</summary>
</member>
<member name="M:Lucene.Net.Search.Function.ValueSource.GetValues(Lucene.Net.Index.IndexReader)">
<summary> Return the DocValues used by the function query.</summary>
<param name="reader">the IndexReader used to read these values.
If any caching is involved, that caching would also be IndexReader based.
</param>
<throws> IOException for any error. </throws>
</member>
<member name="M:Lucene.Net.Search.Function.ValueSource.Description">
<summary> description of field, used in explain() </summary>
</member>
<member name="M:Lucene.Net.Search.Function.ValueSource.Equals(System.Object)">
<summary> Needed for possible caching of query results - used by <see cref="M:Lucene.Net.Search.Function.ValueSourceQuery.Equals(System.Object)"/>.</summary>
<seealso cref="M:System.Object.Equals(System.Object)">
</seealso>
</member>
<member name="M:Lucene.Net.Search.Function.ValueSource.GetHashCode">
<summary> Needed for possible caching of query results - used by <see cref="M:Lucene.Net.Search.Function.ValueSourceQuery.GetHashCode"/>.</summary>
<seealso cref="M:System.Object.GetHashCode">
</seealso>
</member>
<member name="M:Lucene.Net.Search.Function.FieldCacheSource.#ctor(System.String)">
<summary> Create a cached field source for the input field. </summary>
</member>
<member name="M:Lucene.Net.Search.Function.FieldCacheSource.GetCachedFieldValues(Lucene.Net.Search.FieldCache,System.String,Lucene.Net.Index.IndexReader)">
<summary> Return cached DocValues for input field and reader.</summary>
<param name="cache">FieldCache so that values of a field are loaded once per reader (RAM allowing)
</param>
<param name="field">Field for which values are required.
</param>
<seealso cref="T:Lucene.Net.Search.Function.ValueSource">
</seealso>
</member>
<member name="M:Lucene.Net.Search.Function.FieldCacheSource.CachedFieldSourceEquals(Lucene.Net.Search.Function.FieldCacheSource)">
<summary> Check if equals to another <see cref="T:Lucene.Net.Search.Function.FieldCacheSource"/>, already knowing that cache and field are equal. </summary>
<seealso cref="M:System.Object.Equals(System.Object)">
</seealso>
</member>
<member name="M:Lucene.Net.Search.Function.FieldCacheSource.CachedFieldSourceHashCode">
<summary> Return a hash code of a <see cref="T:Lucene.Net.Search.Function.FieldCacheSource"/>, without the hash-codes of the field
and the cache (those are taken care of elsewhere).
</summary>
<seealso cref="M:System.Object.GetHashCode">
</seealso>
</member>
<member name="M:Lucene.Net.Search.Function.ByteFieldSource.#ctor(System.String)">
<summary> Create a cached byte field source with default string-to-byte parser. </summary>
</member>
<member name="M:Lucene.Net.Search.Function.ByteFieldSource.#ctor(System.String,Lucene.Net.Search.ByteParser)">
<summary> Create a cached byte field source with a specific string-to-byte parser. </summary>
</member>
<member name="T:Lucene.Net.Search.Function.DocValues">
<summary> Expert: represents field values as different types.
Normally created via a
<see cref="T:Lucene.Net.Search.Function.ValueSource">ValueSource</see>
for a particular field and reader.
<p/><font color="#FF0000">
WARNING: The status of the <b>Search.Function</b> package is experimental.
The APIs introduced here might change in the future and will not be
supported anymore in such a case.</font>
</summary>
</member>
<member name="M:Lucene.Net.Search.Function.DocValues.FloatVal(System.Int32)">
<summary> Return doc value as a float.
<p/>Mandatory: every DocValues implementation must implement at least this method.
</summary>
<param name="doc">document whose float value is requested.
</param>
</member>
<member name="M:Lucene.Net.Search.Function.DocValues.IntVal(System.Int32)">
<summary> Return doc value as an int.
<p/>Optional: DocValues implementations can (but don't have to) override this method.
</summary>
<param name="doc">document whose int value is requested.
</param>
</member>
<member name="M:Lucene.Net.Search.Function.DocValues.LongVal(System.Int32)">
<summary> Return doc value as a long.
<p/>Optional: DocValues implementations can (but don't have to) override this method.
</summary>
<param name="doc">document whose long value is requested.
</param>
</member>
<member name="M:Lucene.Net.Search.Function.DocValues.DoubleVal(System.Int32)">
<summary> Return doc value as a double.
<p/>Optional: DocValues implementations can (but don't have to) override this method.
</summary>
<param name="doc">document whose double value is requested.
</param>
</member>
<member name="M:Lucene.Net.Search.Function.DocValues.StrVal(System.Int32)">
<summary> Return doc value as a string.
<p/>Optional: DocValues implementations can (but don't have to) override this method.
</summary>
<param name="doc">document whose string value is requested.
</param>
</member>
<member name="M:Lucene.Net.Search.Function.DocValues.ToString(System.Int32)">
<summary> Return a string representation of a doc value, as required for Explanations.</summary>
</member>
<member name="M:Lucene.Net.Search.Function.DocValues.Explain(System.Int32)">
<summary> Explain the scoring value for the input doc.</summary>
</member>
<member name="M:Lucene.Net.Search.Function.DocValues.GetMinValue">
<summary> Returns the minimum of all values or <c>Float.NaN</c> if this
DocValues instance does not contain any value.
<p/>
This operation is optional
<p/>
</summary>
<returns> the minimum of all values or <c>Float.NaN</c> if this
DocValues instance does not contain any value.
</returns>
</member>
<member name="M:Lucene.Net.Search.Function.DocValues.GetMaxValue">
<summary> Returns the maximum of all values or <c>Float.NaN</c> if this
DocValues instance does not contain any value.
<p/>
This operation is optional
<p/>
</summary>
<returns> the maximum of all values or <c>Float.NaN</c> if this
DocValues instance does not contain any value.
</returns>
</member>
<member name="M:Lucene.Net.Search.Function.DocValues.GetAverageValue">
<summary> Returns the average of all values or <c>Float.NaN</c> if this
DocValues instance does not contain any value.
<p/>
This operation is optional
<p/>
</summary>
<returns> the average of all values or <c>Float.NaN</c> if this
DocValues instance does not contain any value
</returns>
</member>
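The contract stated above for GetMinValue, GetMaxValue, and GetAverageValue (fall back to Float.NaN when the DocValues instance holds no values) can be sketched over a plain float array. DocValuesStats and averageOrNaN are hypothetical names for illustration only.

```java
// Sketch of the DocValues statistics contract described above:
// the average of the values, or Float.NaN when there are none.
public class DocValuesStats {
    static float averageOrNaN(float[] values) {
        if (values.length == 0) {
            return Float.NaN; // no values: the statistic is undefined
        }
        float sum = 0f;
        for (float v : values) {
            sum += v;
        }
        return sum / values.length;
    }
}
```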
<member name="P:Lucene.Net.Search.Function.DocValues.InnerArray">
<summary> Expert: for test purposes only, return the inner array of values, or null if not applicable.
<p/>
Allows tests to verify that loaded values are:
<list type="bullet">
<item>indeed cached/reused.</item>
<item>stored in the expected size/type (byte/short/int/float).</item>
</list>
Note: implementations of DocValues must override this method for
these test elements to be tested; otherwise the test would not fail,
just print a warning.
</summary>
</member>
<member name="T:Lucene.Net.Search.Function.CustomScoreProvider">
<summary>
An instance of this subclass should be returned by
<see cref="M:Lucene.Net.Search.Function.CustomScoreQuery.GetCustomScoreProvider(Lucene.Net.Index.IndexReader)"/>, if you want
to modify the custom score calculation of a <see cref="T:Lucene.Net.Search.Function.CustomScoreQuery"/>.
<para>Since Lucene 2.9, queries operate on each segment of an Index separately,
so overriding the similar (now deprecated) methods in <see cref="T:Lucene.Net.Search.Function.CustomScoreQuery"/>
is no longer suitable, as the supplied <c>doc</c> ID is per-segment
and without knowledge of the IndexReader you cannot access the
document or <see cref="T:Lucene.Net.Search.FieldCache"/>.</para>
@lucene.experimental
@since 2.9.2
</summary>
</member>
<member name="M:Lucene.Net.Search.Function.CustomScoreProvider.#ctor(Lucene.Net.Index.IndexReader)">
<summary>
Creates a new instance of the provider class for the given IndexReader.
</summary>
</member>
<member name="M:Lucene.Net.Search.Function.CustomScoreProvider.CustomScore(System.Int32,System.Single,System.Single[])">
<summary>
Compute a custom score by the subQuery score and a number of
ValueSourceQuery scores.
<p/>
Subclasses can override this method to modify the custom score.
<p/>
If your custom scoring is different from the default herein, you
should override at least one of the two customScore() methods.
If the number of ValueSourceQueries is always less than 2, it is
sufficient to override the other
<see cref="M:Lucene.Net.Search.Function.CustomScoreProvider.CustomScore(System.Int32,System.Single,System.Single)">CustomScore()</see>
method, which is simpler.
<p/>
The default computation herein is a multiplication of given scores:
<pre>
ModifiedScore = valSrcScore * valSrcScores[0] * valSrcScores[1] * ...
</pre>
</summary>
<param name="doc">id of scored doc</param>
<param name="subQueryScore">score of that doc by the subQuery</param>
<param name="valSrcScores">scores of that doc by the ValueSourceQuery</param>
<returns>custom score</returns>
</member>
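The default combination stated above (ModifiedScore = subQueryScore * valSrcScores[0] * valSrcScores[1] * ...) can be sketched as a standalone function. DefaultCustomScore and combine are hypothetical names for illustration; the real hook is the CustomScore override itself.

```java
// Sketch of the default CustomScore combination described above:
// the sub-query score multiplied by every value-source score.
public class DefaultCustomScore {
    static float combine(float subQueryScore, float[] valSrcScores) {
        float score = subQueryScore;
        for (float s : valSrcScores) {
            score *= s;
        }
        return score;
    }
}
```

With no value-source scores, the sub-query score passes through unchanged.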
<member name="M:Lucene.Net.Search.Function.CustomScoreProvider.CustomScore(System.Int32,System.Single,System.Single)">
<summary>
Compute a custom score by the subQuery score and the ValueSourceQuery score.
<p/>
Subclasses can override this method to modify the custom score.
<p/>
If your custom scoring is different from the default herein, you
should override at least one of the two customScore() methods.
If the number of ValueSourceQueries is always less than 2, it is
sufficient to override this customScore() method, which is simpler.
<p/>
The default computation herein is a multiplication of the two scores:
<pre>
ModifiedScore = subQueryScore * valSrcScore
</pre>
</summary>
<param name="doc">id of scored doc</param>
<param name="subQueryScore">score of that doc by the subQuery</param>
<param name="valSrcScore">score of that doc by the ValueSourceQuery</param>
<returns>custom score</returns>
</member>
<member name="M:Lucene.Net.Search.Function.CustomScoreProvider.CustomExplain(System.Int32,Lucene.Net.Search.Explanation,Lucene.Net.Search.Explanation[])">
<summary>
Explain the custom score.
Whenever overriding <see cref="M:Lucene.Net.Search.Function.CustomScoreProvider.CustomScore(System.Int32,System.Single,System.Single[])"/>,
this method should also be overridden to provide the correct explanation
for the part of the custom scoring.
</summary>
<param name="doc">doc being explained</param>
<param name="subQueryExpl">explanation for the sub-query part</param>
<param name="valSrcExpls">explanation for the value source part</param>
<returns>an explanation for the custom score</returns>
</member>
<member name="M:Lucene.Net.Search.Function.CustomScoreProvider.CustomExplain(System.Int32,Lucene.Net.Search.Explanation,Lucene.Net.Search.Explanation)">
<summary>
Explain the custom score.
Whenever overriding <see cref="M:Lucene.Net.Search.Function.CustomScoreProvider.CustomScore(System.Int32,System.Single,System.Single)"/>,
this method should also be overridden to provide the correct explanation
for the part of the custom scoring.
</summary>
<param name="doc">doc being explained</param>
<param name="subQueryExpl">explanation for the sub-query part</param>
<param name="valSrcExpl">explanation for the value source part</param>
<returns>an explanation for the custom score</returns>
</member>
<member name="T:Lucene.Net.Search.Function.CustomScoreQuery">
<summary> Query that sets document score as a programmatic function of several (sub) scores:
<list type="bullet">
<item>the score of its subQuery (any query)</item>
<item>(optional) the score of its ValueSourceQuery (or queries).
For most simple/convenient use cases this query is likely to be a
<see cref="T:Lucene.Net.Search.Function.FieldScoreQuery">FieldScoreQuery</see></item>
</list>
Subclasses can modify the computation by overriding <see cref="M:Lucene.Net.Search.Function.CustomScoreQuery.GetCustomScoreProvider(Lucene.Net.Index.IndexReader)"/>.
<p/><font color="#FF0000">
WARNING: The status of the <b>Search.Function</b> package is experimental.
The APIs introduced here might change in the future and will not be
supported anymore in such a case.</font>
</summary>
</member>
<member name="M:Lucene.Net.Search.Function.CustomScoreQuery.#ctor(Lucene.Net.Search.Query)">
<summary> Create a CustomScoreQuery over input subQuery.</summary>
<param name="subQuery">the sub query whose score is being customized. Must not be null.
</param>
</member>
<member name="M:Lucene.Net.Search.Function.CustomScoreQuery.#ctor(Lucene.Net.Search.Query,Lucene.Net.Search.Function.ValueSourceQuery)">
<summary> Create a CustomScoreQuery over input subQuery and a <see cref="T:Lucene.Net.Search.Function.ValueSourceQuery"/>.</summary>
<param name="subQuery">the sub query whose score is being customized. Must not be null.
</param>
<param name="valSrcQuery">a value source query whose scores are used in the custom score
computation. For most simple/convenient use cases this would be a
<see cref="T:Lucene.Net.Search.Function.FieldScoreQuery">FieldScoreQuery</see>.
This parameter is optional - it can be null or even an empty array.
</param>
</member>
<member name="M:Lucene.Net.Search.Function.CustomScoreQuery.#ctor(Lucene.Net.Search.Query,Lucene.Net.Search.Function.ValueSourceQuery[])">
<summary> Create a CustomScoreQuery over input subQuery and a <see cref="T:Lucene.Net.Search.Function.ValueSourceQuery"/>.</summary>
<param name="subQuery">the sub query whose score is being customized. Must not be null.
</param>
<param name="valSrcQueries">value source queries whose scores are used in the custom score
computation. For most simple/convenient use cases these would be
<see cref="T:Lucene.Net.Search.Function.FieldScoreQuery">FieldScoreQueries</see>.
This parameter is optional - it can be null or even an empty array.
</param>
</member>
<member name="M:Lucene.Net.Search.Function.CustomScoreQuery.Equals(System.Object)">
<summary>Returns true if <c>o</c> is equal to this. </summary>
</member>
<member name="M:Lucene.Net.Search.Function.CustomScoreQuery.GetHashCode">
<summary>Returns a hash code value for this object. </summary>
</member>
<member name="M:Lucene.Net.Search.Function.CustomScoreQuery.GetCustomScoreProvider(Lucene.Net.Index.IndexReader)">
<summary>
Returns a <see cref="T:Lucene.Net.Search.Function.CustomScoreProvider"/> that calculates the custom scores
for the given <see cref="T:Lucene.Net.Index.IndexReader"/>. The default implementation returns a default
implementation as specified in the docs of <see cref="T:Lucene.Net.Search.Function.CustomScoreProvider"/>.
</summary>
</member>
<member name="M:Lucene.Net.Search.Function.CustomScoreQuery.CustomScore(System.Int32,System.Single,System.Single[])">
<summary>
Compute a custom score by the subQuery score and a number of
ValueSourceQuery scores.
The doc is relative to the current reader, which is
unknown to CustomScoreQuery when using per-segment search (since Lucene 2.9).
Please override <see cref="M:Lucene.Net.Search.Function.CustomScoreQuery.GetCustomScoreProvider(Lucene.Net.Index.IndexReader)"/> and return a subclass
of <see cref="T:Lucene.Net.Search.Function.CustomScoreProvider"/> for the given <see cref="T:Lucene.Net.Index.IndexReader"/>.
</summary>
<seealso cref="M:Lucene.Net.Search.Function.CustomScoreProvider.CustomScore(System.Int32,System.Single,System.Single[])"/>
</member>
<member name="M:Lucene.Net.Search.Function.CustomScoreQuery.CustomScore(System.Int32,System.Single,System.Single)">
<summary> Compute a custom score by the subQuery score and the ValueSourceQuery score.
The doc is relative to the current reader, which is
unknown to CustomScoreQuery when using per-segment search (since Lucene 2.9).
Please override <see cref="M:Lucene.Net.Search.Function.CustomScoreQuery.GetCustomScoreProvider(Lucene.Net.Index.IndexReader)"/> and return a subclass
of <see cref="T:Lucene.Net.Search.Function.CustomScoreProvider"/> for the given <see cref="T:Lucene.Net.Index.IndexReader"/>.
</summary>
<seealso cref="M:Lucene.Net.Search.Function.CustomScoreProvider.CustomScore(System.Int32,System.Single,System.Single)"/>
</member>
<member name="M:Lucene.Net.Search.Function.CustomScoreQuery.CustomExplain(System.Int32,Lucene.Net.Search.Explanation,Lucene.Net.Search.Explanation[])">
<summary> Explain the custom score.
The doc is relative to the current reader, which is
unknown to CustomScoreQuery when using per-segment search (since Lucene 2.9).
Please override <see cref="M:Lucene.Net.Search.Function.CustomScoreQuery.GetCustomScoreProvider(Lucene.Net.Index.IndexReader)"/> and return a subclass
of <see cref="T:Lucene.Net.Search.Function.CustomScoreProvider"/> for the given <see cref="T:Lucene.Net.Index.IndexReader"/>.
</summary>
</member>
<member name="M:Lucene.Net.Search.Function.CustomScoreQuery.CustomExplain(System.Int32,Lucene.Net.Search.Explanation,Lucene.Net.Search.Explanation)">
<summary> Explain the custom score.
The doc is relative to the current reader, which is
unknown to CustomScoreQuery when using per-segment search (since Lucene 2.9).
Please override <see cref="M:Lucene.Net.Search.Function.CustomScoreQuery.GetCustomScoreProvider(Lucene.Net.Index.IndexReader)"/> and return a subclass
of <see cref="T:Lucene.Net.Search.Function.CustomScoreProvider"/> for the given <see cref="T:Lucene.Net.Index.IndexReader"/>.
</summary>
</member>
<member name="M:Lucene.Net.Search.Function.CustomScoreQuery.IsStrict">
<summary> Checks if this is strict custom scoring.
In strict custom scoring, the ValueSource part does not participate in weight normalization.
This may be useful when one wants full control over how scores are modified, and does
not care about normalizing by the ValueSource part.
One particular case where this is useful is for testing this query.
<p/>
Note: only has effect when the ValueSource part is not null.
</summary>
</member>
<member name="M:Lucene.Net.Search.Function.CustomScoreQuery.SetStrict(System.Boolean)">
<summary> Set the strict mode of this query. </summary>
<param name="strict">The strict mode to set.
</param>
<seealso cref="M:Lucene.Net.Search.Function.CustomScoreQuery.IsStrict">
</seealso>
</member>
<member name="M:Lucene.Net.Search.Function.CustomScoreQuery.Name">
<summary> A short name of this query, used in <see cref="M:Lucene.Net.Search.Function.CustomScoreQuery.ToString(System.String)"/>.</summary>
</member>
<member name="T:Lucene.Net.Search.Function.CustomScoreQuery.CustomScorer">
<summary> A scorer that applies a (callback) function on scores of the subQuery.</summary>
</member>
<member name="T:Lucene.Net.Search.Function.FieldScoreQuery">
<summary> A query that scores each document as the value of the numeric input field.
<p/>
The query matches all documents, and scores each document according to the numeric
value of that field.
<p/>
It is assumed, and expected, that:
<list type="bullet">
<item>The field used here is indexed, and has exactly
one token in every scored document.</item>
<item>Best if this field is un_tokenized.</item>
<item>That token is parsable to the selected type.</item>
</list>
<p/>
Combining this query in a FunctionQuery allows much freedom in affecting document scores.
Note, that with this freedom comes responsibility: it is more than likely that the
default Lucene scoring is superior in quality to scoring modified as explained here.
However, in some cases, and certainly for research experiments, this capability may prove useful.
<p/>
When constructing this query, select the appropriate type. That type should match the data stored in the
field. So in fact the "right" type should be selected before indexing. Type selection
has effect on the RAM usage:
<list type="bullet">
<item><see cref="F:Lucene.Net.Search.Function.FieldScoreQuery.Type.BYTE"/> consumes 1 * maxDocs bytes.</item>
<item><see cref="F:Lucene.Net.Search.Function.FieldScoreQuery.Type.SHORT"/> consumes 2 * maxDocs bytes.</item>
<item><see cref="F:Lucene.Net.Search.Function.FieldScoreQuery.Type.INT"/> consumes 4 * maxDocs bytes.</item>
<item><see cref="F:Lucene.Net.Search.Function.FieldScoreQuery.Type.FLOAT"/> consumes 8 * maxDocs bytes.</item>
</list>
<p/>
<b>Caching:</b>
Values for the numeric field are loaded once and cached in memory for further use with the same IndexReader.
To take advantage of this, it is extremely important to reuse index-readers or index-searchers;
otherwise, for instance if a new index reader is opened for each query, large penalties would be
paid for loading the field values into memory over and over again.
<p/><font color="#FF0000">
WARNING: The status of the <b>Search.Function</b> package is experimental.
The APIs introduced here might change in the future and will not be
supported anymore in such a case.</font>
</summary>
</member>
<member name="T:Lucene.Net.Search.Function.ValueSourceQuery">
<summary> Expert: A Query that sets the score of each document to the
values obtained from a <see cref="T:Lucene.Net.Search.Function.ValueSource">ValueSource</see>.
<p/>
This query provides a score for <em>each and every</em> undeleted document in the index.
<p/>
The value source can be based on a (cached) value of an indexed field, but it
can also be based on an external source, e.g. values read from an external database.
<p/>
Score is set as: Score(doc,query) = query.getBoost()<sup>2</sup> * valueSource(doc).
<p/><font color="#FF0000">
WARNING: The status of the <b>Search.Function</b> package is experimental.
The APIs introduced here might change in the future and will not be
supported anymore in such a case.</font>
</summary>
</member>
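The scoring formula above can be sketched with plain arithmetic (illustrative values only; this is not a Lucene API call):

```csharp
using System;

class ValueSourceScoreDemo
{
    // Score(doc, query) = boost^2 * valueSource(doc), per the formula above.
    public static float Score(float boost, float valueSourceValue)
    {
        return boost * boost * valueSourceValue;
    }

    static void Main()
    {
        // A query boost of 2.0 and a per-document value of 3.0
        // yield a score of 2^2 * 3 = 12.
        Console.WriteLine(Score(2.0f, 3.0f)); // 12
    }
}
```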
<member name="M:Lucene.Net.Search.Function.ValueSourceQuery.#ctor(Lucene.Net.Search.Function.ValueSource)">
<summary> Create a value source query</summary>
<param name="valSrc">provides the values that define the function to be used for scoring
</param>
</member>
<member name="M:Lucene.Net.Search.Function.ValueSourceQuery.Equals(System.Object)">
<summary>Returns true if <c>o</c> is equal to this. </summary>
</member>
<member name="M:Lucene.Net.Search.Function.ValueSourceQuery.GetHashCode">
<summary>Returns a hash code value for this object. </summary>
</member>
<member name="T:Lucene.Net.Search.Function.ValueSourceQuery.ValueSourceScorer">
<summary> A scorer that (simply) matches all documents, and scores each document with
the value of the value source in effect. For example, if the value source
is a (cached) field source, then the value of that field in that document will
be used (assuming the field is indexed for this doc, with a single token).
</summary>
</member>
<member name="M:Lucene.Net.Search.Function.FieldScoreQuery.#ctor(System.String,Lucene.Net.Search.Function.FieldScoreQuery.Type)">
<summary> Create a FieldScoreQuery - a query that scores each document as the value of the numeric input field.
<p/>
The <c>type</c> param tells how to parse the field string values into a numeric score value.
</summary>
<param name="field">the numeric field to be used.
</param>
<param name="type">the type of the field: either
<see cref="F:Lucene.Net.Search.Function.FieldScoreQuery.Type.BYTE"/>, <see cref="F:Lucene.Net.Search.Function.FieldScoreQuery.Type.SHORT"/>, <see cref="F:Lucene.Net.Search.Function.FieldScoreQuery.Type.INT"/>, or <see cref="F:Lucene.Net.Search.Function.FieldScoreQuery.Type.FLOAT"/>.
</param>
</member>
<member name="T:Lucene.Net.Search.Function.FieldScoreQuery.Type">
<summary> Type of score field, indicating how field values are interpreted/parsed.
<p/>
The type selected at search time should match the data stored in the field.
Different types have different RAM requirements:
<list type="bullet">
<item><see cref="F:Lucene.Net.Search.Function.FieldScoreQuery.Type.BYTE"/> consumes 1 * maxDocs bytes.</item>
<item><see cref="F:Lucene.Net.Search.Function.FieldScoreQuery.Type.SHORT"/> consumes 2 * maxDocs bytes.</item>
<item><see cref="F:Lucene.Net.Search.Function.FieldScoreQuery.Type.INT"/> consumes 4 * maxDocs bytes.</item>
<item><see cref="F:Lucene.Net.Search.Function.FieldScoreQuery.Type.FLOAT"/> consumes 8 * maxDocs bytes.</item>
</list>
</summary>
</member>
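The RAM figures above reduce to a per-document width times maxDocs; a minimal sketch mirroring the table as documented (the helper name is illustrative):

```csharp
using System;

class FieldScoreRamDemo
{
    // Bytes consumed by the FieldCache, per the table above: the documented
    // per-document width times maxDocs (the table lists FLOAT at 8 bytes/doc).
    public static long CacheBytes(string type, int maxDocs)
    {
        switch (type)
        {
            case "BYTE":  return 1L * maxDocs;
            case "SHORT": return 2L * maxDocs;
            case "INT":   return 4L * maxDocs;
            case "FLOAT": return 8L * maxDocs;
            default: throw new ArgumentException("unknown type: " + type);
        }
    }

    static void Main()
    {
        // For a 1,000,000-document index, an INT score field costs ~4 MB.
        Console.WriteLine(CacheBytes("INT", 1000000)); // 4000000
    }
}
```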
<member name="F:Lucene.Net.Search.Function.FieldScoreQuery.Type.BYTE">
<summary>field values are interpreted as numeric byte values. </summary>
</member>
<member name="F:Lucene.Net.Search.Function.FieldScoreQuery.Type.SHORT">
<summary>field values are interpreted as numeric short values. </summary>
</member>
<member name="F:Lucene.Net.Search.Function.FieldScoreQuery.Type.INT">
<summary>field values are interpreted as numeric int values. </summary>
</member>
<member name="F:Lucene.Net.Search.Function.FieldScoreQuery.Type.FLOAT">
<summary>field values are interpreted as numeric float values. </summary>
</member>
<member name="T:Lucene.Net.Search.Function.FloatFieldSource">
<summary> Expert: obtains float field values from the
<see cref="T:Lucene.Net.Search.FieldCache">FieldCache</see>
using <c>getFloats()</c> and makes those values
available as other numeric types, casting as needed.
<p/><font color="#FF0000">
WARNING: The status of the <b>Search.Function</b> package is experimental.
The APIs introduced here might change in the future and will not be
supported anymore in such a case.</font>
</summary>
<seealso cref="T:Lucene.Net.Search.Function.FieldCacheSource"> for requirements
on the field.
<p/><b>NOTE</b>: with the switch in 2.9 to segment-based
searching, if <see cref="M:Lucene.Net.Search.Function.FieldCacheSource.GetValues(Lucene.Net.Index.IndexReader)"/> is invoked with a
composite (multi-segment) reader, this can easily cause
double RAM usage for the values in the FieldCache. It's
best to switch your application to pass only atomic
(single segment) readers to this API.<p/>
</seealso>
</member>
<member name="M:Lucene.Net.Search.Function.FloatFieldSource.#ctor(System.String)">
<summary> Create a cached float field source with default string-to-float parser. </summary>
</member>
<member name="M:Lucene.Net.Search.Function.FloatFieldSource.#ctor(System.String,Lucene.Net.Search.FloatParser)">
<summary> Create a cached float field source with a specific string-to-float parser. </summary>
</member>
<member name="T:Lucene.Net.Search.Function.IntFieldSource">
<summary> Expert: obtains int field values from the
<see cref="T:Lucene.Net.Search.FieldCache">FieldCache</see>
using <c>getInts()</c> and makes those values
available as other numeric types, casting as needed.
<p/><font color="#FF0000">
WARNING: The status of the <b>Search.Function</b> package is experimental.
The APIs introduced here might change in the future and will not be
supported anymore in such a case.</font>
</summary>
<seealso cref="T:Lucene.Net.Search.Function.FieldCacheSource"> for requirements
on the field.
<p/><b>NOTE</b>: with the switch in 2.9 to segment-based
searching, if <see cref="M:Lucene.Net.Search.Function.FieldCacheSource.GetValues(Lucene.Net.Index.IndexReader)"/> is invoked with a
composite (multi-segment) reader, this can easily cause
double RAM usage for the values in the FieldCache. It's
best to switch your application to pass only atomic
(single segment) readers to this API.<p/>
</seealso>
</member>
<member name="M:Lucene.Net.Search.Function.IntFieldSource.#ctor(System.String)">
<summary> Create a cached int field source with default string-to-int parser. </summary>
</member>
<member name="M:Lucene.Net.Search.Function.IntFieldSource.#ctor(System.String,Lucene.Net.Search.IntParser)">
<summary> Create a cached int field source with a specific string-to-int parser. </summary>
</member>
<member name="T:Lucene.Net.Search.Function.OrdFieldSource">
<summary> Expert: obtains the ordinal of the field value from the default Lucene
<see cref="T:Lucene.Net.Search.FieldCache">FieldCache</see> using getStringIndex().
<p/>
The native Lucene index order is used to assign an ordinal value for each field value.
<p/>
Field values (terms) are lexicographically ordered by unicode value, and numbered starting at 1.
<p/>
Example:
<br/>If there were only three field values: "apple","banana","pear"
<br/>then ord("apple")=1, ord("banana")=2, ord("pear")=3
<p/>
WARNING:
ord() depends on the position in an index and can thus change
when other documents are inserted or deleted,
or if a MultiSearcher is used.
<p/><font color="#FF0000">
WARNING: The status of the <b>Search.Function</b> package is experimental.
The APIs introduced here might change in the future and will not be
supported anymore in such a case.</font>
<p/><b>NOTE</b>: with the switch in 2.9 to segment-based
searching, if <see cref="M:Lucene.Net.Search.Function.OrdFieldSource.GetValues(Lucene.Net.Index.IndexReader)"/> is invoked with a
composite (multi-segment) reader, this can easily cause
double RAM usage for the values in the FieldCache. It's
best to switch your application to pass only atomic
(single segment) readers to this API.<p/>
</summary>
</member>
<member name="M:Lucene.Net.Search.Function.OrdFieldSource.#ctor(System.String)">
<summary> Constructor for a certain field.</summary>
<param name="field">field whose value order is used.
</param>
</member>
<member name="T:Lucene.Net.Search.Function.ReverseOrdFieldSource">
<summary> Expert: obtains the ordinal of the field value from the default Lucene
<see cref="T:Lucene.Net.Search.FieldCache">FieldCache</see> using getStringIndex()
and reverses the order.
<p/>
The native Lucene index order is used to assign an ordinal value for each field value.
<p/>
Field values (terms) are lexicographically ordered by unicode value, and numbered starting at 1.
<br/>
Example of reverse ordinal (rord):
<br/>If there were only three field values: "apple","banana","pear"
<br/>then rord("apple")=3, rord("banana")=2, rord("pear")=1
<p/>
WARNING:
rord() depends on the position in an index and can thus change
when other documents are inserted or deleted,
or if a MultiSearcher is used.
<p/><font color="#FF0000">
WARNING: The status of the <b>Search.Function</b> package is experimental.
The APIs introduced here might change in the future and will not be
supported anymore in such a case.</font>
<p/><b>NOTE</b>: with the switch in 2.9 to segment-based
searching, if <see cref="M:Lucene.Net.Search.Function.ReverseOrdFieldSource.GetValues(Lucene.Net.Index.IndexReader)"/> is invoked with a
composite (multi-segment) reader, this can easily cause
double RAM usage for the values in the FieldCache. It's
best to switch your application to pass only atomic
(single segment) readers to this API.<p/>
</summary>
</member>
<member name="M:Lucene.Net.Search.Function.ReverseOrdFieldSource.#ctor(System.String)">
<summary> Constructor for a certain field.</summary>
<param name="field">field whose reverse value order is used.
</param>
</member>
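The ord/rord examples above can be checked with a few lines of plain C# (helper names are illustrative; the real values come from the FieldCache):

```csharp
using System;

class OrdDemo
{
    // ord: terms in lexicographic (index) order, numbered starting at 1.
    public static int Ord(string[] sortedTerms, string term)
    {
        return Array.IndexOf(sortedTerms, term) + 1;
    }

    // rord reverses the ordinal: rord(t) = N - ord(t) + 1 for N distinct terms.
    public static int Rord(string[] sortedTerms, string term)
    {
        return sortedTerms.Length - Ord(sortedTerms, term) + 1;
    }

    static void Main()
    {
        string[] terms = { "apple", "banana", "pear" }; // already in index order
        Console.WriteLine(Ord(terms, "apple"));  // 1
        Console.WriteLine(Rord(terms, "apple")); // 3
    }
}
```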
<member name="T:Lucene.Net.Search.Function.ShortFieldSource">
<summary> Expert: obtains short field values from the
<see cref="T:Lucene.Net.Search.FieldCache">FieldCache</see>
using <c>getShorts()</c> and makes those values
available as other numeric types, casting as needed.
<p/><font color="#FF0000">
WARNING: The status of the <b>Search.Function</b> package is experimental.
The APIs introduced here might change in the future and will not be
supported anymore in such a case.</font>
</summary>
<seealso cref="T:Lucene.Net.Search.Function.FieldCacheSource"> for requirements
on the field.
<p/><b>NOTE</b>: with the switch in 2.9 to segment-based
searching, if <see cref="M:Lucene.Net.Search.Function.FieldCacheSource.GetValues(Lucene.Net.Index.IndexReader)"/> is invoked with a
composite (multi-segment) reader, this can easily cause
double RAM usage for the values in the FieldCache. It's
best to switch your application to pass only atomic
(single segment) readers to this API.<p/>
</seealso>
</member>
<member name="M:Lucene.Net.Search.Function.ShortFieldSource.#ctor(System.String)">
<summary> Create a cached short field source with default string-to-short parser. </summary>
</member>
<member name="M:Lucene.Net.Search.Function.ShortFieldSource.#ctor(System.String,Lucene.Net.Search.ShortParser)">
<summary> Create a cached short field source with a specific string-to-short parser. </summary>
</member>
<member name="T:Lucene.Net.Search.FuzzyQuery">
<summary>Implements the fuzzy search query. The similarity measurement
is based on the Levenshtein (edit distance) algorithm.
Warning: this query is not very scalable with its default prefix
length of 0 - in this case, <i>every</i> term will be enumerated and
cause an edit score calculation.
</summary>
</member>
<member name="T:Lucene.Net.Search.MultiTermQuery">
<summary> An abstract <see cref="T:Lucene.Net.Search.Query"/> that matches documents
containing a subset of terms provided by a <see cref="T:Lucene.Net.Search.FilteredTermEnum"/>
enumeration.
<p/>This query cannot be used directly; you must subclass
it and define <see cref="M:Lucene.Net.Search.MultiTermQuery.GetEnum(Lucene.Net.Index.IndexReader)"/> to provide a <see cref="T:Lucene.Net.Search.FilteredTermEnum"/>
that iterates through the terms to be
matched.
<p/><b>NOTE</b>: if <see cref="P:Lucene.Net.Search.MultiTermQuery.RewriteMethod"/> is either
<see cref="F:Lucene.Net.Search.MultiTermQuery.CONSTANT_SCORE_BOOLEAN_QUERY_REWRITE"/> or <see cref="F:Lucene.Net.Search.MultiTermQuery.SCORING_BOOLEAN_QUERY_REWRITE"/>
, you may encounter a
<see cref="T:Lucene.Net.Search.BooleanQuery.TooManyClauses"/> exception during
searching, which happens when the number of terms to be
searched exceeds <see cref="P:Lucene.Net.Search.BooleanQuery.MaxClauseCount"/>
. Setting <see cref="P:Lucene.Net.Search.MultiTermQuery.RewriteMethod"/>
to <see cref="F:Lucene.Net.Search.MultiTermQuery.CONSTANT_SCORE_FILTER_REWRITE"/>
prevents this.
<p/>The recommended rewrite method is <see cref="F:Lucene.Net.Search.MultiTermQuery.CONSTANT_SCORE_AUTO_REWRITE_DEFAULT"/>
: it doesn't spend CPU
computing unhelpful scores, and it tries to pick the most
performant rewrite method given the query.
Note that <see cref="T:Lucene.Net.QueryParsers.QueryParser"/> produces
MultiTermQueries using <see cref="F:Lucene.Net.Search.MultiTermQuery.CONSTANT_SCORE_AUTO_REWRITE_DEFAULT"/>
by default.
</summary>
</member>
<member name="F:Lucene.Net.Search.MultiTermQuery.CONSTANT_SCORE_FILTER_REWRITE">
<summary>A rewrite method that first creates a private Filter,
by visiting each term in sequence and marking all docs
for that term. Matching documents are assigned a
constant score equal to the query's boost.
<p/> This method is faster than the BooleanQuery
rewrite methods when the number of matched terms or
matched documents is non-trivial. Also, it will never
hit an errant <see cref="T:Lucene.Net.Search.BooleanQuery.TooManyClauses"/>
exception.
</summary>
<seealso cref="P:Lucene.Net.Search.MultiTermQuery.RewriteMethod">
</seealso>
</member>
<member name="F:Lucene.Net.Search.MultiTermQuery.SCORING_BOOLEAN_QUERY_REWRITE">
<summary>A rewrite method that first translates each term into a
<see cref="F:Lucene.Net.Search.Occur.SHOULD"/> clause in a
BooleanQuery, and keeps the scores as computed by the
query. Note that typically such scores are
meaningless to the user, and require non-trivial CPU
to compute, so it's almost always better to use <see cref="F:Lucene.Net.Search.MultiTermQuery.CONSTANT_SCORE_AUTO_REWRITE_DEFAULT"/>
instead.
<p/><b>NOTE</b>: This rewrite method will hit <see cref="T:Lucene.Net.Search.BooleanQuery.TooManyClauses"/>
if the number of terms
exceeds <see cref="P:Lucene.Net.Search.BooleanQuery.MaxClauseCount"/>.
</summary>
<seealso cref="P:Lucene.Net.Search.MultiTermQuery.RewriteMethod">
</seealso>
</member>
<member name="F:Lucene.Net.Search.MultiTermQuery.CONSTANT_SCORE_BOOLEAN_QUERY_REWRITE">
<summary>Like <see cref="F:Lucene.Net.Search.MultiTermQuery.SCORING_BOOLEAN_QUERY_REWRITE"/> except
scores are not computed. Instead, each matching
document receives a constant score equal to the
query's boost.
<p/><b>NOTE</b>: This rewrite method will hit <see cref="T:Lucene.Net.Search.BooleanQuery.TooManyClauses"/>
if the number of terms
exceeds <see cref="P:Lucene.Net.Search.BooleanQuery.MaxClauseCount"/>.
</summary>
<seealso cref="P:Lucene.Net.Search.MultiTermQuery.RewriteMethod">
</seealso>
</member>
<member name="F:Lucene.Net.Search.MultiTermQuery.CONSTANT_SCORE_AUTO_REWRITE_DEFAULT">
<summary>Read-only default instance of <see cref="T:Lucene.Net.Search.MultiTermQuery.ConstantScoreAutoRewrite"/>
, with <see cref="P:Lucene.Net.Search.MultiTermQuery.ConstantScoreAutoRewrite.TermCountCutoff"/>
set to
<see cref="F:Lucene.Net.Search.MultiTermQuery.ConstantScoreAutoRewrite.DEFAULT_TERM_COUNT_CUTOFF"/>
and <see cref="P:Lucene.Net.Search.MultiTermQuery.ConstantScoreAutoRewrite.DocCountPercent"/>
set to
<see cref="F:Lucene.Net.Search.MultiTermQuery.ConstantScoreAutoRewrite.DEFAULT_DOC_COUNT_PERCENT"/>
.
Note that you cannot alter the configuration of this
instance; you'll need to create a private instance
instead.
</summary>
</member>
<member name="M:Lucene.Net.Search.MultiTermQuery.#ctor">
<summary> Constructs a query matching terms that cannot be represented with a single
Term.
</summary>
</member>
<member name="M:Lucene.Net.Search.MultiTermQuery.GetEnum(Lucene.Net.Index.IndexReader)">
<summary>Construct the enumeration to be used, expanding the pattern term. </summary>
</member>
<member name="M:Lucene.Net.Search.MultiTermQuery.ClearTotalNumberOfTerms">
<summary> Expert: Resets the counting of unique terms.
Do this before executing the query/filter.
</summary>
<seealso cref="P:Lucene.Net.Search.MultiTermQuery.TotalNumberOfTerms">
</seealso>
</member>
<member name="P:Lucene.Net.Search.MultiTermQuery.TotalNumberOfTerms">
<summary> Expert: Return the number of unique terms visited during execution of the query.
If there are many of them, you may consider using another query type
or optimizing the total term count in your index.
<p/>This method is not thread safe, be sure to only call it when no query is running!
If you re-use the same query instance for another
search, be sure to first reset the term counter
with <see cref="M:Lucene.Net.Search.MultiTermQuery.ClearTotalNumberOfTerms"/>.
<p/>On optimized indexes / no MultiReaders, you get the correct number of
unique terms for the whole index. Use this number to compare different queries.
For non-optimized indexes this number can also be achieved in
non-constant-score mode. In constant-score mode you get the total number of
terms seeked for all segments / sub-readers.
</summary>
<seealso cref="M:Lucene.Net.Search.MultiTermQuery.ClearTotalNumberOfTerms">
</seealso>
</member>
<member name="P:Lucene.Net.Search.MultiTermQuery.RewriteMethod">
<summary> Sets the rewrite method to be used when executing the
query. You can use one of the four core methods, or
implement your own subclass of <see cref="T:Lucene.Net.Search.RewriteMethod"/>.
</summary>
</member>
<member name="T:Lucene.Net.Search.MultiTermQuery.ConstantScoreAutoRewrite">
<summary>A rewrite method that tries to pick the best
constant-score rewrite method based on term and
document counts from the query. If both the number of
terms and documents is small enough, then <see cref="F:Lucene.Net.Search.MultiTermQuery.CONSTANT_SCORE_BOOLEAN_QUERY_REWRITE"/>
is used.
Otherwise, <see cref="F:Lucene.Net.Search.MultiTermQuery.CONSTANT_SCORE_FILTER_REWRITE"/> is
used.
</summary>
</member>
<member name="T:Lucene.Net.Search.RewriteMethod">
<summary>Abstract class that defines how the query is rewritten. </summary>
</member>
<member name="P:Lucene.Net.Search.MultiTermQuery.ConstantScoreAutoRewrite.TermCountCutoff">
<summary>If the number of terms in this query is equal to or
larger than this setting then <see cref="F:Lucene.Net.Search.MultiTermQuery.CONSTANT_SCORE_FILTER_REWRITE"/>
is used.
</summary>
</member>
<member name="P:Lucene.Net.Search.MultiTermQuery.ConstantScoreAutoRewrite.DocCountPercent">
<summary>If the number of documents to be visited in the
postings exceeds this specified percentage of the
MaxDoc for the index, then <see cref="F:Lucene.Net.Search.MultiTermQuery.CONSTANT_SCORE_FILTER_REWRITE"/>
is used.
</summary>
<value> 0.0 to 100.0 </value>
</member>
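A sketch of the selection heuristic described above, under the assumption that exceeding either cutoff triggers the filter rewrite (the real decision lives inside ConstantScoreAutoRewrite; names and signature here are illustrative):

```csharp
using System;

class AutoRewriteDemo
{
    // Small term and document counts keep the constant-score BooleanQuery
    // rewrite; crossing either threshold switches to the Filter rewrite.
    public static string ChooseRewrite(
        int termCount, int termCountCutoff,
        long docsVisited, long maxDoc, double docCountPercent)
    {
        bool tooManyTerms = termCount >= termCountCutoff;
        bool tooManyDocs = docsVisited > maxDoc * docCountPercent / 100.0;
        return (tooManyTerms || tooManyDocs)
            ? "CONSTANT_SCORE_FILTER_REWRITE"
            : "CONSTANT_SCORE_BOOLEAN_QUERY_REWRITE";
    }

    static void Main()
    {
        // Few terms touching few documents: BooleanQuery rewrite.
        Console.WriteLine(ChooseRewrite(10, 350, 50, 1000000, 0.1));
        // Term count at or above the cutoff: Filter rewrite.
        Console.WriteLine(ChooseRewrite(500, 350, 50, 1000000, 0.1));
    }
}
```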
<member name="M:Lucene.Net.Search.FuzzyQuery.#ctor(Lucene.Net.Index.Term,System.Single,System.Int32)">
<summary> Create a new FuzzyQuery that will match terms with a similarity
of at least <c>minimumSimilarity</c> to <c>term</c>.
If a <c>prefixLength</c> > 0 is specified, a common prefix
of that length is also required.
</summary>
<param name="term">the term to search for
</param>
<param name="minimumSimilarity">a value between 0 and 1 to set the required similarity
between the query term and the matching terms. For example, for a
<c>minimumSimilarity</c> of <c>0.5</c> a term of the same length
as the query term is considered similar to the query term if the edit distance
between both terms is less than <c>length(term)*0.5</c>
</param>
<param name="prefixLength">length of common (non-fuzzy) prefix
</param>
<throws> IllegalArgumentException if minimumSimilarity is >= 1 or < 0,
or if prefixLength < 0
</throws>
</member>
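The minimumSimilarity example above (for terms of the same length as the query term) reduces to an edit-distance threshold; a minimal sketch, assuming the generalized form editDistance < length * (1 - minimumSimilarity) and an illustrative helper name:

```csharp
using System;

class MinSimilarityDemo
{
    // For equal-length terms, the doc above says a term matches when the
    // edit distance is less than length(term) * (1 - minimumSimilarity).
    public static bool Matches(int editDistance, int termLength, float minimumSimilarity)
    {
        return editDistance < termLength * (1.0f - minimumSimilarity);
    }

    static void Main()
    {
        // Term length 6, minimumSimilarity 0.5: distances below 3 match.
        Console.WriteLine(Matches(2, 6, 0.5f)); // True
        Console.WriteLine(Matches(3, 6, 0.5f)); // False
    }
}
```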
<member name="M:Lucene.Net.Search.FuzzyQuery.#ctor(Lucene.Net.Index.Term,System.Single)">
<summary> Calls <see cref="M:Lucene.Net.Search.FuzzyQuery.#ctor(Lucene.Net.Index.Term,System.Single,System.Int32)">FuzzyQuery(term, minimumSimilarity, 0)</see>.</summary>
</member>
<member name="M:Lucene.Net.Search.FuzzyQuery.#ctor(Lucene.Net.Index.Term)">
<summary> Calls <see cref="M:Lucene.Net.Search.FuzzyQuery.#ctor(Lucene.Net.Index.Term,System.Single,System.Int32)">FuzzyQuery(term, 0.5f, 0)</see>.</summary>
</member>
<member name="P:Lucene.Net.Search.FuzzyQuery.Term">
<summary> Returns the pattern term.</summary>
</member>
<member name="P:Lucene.Net.Search.FuzzyQuery.MinSimilarity">
<summary> Returns the minimum similarity that is required for this query to match.</summary>
<value> float value between 0.0 and 1.0 </value>
</member>
<member name="P:Lucene.Net.Search.FuzzyQuery.PrefixLength">
<summary> Returns the non-fuzzy prefix length. This is the number of characters at the start
of a term that must be identical (not fuzzy) to the query term if the query
is to match that term.
</summary>
</member>
<member name="T:Lucene.Net.Search.FuzzyTermEnum">
<summary>Subclass of FilteredTermEnum for enumerating all terms that are similar
to the specified filter term.
<p/>Term enumerations are always ordered by Term.compareTo(). Each term in
the enumeration is greater than all that precede it.
</summary>
</member>
<member name="M:Lucene.Net.Search.FuzzyTermEnum.#ctor(Lucene.Net.Index.IndexReader,Lucene.Net.Index.Term)">
<summary> Creates a FuzzyTermEnum with an empty prefix and a minSimilarity of 0.5f.
<p/>
After calling the constructor the enumeration is already pointing to the first
valid term if such a term exists.
</summary>
<param name="reader">
</param>
<param name="term">
</param>
<throws> IOException </throws>
<seealso cref="M:Lucene.Net.Search.FuzzyTermEnum.#ctor(Lucene.Net.Index.IndexReader,Lucene.Net.Index.Term,System.Single,System.Int32)">
</seealso>
</member>
<member name="M:Lucene.Net.Search.FuzzyTermEnum.#ctor(Lucene.Net.Index.IndexReader,Lucene.Net.Index.Term,System.Single)">
<summary> Creates a FuzzyTermEnum with an empty prefix.
<p/>
After calling the constructor the enumeration is already pointing to the first
valid term if such a term exists.
</summary>
<param name="reader">
</param>
<param name="term">
</param>
<param name="minSimilarity">
</param>
<throws> IOException </throws>
<seealso cref="M:Lucene.Net.Search.FuzzyTermEnum.#ctor(Lucene.Net.Index.IndexReader,Lucene.Net.Index.Term,System.Single,System.Int32)">
</seealso>
</member>
<member name="M:Lucene.Net.Search.FuzzyTermEnum.#ctor(Lucene.Net.Index.IndexReader,Lucene.Net.Index.Term,System.Single,System.Int32)">
<summary> Constructor for enumeration of all terms from specified <c>reader</c> which share a prefix of
length <c>prefixLength</c> with <c>term</c> and which have a fuzzy similarity >
<c>minSimilarity</c>.
<p/>
After calling the constructor the enumeration is already pointing to the first
valid term if such a term exists.
</summary>
<param name="reader">Delivers terms.
</param>
<param name="term">Pattern term.
</param>
<param name="minSimilarity">Minimum required similarity for terms from the reader. Default value is 0.5f.
</param>
<param name="prefixLength">Length of required common prefix. Default value is 0.
</param>
<throws> IOException </throws>
</member>
<member name="M:Lucene.Net.Search.FuzzyTermEnum.TermCompare(Lucene.Net.Index.Term)">
<summary> The termCompare method in FuzzyTermEnum uses Levenshtein distance to
calculate the distance between the given term and the comparing term.
</summary>
</member>
<member name="M:Lucene.Net.Search.FuzzyTermEnum.Similarity(System.String)">
<summary> <p/>Similarity returns a number that is 1.0f or less (including negative numbers)
based on how similar the Term is to a target term. It returns
exactly 0.0f when
<c>
editDistance > maximumEditDistance</c>
Otherwise it returns:
<c>
1 - (editDistance / length)</c>
where length is the length of the shortest term (text or target) plus the length
of any identical prefix, and editDistance is the Levenshtein distance for
the two words.<p/>
<p/>Embedded within this algorithm is a fail-fast Levenshtein distance
algorithm. The fail-fast algorithm differs from the standard Levenshtein
distance algorithm in that it is aborted if it is discovered that the
minimum distance between the words is greater than some threshold.
<p/>To calculate the maximum distance threshold we use the following formula:
<c>
(1 - minimumSimilarity) * length</c>
where length is the shortest term including any prefix that is not part of the
similarity comparison. This formula was derived by solving for what maximum value
of distance returns false for the following statements:
<code>
similarity = 1 - ((float)distance / (float)(prefixLength + Math.Min(textlen, targetlen)));
return (similarity > minimumSimilarity);</code>
where distance is the Levenshtein distance for the two words.
<p/>
<p/>Levenshtein distance (also known as edit distance) is a measure of similarity
between two strings where the distance is measured as the number of character
deletions, insertions or substitutions required to transform one string to
the other string.
</summary>
<param name="target">the target word or phrase
</param>
<returns> the similarity, 0.0 or less indicates that it matches less than the required
threshold and 1.0 indicates that the text and target are identical
</returns>
</member>
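A self-contained sketch of the formula above, using a plain dynamic-programming Levenshtein distance rather than the fail-fast variant used internally (class and method names are illustrative):

```csharp
using System;

class FuzzySimilarityDemo
{
    // Plain (non-fail-fast) Levenshtein distance between two strings.
    public static int Levenshtein(string a, string b)
    {
        int[,] d = new int[a.Length + 1, b.Length + 1];
        for (int i = 0; i <= a.Length; i++) d[i, 0] = i;
        for (int j = 0; j <= b.Length; j++) d[0, j] = j;
        for (int i = 1; i <= a.Length; i++)
        {
            for (int j = 1; j <= b.Length; j++)
            {
                int cost = a[i - 1] == b[j - 1] ? 0 : 1;
                d[i, j] = Math.Min(Math.Min(d[i - 1, j] + 1, d[i, j - 1] + 1),
                                   d[i - 1, j - 1] + cost);
            }
        }
        return d[a.Length, b.Length];
    }

    // similarity = 1 - editDistance / length, where length is the shortest
    // term plus the (identical) prefix length, as in the formula above.
    public static float Similarity(string text, string target, int prefixLength)
    {
        int distance = Levenshtein(text, target);
        int length = prefixLength + Math.Min(text.Length, target.Length);
        return 1.0f - (float)distance / length;
    }

    static void Main()
    {
        // distance("kitten", "sitting") = 3 and the shortest length is 6,
        // so similarity = 1 - 3/6 = 0.5.
        Console.WriteLine(Similarity("kitten", "sitting", 0)); // 0.5
    }
}
```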
<member name="M:Lucene.Net.Search.FuzzyTermEnum.CalculateMaxDistance(System.Int32)">
<summary> The max distance is the maximum Levenshtein distance for the text
compared to some other value that results in a score that is
better than the minimum similarity.
</summary>
</summary>
<param name="m">the length of the "other value"
</param>
<returns> the maximum Levenshtein distance that we care about
</returns>
</member>
<member name="M:Lucene.Net.Search.HitQueue.#ctor(System.Int32,System.Boolean)">
<summary> Creates a new instance with <c>size</c> elements. If
<c>prePopulate</c> is set to true, the queue will pre-populate itself
with sentinel objects and set its <see cref="M:Lucene.Net.Util.PriorityQueue`1.Size"/> to <c>size</c>. In
that case, you should not rely on <see cref="M:Lucene.Net.Util.PriorityQueue`1.Size"/> to get the number of
actual elements that were added to the queue, but keep track yourself.<br/>
<b>NOTE:</b> in case <c>prePopulate</c> is true, you should pop
elements from the queue using the following code example:
<code>
HitQueue pq = new HitQueue(10, true); // pre-populate.
ScoreDoc top = pq.Top();
// Add/Update one element.
top.Score = 1.0f;
top.Doc = 0;
top = pq.UpdateTop();
int totalHits = 1;
// Now pop only the elements that were *truly* inserted.
// First, pop all the sentinel elements (there are pq.Size() - totalHits).
for (int i = pq.Size() - totalHits; i > 0; i--) pq.Pop();
// Now pop the truly added elements.
ScoreDoc[] results = new ScoreDoc[totalHits];
for (int i = totalHits - 1; i >= 0; i--)
{
    results[i] = pq.Pop();
}
</code>
<p/><b>NOTE</b>: This class pre-allocates a full array of
length <c>size</c>.
</summary>
<param name="size">the requested size of this queue.
</param>
<param name="prePopulate">specifies whether to pre-populate the queue with sentinel values.
</param>
<seealso cref="P:Lucene.Net.Search.HitQueue.SentinelObject">
</seealso>
</member>
<member name="T:Lucene.Net.Search.IndexSearcher">
<summary>Implements search over a single IndexReader.
<p/>Applications usually need only call the inherited <see cref="M:Lucene.Net.Search.Searcher.Search(Lucene.Net.Search.Query,System.Int32)"/>
or <see cref="M:Lucene.Net.Search.Searcher.Search(Lucene.Net.Search.Query,Lucene.Net.Search.Filter,System.Int32)"/> methods. For performance reasons it is
recommended to open only one IndexSearcher and use it for all of your searches.
<a name="thread-safety"></a><p/><b>NOTE</b>:
<see cref="T:Lucene.Net.Search.IndexSearcher"/> instances are completely
thread safe, meaning multiple threads can call any of its
methods, concurrently. If your application requires
external synchronization, you should <b>not</b>
synchronize on the <c>IndexSearcher</c> instance;
use your own (non-Lucene) objects instead.<p/>
</summary>
</member>
<member name="T:Lucene.Net.Search.Searcher">
<summary> An abstract base class for search implementations. Implements the main search
methods.
<p/>
Note that you can only access hits from a Searcher as long as it is not yet
closed, otherwise an IOException will be thrown.
</summary>
</member>
<member name="T:Lucene.Net.Search.Searchable">
<summary> The interface for search implementations.
<p/>
Searchable is the abstract network protocol for searching. Implementations
provide search over a single index, over multiple indices, and over indices
on remote servers.
<p/>
Queries, filters and sort criteria are designed to be compact so that they
may be efficiently passed to a remote index, with only the top-scoring hits
being returned, rather than every matching hit.
<b>NOTE:</b> this interface is kept public for convenience. Since it is not
expected to be implemented directly, it may be changed unexpectedly between
releases.
</summary>
</member>
<member name="M:Lucene.Net.Search.Searchable.Search(Lucene.Net.Search.Weight,Lucene.Net.Search.Filter,Lucene.Net.Search.Collector)">
<summary> Lower-level search API.
<p/>
<see cref="M:Lucene.Net.Search.Collector.Collect(System.Int32)"/> is called for every document. <br/>
Collector-based access to remote indexes is discouraged.
<p/>
Applications should only use this if they need <i>all</i> of the matching
documents. The high-level search API (<see cref="M:Lucene.Net.Search.Searcher.Search(Lucene.Net.Search.Query,System.Int32)"/>) is
usually more efficient, as it skips non-high-scoring hits.
</summary>
<param name="weight">to match documents
</param>
<param name="filter">if non-null, used to permit documents to be collected.
</param>
<param name="collector">to receive hits
</param>
<throws> BooleanQuery.TooManyClauses </throws>
</member>
<member name="M:Lucene.Net.Search.Searchable.Close">
<summary>Frees resources associated with this Searcher.
Be careful not to call this method while you are still using objects
that reference this Searchable.
</summary>
</member>
<member name="M:Lucene.Net.Search.Searchable.DocFreq(Lucene.Net.Index.Term)">
<summary>Expert: Returns the number of documents containing <c>term</c>.
Called by search code to compute term weights.
</summary>
<seealso cref="M:Lucene.Net.Index.IndexReader.DocFreq(Lucene.Net.Index.Term)">
</seealso>
</member>
<member name="M:Lucene.Net.Search.Searchable.DocFreqs(Lucene.Net.Index.Term[])">
<summary>Expert: For each term in the terms array, calculates the number of
documents containing <c>term</c>. Returns an array with these
document frequencies. Used to minimize number of remote calls.
</summary>
</member>
<member name="M:Lucene.Net.Search.Searchable.Search(Lucene.Net.Search.Weight,Lucene.Net.Search.Filter,System.Int32)">
<summary>
Expert: Low-level search implementation. Finds the top <c>n</c>
hits for <c>query</c>, applying <c>filter</c> if non-null.
<p/>Applications should usually call <see cref="M:Lucene.Net.Search.Searcher.Search(Lucene.Net.Search.Query,System.Int32)"/> or
<see cref="M:Lucene.Net.Search.Searcher.Search(Lucene.Net.Search.Query,Lucene.Net.Search.Filter,System.Int32)"/> instead.
</summary>
<throws> BooleanQuery.TooManyClauses </throws>
</member>
<member name="M:Lucene.Net.Search.Searchable.Doc(System.Int32)">
<summary>Expert: Returns the stored fields of document <c>i</c>.</summary>
<seealso cref="M:Lucene.Net.Index.IndexReader.Document(System.Int32)"/>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> IOException if there is a low-level IO error </throws>
</member>
<member name="M:Lucene.Net.Search.Searchable.Doc(System.Int32,Lucene.Net.Documents.FieldSelector)">
<summary> Get the <see cref="T:Lucene.Net.Documents.Document"/> at the <c>n</c><sup>th</sup> position. The <see cref="T:Lucene.Net.Documents.FieldSelector"/>
may be used to determine which <see cref="T:Lucene.Net.Documents.Field"/>s to load and how they should be loaded.
<b>NOTE:</b> If the underlying Reader (more specifically, the underlying <c>FieldsReader</c>) is closed before the lazy <see cref="T:Lucene.Net.Documents.Field"/> is
loaded, an exception may be thrown. If you want the value of a lazy <see cref="T:Lucene.Net.Documents.Field"/> to be available after closing, you must
explicitly load it or fetch the Document again with a new loader.
</summary>
<param name="n">Get the document at the <c>n</c><sup>th</sup> position
</param>
<param name="fieldSelector">The <see cref="T:Lucene.Net.Documents.FieldSelector"/> to use to determine what Fields should be loaded on the Document. May be null, in which case all Fields will be loaded.
</param>
<returns> The stored fields of the <see cref="T:Lucene.Net.Documents.Document"/> at the nth position
</returns>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> IOException if there is a low-level IO error </throws>
<seealso cref="M:Lucene.Net.Index.IndexReader.Document(System.Int32,Lucene.Net.Documents.FieldSelector)">
</seealso>
<seealso cref="T:Lucene.Net.Documents.IFieldable">
</seealso>
<seealso cref="T:Lucene.Net.Documents.FieldSelector">
</seealso>
<seealso cref="T:Lucene.Net.Documents.SetBasedFieldSelector">
</seealso>
<seealso cref="T:Lucene.Net.Documents.LoadFirstFieldSelector">
</seealso>
</member>
<member name="M:Lucene.Net.Search.Searchable.Rewrite(Lucene.Net.Search.Query)">
<summary>Expert: called to re-write queries into primitive queries.</summary>
<throws> BooleanQuery.TooManyClauses </throws>
</member>
<member name="M:Lucene.Net.Search.Searchable.Explain(Lucene.Net.Search.Weight,System.Int32)">
<summary>Expert: low-level implementation method
Returns an Explanation that describes how <c>doc</c> scored against
<c>weight</c>.
<p/>This is intended to be used in developing Similarity implementations,
and, for good performance, should not be displayed with every hit.
Computing an explanation is as expensive as executing the query over the
entire index.
<p/>Applications should usually call <see cref="M:Lucene.Net.Search.Searcher.Explain(Lucene.Net.Search.Query,System.Int32)"/> instead.
</summary>
<throws> BooleanQuery.TooManyClauses </throws>
</member>
<member name="M:Lucene.Net.Search.Searchable.Search(Lucene.Net.Search.Weight,Lucene.Net.Search.Filter,System.Int32,Lucene.Net.Search.Sort)">
<summary>Expert: Low-level search implementation with arbitrary sorting. Finds
the top <c>n</c> hits for <c>query</c>, applying
<c>filter</c> if non-null, and sorting the hits by the criteria in
<c>sort</c>.
<p/>Applications should usually call
<see cref="M:Lucene.Net.Search.Searcher.Search(Lucene.Net.Search.Query,Lucene.Net.Search.Filter,System.Int32,Lucene.Net.Search.Sort)"/> instead.
</summary>
<throws> BooleanQuery.TooManyClauses </throws>
</member>
<member name="P:Lucene.Net.Search.Searchable.MaxDoc">
<summary>Expert: Returns one greater than the largest possible document number.
Called by search code to compute term weights.
</summary>
<seealso cref="P:Lucene.Net.Index.IndexReader.MaxDoc">
</seealso>
</member>
<member name="M:Lucene.Net.Search.Searcher.Search(Lucene.Net.Search.Query,Lucene.Net.Search.Filter,System.Int32,Lucene.Net.Search.Sort)">
<summary>Search implementation with arbitrary sorting. Finds
the top <c>n</c> hits for <c>query</c>, applying
<c>filter</c> if non-null, and sorting the hits by the criteria in
<c>sort</c>.
<p/>NOTE: this does not compute scores by default; use
<see cref="M:Lucene.Net.Search.IndexSearcher.SetDefaultFieldSortScoring(System.Boolean,System.Boolean)"/> to enable scoring.
</summary>
<throws> BooleanQuery.TooManyClauses </throws>
</member>
<member name="M:Lucene.Net.Search.Searcher.Search(Lucene.Net.Search.Query,Lucene.Net.Search.Collector)">
<summary>Lower-level search API.
<p/><see cref="M:Lucene.Net.Search.Collector.Collect(System.Int32)"/> is called for every matching document.
<p/>Applications should only use this if they need <i>all</i> of the matching
documents. The high-level search API (<see cref="M:Lucene.Net.Search.Searcher.Search(Lucene.Net.Search.Query,System.Int32)"/>
) is usually more efficient, as it skips non-high-scoring hits.
<p/>Note: The <c>score</c> passed to this method is a raw score.
In other words, the score will not necessarily be a float whose value is
between 0 and 1.
</summary>
<throws> BooleanQuery.TooManyClauses </throws>
</member>
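The Collector contract described above can be sketched in outline. This is a conceptual stand-in, not the Lucene.Net API: `collect` is invoked once per matching document, and the application decides what to keep, here simply every matching doc id.

```python
class AllDocsCollector:
    """Toy analogue of a Collector that records every hit."""
    def __init__(self):
        self.hits = []

    def collect(self, doc):  # cf. Collector.Collect(System.Int32)
        self.hits.append(doc)

def run_search(matching_docs, collector):
    """Stand-in for Searcher.Search(query, collector): feed every match."""
    for doc in matching_docs:
        collector.collect(doc)

c = AllDocsCollector()
run_search([0, 3, 7], c)
print(c.hits)  # [0, 3, 7]
```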
<member name="M:Lucene.Net.Search.Searcher.Search(Lucene.Net.Search.Query,Lucene.Net.Search.Filter,Lucene.Net.Search.Collector)">
<summary>Lower-level search API.
<p/><see cref="M:Lucene.Net.Search.Collector.Collect(System.Int32)"/> is called for every matching
document.
<br/>Collector-based access to remote indexes is discouraged.
<p/>Applications should only use this if they need <i>all</i> of the
matching documents. The high-level search API (<see cref="M:Lucene.Net.Search.Searcher.Search(Lucene.Net.Search.Query,Lucene.Net.Search.Filter,System.Int32)"/>)
is usually more efficient, as it skips
non-high-scoring hits.
</summary>
<param name="query">to match documents
</param>
<param name="filter">if non-null, used to permit documents to be collected.
</param>
<param name="results">to receive hits
</param>
<throws> BooleanQuery.TooManyClauses </throws>
</member>
<member name="M:Lucene.Net.Search.Searcher.Search(Lucene.Net.Search.Query,Lucene.Net.Search.Filter,System.Int32)">
<summary>Finds the top <c>n</c>
hits for <c>query</c>, applying <c>filter</c> if non-null.
</summary>
<throws> BooleanQuery.TooManyClauses </throws>
</member>
<member name="M:Lucene.Net.Search.Searcher.Search(Lucene.Net.Search.Query,System.Int32)">
<summary>Finds the top <c>n</c>
hits for <c>query</c>.
</summary>
<throws> BooleanQuery.TooManyClauses </throws>
</member>
<member name="M:Lucene.Net.Search.Searcher.Explain(Lucene.Net.Search.Query,System.Int32)">
<summary>Returns an Explanation that describes how <c>doc</c> scored against
<c>query</c>.
<p/>This is intended to be used in developing Similarity implementations,
and, for good performance, should not be displayed with every hit.
Computing an explanation is as expensive as executing the query over the
entire index.
</summary>
</member>
<member name="F:Lucene.Net.Search.Searcher.similarity">
<summary>The Similarity implementation used by this searcher. </summary>
</member>
<member name="M:Lucene.Net.Search.Searcher.CreateWeight(Lucene.Net.Search.Query)">
<summary> Creates a weight for <c>query</c>.</summary>
<returns> new weight
</returns>
</member>
<member name="P:Lucene.Net.Search.Searcher.Similarity">
<summary>Expert: Gets or Sets the Similarity implementation used by this Searcher.
</summary>
<seealso cref="P:Lucene.Net.Search.Similarity.Default">
</seealso>
</member>
<member name="M:Lucene.Net.Search.IndexSearcher.#ctor(Lucene.Net.Store.Directory)">
<summary>Creates a searcher searching the index in the named
directory, with readOnly=true</summary>
<throws>CorruptIndexException if the index is corrupt</throws>
<throws>IOException if there is a low-level IO error</throws>
</member>
<member name="M:Lucene.Net.Search.IndexSearcher.#ctor(Lucene.Net.Store.Directory,System.Boolean)">
<summary>Creates a searcher searching the index in the named
directory. You should pass readOnly=true, since it
gives much better concurrent performance, unless you
intend to do write operations (delete documents or
change norms) with the underlying IndexReader.
</summary>
<throws> CorruptIndexException if the index is corrupt </throws>
<throws> IOException if there is a low-level IO error </throws>
<param name="path">directory where IndexReader will be opened
</param>
<param name="readOnly">if true, the underlying IndexReader
will be opened readOnly
</param>
</member>
<member name="M:Lucene.Net.Search.IndexSearcher.#ctor(Lucene.Net.Index.IndexReader)">
<summary>Creates a searcher searching the provided index
<para>
Note that the underlying IndexReader is not closed if this
IndexSearcher was constructed with IndexSearcher(IndexReader r).
If the IndexReader was supplied implicitly by specifying a directory, then
the IndexReader is closed.
</para>
</summary>
</member>
<member name="M:Lucene.Net.Search.IndexSearcher.#ctor(Lucene.Net.Index.IndexReader,Lucene.Net.Index.IndexReader[],System.Int32[])">
<summary>
Expert: directly specify the reader, subReaders and their
DocID starts
<p/>
<b>NOTE:</b> This API is experimental and
might change in incompatible ways in the next
release<p/>
</summary>
</member>
<member name="M:Lucene.Net.Search.IndexSearcher.Search(Lucene.Net.Search.Weight,Lucene.Net.Search.Filter,System.Int32,Lucene.Net.Search.Sort,System.Boolean)">
<summary> Just like <see cref="M:Lucene.Net.Search.IndexSearcher.Search(Lucene.Net.Search.Weight,Lucene.Net.Search.Filter,System.Int32,Lucene.Net.Search.Sort)"/>, but you choose
whether or not the fields in the returned <see cref="T:Lucene.Net.Search.FieldDoc"/> instances
should be set by specifying fillFields.
<p/>
NOTE: this does not compute scores by default. If you need scores, create
a <see cref="T:Lucene.Net.Search.TopFieldCollector"/> instance by calling
<see cref="M:Lucene.Net.Search.TopFieldCollector.Create(Lucene.Net.Search.Sort,System.Int32,System.Boolean,System.Boolean,System.Boolean,System.Boolean)"/> and then pass that to
<see cref="M:Lucene.Net.Search.IndexSearcher.Search(Lucene.Net.Search.Weight,Lucene.Net.Search.Filter,Lucene.Net.Search.Collector)"/>.
<p/>
</summary>
</member>
<member name="M:Lucene.Net.Search.IndexSearcher.SetDefaultFieldSortScoring(System.Boolean,System.Boolean)">
<summary> By default, no scores are computed when sorting by field (using
<see cref="M:Lucene.Net.Search.Searcher.Search(Lucene.Net.Search.Query,Lucene.Net.Search.Filter,System.Int32,Lucene.Net.Search.Sort)"/>). You can change that, per
IndexSearcher instance, by calling this method. Note that this will incur
a CPU cost.
</summary>
<param name="doTrackScores">If true, then scores are returned for every matching document
in <see cref="T:Lucene.Net.Search.TopFieldDocs"/>.
</param>
<param name="doMaxScore">If true, then the max score for all matching docs is computed.
</param>
</member>
<member name="P:Lucene.Net.Search.IndexSearcher.IndexReader">
<summary>Return the <see cref="T:Lucene.Net.Index.IndexReader"/> this searches. </summary>
</member>
<member name="T:Lucene.Net.Search.MatchAllDocsQuery">
<summary> A query that matches all documents.
</summary>
</member>
<member name="M:Lucene.Net.Search.MatchAllDocsQuery.#ctor(System.String)">
<param name="normsField">The field used for the normalization factor (document boost), or null if none.
</param>
</member>
<member name="T:Lucene.Net.Search.MultiPhraseQuery">
<summary> MultiPhraseQuery is a generalized version of PhraseQuery, with an added
method <see cref="M:Lucene.Net.Search.MultiPhraseQuery.Add(Lucene.Net.Index.Term[])"/>.
To use this class to search for the phrase "Microsoft app*", first call
Add(Term) with the term "Microsoft", then find all terms that have "app" as
a prefix using IndexReader.Terms(Term), and use Add(Term[] terms) to add
them to the query.
</summary>
<version> 1.0
</version>
</member>
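The matching rule described above, any one of several alternative terms may satisfy each phrase position, can be sketched as follows. This is a conceptual illustration, not the Lucene.Net implementation, which matches against posting lists rather than raw token sequences:

```python
def multi_phrase_match(tokens, term_arrays):
    """Return True if the token sequence contains the generalized phrase.

    tokens      -- list of tokens from one field of a document
    term_arrays -- list of lists; term_arrays[i] holds the acceptable
                   terms for phrase position i (cf. Add(Term[]))
    """
    n = len(term_arrays)
    for start in range(len(tokens) - n + 1):
        if all(tokens[start + i] in term_arrays[i] for i in range(n)):
            return True
    return False

# "Microsoft app*": position 0 is the single term "microsoft",
# position 1 is every indexed term carrying the prefix "app".
indexed_terms = ["apple", "application", "apply", "banana"]
term_arrays = [["microsoft"],
               [t for t in indexed_terms if t.startswith("app")]]

print(multi_phrase_match(["buy", "microsoft", "application", "now"], term_arrays))  # True
print(multi_phrase_match(["buy", "microsoft", "banana", "now"], term_arrays))       # False
```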
<member name="M:Lucene.Net.Search.MultiPhraseQuery.Add(Lucene.Net.Index.Term)">
<summary>Add a single term at the next position in the phrase.</summary>
<seealso cref="M:Lucene.Net.Search.PhraseQuery.Add(Lucene.Net.Index.Term)">
</seealso>
</member>
<member name="M:Lucene.Net.Search.MultiPhraseQuery.Add(Lucene.Net.Index.Term[])">
<summary>Add multiple terms at the next position in the phrase. Any of the terms
may match.
</summary>
<seealso cref="M:Lucene.Net.Search.PhraseQuery.Add(Lucene.Net.Index.Term)">
</seealso>
</member>
<member name="M:Lucene.Net.Search.MultiPhraseQuery.Add(Lucene.Net.Index.Term[],System.Int32)">
<summary> Allows specifying the relative position of terms within the phrase.
</summary>
<seealso cref="M:Lucene.Net.Search.PhraseQuery.Add(Lucene.Net.Index.Term,System.Int32)">
</seealso>
<param name="terms">
</param>
<param name="position">
</param>
</member>
<member name="M:Lucene.Net.Search.MultiPhraseQuery.GetTermArrays">
<summary> Returns a List&lt;Term[]&gt; of the terms in the multiphrase.
Do not modify the List or its contents.
</summary>
</member>
<member name="M:Lucene.Net.Search.MultiPhraseQuery.GetPositions">
<summary> Returns the relative positions of terms in this phrase.</summary>
</member>
<member name="M:Lucene.Net.Search.MultiPhraseQuery.ToString(System.String)">
<summary>Prints a user-readable version of this query. </summary>
</member>
<member name="M:Lucene.Net.Search.MultiPhraseQuery.Equals(System.Object)">
<summary>Returns true if <c>o</c> is equal to this. </summary>
</member>
<member name="M:Lucene.Net.Search.MultiPhraseQuery.GetHashCode">
<summary>Returns a hash code value for this object.</summary>
</member>
<member name="P:Lucene.Net.Search.MultiPhraseQuery.Slop">
<summary>Gets or sets the phrase slop for this query.</summary>
<seealso cref="P:Lucene.Net.Search.PhraseQuery.Slop">
</seealso>
</member>
<member name="T:Lucene.Net.Search.MultiSearcher">
<summary>Implements search over a set of <c>Searchables</c>.
<p/>Applications usually need only call the inherited <see cref="M:Lucene.Net.Search.Searcher.Search(Lucene.Net.Search.Query,System.Int32)"/>
or <see cref="M:Lucene.Net.Search.Searcher.Search(Lucene.Net.Search.Query,Lucene.Net.Search.Filter,System.Int32)"/> methods.
</summary>
</member>
<member name="M:Lucene.Net.Search.MultiSearcher.#ctor(Lucene.Net.Search.Searchable[])">
<summary>Creates a searcher which searches <i>searchers</i>. </summary>
</member>
<member name="M:Lucene.Net.Search.MultiSearcher.GetSearchables">
<summary>Return the array of <see cref="T:Lucene.Net.Search.Searchable"/>s this searches. </summary>
</member>
<member name="M:Lucene.Net.Search.MultiSearcher.SubSearcher(System.Int32)">
<summary>Returns the index of the searcher for document <c>n</c> in the array
used to construct this searcher.
</summary>
</member>
<member name="M:Lucene.Net.Search.MultiSearcher.SubDoc(System.Int32)">
<summary>Returns the document number of document <c>n</c> within its
sub-index.
</summary>
</member>
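The SubSearcher/SubDoc mapping above can be illustrated with a small sketch. This is a hypothetical analogue, assuming only that MultiSearcher keeps an array of cumulative docID starts, one entry per sub-searcher:

```python
import bisect

def sub_searcher(starts, n):
    """Index of the searcher holding global doc n (cf. SubSearcher)."""
    return bisect.bisect_right(starts, n) - 1

def sub_doc(starts, n):
    """Doc number of global doc n within its sub-index (cf. SubDoc)."""
    return n - starts[sub_searcher(starts, n)]

# Three sub-indexes holding 10, 5 and 20 docs: cumulative starts [0, 10, 15].
starts = [0, 10, 15]
print(sub_searcher(starts, 12))  # 1 (the second searcher)
print(sub_doc(starts, 12))       # 2 (global doc 12 is local doc 2 there)
```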
<member name="M:Lucene.Net.Search.MultiSearcher.Search(Lucene.Net.Search.Weight,Lucene.Net.Search.Filter,Lucene.Net.Search.Collector)">
<inheritdoc />
</member>
<member name="M:Lucene.Net.Search.MultiSearcher.CreateWeight(Lucene.Net.Search.Query)">
<summary> Create weight in multiple index scenario.
Distributed query processing is done in the following steps:
1. rewrite query
2. extract necessary terms
3. collect dfs for these terms from the Searchables
4. create query weight using aggregate dfs.
5. distribute that weight to Searchables
6. merge results
Steps 1-4 are done here; steps 5 and 6 are handled by the search() methods.
</summary>
<returns> rewritten queries
</returns>
</member>
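Steps 2-4 above can be sketched in outline. This is an illustrative stand-in, not the actual Lucene.Net code: per-searcher document frequencies for the query's terms are gathered (one batched call each, cf. Searchable.DocFreqs(Term[])) and summed, so a single weight built from aggregate statistics scores consistently across all sub-indexes.

```python
def aggregate_doc_freqs(searchables, terms):
    """Sum per-term document frequencies across all sub-searchers."""
    totals = {t: 0 for t in terms}
    for s in searchables:
        freqs = s.doc_freqs(terms)  # one batched (possibly remote) call
        for t, df in zip(terms, freqs):
            totals[t] += df
    return totals

class FakeSearchable:
    """Minimal stand-in exposing only the doc_freqs batch call."""
    def __init__(self, dfs):
        self._dfs = dfs
    def doc_freqs(self, terms):
        return [self._dfs.get(t, 0) for t in terms]

subs = [FakeSearchable({"lucene": 3}),
        FakeSearchable({"lucene": 5, "search": 2})]
print(aggregate_doc_freqs(subs, ["lucene", "search"]))  # {'lucene': 8, 'search': 2}
```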
<member name="T:Lucene.Net.Search.MultiSearcher.CachedDfSource">
<summary> Document frequency cache acting as a dummy Searcher. This class is
not a full-fledged Searcher; it only supports the methods necessary to
initialize Weights.
</summary>
</member>
<member name="T:Lucene.Net.Search.MultiTermQueryWrapperFilter`1">
<summary> A wrapper for <see cref="T:Lucene.Net.Search.MultiTermQuery"/>, that exposes its
functionality as a <see cref="T:Lucene.Net.Search.Filter"/>.
<p/>
<c>MultiTermQueryWrapperFilter</c> is not designed to
be used by itself. Normally you subclass it to provide a Filter
counterpart for a <see cref="T:Lucene.Net.Search.MultiTermQuery"/> subclass.
<p/>
For example, <see cref="T:Lucene.Net.Search.TermRangeFilter"/> and <see cref="T:Lucene.Net.Search.PrefixFilter"/> extend
<c>MultiTermQueryWrapperFilter</c>.
This class also provides the functionality behind
<see cref="F:Lucene.Net.Search.MultiTermQuery.CONSTANT_SCORE_FILTER_REWRITE"/>;
this is why it is not abstract.
</summary>
</member>
<member name="M:Lucene.Net.Search.MultiTermQueryWrapperFilter`1.#ctor(`0)">
<summary> Wrap a <see cref="T:Lucene.Net.Search.MultiTermQuery"/> as a Filter.</summary>
</member>
<member name="M:Lucene.Net.Search.MultiTermQueryWrapperFilter`1.ClearTotalNumberOfTerms">
<summary> Expert: Resets the counting of unique terms.
Do this before executing the filter.
</summary>
<seealso cref="P:Lucene.Net.Search.MultiTermQueryWrapperFilter`1.TotalNumberOfTerms">
</seealso>
</member>
<member name="P:Lucene.Net.Search.MultiTermQueryWrapperFilter`1.TotalNumberOfTerms">
<summary> Expert: Return the number of unique terms visited during execution of the filter.
If there are many of them, consider using another filter type
or optimizing the total term count in your index.
<p/>This method is not thread safe; be sure to call it only while no filter is running.
If you re-use the same filter instance for another
search, be sure to first reset the term counter
with <see cref="M:Lucene.Net.Search.MultiTermQueryWrapperFilter`1.ClearTotalNumberOfTerms"/>.
</summary>
<seealso cref="M:Lucene.Net.Search.MultiTermQueryWrapperFilter`1.ClearTotalNumberOfTerms">
</seealso>
</member>
<member name="T:Lucene.Net.Search.NumericRangeFilter`1">
<summary> A <see cref="T:Lucene.Net.Search.Filter"/> that only accepts numeric values within
a specified range. To use this, you must first index the
numeric values using <see cref="T:Lucene.Net.Documents.NumericField"/> (expert: <see cref="T:Lucene.Net.Analysis.NumericTokenStream"/>
).
<p/>You create a new NumericRangeFilter with the static
factory methods, eg:
<code>
Filter f = NumericRangeFilter.NewFloatRange("weight",
0.10f, 0.30f,
true, true);
</code>
accepts all documents whose float-valued "weight" field
ranges from 0.10 to 0.30, inclusive.
See <see cref="T:Lucene.Net.Search.NumericRangeQuery`1"/> for details on how Lucene
indexes and searches numeric valued fields.
<p/><font color="red"><b>NOTE:</b> This API is experimental and
might change in incompatible ways in the next
release.</font>
</summary>
<since> 2.9
</since>
</member>
<member name="P:Lucene.Net.Search.NumericRangeFilter`1.Field">
<summary>Returns the field name for this filter </summary>
</member>
<member name="P:Lucene.Net.Search.NumericRangeFilter`1.IncludesMin">
<summary>Returns <c>true</c> if the lower endpoint is inclusive </summary>
</member>
<member name="P:Lucene.Net.Search.NumericRangeFilter`1.IncludesMax">
<summary>Returns <c>true</c> if the upper endpoint is inclusive </summary>
</member>
<member name="P:Lucene.Net.Search.NumericRangeFilter`1.Min">
<summary>Returns the lower value of this range filter </summary>
</member>
<member name="P:Lucene.Net.Search.NumericRangeFilter`1.Max">
<summary>Returns the upper value of this range filter </summary>
</member>
<member name="M:Lucene.Net.Search.NumericRangeFilter.NewLongRange(System.String,System.Int32,System.Nullable{System.Int64},System.Nullable{System.Int64},System.Boolean,System.Boolean)">
<summary> Factory that creates a <c>NumericRangeFilter</c> that filters a <c>long</c>
range using the given <a href="NumericRangeQuery.html#precisionStepDesc"><c>precisionStep</c></a>.
You can have half-open ranges (which are in fact &lt;/≤ or &gt;/≥ queries)
by setting the min or max value to <c>null</c>. Setting inclusive to false
excludes the bounds; with inclusive true, the boundary values match as well.
</summary>
</member>
<member name="M:Lucene.Net.Search.NumericRangeFilter.NewLongRange(System.String,System.Nullable{System.Int64},System.Nullable{System.Int64},System.Boolean,System.Boolean)">
<summary> Factory that creates a <c>NumericRangeFilter</c> that queries a <c>long</c>
range using the default <c>precisionStep</c> <see cref="F:Lucene.Net.Util.NumericUtils.PRECISION_STEP_DEFAULT"/> (4).
You can have half-open ranges (which are in fact &lt;/≤ or &gt;/≥ queries)
by setting the min or max value to <c>null</c>. Setting inclusive to false
excludes the bounds; with inclusive true, the boundary values match as well.
</summary>
</member>
<member name="M:Lucene.Net.Search.NumericRangeFilter.NewIntRange(System.String,System.Int32,System.Nullable{System.Int32},System.Nullable{System.Int32},System.Boolean,System.Boolean)">
<summary> Factory that creates a <c>NumericRangeFilter</c> that filters an <c>int</c>
range using the given <a href="NumericRangeQuery.html#precisionStepDesc"><c>precisionStep</c></a>.
You can have half-open ranges (which are in fact &lt;/≤ or &gt;/≥ queries)
by setting the min or max value to <c>null</c>. Setting inclusive to false
excludes the bounds; with inclusive true, the boundary values match as well.
</summary>
</member>
<member name="M:Lucene.Net.Search.NumericRangeFilter.NewIntRange(System.String,System.Nullable{System.Int32},System.Nullable{System.Int32},System.Boolean,System.Boolean)">
<summary> Factory that creates a <c>NumericRangeFilter</c> that queries an <c>int</c>
range using the default <c>precisionStep</c> <see cref="F:Lucene.Net.Util.NumericUtils.PRECISION_STEP_DEFAULT"/> (4).
You can have half-open ranges (which are in fact &lt;/≤ or &gt;/≥ queries)
by setting the min or max value to <c>null</c>. Setting inclusive to false
excludes the bounds; with inclusive true, the boundary values match as well.
</summary>
</member>
<member name="M:Lucene.Net.Search.NumericRangeFilter.NewDoubleRange(System.String,System.Int32,System.Nullable{System.Double},System.Nullable{System.Double},System.Boolean,System.Boolean)">
<summary> Factory that creates a <c>NumericRangeFilter</c> that filters a <c>double</c>
range using the given <a href="NumericRangeQuery.html#precisionStepDesc"><c>precisionStep</c></a>.
You can have half-open ranges (which are in fact &lt;/≤ or &gt;/≥ queries)
by setting the min or max value to <c>null</c>. Setting inclusive to false
excludes the bounds; with inclusive true, the boundary values match as well.
</summary>
</member>
<member name="M:Lucene.Net.Search.NumericRangeFilter.NewDoubleRange(System.String,System.Nullable{System.Double},System.Nullable{System.Double},System.Boolean,System.Boolean)">
<summary> Factory that creates a <c>NumericRangeFilter</c> that queries a <c>double</c>
range using the default <c>precisionStep</c> <see cref="F:Lucene.Net.Util.NumericUtils.PRECISION_STEP_DEFAULT"/> (4).
You can have half-open ranges (which are in fact &lt;/≤ or &gt;/≥ queries)
by setting the min or max value to <c>null</c>. Setting inclusive to false
excludes the bounds; with inclusive true, the boundary values match as well.
</summary>
</member>
<member name="M:Lucene.Net.Search.NumericRangeFilter.NewFloatRange(System.String,System.Int32,System.Nullable{System.Single},System.Nullable{System.Single},System.Boolean,System.Boolean)">
<summary> Factory that creates a <c>NumericRangeFilter</c> that filters a <c>float</c>
range using the given <a href="NumericRangeQuery.html#precisionStepDesc"><c>precisionStep</c></a>.
You can have half-open ranges (which are in fact &lt;/≤ or &gt;/≥ queries)
by setting the min or max value to <c>null</c>. Setting inclusive to false
excludes the bounds; with inclusive true, the boundary values match as well.
</summary>
</member>
<member name="M:Lucene.Net.Search.NumericRangeFilter.NewFloatRange(System.String,System.Nullable{System.Single},System.Nullable{System.Single},System.Boolean,System.Boolean)">
<summary> Factory that creates a <c>NumericRangeFilter</c> that queries a <c>float</c>
range using the default <c>precisionStep</c> <see cref="F:Lucene.Net.Util.NumericUtils.PRECISION_STEP_DEFAULT"/> (4).
You can have half-open ranges (which are in fact &lt;/≤ or &gt;/≥ queries)
by setting the min or max value to <c>null</c>. Setting inclusive to false
excludes the bounds; with inclusive true, the boundary values match as well.
</summary>
</member>
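The bound semantics shared by all the factories above can be sketched as a simple predicate. This is a conceptual illustration only, not how the filter is evaluated internally: a null (None) bound makes the range half-open, and the inclusive flags decide whether the boundary values themselves match.

```python
def in_range(value, min_, max_, min_inclusive, max_inclusive):
    """True if value lies in the (possibly half-open) range."""
    if min_ is not None:
        if value < min_ or (value == min_ and not min_inclusive):
            return False
    if max_ is not None:
        if value > max_ or (value == max_ and not max_inclusive):
            return False
    return True

print(in_range(10, 10, 20, True, True))    # True  (inclusive lower bound)
print(in_range(10, 10, 20, False, True))   # False (exclusive lower bound)
print(in_range(99, 10, None, True, True))  # True  (half-open: >= 10)
```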
<member name="T:Lucene.Net.Search.NumericRangeQuery`1">
<summary> <p/>A <see cref="T:Lucene.Net.Search.Query"/> that matches numeric values within a
specified range. To use this, you must first index the
numeric values using <see cref="T:Lucene.Net.Documents.NumericField"/> (expert: <see cref="T:Lucene.Net.Analysis.NumericTokenStream"/>
). If your terms are instead textual,
you should use <see cref="T:Lucene.Net.Search.TermRangeQuery"/>. <see cref="T:Lucene.Net.Search.NumericRangeFilter`1"/>
is the filter equivalent of this
query.<p/>
<p/>You create a new NumericRangeQuery with the static
factory methods, eg:
<code>
Query q = NumericRangeQuery.NewFloatRange("weight",
0.10f, 0.30f,
true, true);
</code>
matches all documents whose float-valued "weight" field
ranges from 0.10 to 0.30, inclusive.
<p/>The performance of NumericRangeQuery is much better
than the corresponding <see cref="T:Lucene.Net.Search.TermRangeQuery"/> because the
number of terms that must be searched is usually far
fewer, thanks to trie indexing, described below.<p/>
<p/>You can optionally specify a <a href="#precisionStepDesc"><c>precisionStep</c></a>
when creating this query. This is necessary if you've
changed this configuration from its default (4) during
indexing. Lower values consume more disk space but speed
up searching. Suitable values are between <b>1</b> and
<b>8</b>. A good starting point to test is <b>4</b>,
which is the default value for all <c>Numeric*</c>
classes. See <a href="#precisionStepDesc">below</a> for
details.
<p/>This query defaults to
<see cref="F:Lucene.Net.Search.MultiTermQuery.CONSTANT_SCORE_AUTO_REWRITE_DEFAULT"/> for
32 bit (int/float) ranges with precisionStep &lt;8 and 64
bit (long/double) ranges with precisionStep &lt;6.
Otherwise it uses
<see cref="F:Lucene.Net.Search.MultiTermQuery.CONSTANT_SCORE_FILTER_REWRITE"/> as the
number of terms is likely to be high. With precision
steps of &lt;4, this query can be run with one of the
BooleanQuery rewrite methods without changing
BooleanQuery's default max clause count.
<p/><font color="red"><b>NOTE:</b> This API is experimental and
might change in incompatible ways in the next release.</font>
<br/><h3>How it works</h3>
<p/>See the publication about <a target="_blank" href="http://www.panfmp.org">panFMP</a>,
where this algorithm was described (referred to as <c>TrieRangeQuery</c>):
<blockquote><strong>Schindler, U, Diepenbroek, M</strong>, 2008.
<em>Generic XML-based Framework for Metadata Portals.</em>
Computers &amp; Geosciences 34 (12), 1947-1955.
<a href="http://dx.doi.org/10.1016/j.cageo.2008.02.023" target="_blank">doi:10.1016/j.cageo.2008.02.023</a></blockquote>
<p/><em>A quote from this paper:</em> Because Apache Lucene is a full-text
search engine and not a conventional database, it cannot handle numerical ranges
(e.g., field value is inside user defined bounds, even dates are numerical values).
We have developed an extension to Apache Lucene that stores
the numerical values in a special string-encoded format with variable precision
(all numerical values like doubles, longs, floats, and ints are converted to
lexicographically sortable string representations and stored with different precisions;
for a more detailed description of how the values are stored,
see <see cref="T:Lucene.Net.Util.NumericUtils"/>). A range is then divided recursively into multiple intervals for searching:
The center of the range is searched only with the lowest possible precision in the <em>trie</em>,
while the boundaries are matched more exactly. This reduces the number of terms dramatically.<p/>
<p/>For the variant that stores long values in 8 different precisions (each reduced by 8 bits) that
uses a lowest precision of 1 byte, the index contains only a maximum of 256 distinct values in the
lowest precision. Overall, a range could consist of a theoretical maximum of
<c>7*255*2 + 255 = 3825</c> distinct terms (when there is a term for every distinct value of an
8-byte-number in the index and the range covers almost all of them; a maximum of 255 distinct values is used
because it would always be possible to reduce the full 256 values to one term with degraded precision).
In practice, we have seen up to 300 terms in most cases (index with 500,000 metadata records
and a uniform value distribution).<p/>
<a name="precisionStepDesc"/><h3>Precision Step</h3>
<p/>You can choose any <c>precisionStep</c> when encoding values.
Lower step values mean more precisions and thus more terms in the index (and a larger index).
On the other hand, the maximum number of terms to match is reduced, which speeds up queries.
The formula to calculate the maximum term count is:
<code>
n = [ (bitsPerValue/precisionStep - 1) * (2^precisionStep - 1 ) * 2 ] + (2^precisionStep - 1 )
</code>
<p/><em>(this formula is only correct, when <c>bitsPerValue/precisionStep</c> is an integer;
in other cases, the value must be rounded up and the last summand must contain the modulo of the division as
precision step)</em>.
For longs stored using a precision step of 4, <c>n = 15*15*2 + 15 = 465</c>, and for a precision
step of 2, <c>n = 31*3*2 + 3 = 189</c>. However, the faster search speed comes at the cost of more seeking
in the index's term enumeration; because of this, the ideal <c>precisionStep</c> value can only
be determined by testing. <b>Important:</b> You can index with a lower precision step value and test search speed
using a multiple of the original step value.<p/>
<p/>Good values for <c>precisionStep</c> depend on usage and data type:
<list type="bullet">
<item>The default for all data types is <b>4</b>, which is used when no <c>precisionStep</c> is given.</item>
<item>Ideal value in most cases for <em>64 bit</em> data types <em>(long, double)</em> is <b>6</b> or <b>8</b>.</item>
<item>Ideal value in most cases for <em>32 bit</em> data types <em>(int, float)</em> is <b>4</b>.</item>
<item>Steps <b>&gt;64</b> for <em>long/double</em> and <b>&gt;32</b> for <em>int/float</em> produce one token
per value in the index, and querying is as slow as a conventional <see cref="T:Lucene.Net.Search.TermRangeQuery"/>. But this can be used
to produce fields that are used solely for sorting (in this case simply use <see cref="F:System.Int32.MaxValue"/> as
<c>precisionStep</c>). Using <see cref="T:Lucene.Net.Documents.NumericField">NumericFields</see> for sorting
is ideal, because building the field cache is much faster than with text-only numbers.
Sorting is also possible with range query optimized fields using one of the above <c>precisionSteps</c>.</item>
</list>
<p/>Comparisons of the different types of RangeQueries on an index with about 500,000 docs showed
that <see cref="T:Lucene.Net.Search.TermRangeQuery"/> in boolean rewrite mode (with raised <see cref="T:Lucene.Net.Search.BooleanQuery"/> clause count)
took about 30-40 secs to complete, <see cref="T:Lucene.Net.Search.TermRangeQuery"/> in constant score filter rewrite mode took 5 secs,
and executing this class took less than 100 ms to complete (on an Opteron64 machine, Java 1.5, 8 bit
precision step). This query type was developed for a geographic portal, where the performance for
e.g. bounding boxes or exact date/time stamps is important.<p/>
</summary>
<since> 2.9
</since>
</member>
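The worst-case term count quoted above can be sketched numerically. The following is a hedged Python illustration of the formula for the integer-divisible case; max_terms is a hypothetical helper name, not part of the Lucene.Net API:

```python
def max_terms(bits_per_value, precision_step):
    # Maximum number of terms visited by a trie range query, assuming
    # bits_per_value is an integer multiple of precision_step:
    #   n = (2^s - 1) * (bits/s - 1) * 2 + (2^s - 1)
    s = precision_step
    levels = bits_per_value // s     # number of precision levels
    per_level = 2**s - 1             # worst-case terms per boundary level
    return per_level * (levels - 1) * 2 + per_level

# The figures quoted above for 64-bit longs:
print(max_terms(64, 4))  # 15*15*2 + 15 = 465
print(max_terms(64, 2))  # 31*3*2 + 3 = 189
```

Smaller precision steps shrink this bound but add more terms per indexed value, which is the indexing-size/search-speed trade-off described above.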
<member name="P:Lucene.Net.Search.NumericRangeQuery`1.Field">
<summary>Returns the field name for this query </summary>
</member>
<member name="P:Lucene.Net.Search.NumericRangeQuery`1.IncludesMin">
<summary>Returns <c>true</c> if the lower endpoint is inclusive </summary>
</member>
<member name="P:Lucene.Net.Search.NumericRangeQuery`1.IncludesMax">
<summary>Returns <c>true</c> if the upper endpoint is inclusive </summary>
</member>
<member name="P:Lucene.Net.Search.NumericRangeQuery`1.Min">
<summary>Returns the lower value of this range query </summary>
</member>
<member name="P:Lucene.Net.Search.NumericRangeQuery`1.Max">
<summary>Returns the upper value of this range query </summary>
</member>
<member name="T:Lucene.Net.Search.NumericRangeQuery`1.NumericRangeTermEnum">
<summary> Subclass of FilteredTermEnum for enumerating all terms that match the
sub-ranges for trie range queries.
<p/>
WARNING: This term enumeration is not guaranteed to be always ordered by
<see cref="M:Lucene.Net.Index.Term.CompareTo(Lucene.Net.Index.Term)"/>.
The ordering depends on how <see cref="M:Lucene.Net.Util.NumericUtils.SplitLongRange(Lucene.Net.Util.NumericUtils.LongRangeBuilder,System.Int32,System.Int64,System.Int64)"/> and
<see cref="M:Lucene.Net.Util.NumericUtils.SplitIntRange(Lucene.Net.Util.NumericUtils.IntRangeBuilder,System.Int32,System.Int32,System.Int32)"/> generates the sub-ranges. For
<see cref="T:Lucene.Net.Search.MultiTermQuery"/> ordering is not relevant.
</summary>
</member>
<member name="M:Lucene.Net.Search.NumericRangeQuery`1.NumericRangeTermEnum.EndEnum">
<summary>This is a dummy; it is not used by this class. </summary>
</member>
<member name="M:Lucene.Net.Search.NumericRangeQuery`1.NumericRangeTermEnum.SetEnum(Lucene.Net.Index.TermEnum)">
<summary>This is a dummy; it is not used by this class. </summary>
</member>
<member name="M:Lucene.Net.Search.NumericRangeQuery`1.NumericRangeTermEnum.TermCompare(Lucene.Net.Index.Term)">
<summary> Checks whether the current upper bound is reached,
and also updates the term count for statistics.
In contrast to <see cref="T:Lucene.Net.Search.FilteredTermEnum"/>, a return value
of <c>false</c> ends iteration of the current enum
and forwards to the next sub-range.
</summary>
</member>
<member name="M:Lucene.Net.Search.NumericRangeQuery`1.NumericRangeTermEnum.Next">
<summary>Increments the enumeration to the next element. True if one exists. </summary>
</member>
<member name="M:Lucene.Net.Search.NumericRangeQuery`1.NumericRangeTermEnum.Dispose(System.Boolean)">
<summary>Closes the enumeration to further activity, freeing resources. </summary>
</member>
<member name="T:Lucene.Net.Util.NumericUtils">
<summary> This is a helper class that generates prefix-encoded representations for numerical values
and supplies converters to represent float/double values as sortable integers/longs.
<p/>To quickly execute range queries in Apache Lucene, a range is divided recursively
into multiple intervals for searching: The center of the range is searched only with
the lowest possible precision in the trie, while the boundaries are matched
more exactly. This reduces the number of terms dramatically.
<p/>This class generates terms to achieve this: First the numerical integer values need to
be converted to strings. To do so, integer values (32 bit or 64 bit) are made unsigned
and the bits are converted to ASCII chars, 7 bits per char. The resulting string is
sortable like the original integer value. Each value is also prefixed
(in the first char) by the <c>shift</c> value (number of bits removed) used
during encoding.
<p/>To also index floating point numbers, this class supplies two methods to convert them
to integer values by changing their bit layout: <see cref="M:Lucene.Net.Util.NumericUtils.DoubleToSortableLong(System.Double)"/>,
<see cref="M:Lucene.Net.Util.NumericUtils.FloatToSortableInt(System.Single)"/>. There is no precision loss in
converting floating point numbers to integers and back (only the integer form itself
is not directly usable). Other data types like dates can easily be converted to longs or ints (e.g.
date to long: <see cref="T:System.DateTime"/>).
<p/>For easy usage, the trie algorithm is implemented for indexing inside
<see cref="T:Lucene.Net.Analysis.NumericTokenStream"/> that can index <c>int</c>, <c>long</c>,
<c>float</c>, and <c>double</c>. For querying,
<see cref="T:Lucene.Net.Search.NumericRangeQuery`1"/> and <see cref="T:Lucene.Net.Search.NumericRangeFilter`1"/> implement the query part
for the same data types.
<p/>This class can also be used to generate lexicographically sortable (according to
<see cref="M:System.String.CompareTo(System.String)"/>) representations of numeric data types for other
usages (e.g. sorting).
<p/><font color="red"><b>NOTE:</b> This API is experimental and
might change in incompatible ways in the next release.</font>
</summary>
<since> 2.9
</since>
</member>
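The encoding scheme described above (sign-bit flip, a shift marker in the first char, then 7-bit chars) can be sketched in plain Python. This is an illustrative model, not the Lucene.Net implementation: the SHIFT_START constant and function name are assumptions, and the real encoder works on a char buffer.

```python
SHIFT_START = 0x20  # illustrative marker offset; the real constant differs

def long_to_prefix_coded(value, shift=0):
    # Flip the sign bit so the unsigned bit pattern sorts like the signed value.
    sortable = (value + 2**63) % 2**64
    # Drop `shift` low-order bits to reduce precision.
    sortable >>= shift
    # The first char records the shift used during encoding.
    chars = [chr(SHIFT_START + shift)]
    # Emit the remaining bits as 7-bit chars, most significant group first.
    n_chars = (63 - shift) // 7 + 1
    for i in range(n_chars - 1, -1, -1):
        chars.append(chr((sortable >> (7 * i)) % 128))
    return "".join(chars)

# Strings encoded with the same shift compare like the underlying values:
assert long_to_prefix_coded(3) > long_to_prefix_coded(-5)
assert long_to_prefix_coded(100) > long_to_prefix_coded(3)
```

Because all values at a given shift produce strings of equal length, ordinary lexicographic string comparison reproduces the numeric order, which is what makes term-range enumeration over these terms possible.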
<member name="F:Lucene.Net.Util.NumericUtils.PRECISION_STEP_DEFAULT">
<summary> The default precision step used by <see cref="T:Lucene.Net.Documents.NumericField"/>, <see cref="T:Lucene.Net.Analysis.NumericTokenStream"/>,
<see cref="T:Lucene.Net.Search.NumericRangeQuery`1"/>, and <see cref="T:Lucene.Net.Search.NumericRangeFilter`1"/>.
</summary>
</member>
<member name="F:Lucene.Net.Util.NumericUtils.BUF_SIZE_LONG">
<summary> Expert: The maximum term length (used for <c>char[]</c> buffer size)
for encoding <c>long</c> values.
</summary>
<seealso cref="M:Lucene.Net.Util.NumericUtils.LongToPrefixCoded(System.Int64,System.Int32,System.Char[])">
</seealso>
</member>
<member name="F:Lucene.Net.Util.NumericUtils.BUF_SIZE_INT">
<summary> Expert: The maximum term length (used for <c>char[]</c> buffer size)
for encoding <c>int</c> values.
</summary>
<seealso cref="M:Lucene.Net.Util.NumericUtils.IntToPrefixCoded(System.Int32,System.Int32,System.Char[])">
</seealso>
</member>
<member name="F:Lucene.Net.Util.NumericUtils.SHIFT_START_LONG">
<summary> Expert: Longs are stored at lower precision by shifting off lower bits. The shift count is
stored as <c>SHIFT_START_LONG+shift</c> in the first character
</summary>
</member>
<member name="F:Lucene.Net.Util.NumericUtils.SHIFT_START_INT">
<summary> Expert: Integers are stored at lower precision by shifting off lower bits. The shift count is
stored as <c>SHIFT_START_INT+shift</c> in the first character
</summary>
</member>
<member name="M:Lucene.Net.Util.NumericUtils.LongToPrefixCoded(System.Int64,System.Int32,System.Char[])">
<summary> Expert: Returns prefix coded bits after reducing the precision by <c>shift</c> bits.
This method is used by <see cref="T:Lucene.Net.Analysis.NumericTokenStream"/>.
</summary>
<param name="val">the numeric value
</param>
<param name="shift">how many bits to strip from the right
</param>
<param name="buffer">the buffer that will contain the encoded chars; must be at least of length <see cref="F:Lucene.Net.Util.NumericUtils.BUF_SIZE_LONG"/>
</param>
<returns> number of chars written to buffer
</returns>
</member>
<member name="M:Lucene.Net.Util.NumericUtils.LongToPrefixCoded(System.Int64,System.Int32)">
<summary> Expert: Returns prefix coded bits after reducing the precision by <c>shift</c> bits.
This method is used by <see cref="T:Lucene.Net.Util.NumericUtils.LongRangeBuilder"/>.
</summary>
<param name="val">the numeric value
</param>
<param name="shift">how many bits to strip from the right
</param>
</member>
<member name="M:Lucene.Net.Util.NumericUtils.LongToPrefixCoded(System.Int64)">
<summary> This is a convenience method that returns the prefix coded bits of a long without
reducing the precision. It can be used to store the full precision value as a
stored field in the index.
<p/>To decode, use <see cref="M:Lucene.Net.Util.NumericUtils.PrefixCodedToLong(System.String)"/>.
</summary>
</member>
<member name="M:Lucene.Net.Util.NumericUtils.IntToPrefixCoded(System.Int32,System.Int32,System.Char[])">
<summary> Expert: Returns prefix coded bits after reducing the precision by <c>shift</c> bits.
This method is used by <see cref="T:Lucene.Net.Analysis.NumericTokenStream"/>.
</summary>
<param name="val">the numeric value
</param>
<param name="shift">how many bits to strip from the right
</param>
<param name="buffer">the buffer that will contain the encoded chars; must be at least of length <see cref="F:Lucene.Net.Util.NumericUtils.BUF_SIZE_INT"/>
</param>
<returns> number of chars written to buffer
</returns>
</member>
<member name="M:Lucene.Net.Util.NumericUtils.IntToPrefixCoded(System.Int32,System.Int32)">
<summary> Expert: Returns prefix coded bits after reducing the precision by <c>shift</c> bits.
This method is used by <see cref="T:Lucene.Net.Util.NumericUtils.IntRangeBuilder"/>.
</summary>
<param name="val">the numeric value
</param>
<param name="shift">how many bits to strip from the right
</param>
</member>
<member name="M:Lucene.Net.Util.NumericUtils.IntToPrefixCoded(System.Int32)">
<summary> This is a convenience method that returns the prefix coded bits of an int without
reducing the precision. It can be used to store the full precision value as a
stored field in the index.
<p/>To decode, use <see cref="M:Lucene.Net.Util.NumericUtils.PrefixCodedToInt(System.String)"/>.
</summary>
</member>
<member name="M:Lucene.Net.Util.NumericUtils.PrefixCodedToLong(System.String)">
<summary> Returns a long from prefixCoded characters.
Rightmost bits will be zero for lower precision codes.
This method can be used to decode e.g. a stored field.
</summary>
<throws> NumberFormatException if the supplied string is not correctly prefix encoded. </throws>
<seealso cref="M:Lucene.Net.Util.NumericUtils.LongToPrefixCoded(System.Int64)">
</seealso>
</member>
<member name="M:Lucene.Net.Util.NumericUtils.PrefixCodedToInt(System.String)">
<summary> Returns an int from prefixCoded characters.
Rightmost bits will be zero for lower precision codes.
This method can be used to decode e.g. a stored field.
</summary>
<throws> NumberFormatException if the supplied string is not correctly prefix encoded. </throws>
<seealso cref="M:Lucene.Net.Util.NumericUtils.IntToPrefixCoded(System.Int32)">
</seealso>
</member>
<member name="M:Lucene.Net.Util.NumericUtils.DoubleToSortableLong(System.Double)">
<summary> Converts a <c>double</c> value to a sortable signed <c>long</c>.
The value is converted by taking its IEEE 754 floating-point "double format"
bit layout and swapping some bits, so that the result can be compared as a long.
The precision is not reduced, but the value can easily be used as a long.
</summary>
<seealso cref="M:Lucene.Net.Util.NumericUtils.SortableLongToDouble(System.Int64)">
</seealso>
</member>
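The bit manipulation described above can be sketched in Python using the standard struct module. This is an illustrative model under the usual IEEE 754 sortable-bits trick, not the Lucene.Net implementation itself; the function name is hypothetical:

```python
import struct

def double_to_sortable_long(value):
    # Reinterpret the IEEE 754 "double format" bit layout as an
    # unsigned 64-bit integer (big-endian pack/unpack).
    bits = struct.unpack(">Q", struct.pack(">d", value))[0]
    if bits >= 2**63:
        # Negative double: flip the 63 non-sign bits so that
        # more-negative values map to smaller results.
        bits ^= 2**63 - 1
    # Reinterpret the result as a signed 64-bit long.
    if bits >= 2**63:
        bits -= 2**64
    return bits

# The mapping preserves ordering without losing precision:
assert double_to_sortable_long(-1.0) > double_to_sortable_long(-1.5)
assert double_to_sortable_long(1.0) > double_to_sortable_long(0.0)
```

The transformation is its own inverse in spirit: applying the reverse bit operations recovers the exact original double, which is why no precision is lost.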
<member name="M:Lucene.Net.Util.NumericUtils.DoubleToPrefixCoded(System.Double)">
<summary> Convenience method: this just returns:
<c>LongToPrefixCoded(DoubleToSortableLong(val))</c>
</summary>
</member>
<member name="M:Lucene.Net.Util.NumericUtils.SortableLongToDouble(System.Int64)">
<summary> Converts a sortable <c>long</c> back to a <c>double</c>.</summary>
<seealso cref="M:Lucene.Net.Util.NumericUtils.DoubleToSortableLong(System.Double)">
</seealso>
</member>
<member name="M:Lucene.Net.Util.NumericUtils.PrefixCodedToDouble(System.String)">
<summary> Convenience method: this just returns:
<c>SortableLongToDouble(PrefixCodedToLong(val))</c>
</summary>
</member>
<member name="M:Lucene.Net.Util.NumericUtils.FloatToSortableInt(System.Single)">
<summary> Converts a <c>float</c> value to a sortable signed <c>int</c>.
The value is converted by taking its IEEE 754 floating-point "float format"
bit layout and swapping some bits, so that the result can be compared as an int.
The precision is not reduced, but the value can easily be used as an int.
</summary>
<seealso cref="M:Lucene.Net.Util.NumericUtils.SortableIntToFloat(System.Int32)">
</seealso>
</member>
<member name="M:Lucene.Net.Util.NumericUtils.FloatToPrefixCoded(System.Single)">
<summary> Convenience method: this just returns:
<c>IntToPrefixCoded(FloatToSortableInt(val))</c>
</summary>
</member>
<member name="M:Lucene.Net.Util.NumericUtils.SortableIntToFloat(System.Int32)">
<summary> Converts a sortable <c>int</c> back to a <c>float</c>.</summary>
<seealso cref="M:Lucene.Net.Util.NumericUtils.FloatToSortableInt(System.Single)">
</seealso>
</member>
<member name="M:Lucene.Net.Util.NumericUtils.PrefixCodedToFloat(System.String)">
<summary> Convenience method: this just returns:
<c>SortableIntToFloat(PrefixCodedToInt(val))</c>
</summary>
</member>
<member name="M:Lucene.Net.Util.NumericUtils.SplitLongRange(Lucene.Net.Util.NumericUtils.LongRangeBuilder,System.Int32,System.Int64,System.Int64)">
<summary> Expert: Splits a long range recursively.
You may implement a builder that adds clauses to a
<see cref="T:Lucene.Net.Search.BooleanQuery"/> for each call to its
<see cref="M:Lucene.Net.Util.NumericUtils.LongRangeBuilder.AddRange(System.String,System.String)"/>
method.
<p/>This method is used by <see cref="T:Lucene.Net.Search.NumericRangeQuery`1"/>.
</summary>
</member>
<member name="M:Lucene.Net.Util.NumericUtils.SplitIntRange(Lucene.Net.Util.NumericUtils.IntRangeBuilder,System.Int32,System.Int32,System.Int32)">
<summary> Expert: Splits an int range recursively.
You may implement a builder that adds clauses to a
<see cref="T:Lucene.Net.Search.BooleanQuery"/> for each call to its
<see cref="M:Lucene.Net.Util.NumericUtils.IntRangeBuilder.AddRange(System.String,System.String)"/>
method.
<p/>This method is used by <see cref="T:Lucene.Net.Search.NumericRangeQuery`1"/>.
</summary>
</member>
<member name="M:Lucene.Net.Util.NumericUtils.SplitRange(System.Object,System.Int32,System.Int32,System.Int64,System.Int64)">
<summary>This helper does the splitting for both 32 and 64 bit. </summary>
</member>
<member name="M:Lucene.Net.Util.NumericUtils.AddRange(System.Object,System.Int32,System.Int64,System.Int64,System.Int32)">
<summary>Helper that delegates to correct range builder </summary>
</member>
<member name="T:Lucene.Net.Util.NumericUtils.LongRangeBuilder">
<summary> Expert: Callback for <see cref="M:Lucene.Net.Util.NumericUtils.SplitLongRange(Lucene.Net.Util.NumericUtils.LongRangeBuilder,System.Int32,System.Int64,System.Int64)"/>.
You need to override only one of the methods.
<p/><font color="red"><b>NOTE:</b> This is a very low-level interface,
the method signatures may change in later versions.</font>
</summary>
</member>
<member name="M:Lucene.Net.Util.NumericUtils.LongRangeBuilder.AddRange(System.String,System.String)">
<summary> Override this method if you want to receive the already prefix-encoded range bounds.
You can directly build classical (inclusive) range queries from them.
</summary>
</member>
<member name="M:Lucene.Net.Util.NumericUtils.LongRangeBuilder.AddRange(System.Int64,System.Int64,System.Int32)">
<summary> Override this method if you want to receive the raw long range bounds.
You can use this e.g. for debugging purposes (printing out range bounds).
</summary>
</member>
<member name="T:Lucene.Net.Util.NumericUtils.IntRangeBuilder">
<summary> Expert: Callback for <see cref="M:Lucene.Net.Util.NumericUtils.SplitIntRange(Lucene.Net.Util.NumericUtils.IntRangeBuilder,System.Int32,System.Int32,System.Int32)"/>.
You need to override only one of the methods.
<p/><font color="red"><b>NOTE:</b> This is a very low-level interface,
the method signatures may change in later versions.</font>
</summary>
</member>
<member name="M:Lucene.Net.Util.NumericUtils.IntRangeBuilder.AddRange(System.String,System.String)">
<summary> Override this method if you want to receive the already prefix-encoded range bounds.
You can directly build classical (inclusive) range queries from them.
</summary>
</member>
<member name="M:Lucene.Net.Util.NumericUtils.IntRangeBuilder.AddRange(System.Int32,System.Int32,System.Int32)">
<summary> Override this method if you want to receive the raw int range bounds.
You can use this e.g. for debugging purposes (printing out range bounds).
</summary>
</member>
<member name="M:Lucene.Net.Search.NumericRangeQuery.NewLongRange(System.String,System.Int32,System.Nullable{System.Int64},System.Nullable{System.Int64},System.Boolean,System.Boolean)">
<summary> Factory that creates a <c>NumericRangeQuery</c> that queries a <c>long</c>
range using the given <a href="#precisionStepDesc"><c>precisionStep</c></a>.
You can have half-open ranges (which are in fact &lt;/≤ or &gt;/≥ queries)
by setting the min or max value to <c>null</c>. With inclusive set to false, it
matches all documents excluding the bounds; with inclusive on, the boundaries are hits, too.
</summary>
</member>
<member name="M:Lucene.Net.Search.NumericRangeQuery.NewLongRange(System.String,System.Nullable{System.Int64},System.Nullable{System.Int64},System.Boolean,System.Boolean)">
<summary> Factory that creates a <c>NumericRangeQuery</c> that queries a <c>long</c>
range using the default <c>precisionStep</c> <see cref="F:Lucene.Net.Util.NumericUtils.PRECISION_STEP_DEFAULT"/> (4).
You can have half-open ranges (which are in fact &lt;/≤ or &gt;/≥ queries)
by setting the min or max value to <c>null</c>. With inclusive set to false, it
matches all documents excluding the bounds; with inclusive on, the boundaries are hits, too.
</summary>
</member>
<member name="M:Lucene.Net.Search.NumericRangeQuery.NewIntRange(System.String,System.Int32,System.Nullable{System.Int32},System.Nullable{System.Int32},System.Boolean,System.Boolean)">
<summary> Factory that creates a <c>NumericRangeQuery</c> that queries an <c>int</c>
range using the given <a href="#precisionStepDesc"><c>precisionStep</c></a>.
You can have half-open ranges (which are in fact &lt;/≤ or &gt;/≥ queries)
by setting the min or max value to <c>null</c>. With inclusive set to false, it
matches all documents excluding the bounds; with inclusive on, the boundaries are hits, too.
</summary>
</member>
<member name="M:Lucene.Net.Search.NumericRangeQuery.NewIntRange(System.String,System.Nullable{System.Int32},System.Nullable{System.Int32},System.Boolean,System.Boolean)">
<summary> Factory that creates a <c>NumericRangeQuery</c> that queries an <c>int</c>
range using the default <c>precisionStep</c> <see cref="F:Lucene.Net.Util.NumericUtils.PRECISION_STEP_DEFAULT"/> (4).
You can have half-open ranges (which are in fact &lt;/≤ or &gt;/≥ queries)
by setting the min or max value to <c>null</c>. With inclusive set to false, it
matches all documents excluding the bounds; with inclusive on, the boundaries are hits, too.
</summary>
</member>
<member name="M:Lucene.Net.Search.NumericRangeQuery.NewDoubleRange(System.String,System.Int32,System.Nullable{System.Double},System.Nullable{System.Double},System.Boolean,System.Boolean)">
<summary> Factory that creates a <c>NumericRangeQuery</c> that queries a <c>double</c>
range using the given <a href="#precisionStepDesc"><c>precisionStep</c></a>.
You can have half-open ranges (which are in fact &lt;/≤ or &gt;/≥ queries)
by setting the min or max value to <c>null</c>. With inclusive set to false, it
matches all documents excluding the bounds; with inclusive on, the boundaries are hits, too.
</summary>
</member>
<member name="M:Lucene.Net.Search.NumericRangeQuery.NewDoubleRange(System.String,System.Nullable{System.Double},System.Nullable{System.Double},System.Boolean,System.Boolean)">
<summary> Factory that creates a <c>NumericRangeQuery</c> that queries a <c>double</c>
range using the default <c>precisionStep</c> <see cref="F:Lucene.Net.Util.NumericUtils.PRECISION_STEP_DEFAULT"/> (4).
You can have half-open ranges (which are in fact &lt;/≤ or &gt;/≥ queries)
by setting the min or max value to <c>null</c>. With inclusive set to false, it
matches all documents excluding the bounds; with inclusive on, the boundaries are hits, too.
</summary>
</member>
<member name="M:Lucene.Net.Search.NumericRangeQuery.NewFloatRange(System.String,System.Int32,System.Nullable{System.Single},System.Nullable{System.Single},System.Boolean,System.Boolean)">
<summary> Factory that creates a <c>NumericRangeQuery</c> that queries a <c>float</c>
range using the given <a href="#precisionStepDesc"><c>precisionStep</c></a>.
You can have half-open ranges (which are in fact &lt;/≤ or &gt;/≥ queries)
by setting the min or max value to <c>null</c>. With inclusive set to false, it
matches all documents excluding the bounds; with inclusive on, the boundaries are hits, too.
</summary>
</member>
<member name="M:Lucene.Net.Search.NumericRangeQuery.NewFloatRange(System.String,System.Nullable{System.Single},System.Nullable{System.Single},System.Boolean,System.Boolean)">
<summary> Factory that creates a <c>NumericRangeQuery</c> that queries a <c>float</c>
range using the default <c>precisionStep</c> <see cref="F:Lucene.Net.Util.NumericUtils.PRECISION_STEP_DEFAULT"/> (4).
You can have half-open ranges (which are in fact &lt;/≤ or &gt;/≥ queries)
by setting the min or max value to <c>null</c>. With inclusive set to false, it
matches all documents excluding the bounds; with inclusive on, the boundaries are hits, too.
</summary>
</member>
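The half-open and inclusive/exclusive semantics shared by all of these factories can be illustrated with a small predicate. This is plain Python sketching the matching rule, not the Lucene.Net API; the function name and None-as-open-bound convention mirror the <c>null</c> bounds described above:

```python
def in_range(value, min_val, max_val, min_inclusive, max_inclusive):
    # A None bound leaves that side of the range open (half-open query).
    if min_val is not None:
        ok = value >= min_val if min_inclusive else value > min_val
        if not ok:
            return False
    if max_val is not None:
        ok = max_val >= value if max_inclusive else max_val > value
        if not ok:
            return False
    return True

# Inclusive bounds are hits; exclusive bounds are not:
assert in_range(10, 10, 20, True, True)
assert not in_range(10, 10, 20, False, True)
# A None bound gives a half-open range:
assert in_range(-999, None, 20, True, True)
```

A query built with both bounds None and both flags True would match every document that has a value for the field.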
<member name="T:Lucene.Net.Search.ParallelMultiSearcher">
<summary>Implements parallel search over a set of <c>Searchables</c>.
<p/>Applications usually need only call the inherited <see cref="M:Lucene.Net.Search.Searcher.Search(Lucene.Net.Search.Query,System.Int32)"/>
or <see cref="M:Lucene.Net.Search.Searcher.Search(Lucene.Net.Search.Query,Lucene.Net.Search.Filter,System.Int32)"/> methods.
</summary>
</member>
<member name="M:Lucene.Net.Search.ParallelMultiSearcher.#ctor(Lucene.Net.Search.Searchable[])">
<summary>Creates a <see cref="T:Lucene.Net.Search.Searchable"/> which searches <i>searchables</i>. </summary>
</member>
<member name="M:Lucene.Net.Search.ParallelMultiSearcher.DocFreq(Lucene.Net.Index.Term)">
<summary>
Executes each <see cref="T:Lucene.Net.Search.Searchable"/>'s docFreq() in its own thread, waits for
each to complete, and merges the results back together.
</summary>
</member>
<member name="M:Lucene.Net.Search.ParallelMultiSearcher.Search(Lucene.Net.Search.Weight,Lucene.Net.Search.Filter,System.Int32)">
<summary> A search implementation which executes each
<see cref="T:Lucene.Net.Search.Searchable"/> in its own thread, waits for each search to complete,
and merges the results back together.
</summary>
</member>
<member name="M:Lucene.Net.Search.ParallelMultiSearcher.Search(Lucene.Net.Search.Weight,Lucene.Net.Search.Filter,System.Int32,Lucene.Net.Search.Sort)">
<summary> A search implementation allowing sorting, which spawns a new thread for each
Searchable, waits for each search to complete, and merges
the results back together.
</summary>
</member>
<member name="M:Lucene.Net.Search.ParallelMultiSearcher.Search(Lucene.Net.Search.Weight,Lucene.Net.Search.Filter,Lucene.Net.Search.Collector)">
<summary>Lower-level search API.
<p/><see cref="M:Lucene.Net.Search.Collector.Collect(System.Int32)"/> is called for every matching document.
<p/>Applications should only use this if they need <i>all</i> of the
matching documents. The high-level search API (<see cref="M:Lucene.Net.Search.Searcher.Search(Lucene.Net.Search.Query,System.Int32)"/>)
is usually more efficient, as it skips
non-high-scoring hits.
<p/>This method cannot be parallelized, because <see cref="T:Lucene.Net.Search.Collector"/>
supports no concurrent access.
</summary>
<param name="weight">to match documents
</param>
<param name="filter">if non-null, a bitset used to eliminate some documents
</param>
<param name="collector">to receive hits
TODO: parallelize this one too
</param>
</member>
<member name="T:Lucene.Net.Search.Payloads.AveragePayloadFunction">
<summary> Calculates the final score as the average score of all payloads seen.
<p/>
This class is thread safe and completely reusable.
</summary>
</member>
<member name="T:Lucene.Net.Search.Payloads.PayloadFunction">
<summary> An abstract class that defines a way for Payload*Query instances
to transform the cumulative effects of payload scores for a document.
</summary>
<seealso cref="T:Lucene.Net.Search.Payloads.PayloadTermQuery"> for more information
<p/>
This class and its derivations are experimental and subject to change
</seealso>
</member>
<member name="M:Lucene.Net.Search.Payloads.PayloadFunction.CurrentScore(System.Int32,System.String,System.Int32,System.Int32,System.Int32,System.Single,System.Single)">
<summary> Calculate the score up to this point for this doc and field</summary>
<param name="docId">The current doc
</param>
<param name="field">The field
</param>
<param name="start">The start position of the matching Span
</param>
<param name="end">The end position of the matching Span
</param>
<param name="numPayloadsSeen">The number of payloads seen so far
</param>
<param name="currentScore">The current score so far
</param>
<param name="currentPayloadScore">The score for the current payload
</param>
<returns> The new current Score
</returns>
<seealso cref="T:Lucene.Net.Search.Spans.Spans">
</seealso>
</member>
<member name="M:Lucene.Net.Search.Payloads.PayloadFunction.DocScore(System.Int32,System.String,System.Int32,System.Single)">
<summary> Calculate the final score for all the payloads seen so far for this doc/field</summary>
<param name="docId">The current doc
</param>
<param name="field">The current field
</param>
<param name="numPayloadsSeen">The total number of payloads seen on this document
</param>
<param name="payloadScore">The raw score for those payloads
</param>
<returns> The final score for the payloads
</returns>
</member>
<member name="T:Lucene.Net.Search.Payloads.MaxPayloadFunction">
<summary> Returns the maximum payload score seen, or 1 if there are no payloads on the doc.
<p/>
This class is thread safe and completely reusable.
</summary>
</member>
<member name="T:Lucene.Net.Search.Payloads.MinPayloadFunction">
<summary> Calculates the minimum payload score seen
</summary>
</member>
<member name="T:Lucene.Net.Search.Payloads.PayloadNearQuery">
<summary> This class is very similar to
<see cref="T:Lucene.Net.Search.Spans.SpanNearQuery"/> except that it factors
in the value of the payloads located at each of the positions where the
<see cref="T:Lucene.Net.Search.Spans.TermSpans"/> occurs.
<p/>
In order to take advantage of this, you must override
<see cref="M:Lucene.Net.Search.Similarity.ScorePayload(System.Int32,System.String,System.Int32,System.Int32,System.Byte[],System.Int32,System.Int32)"/>
which returns 1 by default.
<p/>
Payload scores are aggregated using a pluggable <see cref="T:Lucene.Net.Search.Payloads.PayloadFunction"/>.
</summary>
<seealso cref="M:Lucene.Net.Search.Similarity.ScorePayload(System.Int32,System.String,System.Int32,System.Int32,System.Byte[],System.Int32,System.Int32)">
</seealso>
</member>
<member name="T:Lucene.Net.Search.Spans.SpanNearQuery">
<summary>Matches spans which are near one another. One can specify <i>slop</i>, the
maximum number of intervening unmatched positions, as well as whether
matches are required to be in-order.
</summary>
</member>
<member name="T:Lucene.Net.Search.Spans.SpanQuery">
<summary>Base class for span-based queries. </summary>
</member>
<member name="M:Lucene.Net.Search.Spans.SpanQuery.GetSpans(Lucene.Net.Index.IndexReader)">
<summary>Expert: Returns the matches for this query in an index. Used internally
to search for spans.
</summary>
</member>
<member name="P:Lucene.Net.Search.Spans.SpanQuery.Field">
<summary>Returns the name of the field matched by this query.</summary>
</member>
<member name="M:Lucene.Net.Search.Spans.SpanNearQuery.#ctor(Lucene.Net.Search.Spans.SpanQuery[],System.Int32,System.Boolean)">
<summary>Construct a SpanNearQuery. Matches spans matching a span from each
clause, with up to <c>slop</c> total unmatched positions between
them. When <c>inOrder</c> is true, the spans from each clause
must be ordered as in <c>clauses</c>.
</summary>
</member>
<member name="M:Lucene.Net.Search.Spans.SpanNearQuery.GetClauses">
<summary>Return the clauses whose spans are matched. </summary>
</member>
<member name="M:Lucene.Net.Search.Spans.SpanNearQuery.Equals(System.Object)">
<summary>Returns true iff <c>o</c> is equal to this. </summary>
</member>
<member name="P:Lucene.Net.Search.Spans.SpanNearQuery.Slop">
<summary>Return the maximum number of intervening unmatched positions permitted.</summary>
</member>
<member name="P:Lucene.Net.Search.Spans.SpanNearQuery.IsInOrder">
<summary>Return true if matches are required to be in-order.</summary>
</member>
<member name="T:Lucene.Net.Search.Spans.SpanWeight">
<summary> Expert-only. Public for use by other weight implementations</summary>
</member>
<member name="T:Lucene.Net.Search.Spans.SpanScorer">
<summary> Public for extension only.</summary>
</member>
<member name="M:Lucene.Net.Search.Spans.SpanScorer.Explain(System.Int32)">
<summary>
This method is no longer an official member of <see cref="T:Lucene.Net.Search.Scorer"/>
but it is needed by SpanWeight to build an explanation.
</summary>
</member>
<member name="M:Lucene.Net.Search.Payloads.PayloadNearQuery.PayloadNearSpanScorer.ProcessPayloads(System.Collections.Generic.ICollection{System.Byte[]},System.Int32,System.Int32)">
<summary> By default, uses the <see cref="T:Lucene.Net.Search.Payloads.PayloadFunction"/> to score the payloads, but
can be overridden to do other things.
</summary>
<param name="payLoads">The payloads
</param>
<param name="start">The start position of the span being scored
</param>
<param name="end">The end position of the span being scored
</param>
<seealso cref="N:Lucene.Net.Search.Spans">
</seealso>
</member>
<member name="T:Lucene.Net.Search.Payloads.PayloadSpanUtil">
<summary> Experimental class to get the set of payloads for most standard Lucene queries.
Operates like Highlighter: the IndexReader should contain only the doc of interest,
so it is best to use MemoryIndex.
<p/>
<font color="#FF0000">
WARNING: The status of the <b>Payloads</b> feature is experimental.
The APIs introduced here might change in the future and will not be
supported anymore in such a case.</font>
</summary>
</member>
<member name="M:Lucene.Net.Search.Payloads.PayloadSpanUtil.#ctor(Lucene.Net.Index.IndexReader)">
<param name="reader">that contains doc with payloads to extract
</param>
</member>
<member name="M:Lucene.Net.Search.Payloads.PayloadSpanUtil.GetPayloadsForQuery(Lucene.Net.Search.Query)">
<summary> Query should be rewritten for wild/fuzzy support.
</summary>
<param name="query">
</param>
<returns> payloads Collection
</returns>
<throws> IOException </throws>
</member>
<member name="T:Lucene.Net.Search.Payloads.PayloadTermQuery">
<summary> This class is very similar to
<see cref="T:Lucene.Net.Search.Spans.SpanTermQuery"/> except that it factors
in the value of the payload located at each of the positions where the
<see cref="T:Lucene.Net.Index.Term"/> occurs.
<p/>
In order to take advantage of this, you must override
<see cref="M:Lucene.Net.Search.Similarity.ScorePayload(System.Int32,System.String,System.Int32,System.Int32,System.Byte[],System.Int32,System.Int32)"/>
which returns 1 by default.
<p/>
Payload scores are aggregated using a pluggable <see cref="T:Lucene.Net.Search.Payloads.PayloadFunction"/>.
</summary>
</member>
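A minimal sketch of constructing such a query with a pluggable payload function (the field and term names are illustrative only; this assumes payloads were written at index time, e.g. by a TokenFilter setting a PayloadAttribute, and that the Similarity's ScorePayload decodes them):

```csharp
using Lucene.Net.Index;
using Lucene.Net.Search.Payloads;

// Score occurrences of "apple" in field "body", aggregating the payloads
// at its positions by taking the maximum.
var query = new PayloadTermQuery(
    new Term("body", "apple"),     // term whose position payloads are scored
    new MaxPayloadFunction());     // pluggable PayloadFunction: take the max
```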
<member name="T:Lucene.Net.Search.Spans.SpanTermQuery">
<summary>Matches spans containing a term. </summary>
</member>
<member name="M:Lucene.Net.Search.Spans.SpanTermQuery.#ctor(Lucene.Net.Index.Term)">
<summary>Construct a SpanTermQuery matching the named term's spans. </summary>
</member>
<member name="P:Lucene.Net.Search.Spans.SpanTermQuery.Term">
<summary>Return the term whose spans are matched. </summary>
</member>
<member name="M:Lucene.Net.Search.Payloads.PayloadTermQuery.PayloadTermWeight.PayloadTermSpanScorer.Score">
<summary> Computes a score as the product of the span score and the payload score. </summary>
<returns> <see cref="M:Lucene.Net.Search.Payloads.PayloadTermQuery.PayloadTermWeight.PayloadTermSpanScorer.GetSpanScore"/> * <see cref="M:Lucene.Net.Search.Payloads.PayloadTermQuery.PayloadTermWeight.PayloadTermSpanScorer.GetPayloadScore"/>
</returns>
<throws> IOException </throws>
</member>
<member name="M:Lucene.Net.Search.Payloads.PayloadTermQuery.PayloadTermWeight.PayloadTermSpanScorer.GetSpanScore">
<summary> Returns the SpanScorer score only.
<p/>
Should not be overridden without good cause!
</summary>
<returns> the score for just the Span part, without the payload
</returns>
<throws> IOException </throws>
<seealso cref="M:Lucene.Net.Search.Payloads.PayloadTermQuery.PayloadTermWeight.PayloadTermSpanScorer.Score">
</seealso>
</member>
<member name="M:Lucene.Net.Search.Payloads.PayloadTermQuery.PayloadTermWeight.PayloadTermSpanScorer.GetPayloadScore">
<summary> The score for the payload
</summary>
<returns> The score, as calculated by
<see cref="M:Lucene.Net.Search.Payloads.PayloadFunction.DocScore(System.Int32,System.String,System.Int32,System.Single)"/>
</returns>
</member>
<member name="T:Lucene.Net.Search.PhrasePositions">
<summary> Position of a term in a document that takes into account the term offset within the phrase. </summary>
</member>
<member name="M:Lucene.Net.Search.PhrasePositions.NextPosition">
<summary> Go to the next location of this term in the current document, and set
<c>position</c> as <c>location - offset</c>, so that a
matching exact phrase is easily identified when all PhrasePositions
have exactly the same <c>position</c>.
</summary>
</member>
<member name="T:Lucene.Net.Search.PhraseQuery">
<summary>A Query that matches documents containing a particular sequence of terms.
A PhraseQuery is built by QueryParser for input like <c>"new york"</c>.
<p/>This query may be combined with other terms or queries with a <see cref="T:Lucene.Net.Search.BooleanQuery"/>.
</summary>
</member>
<member name="M:Lucene.Net.Search.PhraseQuery.#ctor">
<summary>Constructs an empty phrase query. </summary>
</member>
<member name="M:Lucene.Net.Search.PhraseQuery.Add(Lucene.Net.Index.Term)">
<summary> Adds a term to the end of the query phrase.
The relative position of the term is the one immediately after the last term added.
</summary>
</member>
<member name="M:Lucene.Net.Search.PhraseQuery.Add(Lucene.Net.Index.Term,System.Int32)">
<summary> Adds a term to the end of the query phrase.
The relative position of the term within the phrase is specified explicitly.
This allows e.g. phrases with more than one term at the same position
or phrases with gaps (e.g. in connection with stopwords).
</summary>
<param name="term">
</param>
<param name="position">
</param>
</member>
<member name="M:Lucene.Net.Search.PhraseQuery.GetTerms">
<summary>Returns the set of terms in this phrase. </summary>
</member>
<member name="M:Lucene.Net.Search.PhraseQuery.GetPositions">
<summary> Returns the relative positions of terms in this phrase.</summary>
</member>
<member name="M:Lucene.Net.Search.PhraseQuery.ExtractTerms(System.Collections.Generic.ISet{Lucene.Net.Index.Term})">
<seealso cref="M:Lucene.Net.Search.Query.ExtractTerms(System.Collections.Generic.ISet{Lucene.Net.Index.Term})">
</seealso>
</member>
<member name="M:Lucene.Net.Search.PhraseQuery.ToString(System.String)">
<summary>Prints a user-readable version of this query. </summary>
</member>
<member name="M:Lucene.Net.Search.PhraseQuery.Equals(System.Object)">
<summary>Returns true iff <c>o</c> is equal to this. </summary>
</member>
<member name="M:Lucene.Net.Search.PhraseQuery.GetHashCode">
<summary>Returns a hash code value for this object.</summary>
</member>
<member name="P:Lucene.Net.Search.PhraseQuery.Slop">
<summary>Gets or sets the number of other words permitted between words in the query phrase.
If zero, then this is an exact phrase search. For larger values this works
like a <c>WITHIN</c> or <c>NEAR</c> operator.
<p/>The slop is in fact an edit-distance, where the units correspond to
moves of terms in the query phrase out of position. For example, to switch
the order of two words requires two moves (the first move places the words
atop one another), so to permit re-orderings of phrases, the slop must be
at least two.
<p/>More exact matches are scored higher than sloppier matches, thus search
results are sorted by exactness.
<p/>The slop is zero by default, requiring exact matches.
</summary>
</member>
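The slop semantics above can be sketched as follows (hypothetical field and terms, against a Lucene.Net 3.0.x index):

```csharp
using Lucene.Net.Index;
using Lucene.Net.Search;

// Phrase "new york" with a slop of 2: "new york" matches exactly
// (edit distance 0), and "york new" also matches, since swapping the
// two adjacent terms costs two moves.
var phrase = new PhraseQuery();
phrase.Add(new Term("contents", "new"));
phrase.Add(new Term("contents", "york"));
phrase.Slop = 2;
```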
<member name="T:Lucene.Net.Search.PositiveScoresOnlyCollector">
<summary> A <see cref="T:Lucene.Net.Search.Collector"/> implementation which wraps another
<see cref="T:Lucene.Net.Search.Collector"/> and makes sure only documents with
scores > 0 are collected.
</summary>
</member>
<member name="T:Lucene.Net.Search.PrefixFilter">
<summary> A Filter that restricts search results to values that have a matching prefix in a given
field.
</summary>
</member>
<member name="M:Lucene.Net.Search.PrefixFilter.ToString">
<summary>Prints a user-readable version of this query. </summary>
</member>
<member name="T:Lucene.Net.Search.PrefixQuery">
<summary>A Query that matches documents containing terms with a specified prefix. A PrefixQuery
is built by QueryParser for input like <c>app*</c>.
<p/>This query uses the
<see cref="F:Lucene.Net.Search.MultiTermQuery.CONSTANT_SCORE_AUTO_REWRITE_DEFAULT"/>
rewrite method.
</summary>
</member>
<member name="M:Lucene.Net.Search.PrefixQuery.#ctor(Lucene.Net.Index.Term)">
<summary>Constructs a query for terms starting with <c>prefix</c>. </summary>
</member>
<member name="M:Lucene.Net.Search.PrefixQuery.ToString(System.String)">
<summary>Prints a user-readable version of this query. </summary>
</member>
<member name="P:Lucene.Net.Search.PrefixQuery.Prefix">
<summary>Returns the prefix of this query. </summary>
</member>
<member name="T:Lucene.Net.Search.PrefixTermEnum">
<summary> Subclass of FilteredTermEnum for enumerating all terms that match the
specified prefix filter term.
<p/>
Term enumerations are always ordered by Term.compareTo(). Each term in
the enumeration is greater than all that precede it.
</summary>
</member>
<member name="T:Lucene.Net.Search.QueryTermVector">
<summary> Represents the terms of a query as a term/frequency vector.
</summary>
</member>
<member name="M:Lucene.Net.Search.QueryTermVector.#ctor(System.String[])">
<summary> </summary>
<param name="queryTerms">The original list of terms from the query; may contain duplicates
</param>
</member>
<member name="T:Lucene.Net.Search.QueryWrapperFilter">
<summary> Constrains search results to only match those which also match a provided
query.
<p/> This could be used, for example, with a <see cref="T:Lucene.Net.Search.TermRangeQuery"/> on a suitably
formatted date field to implement date filtering. One could re-use a single
QueryFilter that matches, e.g., only documents modified within the last
week. The QueryFilter and TermRangeQuery would only need to be reconstructed
once per day.
</summary>
</member>
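The date-filtering idea above might look like this sketch (hypothetical field name and date format; it assumes dates were indexed as terms whose lexicographic order matches chronological order, e.g. "yyyyMMdd"):

```csharp
using Lucene.Net.Search;

// A reusable filter matching documents modified within a given week.
// Arguments: field, lower term, upper term, includeLower, includeUpper.
Query lastWeek = new TermRangeQuery("modified", "20120101", "20120108",
                                    true, false);
Filter filter = new QueryWrapperFilter(lastWeek);
// searcher.Search(userQuery, filter, 10) would then return only in-range docs,
// and the filter can be reused until the date range needs to change.
```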
<member name="M:Lucene.Net.Search.QueryWrapperFilter.#ctor(Lucene.Net.Search.Query)">
<summary>Constructs a filter which only matches documents matching
<c>query</c>.
</summary>
</member>
<member name="T:Lucene.Net.Search.ReqExclScorer">
<summary>A Scorer for queries with a required subscorer
and an excluding (prohibited) sub DocIdSetIterator.
<br/>
This <c>Scorer</c> implements <see cref="M:Lucene.Net.Search.DocIdSetIterator.Advance(System.Int32)"/>,
and it uses skipTo() on the given scorers.
</summary>
</member>
<member name="M:Lucene.Net.Search.ReqExclScorer.#ctor(Lucene.Net.Search.Scorer,Lucene.Net.Search.DocIdSetIterator)">
<summary>Construct a <c>ReqExclScorer</c>.</summary>
<param name="reqScorer">The scorer that must match.
</param>
<param name="exclDisi">The iterator indicating the documents to be excluded.
</param>
</member>
<member name="M:Lucene.Net.Search.ReqExclScorer.ToNonExcluded">
<summary>Advance to the next non-excluded doc.
<br/>On entry:
<list type="bullet">
<item>reqScorer != null, </item>
<item>exclScorer != null, </item>
<item>reqScorer was advanced once via next() or skipTo()
and reqScorer.doc() may still be excluded.</item>
</list>
Advances reqScorer to a non-excluded required doc, if any.
</summary>
<returns> true iff there is a non-excluded required doc.
</returns>
</member>
<member name="M:Lucene.Net.Search.ReqExclScorer.Score">
<summary>Returns the score of the current document matching the query.
Initially invalid, until <see cref="M:Lucene.Net.Search.ReqExclScorer.NextDoc"/> is called the first time.
</summary>
<returns> The score of the required scorer.
</returns>
</member>
<member name="T:Lucene.Net.Search.ReqOptSumScorer">
<summary>A Scorer for queries with a required part and an optional part.
Delays skipTo() on the optional part until a score() is needed.
<br/>
This <c>Scorer</c> implements <see cref="M:Lucene.Net.Search.DocIdSetIterator.Advance(System.Int32)"/>.
</summary>
</member>
<member name="F:Lucene.Net.Search.ReqOptSumScorer.reqScorer">
<summary>The scorers passed from the constructor.
These are set to null as soon as their next() or skipTo() returns false.
</summary>
</member>
<member name="M:Lucene.Net.Search.ReqOptSumScorer.#ctor(Lucene.Net.Search.Scorer,Lucene.Net.Search.Scorer)">
<summary>Construct a <c>ReqOptScorer</c>.</summary>
<param name="reqScorer">The required scorer. This must match.
</param>
<param name="optScorer">The optional scorer. This is used for scoring only.
</param>
</member>
<member name="M:Lucene.Net.Search.ReqOptSumScorer.Score">
<summary>Returns the score of the current document matching the query.
Initially invalid, until <see cref="M:Lucene.Net.Search.ReqOptSumScorer.NextDoc"/> is called the first time.
</summary>
<returns> The score of the required scorer, eventually increased by the score
of the optional scorer when it also matches the current document.
</returns>
</member>
<member name="T:Lucene.Net.Search.ScoreCachingWrappingScorer">
<summary> A <see cref="T:Lucene.Net.Search.Scorer"/> which wraps another scorer and caches the score of the
current document. Successive calls to <see cref="M:Lucene.Net.Search.ScoreCachingWrappingScorer.Score"/> will return the same
result and will not invoke the wrapped Scorer's score() method, unless the
current document has changed.<br/>
This class might be useful due to the changes done to the <see cref="T:Lucene.Net.Search.Collector"/>
interface, in which the score is not computed for a document by default, only
if the collector requests it. Some collectors may need to use the score in
several places, however all they have in hand is a <see cref="T:Lucene.Net.Search.Scorer"/> object, and
might end up computing the score of a document more than once.
</summary>
</member>
<member name="M:Lucene.Net.Search.ScoreCachingWrappingScorer.#ctor(Lucene.Net.Search.Scorer)">
<summary>Creates a new instance by wrapping the given scorer. </summary>
</member>
<member name="T:Lucene.Net.Search.SingleTermEnum">
<summary>
Subclass of FilteredTermEnum for enumerating a single term.
<p/>
This can be used by <see cref="T:Lucene.Net.Search.MultiTermQuery"/>s that need only visit one term,
but want to preserve MultiTermQuery semantics such as
<see cref="T:Lucene.Net.Search.RewriteMethod"/>.
</summary>
</member>
<member name="M:Lucene.Net.Search.SingleTermEnum.#ctor(Lucene.Net.Index.IndexReader,Lucene.Net.Index.Term)">
<summary>
Creates a new <c>SingleTermEnum</c>.
<p/>
After calling the constructor the enumeration is already pointing to the term,
if it exists.
</summary>
</member>
<member name="M:Lucene.Net.Search.SloppyPhraseScorer.PhraseFreq">
<summary> Score a candidate doc for all slop-valid position-combinations (matches)
encountered while traversing/hopping the PhrasePositions.
<br/> The score contribution of a match depends on the distance:
<br/> - highest score for distance=0 (exact match).
<br/> - score gets lower as distance gets higher.
<br/>Example: for query "a b"~2, a document "x a b a y" can be scored twice:
once for "a b" (distance=0), and once for "b a" (distance=2).
<br/>Possibly not all valid combinations are encountered, because for efficiency
we always propagate the least PhrasePosition. This allows the scorer to be based on
a PriorityQueue and to move forward faster.
As a result, for example, document "a b c b a"
would score differently for queries "a b c"~4 and "c b a"~4, although
they really are equivalent.
Similarly, for doc "a b c b a f g", query "c b"~2
would get the same score as "g f"~2, although "c b"~2 could be matched twice.
We may want to fix this in the future (currently not, for performance reasons).
</summary>
</member>
<member name="M:Lucene.Net.Search.SloppyPhraseScorer.InitPhrasePositions">
<summary> Init PhrasePositions in place.
There is a one-time initialization for this scorer:
<br/>- Put in repeats[] each pp that has another pp with the same position in the doc.
<br/>- Also mark each such pp by setting pp.repeats = true.
<br/>termPositionsDiffer(pp) can later consult repeats[], making that check efficient.
In particular, this allows queries with no repetitions to be scored with no overhead from this computation.
<br/>- Example 1 - query with no repetitions: "ho my"~2
<br/>- Example 2 - query with repetitions: "ho my my"~2
<br/>- Example 3 - query with repetitions: "my ho my"~2
<br/>Per-doc init, when the query has repeats, includes propagating some repeating pp's to avoid false phrase detection.
</summary>
<returns> end (max position), or -1 if any term ran out (i.e. done)
</returns>
<throws> IOException </throws>
</member>
<member name="M:Lucene.Net.Search.SloppyPhraseScorer.TermPositionsDiffer(Lucene.Net.Search.PhrasePositions)">
<summary> We disallow two pp's from having the same TermPosition, thereby verifying that multiple
occurrences of the same word in the query must match different positions in the doc.
</summary>
<returns> null if the positions differ (i.e. valid); otherwise the higher-offset PhrasePositions
of the first two PPs found not to differ.
</returns>
</member>
<member name="T:Lucene.Net.Search.Sort">
<summary> Encapsulates sort criteria for returned hits.
<p/>The fields used to determine sort order must be carefully chosen.
Documents must contain a single term in such a field,
and the value of the term should indicate the document's relative position in
a given sort order. The field must be indexed, but should not be tokenized,
and does not need to be stored (unless you happen to want it back with the
rest of your document data). In other words:
<p/><c>document.Add(new Field("byNumber", x.ToString(), Field.Store.NO, Field.Index.NOT_ANALYZED));</c><p/>
<p/><h3>Valid Types of Values</h3>
<p/>There are four possible kinds of term values which may be put into
sorting fields: Integers, Longs, Floats, or Strings. Unless
<see cref="T:Lucene.Net.Search.SortField">SortField</see> objects are specified, the type of value
in the field is determined by parsing the first term in the field.
<p/>Integer term values should contain only digits and an optional
preceding negative sign. Values must be base 10 and in the range
<c>Integer.MIN_VALUE</c> to <c>Integer.MAX_VALUE</c> inclusive.
Documents which should appear first in the sort
should have low integer values, later documents high values
(i.e. the documents should be numbered <c>1..n</c> where
<c>1</c> is the first and <c>n</c> the last).
<p/>Long term values should contain only digits and an optional
preceding negative sign. Values must be base 10 and in the range
<c>Long.MIN_VALUE</c> to <c>Long.MAX_VALUE</c> inclusive.
Documents which should appear first in the sort
should have low long values, later documents high values.
<p/>Float term values should conform to values accepted by
<see cref="M:System.Single.Parse(System.String)"/> (except that <c>NaN</c>
and <c>Infinity</c> are not supported).
Documents which should appear first in the sort
should have low values, later documents high values.
<p/>String term values can contain any valid String, but should
not be tokenized. The values are sorted according to their
<see cref="T:System.IComparable">natural order</see>. Note that using this type
of term value has higher memory requirements than the numeric
types.
<p/><h3>Object Reuse</h3>
<p/>One of these objects can be
used multiple times and the sort order changed between usages.
<p/>This class is thread safe.
<p/><h3>Memory Usage</h3>
<p/>Sorting uses caches of term values maintained by the
internal HitQueue(s). The cache is static and contains an integer
or float array of length <c>IndexReader.MaxDoc</c> for each field
name for which a sort is performed. In other words, the size of the
cache in bytes is:
<p/><c>4 * IndexReader.MaxDoc * (# of different fields actually used to sort)</c>
<p/>For String fields, the cache is larger: in addition to the
above array, the value of every term in the field is kept in memory.
If there are many unique terms in the field, this could
be quite large.
<p/>Note that the size of the cache is not affected by how many
fields are in the index and <i>might</i> be used to sort - only by
the ones actually used to sort a result set.
<p/>Created: Feb 12, 2004 10:53:57 AM
</summary>
</member>
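A sketch of constructing sorts as described above (the field names are hypothetical; the fields must be indexed and untokenized):

```csharp
using Lucene.Net.Search;

// Single-criterion sort, reversed: highest "byNumber" values first.
var byNumber = new Sort(new SortField("byNumber", SortField.INT, true));

// Successive criteria: by a string field first, ties broken by relevance.
var multi = new Sort(new SortField[] {
    new SortField("category", SortField.STRING),
    SortField.FIELD_SCORE
});
// searcher.Search(query, null, 10, multi) would return hits in this order.
```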
<member name="F:Lucene.Net.Search.Sort.RELEVANCE">
<summary> Represents sorting by computed relevance. Using this sort criteria returns
the same results as calling
<see cref="M:Lucene.Net.Search.Searcher.Search(Lucene.Net.Search.Query,System.Int32)"/> without a sort criteria,
only with slightly more overhead.
</summary>
</member>
<member name="F:Lucene.Net.Search.Sort.INDEXORDER">
<summary>Represents sorting by index order. </summary>
</member>
<member name="M:Lucene.Net.Search.Sort.#ctor">
<summary> Sorts by computed relevance. This is the same sort criteria as calling
<see cref="M:Lucene.Net.Search.Searcher.Search(Lucene.Net.Search.Query,System.Int32)"/> without a sort criteria,
only with slightly more overhead.
</summary>
</member>
<member name="M:Lucene.Net.Search.Sort.#ctor(Lucene.Net.Search.SortField)">
<summary>Sorts by the criteria in the given SortField. </summary>
</member>
<member name="M:Lucene.Net.Search.Sort.#ctor(Lucene.Net.Search.SortField[])">
<summary>Sorts in succession by the criteria in each SortField. </summary>
</member>
<member name="M:Lucene.Net.Search.Sort.SetSort(Lucene.Net.Search.SortField)">
<summary>Sets the sort to the given criteria. </summary>
</member>
<member name="M:Lucene.Net.Search.Sort.SetSort(Lucene.Net.Search.SortField[])">
<summary>Sets the sort to the given criteria in succession. </summary>
</member>
<member name="M:Lucene.Net.Search.Sort.GetSort">
<summary> Representation of the sort criteria.</summary>
<returns> Array of SortField objects used in this sort criteria
</returns>
</member>
<member name="M:Lucene.Net.Search.Sort.Equals(System.Object)">
<summary>Returns true if <c>o</c> is equal to this. </summary>
</member>
<member name="M:Lucene.Net.Search.Sort.GetHashCode">
<summary>Returns a hash code value for this object. </summary>
</member>
<member name="T:Lucene.Net.Search.SortField">
<summary> Stores information about how to sort documents by terms in an individual
field. Fields must be indexed in order to sort by them.
<p/>Created: Feb 11, 2004 1:25:29 PM
</summary>
<seealso cref="T:Lucene.Net.Search.Sort"></seealso>
</member>
<member name="F:Lucene.Net.Search.SortField.SCORE">
<summary>Sort by document score (relevancy). Sort values are Float and higher
values are at the front.
</summary>
</member>
<member name="F:Lucene.Net.Search.SortField.DOC">
<summary>Sort by document number (index order). Sort values are Integer and lower
values are at the front.
</summary>
</member>
<member name="F:Lucene.Net.Search.SortField.STRING">
<summary>Sort using term values as Strings. Sort values are String and lower
values are at the front.
</summary>
</member>
<member name="F:Lucene.Net.Search.SortField.INT">
<summary>Sort using term values as encoded Integers. Sort values are Integer and
lower values are at the front.
</summary>
</member>
<member name="F:Lucene.Net.Search.SortField.FLOAT">
<summary>Sort using term values as encoded Floats. Sort values are Float and
lower values are at the front.
</summary>
</member>
<member name="F:Lucene.Net.Search.SortField.LONG">
<summary>Sort using term values as encoded Longs. Sort values are Long and
lower values are at the front.
</summary>
</member>
<member name="F:Lucene.Net.Search.SortField.DOUBLE">
<summary>Sort using term values as encoded Doubles. Sort values are Double and
lower values are at the front.
</summary>
</member>
<member name="F:Lucene.Net.Search.SortField.SHORT">
<summary>Sort using term values as encoded Shorts. Sort values are Short and
lower values are at the front.
</summary>
</member>
<member name="F:Lucene.Net.Search.SortField.CUSTOM">
<summary>Sort using a custom Comparator. Sort values are any Comparable and
sorting is done according to natural order.
</summary>
</member>
<member name="F:Lucene.Net.Search.SortField.BYTE">
<summary>Sort using term values as encoded Bytes. Sort values are Byte and
lower values are at the front.
</summary>
</member>
<member name="F:Lucene.Net.Search.SortField.STRING_VAL">
<summary>Sort using term values as Strings, but comparing by
value (using String.compareTo) for all comparisons.
This is typically slower than <see cref="F:Lucene.Net.Search.SortField.STRING"/>, which
uses ordinals to do the sorting.
</summary>
</member>
<member name="F:Lucene.Net.Search.SortField.FIELD_SCORE">
<summary>Represents sorting by document score (relevancy). </summary>
</member>
<member name="F:Lucene.Net.Search.SortField.FIELD_DOC">
<summary>Represents sorting by document number (index order). </summary>
</member>
<member name="M:Lucene.Net.Search.SortField.#ctor(System.String,System.Int32)">
<summary>Creates a sort by terms in the given field with the type of term
values explicitly given.
</summary>
<param name="field"> Name of field to sort by. Can be <c>null</c> if
<c>type</c> is SCORE or DOC.
</param>
<param name="type"> Type of values in the terms.
</param>
</member>
<member name="M:Lucene.Net.Search.SortField.#ctor(System.String,System.Int32,System.Boolean)">
<summary>Creates a sort, possibly in reverse, by terms in the given field with the
type of term values explicitly given.
</summary>
<param name="field"> Name of field to sort by. Can be <c>null</c> if
<c>type</c> is SCORE or DOC.
</param>
<param name="type"> Type of values in the terms.
</param>
<param name="reverse">True if natural order should be reversed.
</param>
</member>
<member name="M:Lucene.Net.Search.SortField.#ctor(System.String,Lucene.Net.Search.Parser)">
<summary>Creates a sort by terms in the given field, parsed
to numeric values using a custom <see cref="T:Lucene.Net.Search.Parser"/>.
</summary>
<param name="field"> Name of field to sort by. Must not be null.
</param>
<param name="parser">Instance of a <see cref="T:Lucene.Net.Search.Parser"/>,
which must subclass one of the existing numeric
parsers from <see cref="T:Lucene.Net.Search.FieldCache"/>. Sort type is inferred
by testing which numeric parser the parser subclasses.
</param>
<throws> IllegalArgumentException if the parser fails to
subclass an existing numeric parser, or field is null
</throws>
</member>
<member name="M:Lucene.Net.Search.SortField.#ctor(System.String,Lucene.Net.Search.Parser,System.Boolean)">
<summary>Creates a sort, possibly in reverse, by terms in the given field, parsed
to numeric values using a custom <see cref="T:Lucene.Net.Search.Parser"/>.
</summary>
<param name="field"> Name of field to sort by. Must not be null.
</param>
<param name="parser">Instance of a <see cref="T:Lucene.Net.Search.Parser"/>,
which must subclass one of the existing numeric
parsers from <see cref="T:Lucene.Net.Search.FieldCache"/>. Sort type is inferred
by testing which numeric parser the parser subclasses.
</param>
<param name="reverse">True if natural order should be reversed.
</param>
<throws> IllegalArgumentException if the parser fails to
subclass an existing numeric parser, or field is null
</throws>
</member>
<member name="M:Lucene.Net.Search.SortField.#ctor(System.String,System.Globalization.CultureInfo)">
<summary>Creates a sort by terms in the given field sorted
according to the given locale.
</summary>
<param name="field"> Name of field to sort by, cannot be <c>null</c>.
</param>
<param name="locale">Locale of values in the field.
</param>
</member>
<member name="M:Lucene.Net.Search.SortField.#ctor(System.String,System.Globalization.CultureInfo,System.Boolean)">
<summary>Creates a sort, possibly in reverse, by terms in the given field sorted
according to the given locale.
</summary>
<param name="field"> Name of field to sort by, cannot be <c>null</c>.
</param>
<param name="locale">Locale of values in the field.
</param>
</member>
<member name="M:Lucene.Net.Search.SortField.#ctor(System.String,Lucene.Net.Search.FieldComparatorSource)">
<summary>Creates a sort with a custom comparison function.</summary>
<param name="field">Name of field to sort by; cannot be <c>null</c>.
</param>
<param name="comparator">Returns a comparator for sorting hits.
</param>
</member>
<member name="M:Lucene.Net.Search.SortField.#ctor(System.String,Lucene.Net.Search.FieldComparatorSource,System.Boolean)">
<summary>Creates a sort, possibly in reverse, with a custom comparison function.</summary>
<param name="field">Name of field to sort by; cannot be <c>null</c>.
</param>
<param name="comparator">Returns a comparator for sorting hits.
</param>
<param name="reverse">True if natural order should be reversed.
</param>
</member>
<member name="M:Lucene.Net.Search.SortField.Equals(System.Object)">
<summary>Returns true if <c>o</c> is equal to this. If a
<see cref="T:Lucene.Net.Search.FieldComparatorSource"/> or <see cref="T:Lucene.Net.Search.Parser"/>
was provided, it must properly
implement equals (unless a singleton is always used).
</summary>
</member>
<member name="M:Lucene.Net.Search.SortField.GetHashCode">
<summary>Returns a hash code value for this object. If a
<see cref="T:Lucene.Net.Search.FieldComparatorSource"/> (deprecated) or <see cref="T:Lucene.Net.Search.Parser"/>
was provided, it must properly
implement hashCode (unless a singleton is always
used).
</summary>
</member>
<member name="M:Lucene.Net.Search.SortField.GetComparator(System.Int32,System.Int32)">
<summary>Returns the <see cref="T:Lucene.Net.Search.FieldComparator"/> to use for
sorting.
<b>NOTE:</b> This API is experimental and might change in
incompatible ways in the next release.
</summary>
<param name="numHits">number of top hits the queue will store
</param>
<param name="sortPos">position of this SortField within <see cref="T:Lucene.Net.Search.Sort"/>.
The comparator is primary if sortPos==0,
secondary if sortPos==1, etc. Some comparators can
optimize themselves when they are the primary sort.
</param>
<returns> <see cref="T:Lucene.Net.Search.FieldComparator"/> to use when sorting
</returns>
</member>
<member name="P:Lucene.Net.Search.SortField.Field">
<summary>Returns the name of the field. Could return <c>null</c>
if the sort is by SCORE or DOC.
</summary>
<value> Name of field, possibly <c>null</c>. </value>
</member>
<member name="P:Lucene.Net.Search.SortField.Type">
<summary>Returns the type of contents in the field.</summary>
<value> One of the constants SCORE, DOC, STRING, INT or FLOAT. </value>
</member>
<member name="P:Lucene.Net.Search.SortField.Locale">
<summary>Returns the Locale by which term values are interpreted.
May return <c>null</c> if no Locale was specified.
</summary>
<value> Locale, or <c>null</c>. </value>
</member>
<member name="P:Lucene.Net.Search.SortField.Parser">
<summary>Returns the instance of a <see cref="T:Lucene.Net.Search.FieldCache"/> parser that fits the given sort type.
May return <c>null</c> if no parser was specified; in that case, sorting uses the default parser.
</summary>
<value> An instance of a <see cref="T:Lucene.Net.Search.FieldCache" /> parser, or <c>null</c>. </value>
</member>
<member name="P:Lucene.Net.Search.SortField.Reverse">
<summary>Returns whether the sort should be reversed.</summary>
<value> True if natural order should be reversed. </value>
</member>
<member name="P:Lucene.Net.Search.SortField.ComparatorSource">
<summary>
Returns the <see cref="T:Lucene.Net.Search.FieldComparatorSource"/> used for
custom sorting
</summary>
</member>
<member name="T:Lucene.Net.Search.SpanFilterResult">
<summary> The results of a SpanQueryFilter. Wraps the BitSet and the position information from the SpanQuery
<p/>
NOTE: This API is still experimental and subject to change.
</summary>
</member>
<member name="M:Lucene.Net.Search.SpanFilterResult.#ctor(Lucene.Net.Search.DocIdSet,System.Collections.Generic.IList{Lucene.Net.Search.SpanFilterResult.PositionInfo})">
<summary> </summary>
<param name="docIdSet">The DocIdSet for the Filter
</param>
<param name="positions">A List of <see cref="T:Lucene.Net.Search.SpanFilterResult.PositionInfo"/> objects
</param>
</member>
<member name="P:Lucene.Net.Search.SpanFilterResult.Positions">
<summary> The first entry in the array corresponds to the first "on" bit.
Entries are in increasing document order.
</summary>
<value> A List of PositionInfo objects </value>
</member>
<member name="P:Lucene.Net.Search.SpanFilterResult.DocIdSet">
<summary>Returns the docIdSet </summary>
</member>
<member name="P:Lucene.Net.Search.SpanFilterResult.PositionInfo.Positions">
<summary> </summary>
<value> A List of <see cref="T:Lucene.Net.Search.SpanFilterResult.StartEnd" /> objects </value>
</member>
<member name="P:Lucene.Net.Search.SpanFilterResult.StartEnd.End">
<summary> </summary>
<value> The end position of this match </value>
</member>
<member name="P:Lucene.Net.Search.SpanFilterResult.StartEnd.Start">
<summary> The Start position</summary>
<value> The start position of this match </value>
</member>
<member name="T:Lucene.Net.Search.SpanQueryFilter">
<summary> Constrains search results to only match those which also match a provided
query. Also provides position information about where each document matches,
at the cost of extra space compared with a <see cref="T:Lucene.Net.Search.QueryWrapperFilter"/>:
the position information for each matching document is stored.
<p/>
This filter does not cache. See the <see cref="T:Lucene.Net.Search.CachingSpanFilter"/> for a wrapper that
caches.
</summary>
</member>
<member name="M:Lucene.Net.Search.SpanQueryFilter.#ctor(Lucene.Net.Search.Spans.SpanQuery)">
<summary>Constructs a filter which only matches documents matching
<c>query</c>.
</summary>
<param name="query">The <see cref="T:Lucene.Net.Search.Spans.SpanQuery"/> to use as the basis for the Filter.
</param>
</member>
<member name="T:Lucene.Net.Search.Spans.FieldMaskingSpanQuery">
<summary> <p/>Wrapper to allow <see cref="T:Lucene.Net.Search.Spans.SpanQuery"/> objects to participate in composite
single-field SpanQueries by 'lying' about their search field. That is,
the masked SpanQuery will function as normal,
but <see cref="P:Lucene.Net.Search.Spans.SpanQuery.Field"/> simply hands back the value supplied
in this class's constructor.<p/>
<p/>This can be used to support Queries like <see cref="T:Lucene.Net.Search.Spans.SpanNearQuery"/> or
<see cref="T:Lucene.Net.Search.Spans.SpanOrQuery"/> across different fields, which is not ordinarily
permitted.<p/>
<p/>This can be useful for denormalized relational data: for example, when
indexing a document with conceptually many 'children': <p/>
<pre>
teacherid: 1
studentfirstname: james
studentsurname: jones
teacherid: 2
studentfirstname: james
studentsurname: smith
studentfirstname: sally
studentsurname: jones
</pre>
<p/>a SpanNearQuery with a slop of 0 can be applied across two
<see cref="T:Lucene.Net.Search.Spans.SpanTermQuery"/> objects as follows:
<code>
SpanQuery q1 = new SpanTermQuery(new Term("studentfirstname", "james"));
SpanQuery q2 = new SpanTermQuery(new Term("studentsurname", "jones"));
SpanQuery q2m = new FieldMaskingSpanQuery(q2, "studentfirstname");
Query q = new SpanNearQuery(new SpanQuery[]{q1, q2m}, -1, false);
</code>
to search for 'studentfirstname:james studentsurname:jones' and find
teacherid 1 without matching teacherid 2 (which has a 'james' in position 0
and 'jones' in position 1). <p/>
<p/>Note: as <see cref="P:Lucene.Net.Search.Spans.FieldMaskingSpanQuery.Field"/> returns the masked field, scoring will be
done using the norms of the field name supplied. This may lead to unexpected
scoring behaviour.<p/>
</summary>
</member>
<member name="T:Lucene.Net.Search.Spans.NearSpansOrdered">
<summary>A Spans that is formed from the ordered subspans of a SpanNearQuery
where the subspans do not overlap and have a maximum slop between them.
<p/>
The formed spans only contain minimum slop matches.<br/>
The matching slop is computed from the distance(s) between
the non overlapping matching Spans.<br/>
Successive matches are always formed from the successive Spans
of the SpanNearQuery.
<p/>
The formed spans may contain overlaps when the slop is at least 1.
For example, when querying using
<c>t1 t2 t3</c>
with slop at least 1, the fragment:
<c>t1 t2 t1 t3 t2 t3</c>
matches twice:
<c>t1 t2 .. t3 </c>
<c> t1 .. t2 t3</c>
Expert:
Only public for subclassing. Most implementations should not need this class
</summary>
</member>
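The matching-slop idea above can be sketched with a small stand-alone helper: for ordered, non-overlapping subspans, the slop of a candidate match is the total distance between the end of one subspan and the start of the next. This is a hypothetical illustration of the documented behaviour, not the actual NearSpansOrdered internals.

```java
// Sketch of the matching slop described above: the total gap between
// consecutive ordered, non-overlapping subspans. Hypothetical helper,
// not the real NearSpansOrdered implementation.
final class OrderedSlopSketch {
    // starts[i] / ends[i] describe subspan i, already in query order.
    static int matchSlop(int[] starts, int[] ends) {
        int slop = 0;
        for (int i = 0; starts.length - 1 > i; i++) {
            slop += starts[i + 1] - ends[i];
        }
        return slop;
    }
}
```

For the fragment above (positions 0..5), the match on t1@0, t2@1, t3@3 has slop (1 - 1) + (3 - 2) = 1, and the match on t1@2, t2@4, t3@5 likewise has slop 1, which is why both are found once the allowed slop is at least 1.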
<member name="T:Lucene.Net.Search.Spans.Spans">
<summary>Expert: an enumeration of span matches. Used to implement span searching.
Each span represents a range of term positions within a document. Matches
are enumerated in order, by increasing document number, within that by
increasing start position and finally by increasing end position.
</summary>
</member>
<member name="M:Lucene.Net.Search.Spans.Spans.Next">
<summary>Move to the next match, returning true iff any such exists. </summary>
</member>
<member name="M:Lucene.Net.Search.Spans.Spans.SkipTo(System.Int32)">
<summary>Skips to the first match beyond the current, whose document number is
greater than or equal to <i>target</i>. <p/>Returns true iff there is such
a match. <p/>Behaves as if written: <code>
boolean skipTo(int target) {
do {
if (!next())
return false;
} while (target > doc());
return true;
}
</code>
Most implementations are considerably more efficient than that.
</summary>
</member>
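The fallback loop documented above can be exercised against a plain sorted array of document numbers. SimpleSpansSketch is a hypothetical stand-in for a real Spans implementation, assuming only the next()/doc() contract described here.

```java
// Minimal sketch of the skipTo contract described above; just the
// documented fallback loop over a sorted list of matching doc numbers,
// not the (more efficient) real implementations.
final class SimpleSpansSketch {
    private final int[] docs; // matching doc numbers, ascending
    private int pos = -1;     // before the first match, i.e. "initially invalid"

    SimpleSpansSketch(int[] docs) { this.docs = docs; }

    boolean next() {
        pos++;
        return docs.length > pos;
    }

    int doc() { return docs[pos]; }

    // Behaves exactly as the documented fallback: advance with next()
    // until doc() >= target, or report that no such match exists.
    boolean skipTo(int target) {
        do {
            if (!next()) return false;
        } while (target > doc());
        return true;
    }
}
```

Skipping to target 5 over documents {1, 3, 7, 9} lands on document 7, the first match at or beyond the target.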
<member name="M:Lucene.Net.Search.Spans.Spans.Doc">
<summary>Returns the document number of the current match. Initially invalid. </summary>
</member>
<member name="M:Lucene.Net.Search.Spans.Spans.Start">
<summary>Returns the start position of the current match. Initially invalid. </summary>
</member>
<member name="M:Lucene.Net.Search.Spans.Spans.End">
<summary>Returns the end position of the current match. Initially invalid. </summary>
</member>
<member name="M:Lucene.Net.Search.Spans.Spans.GetPayload">
<summary> Returns the payload data for the current span.
This is invalid until <see cref="M:Lucene.Net.Search.Spans.Spans.Next"/> is called for
the first time.
This method must not be called more than once after each call
of <see cref="M:Lucene.Net.Search.Spans.Spans.Next"/>. However, most payloads are loaded lazily,
so if the payload data for the current position is not needed,
this method may not be called at all for performance reasons. An ordered
SpanQuery does not lazy load, so if you have payloads in your index and
you do not want ordered SpanNearQuerys to collect payloads, you can
disable collection with a constructor option.<br/>
Note that the return type is a collection, thus the ordering should not be relied upon.
<br/>
<p/><font color="#FF0000">
WARNING: The status of the <b>Payloads</b> feature is experimental.
The APIs introduced here might change in the future and will not be
supported anymore in such a case.</font><p/>
</summary>
<returns> a List of byte arrays containing the data of this payload, or null if isPayloadAvailable is false </returns>
<throws> IOException </throws>
</member>
<member name="M:Lucene.Net.Search.Spans.Spans.IsPayloadAvailable">
<summary> Checks if a payload can be loaded at this position.
<p/>
Payloads can only be loaded once per call to
<see cref="M:Lucene.Net.Search.Spans.Spans.Next"/>.
</summary>
<returns> true if there is a payload available at this position that can be loaded </returns>
</member>
<member name="F:Lucene.Net.Search.Spans.NearSpansOrdered.subSpans">
<summary>The spans in the same order as the SpanNearQuery </summary>
</member>
<member name="F:Lucene.Net.Search.Spans.NearSpansOrdered.inSameDoc">
<summary>Indicates that all subSpans have the same doc() </summary>
</member>
<member name="M:Lucene.Net.Search.Spans.NearSpansOrdered.AdvanceAfterOrdered">
<summary>Advances the subSpans to just after an ordered match with a minimum slop
that is smaller than the slop allowed by the SpanNearQuery.
</summary>
<returns> true iff there is such a match.
</returns>
</member>
<member name="M:Lucene.Net.Search.Spans.NearSpansOrdered.ToSameDoc">
<summary>Advance the subSpans to the same document </summary>
</member>
<member name="M:Lucene.Net.Search.Spans.NearSpansOrdered.DocSpansOrdered(Lucene.Net.Search.Spans.Spans,Lucene.Net.Search.Spans.Spans)">
<summary>Check whether two Spans in the same document are ordered.</summary>
<param name="spans1">
</param>
<param name="spans2">
</param>
<returns> true iff spans1 starts before spans2
or the spans start at the same position,
and spans1 ends before spans2.
</returns>
</member>
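The ordering rule stated above can be sketched with raw start/end offsets instead of Spans objects (a hypothetical helper, mirroring the int-based DocSpansOrdered overload that follows).

```java
// Sketch of the ordering check described above: true iff span 1 starts
// before span 2, or they start at the same position and span 1 ends
// before span 2 ends. Hypothetical helper, not the Lucene.Net method.
final class SpanOrderSketch {
    static boolean ordered(int start1, int end1, int start2, int end2) {
        if (start2 > start1) return true;
        if (start1 == start2) return end2 > end1;
        return false;
    }
}
```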
<member name="M:Lucene.Net.Search.Spans.NearSpansOrdered.DocSpansOrdered(System.Int32,System.Int32,System.Int32,System.Int32)">
<summary>Like <see cref="M:Lucene.Net.Search.Spans.NearSpansOrdered.DocSpansOrdered(Lucene.Net.Search.Spans.Spans,Lucene.Net.Search.Spans.Spans)"/>, but use the spans
starts and ends as parameters.
</summary>
</member>
<member name="M:Lucene.Net.Search.Spans.NearSpansOrdered.StretchToOrder">
<summary>Order the subSpans within the same document by advancing all later spans
after the previous one.
</summary>
</member>
<member name="M:Lucene.Net.Search.Spans.NearSpansOrdered.ShrinkToAfterShortestMatch">
<summary>The subSpans are ordered in the same doc, so there is a possible match.
Compute the slop while making the match as short as possible by advancing
all subSpans except the last one in reverse order.
</summary>
</member>
<member name="T:Lucene.Net.Search.Spans.NearSpansUnordered">
<summary> Similar to <see cref="T:Lucene.Net.Search.Spans.NearSpansOrdered"/>, but for the unordered case.
Expert:
Only public for subclassing. Most implementations should not need this class
</summary>
</member>
<member name="M:Lucene.Net.Search.Spans.NearSpansUnordered.GetPayload">
<summary> WARNING: The List is not necessarily in order of the positions.</summary>
<returns> Collection of <c>byte[]</c> payloads </returns>
<throws> IOException </throws>
</member>
<member name="T:Lucene.Net.Search.Spans.NearSpansUnordered.SpansCell">
<summary>Wraps a Spans, and can be used to form a linked list. </summary>
</member>
<member name="T:Lucene.Net.Search.Spans.SpanFirstQuery">
<summary>Matches spans near the beginning of a field. </summary>
</member>
<member name="M:Lucene.Net.Search.Spans.SpanFirstQuery.#ctor(Lucene.Net.Search.Spans.SpanQuery,System.Int32)">
<summary>Construct a SpanFirstQuery matching spans in <c>match</c> whose end
position is less than or equal to <c>end</c>.
</summary>
</member>
<member name="P:Lucene.Net.Search.Spans.SpanFirstQuery.Match">
<summary>Return the SpanQuery whose matches are filtered. </summary>
</member>
<member name="P:Lucene.Net.Search.Spans.SpanFirstQuery.End">
<summary>Return the maximum end position permitted in a match. </summary>
</member>
<member name="T:Lucene.Net.Search.Spans.SpanNotQuery">
<summary>Removes matches which overlap with another SpanQuery. </summary>
</member>
<member name="M:Lucene.Net.Search.Spans.SpanNotQuery.#ctor(Lucene.Net.Search.Spans.SpanQuery,Lucene.Net.Search.Spans.SpanQuery)">
<summary>Construct a SpanNotQuery matching spans from <c>include</c> which
have no overlap with spans from <c>exclude</c>.
</summary>
</member>
<member name="M:Lucene.Net.Search.Spans.SpanNotQuery.Equals(System.Object)">
<summary>Returns true iff <c>o</c> is equal to this. </summary>
</member>
<member name="P:Lucene.Net.Search.Spans.SpanNotQuery.Include">
<summary>Return the SpanQuery whose matches are filtered. </summary>
</member>
<member name="P:Lucene.Net.Search.Spans.SpanNotQuery.Exclude">
<summary>Return the SpanQuery whose matches must not overlap those returned. </summary>
</member>
<member name="T:Lucene.Net.Search.Spans.SpanOrQuery">
<summary>Matches the union of its clauses.</summary>
</member>
<member name="M:Lucene.Net.Search.Spans.SpanOrQuery.#ctor(Lucene.Net.Search.Spans.SpanQuery[])">
<summary>Construct a SpanOrQuery merging the provided clauses. </summary>
</member>
<member name="M:Lucene.Net.Search.Spans.SpanOrQuery.GetClauses">
<summary>Return the clauses whose spans are matched. </summary>
</member>
<member name="T:Lucene.Net.Search.Spans.TermSpans">
<summary> Expert:
Public for extension only
</summary>
</member>
<member name="T:Lucene.Net.Search.TermQuery">
<summary>A Query that matches documents containing a term.
This may be combined with other terms with a <see cref="T:Lucene.Net.Search.BooleanQuery"/>.
</summary>
</member>
<member name="M:Lucene.Net.Search.TermQuery.#ctor(Lucene.Net.Index.Term)">
<summary>Constructs a query for the term <c>t</c>. </summary>
</member>
<member name="M:Lucene.Net.Search.TermQuery.ToString(System.String)">
<summary>Prints a user-readable version of this query. </summary>
</member>
<member name="M:Lucene.Net.Search.TermQuery.Equals(System.Object)">
<summary>Returns true iff <c>o</c> is equal to this. </summary>
</member>
<member name="M:Lucene.Net.Search.TermQuery.GetHashCode">
<summary>Returns a hash code value for this object.</summary>
</member>
<member name="P:Lucene.Net.Search.TermQuery.Term">
<summary>Returns the term of this query. </summary>
</member>
<member name="T:Lucene.Net.Search.TermRangeFilter">
<summary> A Filter that restricts search results to a range of values in a given
field.
<p/>This filter matches the documents looking for terms that fall into the
supplied range according to <see cref="M:System.String.CompareTo(System.String)"/>. It is not intended
for numerical ranges, use <see cref="T:Lucene.Net.Search.NumericRangeFilter`1"/> instead.
<p/>If you construct a large number of range filters with different ranges but on the
same field, <see cref="T:Lucene.Net.Search.FieldCacheRangeFilter"/> may have significantly better performance.
</summary>
<since> 2.9
</since>
</member>
<member name="M:Lucene.Net.Search.TermRangeFilter.#ctor(System.String,System.String,System.String,System.Boolean,System.Boolean)">
<param name="fieldName">The field this range applies to
</param>
<param name="lowerTerm">The lower bound on this range
</param>
<param name="upperTerm">The upper bound on this range
</param>
<param name="includeLower">Does this range include the lower bound?
</param>
<param name="includeUpper">Does this range include the upper bound?
</param>
<throws> IllegalArgumentException if both terms are null, or if lowerTerm
is null and includeLower is true (similar for upperTerm and includeUpper)
</throws>
</member>
<member name="M:Lucene.Net.Search.TermRangeFilter.#ctor(System.String,System.String,System.String,System.Boolean,System.Boolean,System.Globalization.CompareInfo)">
<summary> <strong>WARNING:</strong> Using this constructor and supplying a non-null
value in the <c>collator</c> parameter will cause every single
index Term in the Field referenced by lowerTerm and/or upperTerm to be
examined. Depending on the number of index Terms in this Field, the
operation could be very slow.
</summary>
<param name="fieldName">The field this range applies to
</param>
<param name="lowerTerm">The lower bound on this range
</param>
<param name="upperTerm">The upper bound on this range
</param>
<param name="includeLower">Does this range include the lower bound?
</param>
<param name="includeUpper">Does this range include the upper bound?
</param>
<param name="collator">The collator to use when determining range inclusion; set
to null to use Unicode code point ordering instead of collation.
</param>
<throws> IllegalArgumentException if both terms are null, or if lowerTerm
is null and includeLower is true (similar for upperTerm and includeUpper)
</throws>
</member>
<member name="M:Lucene.Net.Search.TermRangeFilter.Less(System.String,System.String)">
<summary> Constructs a filter for field <c>fieldName</c> matching
less than or equal to <c>upperTerm</c>.
</summary>
</member>
<member name="M:Lucene.Net.Search.TermRangeFilter.More(System.String,System.String)">
<summary> Constructs a filter for field <c>fieldName</c> matching
greater than or equal to <c>lowerTerm</c>.
</summary>
</member>
<member name="P:Lucene.Net.Search.TermRangeFilter.Field">
<summary>Returns the field name for this filter </summary>
</member>
<member name="P:Lucene.Net.Search.TermRangeFilter.LowerTerm">
<summary>Returns the lower value of this range filter </summary>
</member>
<member name="P:Lucene.Net.Search.TermRangeFilter.UpperTerm">
<summary>Returns the upper value of this range filter </summary>
</member>
<member name="P:Lucene.Net.Search.TermRangeFilter.IncludesLower">
<summary>Returns <c>true</c> if the lower endpoint is inclusive </summary>
</member>
<member name="P:Lucene.Net.Search.TermRangeFilter.IncludesUpper">
<summary>Returns <c>true</c> if the upper endpoint is inclusive </summary>
</member>
<member name="P:Lucene.Net.Search.TermRangeFilter.Collator">
<summary>Returns the collator used to determine range inclusion, if any. </summary>
</member>
<member name="T:Lucene.Net.Search.TermRangeQuery">
<summary> A Query that matches documents within an exclusive range of terms.
<p/>This query matches the documents looking for terms that fall into the
supplied range according to <see cref="M:System.String.CompareTo(System.String)"/>. It is not intended
for numerical ranges, use <see cref="T:Lucene.Net.Search.NumericRangeQuery`1"/> instead.
<p/>This query uses the <see cref="F:Lucene.Net.Search.MultiTermQuery.CONSTANT_SCORE_AUTO_REWRITE_DEFAULT"/>
rewrite method.
</summary>
<since> 2.9
</since>
</member>
<member name="M:Lucene.Net.Search.TermRangeQuery.#ctor(System.String,System.String,System.String,System.Boolean,System.Boolean)">
<summary> Constructs a query selecting all terms greater/equal than <c>lowerTerm</c>
but less/equal than <c>upperTerm</c>.
<p/>
If an endpoint is null, it is said
to be "open". Either or both endpoints may be open. Open endpoints may not
be exclusive (you can't select all but the first or last term without
explicitly specifying the term to exclude.)
</summary>
<param name="field">The field that holds both lower and upper terms.
</param>
<param name="lowerTerm">The term text at the lower end of the range
</param>
<param name="upperTerm">The term text at the upper end of the range
</param>
<param name="includeLower">If true, the <c>lowerTerm</c> is
included in the range.
</param>
<param name="includeUpper">If true, the <c>upperTerm</c> is
included in the range.
</param>
</member>
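The endpoint semantics described above (null bounds are "open", and includeLower/includeUpper decide whether an equal term still falls in the range) can be sketched with plain String comparison, as the non-collator constructor uses. TermRangeSketch is a hypothetical helper, not the query's actual term enumeration.

```java
// Sketch of the documented range semantics: null endpoints are open,
// and the include flags control whether an exactly-equal term matches.
// Hypothetical helper using String.compareTo, as the non-collator
// constructor does; not the real TermRangeQuery machinery.
final class TermRangeSketch {
    static boolean inRange(String term, String lower, String upper,
                           boolean includeLower, boolean includeUpper) {
        if (lower != null) {
            int cmp = term.compareTo(lower);
            if (0 > cmp) return false;                    // below the range
            if (cmp == 0) { if (!includeLower) return false; }
        }
        if (upper != null) {
            int cmp = term.compareTo(upper);
            if (cmp > 0) return false;                    // above the range
            if (cmp == 0) { if (!includeUpper) return false; }
        }
        return true;
    }
}
```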
<member name="M:Lucene.Net.Search.TermRangeQuery.#ctor(System.String,System.String,System.String,System.Boolean,System.Boolean,System.Globalization.CompareInfo)">
<summary>Constructs a query selecting all terms greater/equal than
<c>lowerTerm</c> but less/equal than <c>upperTerm</c>.
<p/>
If an endpoint is null, it is said
to be "open". Either or both endpoints may be open. Open endpoints may not
be exclusive (you can't select all but the first or last term without
explicitly specifying the term to exclude.)
<p/>
If <c>collator</c> is not null, it will be used to decide whether
index terms are within the given range, rather than using the Unicode code
point order in which index terms are stored.
<p/>
<strong>WARNING:</strong> Using this constructor and supplying a non-null
value in the <c>collator</c> parameter will cause every single
index Term in the Field referenced by lowerTerm and/or upperTerm to be
examined. Depending on the number of index Terms in this Field, the
operation could be very slow.
</summary>
<param name="field"></param>
<param name="lowerTerm">The Term text at the lower end of the range
</param>
<param name="upperTerm">The Term text at the upper end of the range
</param>
<param name="includeLower">If true, the <c>lowerTerm</c> is
included in the range.
</param>
<param name="includeUpper">If true, the <c>upperTerm</c> is
included in the range.
</param>
<param name="collator">The collator to use to collate index Terms, to determine
their membership in the range bounded by <c>lowerTerm</c> and
<c>upperTerm</c>.
</param>
</member>
<member name="M:Lucene.Net.Search.TermRangeQuery.ToString(System.String)">
<summary>Prints a user-readable version of this query. </summary>
</member>
<member name="P:Lucene.Net.Search.TermRangeQuery.Field">
<summary>Returns the field name for this query </summary>
</member>
<member name="P:Lucene.Net.Search.TermRangeQuery.LowerTerm">
<summary>Returns the lower value of this range query </summary>
</member>
<member name="P:Lucene.Net.Search.TermRangeQuery.UpperTerm">
<summary>Returns the upper value of this range query </summary>
</member>
<member name="P:Lucene.Net.Search.TermRangeQuery.IncludesLower">
<summary>Returns <c>true</c> if the lower endpoint is inclusive </summary>
</member>
<member name="P:Lucene.Net.Search.TermRangeQuery.IncludesUpper">
<summary>Returns <c>true</c> if the upper endpoint is inclusive </summary>
</member>
<member name="P:Lucene.Net.Search.TermRangeQuery.Collator">
<summary>Returns the collator used to determine range inclusion, if any. </summary>
</member>
<member name="T:Lucene.Net.Search.TermRangeTermEnum">
<summary> Subclass of FilteredTermEnum for enumerating all terms that match the
specified range parameters.
<p/>
Term enumerations are always ordered by Term.compareTo(). Each term in
the enumeration is greater than all that precede it.
</summary>
<since> 2.9
</since>
</member>
<member name="M:Lucene.Net.Search.TermRangeTermEnum.#ctor(Lucene.Net.Index.IndexReader,System.String,System.String,System.String,System.Boolean,System.Boolean,System.Globalization.CompareInfo)">
<summary> Enumerates all terms greater/equal than <c>lowerTerm</c>
but less/equal than <c>upperTerm</c>.
If an endpoint is null, it is said to be "open". Either or both
endpoints may be open. Open endpoints may not be exclusive
(you can't select all but the first or last term without
explicitly specifying the term to exclude.)
</summary>
<param name="reader">
</param>
<param name="field">An interned field that holds both lower and upper terms.
</param>
<param name="lowerTermText">The term text at the lower end of the range
</param>
<param name="upperTermText">The term text at the upper end of the range
</param>
<param name="includeLower">If true, the <c>lowerTerm</c> is included in the range.
</param>
<param name="includeUpper">If true, the <c>upperTerm</c> is included in the range.
</param>
<param name="collator">The collator to use to collate index Terms, to determine their
membership in the range bounded by <c>lowerTerm</c> and
<c>upperTerm</c>.
</param>
<throws> IOException </throws>
</member>
<member name="T:Lucene.Net.Search.TermScorer">
<summary>Expert: A <c>Scorer</c> for documents matching a <c>Term</c>.</summary>
</member>
<member name="M:Lucene.Net.Search.TermScorer.#ctor(Lucene.Net.Search.Weight,Lucene.Net.Index.TermDocs,Lucene.Net.Search.Similarity,System.Byte[])">
<summary> Construct a <c>TermScorer</c>.
</summary>
<param name="weight">The weight of the <c>Term</c> in the query.
</param>
<param name="td">An iterator over the documents matching the <c>Term</c>.
</param>
<param name="similarity">The <c>Similarity</c> implementation to be used for score
computations.
</param>
<param name="norms">The field norms of the document fields for the <c>Term</c>.
</param>
</member>
<member name="M:Lucene.Net.Search.TermScorer.NextDoc">
<summary> Advances to the next document matching the query. <br/>
The iterator over the matching documents is buffered using
<see cref="M:Lucene.Net.Index.TermDocs.Read(System.Int32[],System.Int32[])"/>.
</summary>
<returns> the document matching the query or -1 if there are no more documents.
</returns>
</member>
<member name="M:Lucene.Net.Search.TermScorer.Advance(System.Int32)">
<summary> Advances to the first match beyond the current whose document number is
greater than or equal to a given target. <br/>
The implementation uses <see cref="M:Lucene.Net.Index.TermDocs.SkipTo(System.Int32)"/>.
</summary>
<param name="target">The target document number.
</param>
<returns> the matching document or -1 if none exist.
</returns>
</member>
<member name="M:Lucene.Net.Search.TermScorer.ToString">
<summary>Returns a string representation of this <c>TermScorer</c>. </summary>
</member>
<member name="T:Lucene.Net.Search.TimeLimitingCollector">
<summary> The <see cref="T:Lucene.Net.Search.TimeLimitingCollector"/> is used to timeout search requests that
take longer than the maximum allowed search time limit. After this time is
exceeded, the search thread is stopped by throwing a
<see cref="T:Lucene.Net.Search.TimeLimitingCollector.TimeExceededException"/>.
</summary>
</member>
<member name="F:Lucene.Net.Search.TimeLimitingCollector.DEFAULT_RESOLUTION">
<summary> Default timer resolution.</summary>
<seealso cref="P:Lucene.Net.Search.TimeLimitingCollector.Resolution">
</seealso>
</member>
<member name="F:Lucene.Net.Search.TimeLimitingCollector.DEFAULT_GREEDY">
<summary> Default for <see cref="P:Lucene.Net.Search.TimeLimitingCollector.IsGreedy"/>.</summary>
<seealso cref="P:Lucene.Net.Search.TimeLimitingCollector.IsGreedy">
</seealso>
</member>
<member name="M:Lucene.Net.Search.TimeLimitingCollector.#ctor(Lucene.Net.Search.Collector,System.Int64)">
<summary> Create a TimeLimitingCollector wrapper over another <see cref="T:Lucene.Net.Search.Collector"/> with a specified timeout.</summary>
<param name="collector">the wrapped <see cref="T:Lucene.Net.Search.Collector"/>
</param>
<param name="timeAllowed">max time allowed for collecting hits after which <see cref="T:Lucene.Net.Search.TimeLimitingCollector.TimeExceededException"/> is thrown
</param>
</member>
<member name="M:Lucene.Net.Search.TimeLimitingCollector.Collect(System.Int32)">
<summary> Calls <see cref="M:Lucene.Net.Search.Collector.Collect(System.Int32)"/> on the decorated <see cref="T:Lucene.Net.Search.Collector"/>
unless the allowed time has passed, in which case it throws an exception.
</summary>
<throws> TimeExceededException if the allowed time has been exceeded.
</throws>
</member>
<member name="P:Lucene.Net.Search.TimeLimitingCollector.Resolution">
<summary>
Gets or sets the timer resolution.
The default timer resolution is 20 milliseconds.
This means that a search required to take no longer than
800 milliseconds may be stopped after 780 to 820 milliseconds.
<br/>Note that:
<list type="bullet">
<item>Finer (smaller) resolution is more accurate but less efficient.</item>
<item>Setting resolution to less than 5 milliseconds will be silently modified to 5 milliseconds.</item>
<item>Setting the resolution smaller than the current resolution might take effect only after
the current resolution interval elapses. (If the current resolution of 20 milliseconds is
changed to 5 milliseconds, the change can take up to 20 milliseconds to take effect.)</item>
</list>
</summary>
</member>
<member name="P:Lucene.Net.Search.TimeLimitingCollector.IsGreedy">
<summary> Checks if this time-limited collector is greedy in collecting the last hit.
A non-greedy collector, upon a timeout, throws a <see cref="T:Lucene.Net.Search.TimeLimitingCollector.TimeExceededException"/>
without allowing the wrapped collector to collect the current doc. A greedy one
first allows the wrapped hit collector to collect the current doc, and only then
throws a <see cref="T:Lucene.Net.Search.TimeLimitingCollector.TimeExceededException"/>.
</summary>
</member>
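The greedy/non-greedy distinction above can be sketched with the wrapped collector reduced to a counter and the clock reduced to method arguments; GreedySketch and its fields are hypothetical stand-ins for Collector and the timer thread, not Lucene.Net types.

```java
// Sketch of the greedy/non-greedy behaviour described above. Returning
// false stands in for throwing TimeExceededException; a greedy collector
// still takes the current doc before timing out, a non-greedy one does not.
final class GreedySketch {
    int collected = 0; // hits the wrapped collector has seen

    boolean collect(long elapsedMs, long allowedMs, boolean greedy) {
        if (elapsedMs > allowedMs) {
            if (greedy) collected++; // greedy: collect current doc first
            return false;            // then "throw"
        }
        collected++;
        return true;
    }
}
```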
<member name="M:Lucene.Net.Search.TimeLimitingCollector.TimerThread.#ctor">
<summary> TimerThread provides a pseudo-clock service to all searching
threads, so that they can count elapsed time with less overhead
than repeatedly querying the system clock. A single
thread should be created to be used for all searches.
</summary>
</member>
<member name="P:Lucene.Net.Search.TimeLimitingCollector.TimerThread.Milliseconds">
<summary> Get the timer value in milliseconds.</summary>
</member>
<member name="T:Lucene.Net.Search.TimeLimitingCollector.TimeExceededException">
<summary>Thrown when elapsed search time exceeds allowed search time. </summary>
</member>
<member name="P:Lucene.Net.Search.TimeLimitingCollector.TimeExceededException.TimeAllowed">
<summary>Returns allowed time (milliseconds). </summary>
</member>
<member name="P:Lucene.Net.Search.TimeLimitingCollector.TimeExceededException.TimeElapsed">
<summary>Returns elapsed time (milliseconds). </summary>
</member>
<member name="P:Lucene.Net.Search.TimeLimitingCollector.TimeExceededException.LastDocCollected">
<summary>Returns the last doc (absolute doc id) that was collected when the search time was exceeded. </summary>
</member>
<member name="T:Lucene.Net.Search.TopDocs">
<summary> Represents hits returned by <see cref="M:Lucene.Net.Search.Searcher.Search(Lucene.Net.Search.Query,Lucene.Net.Search.Filter,System.Int32)"/>
and <see cref="M:Lucene.Net.Search.Searcher.Search(Lucene.Net.Search.Query,System.Int32)"/>
</summary>
</member>
<member name="M:Lucene.Net.Search.TopDocs.#ctor(System.Int32,Lucene.Net.Search.ScoreDoc[])">
<summary>Constructs a TopDocs with a default maxScore=Float.NaN. </summary>
</member>
<member name="M:Lucene.Net.Search.TopDocs.#ctor(System.Int32,Lucene.Net.Search.ScoreDoc[],System.Single)">
<summary>Constructs a TopDocs with the specified maxScore. </summary>
</member>
<member name="P:Lucene.Net.Search.TopDocs.TotalHits">
<summary>The total number of hits for the query.</summary>
</member>
<member name="P:Lucene.Net.Search.TopDocs.ScoreDocs">
<summary>The top hits for the query. </summary>
</member>
<member name="P:Lucene.Net.Search.TopDocs.MaxScore">
<summary>
Gets or sets the maximum score value encountered, needed for normalizing.
Note that in case scores are not tracked, this returns <see cref="F:System.Single.NaN"/>.
</summary>
</member>
<member name="T:Lucene.Net.Search.TopDocsCollector`1">
<summary> A base class for all collectors that return a <see cref="T:Lucene.Net.Search.TopDocs"/> output. This
collector allows easy extension by providing a single constructor which
accepts a <see cref="T:Lucene.Net.Util.PriorityQueue`1"/> as well as protected members for that
priority queue and a counter of the number of total hits.<br/>
Extending classes can override <see cref="M:Lucene.Net.Search.TopDocsCollector`1.TopDocs(System.Int32,System.Int32)"/> and
<see cref="P:Lucene.Net.Search.TopDocsCollector`1.TotalHits"/> in order to provide their own implementation.
</summary>
</member>
<member name="F:Lucene.Net.Search.TopDocsCollector`1.pq">
<summary> The priority queue which holds the top documents. Note that different
implementations of PriorityQueue give different meaning to 'top documents'.
HitQueue for example aggregates the top scoring documents, while other PQ
implementations may hold documents sorted by other criteria.
</summary>
</member>
<member name="F:Lucene.Net.Search.TopDocsCollector`1.internalTotalHits">
<summary>The total number of documents that the collector encountered. </summary>
</member>
<member name="M:Lucene.Net.Search.TopDocsCollector`1.PopulateResults(Lucene.Net.Search.ScoreDoc[],System.Int32)">
<summary> Populates the results array with the ScoreDoc instances. This can be
overridden in case a different ScoreDoc type should be returned.
</summary>
</member>
<member name="M:Lucene.Net.Search.TopDocsCollector`1.NewTopDocs(Lucene.Net.Search.ScoreDoc[],System.Int32)">
<summary> Returns a <see cref="T:Lucene.Net.Search.TopDocs"/> instance containing the given results. If
<c>results</c> is null it means there are no results to return,
either because there were 0 calls to collect() or because the arguments to
topDocs were invalid.
</summary>
</member>
<member name="M:Lucene.Net.Search.TopDocsCollector`1.TopDocs">
<summary>Returns the top docs that were collected by this collector. </summary>
</member>
<member name="M:Lucene.Net.Search.TopDocsCollector`1.TopDocs(System.Int32)">
<summary> Returns the documents in the range [start .. pq.size()) that were collected
by this collector. Note that if start >= pq.size(), an empty TopDocs is
returned.<br/>
This method is convenient to call if the application always asks for the
last results, starting from the last 'page'.<br/>
<b>NOTE:</b> you cannot call this method more than once for each search
execution. If you need to call it more than once, passing each time a
different <c>start</c>, you should call <see cref="M:Lucene.Net.Search.TopDocsCollector`1.TopDocs"/> and work
with the returned <see cref="T:Lucene.Net.Search.TopDocs"/> object, which will contain all the
results this search execution collected.
</summary>
</member>
<member name="M:Lucene.Net.Search.TopDocsCollector`1.TopDocs(System.Int32,System.Int32)">
<summary> Returns the documents in the range [start .. start+howMany) that were
collected by this collector. Note that if start >= pq.size(), an empty
TopDocs is returned, and if pq.size() - start &lt; howMany, then only the
available documents in [start .. pq.size()) are returned.<br/>
This method is useful when the search application supports pagination of
search results; it also attempts to optimize memory usage by allocating
only as much as <c>howMany</c> requests.<br/>
<b>NOTE:</b> you cannot call this method more than once for each search
execution. If you need to call it more than once, passing each time a
different range, you should call <see cref="M:Lucene.Net.Search.TopDocsCollector`1.TopDocs"/> and work with the
returned <see cref="T:Lucene.Net.Search.TopDocs"/> object, which will contain all the results this
search execution collected.
</summary>
</member>
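The clamping rule above reduces to a small calculation: an empty result when start is at or past the queue size, and otherwise at most the documents actually available. TopDocsRangeSketch is a hypothetical helper illustrating that rule, not the TopDocsCollector internals.

```java
// Sketch of the documented range clamping: requesting documents
// [start .. start+howMany) from a queue holding `size` results returns
// only what is actually available.
final class TopDocsRangeSketch {
    static int resultCount(int size, int start, int howMany) {
        if (start >= size) return 0;           // empty TopDocs
        return Math.min(howMany, size - start); // clamp to what remains
    }
}
```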
<member name="P:Lucene.Net.Search.TopDocsCollector`1.TotalHits">
<summary>The total number of documents that matched this query. </summary>
</member>
<member name="T:Lucene.Net.Search.TopFieldCollector">
<summary> A <see cref="T:Lucene.Net.Search.Collector"/> that sorts by <see cref="T:Lucene.Net.Search.SortField"/> using
<see cref="T:Lucene.Net.Search.FieldComparator"/>s.
<p/>
See the <see cref="M:Lucene.Net.Search.TopFieldCollector.Create(Lucene.Net.Search.Sort,System.Int32,System.Boolean,System.Boolean,System.Boolean,System.Boolean)"/> method
for instantiating a TopFieldCollector.
<p/><b>NOTE:</b> This API is experimental and might change in
incompatible ways in the next release.<p/>
</summary>
</member>
<member name="M:Lucene.Net.Search.TopFieldCollector.Create(Lucene.Net.Search.Sort,System.Int32,System.Boolean,System.Boolean,System.Boolean,System.Boolean)">
<summary> Creates a new <see cref="T:Lucene.Net.Search.TopFieldCollector"/> from the given
arguments.
<p/><b>NOTE</b>: The instances returned by this method
pre-allocate a full array of length
<c>numHits</c>.
</summary>
<param name="sort">the sort criteria (SortFields).
</param>
<param name="numHits">the number of results to collect.
</param>
<param name="fillFields">specifies whether the actual field values should be returned on
the results (FieldDoc).
</param>
<param name="trackDocScores">specifies whether document scores should be tracked and set on the
results. Note that if set to false, then the results' scores will
be set to <see cref="F:System.Single.NaN"/>. Setting this to true affects performance, as
it incurs the score computation on each competitive result.
Therefore if document scores are not required by the application,
it is recommended to set it to false.
</param>
<param name="trackMaxScore">specifies whether the query's maxScore should be tracked and set
on the resulting <see cref="T:Lucene.Net.Search.TopDocs"/>. Note that if set to false,
<see cref="P:Lucene.Net.Search.TopDocs.MaxScore"/> returns <see cref="F:System.Single.NaN"/>. Setting this to
true affects performance as it incurs the score computation on
each result. Also, setting this to true automatically sets
<c>trackDocScores</c> to true as well.
</param>
<param name="docsScoredInOrder">specifies whether documents are scored in doc Id order or not by
the given <see cref="T:Lucene.Net.Search.Scorer"/> in <see cref="M:Lucene.Net.Search.Collector.SetScorer(Lucene.Net.Search.Scorer)"/>.
</param>
<returns> a <see cref="T:Lucene.Net.Search.TopFieldCollector"/> instance which will sort the results by
the sort criteria.
</returns>
<throws> IOException </throws>
</member>
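The parameters above combine as in the following sketch, which sorts results by an indexed "date" field. The field name, the sort type, and the already-open IndexSearcher named searcher and Query named query are assumptions for illustration:

```csharp
using Lucene.Net.Search;

// Sort by a long-valued "date" field, newest first (assumed to be indexed).
var sort = new Sort(new SortField("date", SortField.LONG, true));

TopFieldCollector collector = TopFieldCollector.Create(
    sort, 20,
    true,   // fillFields: populate field values on each FieldDoc
    false,  // trackDocScores: scores are left as float.NaN
    false,  // trackMaxScore: TopDocs.MaxScore is left as float.NaN
    true);  // docsScoredInOrder

searcher.Search(query, collector);
TopDocs results = collector.TopDocs();
```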
<member name="T:Lucene.Net.Search.TopFieldDocs">
<summary>
Represents hits returned by <see cref="M:Lucene.Net.Search.Searcher.Search(Lucene.Net.Search.Query,Lucene.Net.Search.Filter,System.Int32,Lucene.Net.Search.Sort)"/>.
</summary>
</member>
<member name="F:Lucene.Net.Search.TopFieldDocs.fields">
<summary>The fields which were used to sort results by. </summary>
</member>
<member name="M:Lucene.Net.Search.TopFieldDocs.#ctor(System.Int32,Lucene.Net.Search.ScoreDoc[],Lucene.Net.Search.SortField[],System.Single)">
<summary>Creates one of these objects.</summary>
<param name="totalHits"> Total number of hits for the query.
</param>
<param name="scoreDocs"> The top hits for the query.
</param>
<param name="fields"> The sort criteria used to find the top hits.
</param>
<param name="maxScore"> The maximum score encountered.
</param>
</member>
<member name="T:Lucene.Net.Search.TopScoreDocCollector">
<summary> A <see cref="T:Lucene.Net.Search.Collector"/> implementation that collects the top-scoring hits,
returning them as a <see cref="T:Lucene.Net.Search.TopDocs"/>. This is used by <see cref="T:Lucene.Net.Search.IndexSearcher"/> to
implement <see cref="T:Lucene.Net.Search.TopDocs"/>-based search. Hits are sorted by score descending
and then (when the scores are tied) docID ascending. When you create an
instance of this collector you should know in advance whether documents are
going to be collected in doc Id order or not.
<p/><b>NOTE</b>: The values <see cref="F:System.Single.NaN"/> and
<see cref="F:System.Single.NegativeInfinity"/> are not valid scores. This
collector will not properly collect hits with such
scores.
</summary>
</member>
<member name="M:Lucene.Net.Search.TopScoreDocCollector.Create(System.Int32,System.Boolean)">
<summary> Creates a new <see cref="T:Lucene.Net.Search.TopScoreDocCollector"/> given the number of hits to
collect and whether documents are scored in order by the input
<see cref="T:Lucene.Net.Search.Scorer"/> to <see cref="M:Lucene.Net.Search.TopScoreDocCollector.SetScorer(Lucene.Net.Search.Scorer)"/>.
<p/><b>NOTE</b>: The instances returned by this method
pre-allocate a full array of length
<c>numHits</c>, and fill the array with sentinel
objects.
</summary>
</member>
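As a sketch of typical usage (an already-open IndexSearcher named searcher and a Query named query are assumed):

```csharp
using Lucene.Net.Search;

// Collect the top 10 hits; 'true' declares that documents are
// delivered to the collector in increasing doc id order.
TopScoreDocCollector collector = TopScoreDocCollector.Create(10, true);
searcher.Search(query, collector);

TopDocs topDocs = collector.TopDocs();
int totalHits = collector.TotalHits;   // all matches, not just the top 10
```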
<member name="T:Lucene.Net.Search.WildcardQuery">
<summary>Implements the wildcard search query. Supported wildcards are <c>*</c>, which
matches any character sequence (including the empty one), and <c>?</c>,
which matches any single character. Note this query can be slow, as it
needs to iterate over many terms. In order to prevent extremely slow WildcardQueries,
a Wildcard term should not start with one of the wildcards <c>*</c> or
<c>?</c>.
<p/>This query uses the <see cref="F:Lucene.Net.Search.MultiTermQuery.CONSTANT_SCORE_AUTO_REWRITE_DEFAULT"/>
rewrite method.
</summary>
<seealso cref="T:Lucene.Net.Search.WildcardTermEnum">
</seealso>
</member>
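As a sketch of typical usage (the field name "body", the pattern, and an already-open IndexSearcher named searcher are assumptions for illustration):

```csharp
using Lucene.Net.Documents;
using Lucene.Net.Index;
using Lucene.Net.Search;

// Matches terms such as "wild", "wildcard", "wilderness" in the "body" field.
// The pattern does not begin with * or ?, which keeps term enumeration cheap.
var query = new WildcardQuery(new Term("body", "wild*"));

TopDocs hits = searcher.Search(query, 10);
foreach (ScoreDoc sd in hits.ScoreDocs)
{
    Document doc = searcher.Doc(sd.Doc);
    // ... use the matching document ...
}
```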
<member name="M:Lucene.Net.Search.WildcardQuery.ToString(System.String)">
<summary>Prints a user-readable version of this query. </summary>
</member>
<member name="P:Lucene.Net.Search.WildcardQuery.Term">
<summary> Returns the pattern term.</summary>
</member>
<member name="T:Lucene.Net.Search.WildcardTermEnum">
<summary> Subclass of FilteredTermEnum for enumerating all terms that match the
specified wildcard filter term.
<p/>
Term enumerations are always ordered by Term.compareTo(). Each term in
the enumeration is greater than all that precede it.
</summary>
</member>
<member name="F:Lucene.Net.Search.WildcardTermEnum.WILDCARD_STRING">
<summary>String equality with support for wildcards.
</summary>
</member>
<member name="M:Lucene.Net.Search.WildcardTermEnum.#ctor(Lucene.Net.Index.IndexReader,Lucene.Net.Index.Term)">
<summary> Creates a new <c>WildcardTermEnum</c>.
<p/>
After calling the constructor the enumeration is already pointing to the first
valid term if such a term exists.
</summary>
</member>
<member name="M:Lucene.Net.Search.WildcardTermEnum.WildcardEquals(System.String,System.Int32,System.String,System.Int32)">
<summary> Determines if a word matches a wildcard pattern.
<small>Work released by Granta Design Ltd after originally being done on
company time.</small>
</summary>
</member>
<member name="T:Lucene.Net.Store.AlreadyClosedException">
<summary> This exception is thrown when there is an attempt to
access something that has already been closed.
</summary>
</member>
<member name="T:Lucene.Net.Store.BufferedIndexOutput">
<summary>Base implementation class for buffered <see cref="T:Lucene.Net.Store.IndexOutput"/>. </summary>
</member>
<member name="T:Lucene.Net.Store.IndexOutput">
<summary>Abstract base class for output to a file in a Directory. A random-access
output stream. Used for all Lucene index output operations.
</summary>
<seealso cref="T:Lucene.Net.Store.Directory">
</seealso>
<seealso cref="T:Lucene.Net.Store.IndexInput">
</seealso>
</member>
<member name="M:Lucene.Net.Store.IndexOutput.WriteByte(System.Byte)">
<summary>Writes a single byte.</summary>
<seealso cref="M:Lucene.Net.Store.IndexInput.ReadByte">
</seealso>
</member>
<member name="M:Lucene.Net.Store.IndexOutput.WriteBytes(System.Byte[],System.Int32)">
<summary>Writes an array of bytes.</summary>
<param name="b">the bytes to write
</param>
<param name="length">the number of bytes to write
</param>
<seealso cref="M:Lucene.Net.Store.IndexInput.ReadBytes(System.Byte[],System.Int32,System.Int32)">
</seealso>
</member>
<member name="M:Lucene.Net.Store.IndexOutput.WriteBytes(System.Byte[],System.Int32,System.Int32)">
<summary>Writes an array of bytes.</summary>
<param name="b">the bytes to write
</param>
<param name="offset">the offset in the byte array
</param>
<param name="length">the number of bytes to write
</param>
<seealso cref="M:Lucene.Net.Store.IndexInput.ReadBytes(System.Byte[],System.Int32,System.Int32)">
</seealso>
</member>
<member name="M:Lucene.Net.Store.IndexOutput.WriteInt(System.Int32)">
<summary>Writes an int as four bytes.</summary>
<seealso cref="M:Lucene.Net.Store.IndexInput.ReadInt">
</seealso>
</member>
<member name="M:Lucene.Net.Store.IndexOutput.WriteVInt(System.Int32)">
<summary>Writes an int in a variable-length format. Writes between one and
five bytes. Smaller values take fewer bytes. Negative numbers are not
supported.
</summary>
<seealso cref="M:Lucene.Net.Store.IndexInput.ReadVInt">
</seealso>
</member>
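The format stores seven payload bits per byte, low-order bytes first, with the high bit of each byte acting as a continuation flag. A self-contained sketch of the encoding (not the library's actual implementation) is:

```csharp
// Encodes 'value' into 'buffer' starting at index 0; returns the byte count.
// Negative values are not supported, matching WriteVInt.
static int EncodeVInt(int value, byte[] buffer)
{
    uint v = (uint)value;
    int count = 0;
    while (v >= 0x80)
    {
        // low seven bits, plus the high "more bytes follow" flag
        buffer[count++] = (byte)((v % 0x80) + 0x80);
        v >>= 7;
    }
    buffer[count++] = (byte)v;
    return count;
}
// 5   encodes as 0x05         (one byte)
// 200 encodes as 0xC8 0x01    (two bytes)
```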
<member name="M:Lucene.Net.Store.IndexOutput.WriteLong(System.Int64)">
<summary>Writes a long as eight bytes.</summary>
<seealso cref="M:Lucene.Net.Store.IndexInput.ReadLong">
</seealso>
</member>
<member name="M:Lucene.Net.Store.IndexOutput.WriteVLong(System.Int64)">
<summary>Writes a long in a variable-length format. Writes between one and
nine bytes. Smaller values take fewer bytes. Negative numbers are not
supported.
</summary>
<seealso cref="M:Lucene.Net.Store.IndexInput.ReadVLong">
</seealso>
</member>
<member name="M:Lucene.Net.Store.IndexOutput.WriteString(System.String)">
<summary>Writes a string.</summary>
<seealso cref="M:Lucene.Net.Store.IndexInput.ReadString">
</seealso>
</member>
<member name="M:Lucene.Net.Store.IndexOutput.WriteChars(System.String,System.Int32,System.Int32)">
<summary>Writes a sub-sequence of characters from s in the old
format (modified UTF-8 encoded bytes).
</summary>
<param name="s">the source of the characters
</param>
<param name="start">the first character in the sequence
</param>
<param name="length">the number of characters in the sequence
</param>
<deprecated> -- please pre-convert to utf8 bytes
instead or use <see cref="M:Lucene.Net.Store.IndexOutput.WriteString(System.String)"/>
</deprecated>
</member>
<member name="M:Lucene.Net.Store.IndexOutput.WriteChars(System.Char[],System.Int32,System.Int32)">
<summary>Writes a sub-sequence of characters from a char[] in
the old format (modified UTF-8 encoded bytes).
</summary>
<param name="s">the source of the characters
</param>
<param name="start">the first character in the sequence
</param>
<param name="length">the number of characters in the sequence
</param>
<deprecated> -- please pre-convert to utf8 bytes instead or use <see cref="M:Lucene.Net.Store.IndexOutput.WriteString(System.String)"/>
</deprecated>
</member>
<member name="M:Lucene.Net.Store.IndexOutput.CopyBytes(Lucene.Net.Store.IndexInput,System.Int64)">
<summary>Copies <c>numBytes</c> bytes from the input to this output. </summary>
</member>
<member name="M:Lucene.Net.Store.IndexOutput.Flush">
<summary>Forces any buffered output to be written. </summary>
</member>
<member name="M:Lucene.Net.Store.IndexOutput.Close">
<summary>Closes this stream to further operations. </summary>
</member>
<member name="M:Lucene.Net.Store.IndexOutput.Dispose">
<summary>Closes this stream to further operations. </summary>
</member>
<member name="M:Lucene.Net.Store.IndexOutput.Seek(System.Int64)">
<summary>Sets current position in this file, where the next write will occur.</summary>
<seealso cref="P:Lucene.Net.Store.IndexOutput.FilePointer">
</seealso>
</member>
<member name="M:Lucene.Net.Store.IndexOutput.SetLength(System.Int64)">
<summary>Set the file length. By default, this method does
nothing (it's optional for a Directory to implement
it). But certain Directory implementations (for
example <see cref="T:Lucene.Net.Store.FSDirectory"/>) can use this to inform the
underlying IO system to pre-allocate the file to the
specified size. If the length is longer than the
current file length, the bytes added to the file are
undefined. Otherwise the file is truncated.
</summary>
<param name="length">file length
</param>
</member>
<member name="P:Lucene.Net.Store.IndexOutput.FilePointer">
<summary>Returns the current position in this file, where the next write will
occur.
</summary>
<seealso cref="M:Lucene.Net.Store.IndexOutput.Seek(System.Int64)">
</seealso>
</member>
<member name="P:Lucene.Net.Store.IndexOutput.Length">
<summary>The number of bytes in the file. </summary>
</member>
<member name="M:Lucene.Net.Store.BufferedIndexOutput.WriteByte(System.Byte)">
<summary>Writes a single byte.</summary>
<seealso cref="M:Lucene.Net.Store.IndexInput.ReadByte">
</seealso>
</member>
<member name="M:Lucene.Net.Store.BufferedIndexOutput.WriteBytes(System.Byte[],System.Int32,System.Int32)">
<summary>Writes an array of bytes.</summary>
<param name="b">the bytes to write
</param>
<param name="length">the number of bytes to write
</param>
<seealso cref="M:Lucene.Net.Store.IndexInput.ReadBytes(System.Byte[],System.Int32,System.Int32)">
</seealso>
</member>
<member name="M:Lucene.Net.Store.BufferedIndexOutput.Flush">
<summary>Forces any buffered output to be written. </summary>
</member>
<member name="M:Lucene.Net.Store.BufferedIndexOutput.FlushBuffer(System.Byte[],System.Int32)">
<summary>Expert: implements buffer write. Writes bytes at the current position in
the output.
</summary>
<param name="b">the bytes to write
</param>
<param name="len">the number of bytes to write
</param>
</member>
<member name="M:Lucene.Net.Store.BufferedIndexOutput.FlushBuffer(System.Byte[],System.Int32,System.Int32)">
<summary>Expert: implements buffer write. Writes bytes at the current position in
the output.
</summary>
<param name="b">the bytes to write
</param>
<param name="offset">the offset in the byte array
</param>
<param name="len">the number of bytes to write
</param>
</member>
<member name="M:Lucene.Net.Store.BufferedIndexOutput.Dispose(System.Boolean)">
<summary>Closes this stream to further operations. </summary>
</member>
<member name="M:Lucene.Net.Store.BufferedIndexOutput.Seek(System.Int64)">
<summary>Sets current position in this file, where the next write will occur.</summary>
<seealso cref="P:Lucene.Net.Store.BufferedIndexOutput.FilePointer">
</seealso>
</member>
<member name="P:Lucene.Net.Store.BufferedIndexOutput.FilePointer">
<summary>Returns the current position in this file, where the next write will
occur.
</summary>
<seealso cref="M:Lucene.Net.Store.BufferedIndexOutput.Seek(System.Int64)">
</seealso>
</member>
<member name="P:Lucene.Net.Store.BufferedIndexOutput.Length">
<summary>The number of bytes in the file. </summary>
</member>
<member name="T:Lucene.Net.Store.ChecksumIndexInput">
<summary>Reads bytes through to a primary IndexInput, computing
a checksum as it goes. Note that you cannot use seek().
</summary>
</member>
<member name="T:Lucene.Net.Store.ChecksumIndexOutput">
<summary>Writes bytes through to a primary IndexOutput, computing
checksum. Note that you cannot use seek().
</summary>
</member>
<member name="M:Lucene.Net.Store.ChecksumIndexOutput.PrepareCommit">
<summary> Starts but does not complete the commit of this file (i.e.
the writing of the final checksum at the end). After this
is called, you must call <see cref="M:Lucene.Net.Store.ChecksumIndexOutput.FinishCommit"/> and then
<see cref="M:Lucene.Net.Store.ChecksumIndexOutput.Dispose(System.Boolean)"/> to complete the commit.
</summary>
</member>
<member name="M:Lucene.Net.Store.ChecksumIndexOutput.FinishCommit">
<summary>See <see cref="M:Lucene.Net.Store.ChecksumIndexOutput.PrepareCommit"/> </summary>
</member>
<member name="T:Lucene.Net.Store.FileSwitchDirectory">
<summary> Expert: A Directory instance that switches files between
two other Directory instances.
<p/>Files with the specified extensions are placed in the
primary directory; others are placed in the secondary
directory. The provided Set must not change once passed
to this class, and must allow multiple threads to call
contains at once.<p/>
<p/><b>NOTE</b>: this API is new and experimental and is
subject to change suddenly in the next release.
</summary>
</member>
<member name="M:Lucene.Net.Store.FileSwitchDirectory.GetExtension(System.String)">
<summary>Utility method to return a file's extension. </summary>
</member>
<member name="P:Lucene.Net.Store.FileSwitchDirectory.PrimaryDir">
<summary>Return the primary directory </summary>
</member>
<member name="P:Lucene.Net.Store.FileSwitchDirectory.SecondaryDir">
<summary>Return the secondary directory </summary>
</member>
<member name="T:Lucene.Net.Store.FSDirectory">
<summary> <a name="subclasses"/>
Base class for Directory implementations that store index
files in the file system. There are currently three core
subclasses:
<list type="bullet">
<item> <see cref="T:Lucene.Net.Store.SimpleFSDirectory"/> is a straightforward
implementation using java.io.RandomAccessFile.
However, it has poor concurrent performance
(multiple threads will bottleneck) as it
synchronizes when multiple threads read from the
same file.</item>
<item> <see cref="T:Lucene.Net.Store.NIOFSDirectory"/> uses java.nio's
FileChannel's positional io when reading to avoid
synchronization when reading from the same file.
Unfortunately, due to a Windows-only <a href="http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6265734">Sun
JRE bug</a> this is a poor choice for Windows, but
on all other platforms this is the preferred
choice. Applications using <see cref="M:System.Threading.Thread.Interrupt"/> or
<c>Future#cancel(boolean)</c> (on Java 1.5) should use
<see cref="T:Lucene.Net.Store.SimpleFSDirectory"/> instead. See the <see cref="T:Lucene.Net.Store.NIOFSDirectory"/> documentation
for details.</item>
<item> <see cref="T:Lucene.Net.Store.MMapDirectory"/> uses memory-mapped IO when
reading. This is a good choice if you have plenty
of virtual memory relative to your index size, eg
if you are running on a 64 bit JRE, or you are
running on a 32 bit JRE but your index sizes are
small enough to fit into the virtual memory space.
Java currently has the limitation of not being able to
unmap files from user code; the files are unmapped when GC
releases the byte buffers. Due to
<a href="http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4724038">
this bug</a> in Sun's JRE, MMapDirectory's <see cref="M:Lucene.Net.Store.IndexInput.Close"/>
is unable to close the underlying OS file handle. Only when
GC finally collects the underlying objects, which could be
quite some time later, will the file handle be closed.
This will consume additional transient disk usage: on Windows,
attempts to delete or overwrite the files will result in an
exception; on other platforms, which typically have "delete on
last close" semantics, such operations will succeed, but the bytes
are still consuming space on disk. For many applications this
limitation is not a problem (e.g. if you have plenty of disk space,
and you don't rely on overwriting files on Windows) but it's still
an important limitation to be aware of. This class supplies a
(possibly dangerous) workaround mentioned in the bug report,
which may fail on non-Sun JVMs.
Applications using <see cref="M:System.Threading.Thread.Interrupt"/> or
<c>Future#cancel(boolean)</c> (on Java 1.5) should use
<see cref="T:Lucene.Net.Store.SimpleFSDirectory"/> instead. See the <see cref="T:Lucene.Net.Store.MMapDirectory"/>
documentation for details.</item>
</list>
Unfortunately, because of system peculiarities, there is
no single overall best implementation. Therefore, we've
added the <see cref="M:Lucene.Net.Store.FSDirectory.Open(System.IO.DirectoryInfo)"/> method, to allow Lucene to choose
the best FSDirectory implementation given your
environment, and the known limitations of each
implementation. For users who have no reason to prefer a
specific implementation, it's best to simply use <see cref="M:Lucene.Net.Store.FSDirectory.Open(System.IO.DirectoryInfo)"/>
. For all others, you should instantiate the
desired implementation directly.
<p/>The locking implementation is by default <see cref="T:Lucene.Net.Store.NativeFSLockFactory"/>
, but can be changed by
passing in a custom <see cref="T:Lucene.Net.Store.LockFactory"/> instance.
</summary>
</member>
<member name="M:Lucene.Net.Store.FSDirectory.InitOutput(System.String)">
<summary>Initializes the directory to create a new file with the given name.
This method should be used in <see cref="M:Lucene.Net.Store.Directory.CreateOutput(System.String)"/>.
</summary>
</member>
<member name="F:Lucene.Net.Store.FSDirectory.internalDirectory">
<summary>The underlying filesystem directory </summary>
</member>
<member name="M:Lucene.Net.Store.FSDirectory.#ctor(System.IO.DirectoryInfo,Lucene.Net.Store.LockFactory)">
<summary>Create a new FSDirectory for the named location (ctor for subclasses).</summary>
<param name="path">the path of the directory
</param>
<param name="lockFactory">the lock factory to use, or null for the default
(<see cref="T:Lucene.Net.Store.NativeFSLockFactory"/>);
</param>
<throws> IOException </throws>
</member>
<member name="M:Lucene.Net.Store.FSDirectory.Open(System.String)">
<summary>Creates an FSDirectory instance, trying to pick the
best implementation given the current environment.
The directory returned uses the <see cref="T:Lucene.Net.Store.NativeFSLockFactory"/>.
<p/>Currently this returns <see cref="T:Lucene.Net.Store.SimpleFSDirectory"/> as
NIOFSDirectory is currently not supported.
<p/><b>NOTE</b>: this method may suddenly change which
implementation is returned from release to release, in
the event that higher performance defaults become
possible; if the precise implementation is important to
your application, please instantiate it directly,
instead. On 64 bit systems, it may also be good to
return <see cref="T:Lucene.Net.Store.MMapDirectory"/>, but this is disabled
because of officially missing unmap support in Java.
For optimal performance you should consider using
this implementation on 64 bit JVMs.
<p/>See <a href="#subclasses">above</a>
</summary>
</member>
<member name="M:Lucene.Net.Store.FSDirectory.Open(System.IO.DirectoryInfo)">
<summary>Creates an FSDirectory instance, trying to pick the
best implementation given the current environment.
The directory returned uses the <see cref="T:Lucene.Net.Store.NativeFSLockFactory"/>.
<p/>Currently this returns <see cref="T:Lucene.Net.Store.SimpleFSDirectory"/> as
NIOFSDirectory is currently not supported.
<p/><b>NOTE</b>: this method may suddenly change which
implementation is returned from release to release, in
the event that higher performance defaults become
possible; if the precise implementation is important to
your application, please instantiate it directly,
instead. On 64 bit systems, it may also be good to
return <see cref="T:Lucene.Net.Store.MMapDirectory"/>, but this is disabled
because of officially missing unmap support in Java.
For optimal performance you should consider using
this implementation on 64 bit JVMs.
<p/>See <a href="#subclasses">above</a>
</summary>
</member>
<member name="M:Lucene.Net.Store.FSDirectory.Open(System.IO.DirectoryInfo,Lucene.Net.Store.LockFactory)">
<summary>Just like <see cref="M:Lucene.Net.Store.FSDirectory.Open(System.IO.DirectoryInfo)"/>, but allows you to
also specify a custom <see cref="T:Lucene.Net.Store.LockFactory"/>.
</summary>
</member>
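A short sketch of opening a directory this way and handing it to a writer; the index path and the choice of analyzer are illustrative:

```csharp
using System.IO;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Index;
using Lucene.Net.Store;

// Let Lucene pick the best FSDirectory implementation for this platform.
FSDirectory dir = FSDirectory.Open(new DirectoryInfo("my-index"));

var analyzer = new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_30);
using (var writer = new IndexWriter(dir, analyzer,
                                    IndexWriter.MaxFieldLength.UNLIMITED))
{
    // ... add documents, then commit ...
}
```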
<member name="M:Lucene.Net.Store.FSDirectory.ListAll(System.IO.DirectoryInfo)">
<summary>Lists all files (not subdirectories) in the
directory. This method never returns null (throws
<see cref="T:System.IO.IOException"/> instead).
</summary>
<throws> NoSuchDirectoryException if the directory
does not exist, or does exist but is not a
directory. </throws>
<throws> IOException if list() returns null </throws>
</member>
<member name="M:Lucene.Net.Store.FSDirectory.ListAll">
<summary>Lists all files (not subdirectories) in the
directory.
</summary>
<seealso cref="M:Lucene.Net.Store.FSDirectory.ListAll(System.IO.DirectoryInfo)">
</seealso>
</member>
<member name="M:Lucene.Net.Store.FSDirectory.FileExists(System.String)">
<summary>Returns true iff a file with the given name exists. </summary>
</member>
<member name="M:Lucene.Net.Store.FSDirectory.FileModified(System.String)">
<summary>Returns the time the named file was last modified. </summary>
</member>
<member name="M:Lucene.Net.Store.FSDirectory.FileModified(System.IO.FileInfo,System.String)">
<summary>Returns the time the named file was last modified. </summary>
</member>
<member name="M:Lucene.Net.Store.FSDirectory.TouchFile(System.String)">
<summary>Set the modified time of an existing file to now. </summary>
</member>
<member name="M:Lucene.Net.Store.FSDirectory.FileLength(System.String)">
<summary>Returns the length in bytes of a file in the directory. </summary>
</member>
<member name="M:Lucene.Net.Store.FSDirectory.DeleteFile(System.String)">
<summary>Removes an existing file in the directory. </summary>
</member>
<member name="F:Lucene.Net.Store.FSDirectory.HEX_DIGITS">
<summary> So we can do some byte-to-hexchar conversion below</summary>
</member>
<member name="M:Lucene.Net.Store.FSDirectory.ToString">
<summary>For debug output. </summary>
</member>
<member name="F:Lucene.Net.Store.FSDirectory.DEFAULT_READ_CHUNK_SIZE">
<summary> Default read chunk size. This is a conditional
default: on 32bit JVMs, it defaults to 100 MB. On
64bit JVMs, it's <c>Integer.MAX_VALUE</c>.
</summary>
<seealso cref="P:Lucene.Net.Store.FSDirectory.ReadChunkSize">
</seealso>
</member>
<member name="P:Lucene.Net.Store.FSDirectory.ReadChunkSize">
<summary> The maximum number of bytes to read at once from the
underlying file during <see cref="M:Lucene.Net.Store.IndexInput.ReadBytes(System.Byte[],System.Int32,System.Int32)"/>.
</summary>
<seealso cref="P:Lucene.Net.Store.FSDirectory.ReadChunkSize">
</seealso>
</member>
<member name="T:Lucene.Net.Store.FSLockFactory">
<summary> Base class for file system based locking implementation.</summary>
</member>
<member name="T:Lucene.Net.Store.LockFactory">
<summary> <p/>Base class for Locking implementation. <see cref="T:Lucene.Net.Store.Directory"/> uses
instances of this class to implement locking.<p/>
<p/>Note that there are some useful tools to verify that
your LockFactory is working correctly: <see cref="T:Lucene.Net.Store.VerifyingLockFactory"/>
, <see cref="T:Lucene.Net.Store.LockStressTest"/>, <see cref="T:Lucene.Net.Store.LockVerifyServer"/>
.<p/>
</summary>
<seealso cref="T:Lucene.Net.Store.LockVerifyServer">
</seealso>
<seealso cref="T:Lucene.Net.Store.LockStressTest">
</seealso>
<seealso cref="T:Lucene.Net.Store.VerifyingLockFactory">
</seealso>
</member>
<member name="M:Lucene.Net.Store.LockFactory.MakeLock(System.String)">
<summary> Return a new Lock instance identified by lockName.</summary>
<param name="lockName">name of the lock to be created.
</param>
</member>
<member name="M:Lucene.Net.Store.LockFactory.ClearLock(System.String)">
<summary> Attempt to clear (forcefully unlock and remove) the
specified lock. Only call this at a time when you are
certain this lock is no longer in use.
</summary>
<param name="lockName">name of the lock to be cleared.
</param>
</member>
<member name="P:Lucene.Net.Store.LockFactory.LockPrefix">
<summary> Gets or sets the prefix in use for all locks created in this
LockFactory. This is normally called once, when a
Directory gets this LockFactory instance. However, you
can also call this (after this instance is assigned to
a Directory) to override the prefix in use. This
is helpful if you're running Lucene on machines that
have different mount points for the same shared
directory.
</summary>
</member>
<member name="F:Lucene.Net.Store.FSLockFactory.internalLockDir">
<summary> Directory for the lock files.</summary>
</member>
<member name="P:Lucene.Net.Store.FSLockFactory.LockDir">
<summary> Gets the lock directory.
<para>Subclasses can use this to set the lock directory.
It can only be called
once, to initialize the lock directory. It is used by <see cref="T:Lucene.Net.Store.FSDirectory"/>
to set the lock directory to itself.
Subclasses can also use this method to set the directory
in the constructor.
</para>
</summary>
</member>
<member name="T:Lucene.Net.Store.Lock">
<summary>An interprocess mutex lock.
<p/>C# has no anonymous subclasses, so typical use subclasses
<c>Lock.With</c> and might look like:<code>
class DoWork : Lock.With
{
    public DoWork(Lock lck, long timeout) : base(lck, timeout) { }
    protected override object DoBody()
    {
        // ... code to execute while locked ...
        return null;
    }
}
// ...
object result = new DoWork(directory.MakeLock("my.lock"), timeout).run();
</code>
</summary>
<seealso cref="M:Lucene.Net.Store.Directory.MakeLock(System.String)"/>
</member>
<member name="F:Lucene.Net.Store.Lock.LOCK_OBTAIN_WAIT_FOREVER">
<summary>Pass this value to <see cref="M:Lucene.Net.Store.Lock.Obtain(System.Int64)"/> to try
forever to obtain the lock.
</summary>
</member>
<member name="F:Lucene.Net.Store.Lock.LOCK_POLL_INTERVAL">
<summary>How long <see cref="M:Lucene.Net.Store.Lock.Obtain(System.Int64)"/> waits, in milliseconds,
in between attempts to acquire the lock.
</summary>
</member>
<member name="M:Lucene.Net.Store.Lock.Obtain">
<summary>Attempts to obtain exclusive access and immediately return
upon success or failure.
</summary>
<returns> true iff exclusive access is obtained
</returns>
</member>
<member name="F:Lucene.Net.Store.Lock.failureReason">
<summary> If a lock obtain() call fails, this failureReason may be set
with the "root cause" exception explaining why the lock was
not obtained.
</summary>
</member>
<member name="M:Lucene.Net.Store.Lock.Obtain(System.Int64)">
<summary>Attempts to obtain an exclusive lock within amount of
time given. Polls once per <see cref="F:Lucene.Net.Store.Lock.LOCK_POLL_INTERVAL"/>
(currently 1000) milliseconds until lockWaitTimeout is
passed.
</summary>
<param name="lockWaitTimeout">length of time to wait in
milliseconds or <see cref="F:Lucene.Net.Store.Lock.LOCK_OBTAIN_WAIT_FOREVER"/>
to retry forever
</param>
<returns> true if lock was obtained
</returns>
<throws> LockObtainFailedException if lock wait times out </throws>
<throws> IllegalArgumentException if lockWaitTimeout is
out of bounds </throws>
<throws> IOException if obtain() throws IOException </throws>
</member>
<member name="M:Lucene.Net.Store.Lock.Release">
<summary>Releases exclusive access. </summary>
</member>
<member name="M:Lucene.Net.Store.Lock.IsLocked">
<summary>Returns true if the resource is currently locked. Note that one must
still call <see cref="M:Lucene.Net.Store.Lock.Obtain"/> before using the resource.
</summary>
</member>
<member name="T:Lucene.Net.Store.Lock.With">
<summary>Utility class for executing code with exclusive access. </summary>
</member>
<member name="M:Lucene.Net.Store.Lock.With.#ctor(Lucene.Net.Store.Lock,System.Int64)">
<summary>Constructs an executor that will grab the named lock. </summary>
</member>
<member name="M:Lucene.Net.Store.Lock.With.DoBody">
<summary>Code to execute with exclusive access. </summary>
</member>
<member name="M:Lucene.Net.Store.Lock.With.run">
<summary>Calls <see cref="M:Lucene.Net.Store.Lock.With.DoBody"/> while <i>lock</i> is obtained. Blocks if lock
cannot be obtained immediately. Retries to obtain lock once per second
until it is obtained, or until it has tried ten times. Lock is released when
<see cref="M:Lucene.Net.Store.Lock.With.DoBody"/> exits.
</summary>
<throws> LockObtainFailedException if lock could not
be obtained </throws>
<throws> IOException if <see cref="M:Lucene.Net.Store.Lock.Obtain(System.Int64)"/> throws IOException </throws>
</member>
<member name="T:Lucene.Net.Store.LockObtainFailedException">
<summary> This exception is thrown when the <c>write.lock</c>
could not be acquired. This
happens when a writer tries to open an index
that another writer already has open.
</summary>
<seealso cref="M:Lucene.Net.Store.Lock.Obtain(System.Int64)">
</seealso>
</member>
<member name="T:Lucene.Net.Store.LockReleaseFailedException">
<summary> This exception is thrown when the <c>write.lock</c>
could not be released.
</summary>
<seealso cref="M:Lucene.Net.Store.Lock.Release">
</seealso>
</member>
<member name="T:Lucene.Net.Store.LockStressTest">
<summary> Simple standalone tool that forever acquires and releases a
lock using a specific LockFactory. Run without any args
to see usage.
</summary>
<seealso cref="T:Lucene.Net.Store.VerifyingLockFactory">
</seealso>
<seealso cref="T:Lucene.Net.Store.LockVerifyServer">
</seealso>
</member>
<member name="T:Lucene.Net.Store.LockVerifyServer">
<summary> Simple standalone server that must be running when you
use <see cref="T:Lucene.Net.Store.VerifyingLockFactory"/>. This server simply
verifies at most one process holds the lock at a time.
Run without any args to see usage.
</summary>
<seealso cref="T:Lucene.Net.Store.VerifyingLockFactory">
</seealso>
<seealso cref="T:Lucene.Net.Store.LockStressTest">
</seealso>
</member>
<member name="T:Lucene.Net.Store.MMapDirectory">
<summary>File-based <see cref="T:Lucene.Net.Store.Directory"/> implementation that uses
mmap for reading, and <see cref="T:Lucene.Net.Store.SimpleFSDirectory.SimpleFSIndexOutput"/>
for writing.
<p/><b>NOTE</b>: memory mapping uses up a portion of the
virtual memory address space in your process equal to the
size of the file being mapped. Before using this class,
be sure you have plenty of virtual address space, e.g. by
using a 64 bit JRE, or a 32 bit JRE with indexes that are
guaranteed to fit within the address space.
On 32 bit platforms also consult <see cref="P:Lucene.Net.Store.MMapDirectory.MaxChunkSize"/>
if you have problems with mmap failing because of fragmented
address space. If you get an OutOfMemoryException, it is recommended
to reduce the chunk size until mapping succeeds.
<p/>Due to <a href="http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4724038">
this bug</a> in Sun's JRE, MMapDirectory's <see cref="M:Lucene.Net.Store.IndexInput.Close"/>
is unable to close the underlying OS file handle. Only when GC
finally collects the underlying objects, which could be quite
some time later, will the file handle be closed.
<p/>This will consume additional transient disk usage: on Windows,
attempts to delete or overwrite the files will result in an
exception; on other platforms, which typically have a "delete on
last close" semantics, while such operations will succeed, the bytes
are still consuming space on disk. For many applications this
limitation is not a problem (e.g. if you have plenty of disk space,
and you don't rely on overwriting files on Windows) but it's still
an important limitation to be aware of.
<p/>This class supplies the workaround mentioned in the bug report
(disabled by default, see <see cref="P:Lucene.Net.Store.MMapDirectory.UseUnmap"/>), which may fail on
non-Sun JVMs. It forcefully unmaps the buffer on close by using
an undocumented internal cleanup functionality.
<see cref="F:Lucene.Net.Store.MMapDirectory.UNMAP_SUPPORTED"/> is <c>true</c> if the workaround
can be enabled (with no guarantees).
</summary>
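A minimal usage sketch (the index path and chunk size are illustrative assumptions, not part of the API contract):
```csharp
// Hedged sketch: open an MMapDirectory and, on a 32 bit process,
// lower the chunk size to cope with fragmented address space.
using Lucene.Net.Store;

var dir = new MMapDirectory(new System.IO.DirectoryInfo(@"C:\myindex"));
dir.MaxChunkSize = 256 * 1024 * 1024; // illustrative value for 32 bit
```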
</member>
<member name="M:Lucene.Net.Store.MMapDirectory.#ctor(System.IO.DirectoryInfo,Lucene.Net.Store.LockFactory)">
<summary>Create a new MMapDirectory for the named location.
</summary>
<param name="path">the path of the directory
</param>
<param name="lockFactory">the lock factory to use, or null for the default.
</param>
<throws> IOException </throws>
</member>
<member name="M:Lucene.Net.Store.MMapDirectory.#ctor(System.IO.DirectoryInfo)">
<summary>Create a new MMapDirectory for the named location and the default lock factory.
</summary>
<param name="path">the path of the directory
</param>
<throws> IOException </throws>
</member>
<member name="F:Lucene.Net.Store.MMapDirectory.UNMAP_SUPPORTED">
<summary> <c>true</c>, if this platform supports unmapping mmaped files.</summary>
</member>
<member name="M:Lucene.Net.Store.MMapDirectory.CleanMapping(System.IO.MemoryStream)">
<summary> Try to unmap the buffer; this method silently fails if the JVM
does not support unmapping. On Windows, this means that memory-mapped
files cannot be modified or deleted.
</summary>
</member>
<member name="M:Lucene.Net.Store.MMapDirectory.OpenInput(System.String,System.Int32)">
<summary>Creates an IndexInput for the file with the given name. </summary>
</member>
<member name="M:Lucene.Net.Store.MMapDirectory.CreateOutput(System.String)">
<summary>Creates an IndexOutput for the file with the given name. </summary>
</member>
<member name="P:Lucene.Net.Store.MMapDirectory.UseUnmap">
<summary> Enables or disables the workaround for unmapping the buffers
from address space after closing <see cref="T:Lucene.Net.Store.IndexInput"/>, that is
mentioned in the bug report. This hack may fail on non-Sun JVMs.
It forcefully unmaps the buffer on close by using
an undocumented internal cleanup functionality.
<p/><b>NOTE:</b> Enabling this is completely unsupported
by Java and may lead to JVM crashes (SIGSEGV) if <c>IndexInput</c>
is closed while another thread is still accessing it.
</summary>
<throws> IllegalArgumentException if <see cref="F:Lucene.Net.Store.MMapDirectory.UNMAP_SUPPORTED"/> is <c>false</c> and the workaround cannot be enabled </throws>
</member>
<member name="P:Lucene.Net.Store.MMapDirectory.MaxChunkSize">
<summary> Gets or sets the maximum chunk size (default is <see cref="F:System.Int32.MaxValue"/> for
64 bit JVMs and 256 MiBytes for 32 bit JVMs) used for memory mapping.
Especially on 32 bit platforms, the address space can be very fragmented,
so large index files cannot be mapped.
Using a lower chunk size makes the directory implementation a little
bit slower (as the correct chunk must be resolved on each seek)
but the chance is higher that mmap does not fail. On 64 bit
Java platforms, this parameter should always be <see cref="F:System.Int32.MaxValue"/>,
as the address space is big enough.
</summary>
</member>
<member name="T:Lucene.Net.Store.NativeFSLockFactory">
<summary> <p/>Implements <see cref="T:Lucene.Net.Store.LockFactory"/> using native OS file
locks. Note that because this LockFactory relies on
java.nio.* APIs for locking, any problems with those APIs
will cause locking to fail. Specifically, on certain NFS
environments the java.nio.* locks will fail (the lock can
incorrectly be double acquired) whereas <see cref="T:Lucene.Net.Store.SimpleFSLockFactory"/>
works correctly in those same
environments. For NFS based access to an index, it's
recommended that you try <see cref="T:Lucene.Net.Store.SimpleFSLockFactory"/>
first and work around the one limitation that a lock file
could be left when the JVM exits abnormally.<p/>
<p/>The primary benefit of <see cref="T:Lucene.Net.Store.NativeFSLockFactory"/> is
that lock files will be properly removed (by the OS) if
the JVM has an abnormal exit.<p/>
<p/>Note that, unlike <see cref="T:Lucene.Net.Store.SimpleFSLockFactory"/>, the existence of
leftover lock files in the filesystem on exiting the JVM
is fine because the OS will free the locks held against
these files even though the files still remain.<p/>
<p/>If you suspect that this or any other LockFactory is
not working properly in your environment, you can easily
test it by using <see cref="T:Lucene.Net.Store.VerifyingLockFactory"/>, <see cref="T:Lucene.Net.Store.LockVerifyServer"/>
and <see cref="T:Lucene.Net.Store.LockStressTest"/>.<p/>
</summary>
<seealso cref="T:Lucene.Net.Store.LockFactory">
</seealso>
</member>
<member name="M:Lucene.Net.Store.NativeFSLockFactory.#ctor">
<summary> Create a NativeFSLockFactory instance, with null (unset)
lock directory. When you pass this factory to a <see cref="T:Lucene.Net.Store.FSDirectory"/>
subclass, the lock directory is automatically set to the
directory itself. Be sure to create one instance for each directory
you create!
</summary>
</member>
<member name="M:Lucene.Net.Store.NativeFSLockFactory.#ctor(System.String)">
<summary> Create a NativeFSLockFactory instance, storing lock
files into the specified lockDirName.
</summary>
<param name="lockDirName">where lock files are created.
</param>
</member>
<member name="M:Lucene.Net.Store.NativeFSLockFactory.#ctor(System.IO.DirectoryInfo)">
<summary> Create a NativeFSLockFactory instance, storing lock
files into the specified lockDir.
</summary>
<param name="lockDir">where lock files are created.
</param>
</member>
<member name="T:Lucene.Net.Store.NIOFSDirectory">
<summary>
Not implemented. Waiting for volunteers.
</summary>
</member>
<member name="T:Lucene.Net.Store.NIOFSDirectory.NIOFSIndexInput">
<summary>
Not implemented. Waiting for volunteers.
</summary>
</member>
<member name="T:Lucene.Net.Store.NoLockFactory">
<summary> Use this <see cref="T:Lucene.Net.Store.LockFactory"/> to disable locking entirely.
Only one instance of this lock is created. You should call <see cref="P:Lucene.Net.Store.NoLockFactory.Instance"/>
to get the instance.
</summary>
<seealso cref="T:Lucene.Net.Store.LockFactory">
</seealso>
</member>
<member name="T:Lucene.Net.Store.NoSuchDirectoryException">
<summary> This exception is thrown when you try to list a
non-existent directory.
</summary>
</member>
<member name="T:Lucene.Net.Store.RAMDirectory">
<summary> A memory-resident <see cref="T:Lucene.Net.Store.Directory"/> implementation. Locking
implementation is by default the <see cref="T:Lucene.Net.Store.SingleInstanceLockFactory"/>
but can be changed with <see cref="M:Lucene.Net.Store.Directory.SetLockFactory(Lucene.Net.Store.LockFactory)"/>.
</summary>
</member>
<member name="M:Lucene.Net.Store.RAMDirectory.#ctor">
<summary>Constructs an empty <see cref="T:Lucene.Net.Store.Directory"/>. </summary>
</member>
<member name="M:Lucene.Net.Store.RAMDirectory.#ctor(Lucene.Net.Store.Directory)">
<summary> Creates a new <c>RAMDirectory</c> instance from a different
<c>Directory</c> implementation. This can be used to load
a disk-based index into memory.
<p/>
This should be used only with indices that can fit into memory.
<p/>
Note that the resulting <c>RAMDirectory</c> instance is fully
independent from the original <c>Directory</c> (it is a
complete copy). Any subsequent changes to the
original <c>Directory</c> will not be visible in the
<c>RAMDirectory</c> instance.
</summary>
<param name="dir">a <c>Directory</c> value
</param>
<exception cref="T:System.IO.IOException">if an error occurs
</exception>
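For example, an on-disk index can be copied into memory like this (the path is an assumption):
```csharp
// Hedged sketch: snapshot a disk-based index into RAM.
using Lucene.Net.Store;

Directory disk = FSDirectory.Open(new System.IO.DirectoryInfo(@"C:\myindex"));
Directory ram = new RAMDirectory(disk); // full, independent copy
disk.Dispose(); // later changes to the disk index are not visible in ram
```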
</member>
<member name="M:Lucene.Net.Store.RAMDirectory.FileExists(System.String)">
<summary>Returns true iff the named file exists in this directory. </summary>
</member>
<member name="M:Lucene.Net.Store.RAMDirectory.FileModified(System.String)">
<summary>Returns the time the named file was last modified.</summary>
<throws> IOException if the file does not exist </throws>
</member>
<member name="M:Lucene.Net.Store.RAMDirectory.TouchFile(System.String)">
<summary>Set the modified time of an existing file to now.</summary>
<throws> IOException if the file does not exist </throws>
</member>
<member name="M:Lucene.Net.Store.RAMDirectory.FileLength(System.String)">
<summary>Returns the length in bytes of a file in the directory.</summary>
<throws> IOException if the file does not exist </throws>
</member>
<member name="M:Lucene.Net.Store.RAMDirectory.SizeInBytes">
<summary>Return total size in bytes of all files in this
directory. This is currently quantized to
RAMOutputStream.BUFFER_SIZE.
</summary>
</member>
<member name="M:Lucene.Net.Store.RAMDirectory.DeleteFile(System.String)">
<summary>Removes an existing file in the directory.</summary>
<throws> IOException if the file does not exist </throws>
</member>
<member name="M:Lucene.Net.Store.RAMDirectory.CreateOutput(System.String)">
<summary>Creates a new, empty file in the directory with the given name. Returns a stream writing this file. </summary>
</member>
<member name="M:Lucene.Net.Store.RAMDirectory.OpenInput(System.String)">
<summary>Returns a stream reading an existing file. </summary>
</member>
<member name="M:Lucene.Net.Store.RAMDirectory.Dispose(System.Boolean)">
<summary>Closes the store to future operations, releasing associated memory. </summary>
</member>
<member name="T:Lucene.Net.Store.RAMInputStream">
<summary> A memory-resident <see cref="T:Lucene.Net.Store.IndexInput"/> implementation.
</summary>
</member>
<member name="T:Lucene.Net.Store.RAMOutputStream">
<summary> A memory-resident <see cref="T:Lucene.Net.Store.IndexOutput"/> implementation.
<para>For lucene internal use.</para>
</summary>
</member>
<member name="M:Lucene.Net.Store.RAMOutputStream.#ctor">
<summary>Construct an empty output buffer. </summary>
</member>
<member name="M:Lucene.Net.Store.RAMOutputStream.WriteTo(Lucene.Net.Store.IndexOutput)">
<summary>Copy the current contents of this buffer to the named output. </summary>
</member>
<member name="M:Lucene.Net.Store.RAMOutputStream.Reset">
<summary>Resets this to an empty buffer. </summary>
</member>
<member name="M:Lucene.Net.Store.RAMOutputStream.SizeInBytes">
<summary>Returns byte usage of all buffers. </summary>
</member>
<member name="T:Lucene.Net.Store.SimpleFSDirectory">
<summary>A straightforward implementation of <see cref="T:Lucene.Net.Store.FSDirectory"/>
using java.io.RandomAccessFile. However, this class has
poor concurrent performance (multiple threads will
bottleneck) as it synchronizes when multiple threads
read from the same file. It's usually better to use
<see cref="T:Lucene.Net.Store.NIOFSDirectory"/> or <see cref="T:Lucene.Net.Store.MMapDirectory"/> instead.
</summary>
</member>
<member name="M:Lucene.Net.Store.SimpleFSDirectory.#ctor(System.IO.DirectoryInfo,Lucene.Net.Store.LockFactory)">
<summary>Create a new SimpleFSDirectory for the named location.
</summary>
<param name="path">the path of the directory
</param>
<param name="lockFactory">the lock factory to use, or null for the default.
</param>
<throws> IOException </throws>
</member>
<member name="M:Lucene.Net.Store.SimpleFSDirectory.#ctor(System.IO.DirectoryInfo)">
<summary>Create a new SimpleFSDirectory for the named location and the default lock factory.
</summary>
<param name="path">the path of the directory
</param>
<throws> IOException </throws>
</member>
<member name="M:Lucene.Net.Store.SimpleFSDirectory.CreateOutput(System.String)">
<summary>Creates an IndexOutput for the file with the given name. </summary>
</member>
<member name="M:Lucene.Net.Store.SimpleFSDirectory.OpenInput(System.String,System.Int32)">
<summary>Creates an IndexInput for the file with the given name. </summary>
</member>
<member name="M:Lucene.Net.Store.SimpleFSDirectory.SimpleFSIndexInput.ReadInternal(System.Byte[],System.Int32,System.Int32)">
<summary>IndexInput methods </summary>
</member>
<member name="M:Lucene.Net.Store.SimpleFSDirectory.SimpleFSIndexInput.IsFDValid">
<summary>Method used for testing. Returns true if the underlying
file descriptor is valid.
</summary>
</member>
<member name="M:Lucene.Net.Store.SimpleFSDirectory.SimpleFSIndexOutput.FlushBuffer(System.Byte[],System.Int32,System.Int32)">
<summary>Output methods. </summary>
</member>
<member name="M:Lucene.Net.Store.SimpleFSDirectory.SimpleFSIndexOutput.Seek(System.Int64)">
<summary>Random-access methods </summary>
</member>
<member name="T:Lucene.Net.Store.SimpleFSLockFactory">
<summary> <p/>Implements <see cref="T:Lucene.Net.Store.LockFactory"/> using <see cref="M:System.IO.FileInfo.Create"/>
.<p/>
<p/><b>NOTE:</b> the <a target="_top" href="http://java.sun.com/j2se/1.4.2/docs/api/java/io/File.html#createNewFile()">javadocs
for <c>File.createNewFile</c></a> contain a vague
yet spooky warning about not using the API for file
locking. This warning was added due to <a target="_top" href="http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4676183">this
bug</a>, and in fact the only known problem with using
this API for locking is that the Lucene write lock may
not be released when the JVM exits abnormally.<p/>
<p/>When this happens, a <see cref="T:Lucene.Net.Store.LockObtainFailedException"/>
is hit when trying to create a writer, in which case you
need to explicitly clear the lock file first. You can
either manually remove the file, or use the
<see cref="M:Lucene.Net.Index.IndexWriter.Unlock(Lucene.Net.Store.Directory)"/>
API. But, first be certain that no writer is in fact
writing to the index otherwise you can easily corrupt
your index.<p/>
<p/>If you suspect that this or any other LockFactory is
not working properly in your environment, you can easily
test it by using <see cref="T:Lucene.Net.Store.VerifyingLockFactory"/>, <see cref="T:Lucene.Net.Store.LockVerifyServer"/>
and <see cref="T:Lucene.Net.Store.LockStressTest"/>.<p/>
</summary>
<seealso cref="T:Lucene.Net.Store.LockFactory">
</seealso>
</member>
<member name="M:Lucene.Net.Store.SimpleFSLockFactory.#ctor">
<summary> Create a SimpleFSLockFactory instance, with null (unset)
lock directory. When you pass this factory to a <see cref="T:Lucene.Net.Store.FSDirectory"/>
subclass, the lock directory is automatically set to the
directory itself. Be sure to create one instance for each directory
you create!
</summary>
</member>
<member name="M:Lucene.Net.Store.SimpleFSLockFactory.#ctor(System.IO.DirectoryInfo)">
<summary> Instantiate using the provided directory (as a DirectoryInfo instance).</summary>
<param name="lockDir">where lock files should be created.
</param>
</member>
<member name="M:Lucene.Net.Store.SimpleFSLockFactory.#ctor(System.String)">
<summary> Instantiate using the provided directory name (String).</summary>
<param name="lockDirName">where lock files should be created.
</param>
</member>
<member name="T:Lucene.Net.Store.SingleInstanceLockFactory">
<summary> Implements <see cref="T:Lucene.Net.Store.LockFactory"/> for a single in-process instance,
meaning all locking will take place through this one instance.
Only use this <see cref="T:Lucene.Net.Store.LockFactory"/> when you are certain all
IndexReaders and IndexWriters for a given index are running
against a single shared in-process Directory instance. This is
currently the default locking for RAMDirectory.
</summary>
<seealso cref="T:Lucene.Net.Store.LockFactory">
</seealso>
</member>
<member name="T:Lucene.Net.Store.VerifyingLockFactory">
<summary> A <see cref="T:Lucene.Net.Store.LockFactory"/> that wraps another <see cref="T:Lucene.Net.Store.LockFactory"/>
and verifies that each lock obtain/release
is "correct" (never results in two processes holding the
lock at the same time). It does this by contacting an
external server (<see cref="T:Lucene.Net.Store.LockVerifyServer"/>) to assert that
at most one process holds the lock at a time. To use
this, you should also run <see cref="T:Lucene.Net.Store.LockVerifyServer"/> on the
host and port matching what you pass to the constructor.
</summary>
<seealso cref="T:Lucene.Net.Store.LockVerifyServer">
</seealso>
<seealso cref="T:Lucene.Net.Store.LockStressTest">
</seealso>
</member>
<member name="M:Lucene.Net.Store.VerifyingLockFactory.#ctor(System.SByte,Lucene.Net.Store.LockFactory,System.String,System.Int32)">
<param name="id">should be a unique id across all clients
</param>
<param name="lf">the LockFactory that we are testing
</param>
<param name="host">host or IP where <see cref="T:Lucene.Net.Store.LockVerifyServer"/>
is running
</param>
<param name="port">the port <see cref="T:Lucene.Net.Store.LockVerifyServer"/> is
listening on
</param>
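A wiring sketch (the id, host, and port are illustrative assumptions; a <c>LockVerifyServer</c> must already be listening there):
```csharp
// Hedged sketch: verify a NativeFSLockFactory against a running LockVerifyServer.
using Lucene.Net.Store;

LockFactory inner = new NativeFSLockFactory(@"C:\myindex");
LockFactory verified = new VerifyingLockFactory((sbyte)1, inner, "localhost", 7070);
// Pass "verified" wherever a LockFactory is expected while testing.
```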
</member>
<member name="T:Lucene.Net.Support.AppSettings">
<summary>
Provides typed access to application settings, with defaults for missing keys.
</summary>
</member>
<member name="M:Lucene.Net.Support.AppSettings.Set(System.String,System.Int32)">
<summary>
Stores the given integer value under the specified key.
</summary>
<param name="key">The setting key.</param>
<param name="defValue">The value to store.</param>
</member>
<member name="M:Lucene.Net.Support.AppSettings.Set(System.String,System.Int64)">
<summary>
Stores the given long value under the specified key.
</summary>
<param name="key">The setting key.</param>
<param name="defValue">The value to store.</param>
</member>
<member name="M:Lucene.Net.Support.AppSettings.Set(System.String,System.String)">
<summary>
Stores the given string value under the specified key.
</summary>
<param name="key">The setting key.</param>
<param name="defValue">The value to store.</param>
</member>
<member name="M:Lucene.Net.Support.AppSettings.Set(System.String,System.Boolean)">
<summary>
Stores the given boolean value under the specified key.
</summary>
<param name="key">The setting key.</param>
<param name="defValue">The value to store.</param>
</member>
<member name="M:Lucene.Net.Support.AppSettings.Get(System.String,System.Int32)">
<summary>
Gets the integer setting with the specified key, or the supplied default when the key is absent.
</summary>
<param name="key">The setting key.</param>
<param name="defValue">The value returned when the key is not found.</param>
<returns>The configured value, or <paramref name="defValue"/> if the key is absent.</returns>
</member>
<member name="M:Lucene.Net.Support.AppSettings.Get(System.String,System.Int64)">
<summary>
Gets the long setting with the specified key, or the supplied default when the key is absent.
</summary>
<param name="key">The setting key.</param>
<param name="defValue">The value returned when the key is not found.</param>
<returns>The configured value, or <paramref name="defValue"/> if the key is absent.</returns>
</member>
<member name="M:Lucene.Net.Support.AppSettings.Get(System.String,System.String)">
<summary>
Gets the string setting with the specified key, or the supplied default when the key is absent.
</summary>
<param name="key">The setting key.</param>
<param name="defValue">The value returned when the key is not found.</param>
<returns>The configured value, or <paramref name="defValue"/> if the key is absent.</returns>
</member>
<member name="T:Lucene.Net.Support.AttributeImplItem">
<summary>
A simple wrapper to allow for the use of the GeneralKeyedCollection. The
wrapper is required as there can be several keys for an object depending
on how many interfaces it implements.
</summary>
</member>
<member name="T:Lucene.Net.Support.BitSetSupport">
<summary>
This class provides supporting methods of java.util.BitSet
that are not present in System.Collections.BitArray.
</summary>
</member>
<member name="M:Lucene.Net.Support.BitSetSupport.NextSetBit(System.Collections.BitArray,System.Int32)">
<summary>
Returns the next set bit at or after index, or -1 if no such bit exists.
</summary>
<param name="bitArray"></param>
<param name="index">the index of bit array at which to start checking</param>
<returns>the next set bit or -1</returns>
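The lookup amounts to a forward linear scan; an illustrative re-implementation (not the shipped code):
```csharp
using System.Collections;

static int NextSetBit(BitArray bits, int index)
{
    // Scan forward from index; return the first true bit, or -1 if none.
    for (int i = index; i != bits.Count; i++)
    {
        if (bits[i]) return i;
    }
    return -1;
}
```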
</member>
<member name="M:Lucene.Net.Support.BitSetSupport.NextClearBit(System.Collections.BitArray,System.Int32)">
<summary>
Returns the next unset bit at or after index, or -1 if no such bit exists.
</summary>
<param name="bitArray"></param>
<param name="index">the index of bit array at which to start checking</param>
<returns>the next unset bit or -1</returns>
</member>
<member name="M:Lucene.Net.Support.BitSetSupport.Cardinality(System.Collections.BitArray)">
<summary>
Returns the number of bits set to true in this BitSet.
</summary>
<param name="bits">The BitArray object.</param>
<returns>The number of bits set to true in this BitSet.</returns>
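Counting the set bits is a single pass over the array; an illustrative re-implementation (not the shipped code):
```csharp
using System.Collections;

static int Cardinality(BitArray bits)
{
    // Count the bits that are set to true.
    int count = 0;
    foreach (bool bit in bits)
    {
        if (bit) count++;
    }
    return count;
}
```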
</member>
<member name="T:Lucene.Net.Support.Character">
<summary>
Mimics Java's Character class.
</summary>
</member>
<member name="M:Lucene.Net.Support.Character.ForDigit(System.Int32,System.Int32)">
<summary>
Determines the character representation for the specified digit in the
specified radix, mirroring Java's <c>Character.forDigit</c>.
</summary>
<param name="digit">The digit to convert.</param>
<param name="radix">The radix of the conversion.</param>
<returns>The character for the digit, or the null character if the digit or radix is invalid.</returns>
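Assuming it follows Java's <c>Character.forDigit</c> semantics, the mapping can be sketched as:
```csharp
static char ForDigit(int digit, int radix)
{
    // Mirrors Java Character.forDigit: digits 0..9 map to '0'..'9',
    // 10 and above map to 'a' onward; invalid input yields '\0'.
    bool radixOk = radix >= 2;
    if (radix > 36) radixOk = false;
    bool digitOk = digit >= 0;
    if (digit >= radix) digitOk = false;
    if (!radixOk) return '\0';
    if (!digitOk) return '\0';
    if (digit >= 10) return (char)('a' + digit - 10);
    return (char)('0' + digit);
}
```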
</member>
<member name="P:Lucene.Net.Support.Character.MAX_RADIX">
<summary>
The maximum radix available for conversion to and from strings (36), mirroring Java's <c>Character.MAX_RADIX</c>.
</summary>
</member>
<member name="P:Lucene.Net.Support.Character.MIN_RADIX">
<summary>
The minimum radix available for conversion to and from strings (2), mirroring Java's <c>Character.MIN_RADIX</c>.
</summary>
</member>
<member name="T:Lucene.Net.Support.CloseableThreadLocalProfiler">
<summary>
For debugging purposes.
</summary>
</member>
<member name="T:Lucene.Net.Support.CollectionsHelper">
<summary>
Support class used to handle Hashtable addition, which does a check
first to make sure the added item is unique in the hash.
</summary>
</member>
<member name="M:Lucene.Net.Support.CollectionsHelper.CollectionToString(System.Collections.ICollection)">
<summary>
Converts the specified collection to its string representation.
</summary>
<param name="c">The collection to convert to string.</param>
<returns>A string representation of the specified collection.</returns>
</member>
<member name="M:Lucene.Net.Support.CollectionsHelper.CompareStringArrays(System.String[],System.String[])">
<summary>
Compares two string arrays for equality.
</summary>
<param name="l1">First string array list to compare</param>
<param name="l2">Second string array list to compare</param>
<returns>true if the strings are equal in both arrays, false otherwise</returns>
</member>
<member name="M:Lucene.Net.Support.CollectionsHelper.Sort(System.Collections.IList,System.Collections.IComparer)">
<summary>
Sorts an IList collection.
</summary>
<param name="list">The System.Collections.IList instance that will be sorted</param>
<param name="Comparator">The comparer that defines the sort order, or null to use the natural ordering.</param>
</member>
<member name="M:Lucene.Net.Support.CollectionsHelper.Fill(System.Array,System.Int32,System.Int32,System.Object)">
<summary>
Fills the array with a specific value from a start index to an end index.
</summary>
<param name="array">The array to be filled.</param>
<param name="fromindex">The first index to be filled.</param>
<param name="toindex">The last index to be filled.</param>
<param name="val">The value to fill the array with.</param>
</member>
<member name="M:Lucene.Net.Support.CollectionsHelper.Fill(System.Array,System.Object)">
<summary>
Fills the array with a specific value.
</summary>
<param name="array">The array to be filled.</param>
<param name="val">The value to fill the array with.</param>
</member>
<member name="M:Lucene.Net.Support.CollectionsHelper.Equals(System.Array,System.Array)">
<summary>
Compares all members of one array with those of another.
</summary>
<param name="array1">The array to be compared.</param>
<param name="array2">The array to be compared with.</param>
<returns>Returns true if the two specified arrays of Objects are equal
to one another. The two arrays are considered equal if both arrays
contain the same number of elements, and all corresponding pairs of
elements in the two arrays are equal. Two objects e1 and e2 are
considered equal if (e1==null ? e2==null : e1.equals(e2)). In other
words, the two arrays are equal if they contain the same elements in
the same order. Also, two array references are considered equal if
both are null.</returns>
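The contract above can be sketched as follows (an illustrative re-implementation, not the shipped code):
```csharp
static bool ArraysEqual(object[] a1, object[] a2)
{
    // Two null references are equal; otherwise compare pairwise.
    if (a1 == null) return a2 == null;
    if (a2 == null) return false;
    if (a1.Length != a2.Length) return false;
    for (int i = 0; i != a1.Length; i++)
    {
        object e1 = a1[i];
        object e2 = a2[i];
        // Mirrors (e1 == null ? e2 == null : e1.Equals(e2)).
        if (e1 == null) { if (e2 != null) return false; }
        else if (!e1.Equals(e2)) return false;
    }
    return true;
}
```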
</member>
<member name="T:Lucene.Net.Support.Compare">
<summary>
Support class providing comparison helpers for Lucene <c>Term</c> arrays.
</summary>
</member>
<member name="M:Lucene.Net.Support.Compare.CompareTermArrays(Lucene.Net.Index.Term[],Lucene.Net.Index.Term[])">
<summary>
Compares two Term arrays for equality.
</summary>
<param name="t1">First Term array to compare</param>
<param name="t2">Second Term array to compare</param>
<returns>true if the Terms are equal in both arrays, false otherwise</returns>
</member>
<member name="T:Lucene.Net.Support.IChecksum">
<summary>
An interface for checksum computation, mirroring Java's <c>java.util.zip.Checksum</c>.
</summary>
</member>
<member name="T:Lucene.Net.Support.Double">
<summary>
Support class for double-precision values that mimics Java's <c>Double</c>.
</summary>
</member>
<member name="T:Lucene.Net.Support.EquatableList`1">
<summary>Represents a strongly typed list of objects that can be accessed by index.
Provides methods to search, sort, and manipulate lists. Also provides functionality
to compare lists against each other through an implementation of
<see cref="T:System.IEquatable`1"/>.</summary>
<typeparam name="T">The type of elements in the list.</typeparam>
</member>
<member name="M:Lucene.Net.Support.EquatableList`1.#ctor">
<summary>Initializes a new instance of the
<see cref="T:Lucene.Net.Support.EquatableList`1"/> class that is empty and has the
default initial capacity.</summary>
</member>
<member name="M:Lucene.Net.Support.EquatableList`1.#ctor(System.Collections.Generic.IEnumerable{`0})">
<summary>Initializes a new instance of the <see cref="T:Lucene.Net.Support.EquatableList`1"/>
class that contains elements copied from the specified collection and has
sufficient capacity to accommodate the number of elements copied.</summary>
<param name="collection">The collection whose elements are copied to the new list.</param>
</member>
<member name="M:Lucene.Net.Support.EquatableList`1.#ctor(System.Int32)">
<summary>Initializes a new instance of the <see cref="T:Lucene.Net.Support.EquatableList`1"/>
class that is empty and has the specified initial capacity.</summary>
<param name="capacity">The number of elements that the new list can initially store.</param>
</member>
<member name="M:Lucene.Net.Support.EquatableList`1.AddRange(System.Collections.ICollection)">
<summary>Adds a range of objects represented by the <see cref="T:System.Collections.ICollection"/>
implementation.</summary>
<param name="c">The <see cref="T:System.Collections.ICollection"/>
implementation to add to this list.</param>
</member>
<member name="M:Lucene.Net.Support.EquatableList`1.EnumerableCountsEqual(System.Collections.Generic.IEnumerable{`0},System.Collections.Generic.IEnumerable{`0})">
<summary>Compares the counts of two <see cref="T:System.Collections.Generic.IEnumerable`1"/>
implementations.</summary>
<remarks>This uses a trick in LINQ, sniffing types for implementations
of interfaces that might supply shortcuts when trying to make comparisons.
In this case, that is the <see cref="T:System.Collections.Generic.ICollection`1"/> and
<see cref="T:System.Collections.ICollection"/> interfaces, either of which can provide a count
which can be used in determining the equality of sequences (if they don't have
the same count, then they can't be equal).</remarks>
<param name="x">The <see cref="T:System.Collections.Generic.IEnumerable`1"/> from the left hand side of the
comparison to check the count of.</param>
<param name="y">The <see cref="T:System.Collections.Generic.IEnumerable`1"/> from the right hand side of the
comparison to check the count of.</param>
<returns>Null if the result is indeterminate. This occurs when either <paramref name="x"/>
or <paramref name="y"/> doesn't implement <see cref="T:System.Collections.ICollection"/> or <see cref="T:System.Collections.Generic.ICollection`1"/>.
Otherwise, it will get the count from each and return true if they are equal, false otherwise.</returns>
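The count shortcut can be sketched with the non-generic interface (an illustrative re-implementation, not the shipped code):
```csharp
using System.Collections;

// Returns null when either side exposes no cheap count,
// otherwise whether the two counts are equal.
static bool? CountsEqual(IEnumerable x, IEnumerable y)
{
    var cx = x as ICollection;
    var cy = y as ICollection;
    if (cx == null) return null;
    if (cy == null) return null;
    return cx.Count == cy.Count;
}
```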
</member>
<member name="M:Lucene.Net.Support.EquatableList`1.Equals(System.Collections.Generic.IEnumerable{`0},System.Collections.Generic.IEnumerable{`0})">
<summary>Compares the contents of a <see cref="T:System.Collections.Generic.IEnumerable`1"/>
implementation to another one to determine equality.</summary>
<remarks>The algorithm walks both sequences in parallel and compares
items pairwise; the sequences are equal only if every pair of
corresponding items is equal and neither sequence has items
the other lacks.</remarks>
<param name="x">The <see cref="T:System.Collections.Generic.IEnumerable`1"/> implementation
that is considered the left hand side.</param>
<param name="y">The <see cref="T:System.Collections.Generic.IEnumerable`1"/> implementation
that is considered the right hand side.</param>
<returns>True if the items are equal, false otherwise.</returns>
</member>
<member name="M:Lucene.Net.Support.EquatableList`1.Equals(System.Collections.Generic.IEnumerable{`0})">
<summary>Compares this sequence to another <see cref="T:System.Collections.Generic.IEnumerable`1"/>
implementation, returning true if they are equal, false otherwise.</summary>
<param name="other">The other <see cref="T:System.Collections.Generic.IEnumerable`1"/> implementation
to compare against.</param>
<returns>True if the sequence in <paramref name="other"/>
is the same as this one.</returns>
</member>
<member name="M:Lucene.Net.Support.EquatableList`1.Equals(System.Object)">
<summary>Compares this object for equality against other.</summary>
<param name="obj">The other object to compare this object against.</param>
<returns>True if this object and <paramref name="obj"/> are equal, false
otherwise.</returns>
</member>
<member name="M:Lucene.Net.Support.EquatableList`1.GetHashCode">
<summary>Gets the hash code for the list.</summary>
<returns>The hash code value.</returns>
</member>
<member name="M:Lucene.Net.Support.EquatableList`1.GetHashCode(System.Collections.Generic.IEnumerable{`0})">
<summary>Gets the hash code for the list.</summary>
<param name="source">The <see cref="T:System.Collections.Generic.IEnumerable`1"/>
implementation which will have all the contents hashed.</param>
<returns>The hash code value.</returns>
</member>
<member name="M:Lucene.Net.Support.EquatableList`1.Clone">
<summary>Clones the <see cref="T:Lucene.Net.Support.EquatableList`1"/>.</summary>
<remarks>This is a shallow clone.</remarks>
<returns>A new shallow clone of this
<see cref="T:Lucene.Net.Support.EquatableList`1"/>.</returns>
</member>
<member name="T:Lucene.Net.Support.FileSupport">
<summary>
Represents the methods to support some operations over files.
</summary>
</member>
<member name="M:Lucene.Net.Support.FileSupport.GetFiles(System.IO.FileInfo)">
<summary>
Returns an array of abstract pathnames representing the files and directories of the specified path.
</summary>
<param name="path">The abstract pathname whose children are listed.</param>
<returns>An array of abstract pathnames for the children of the specified path, or null if the path is not a directory</returns>
</member>
<member name="M:Lucene.Net.Support.FileSupport.GetLuceneIndexFiles(System.String,Lucene.Net.Index.IndexFileNameFilter)">
<summary>
Returns a list of files in a given directory.
</summary>
<param name="fullName">The full path name to the directory.</param>
<param name="indexFileNameFilter">The filter used to select Lucene index files.</param>
<returns>An array containing the files.</returns>
</member>
<member name="M:Lucene.Net.Support.FileSupport.Sync(System.IO.FileStream)">
<summary>
Flushes the specified file stream. Ensures that all buffered
data is actually written to the file system.
</summary>
<param name="fileStream">The file stream.</param>
</member>
<member name="T:Lucene.Net.Support.GeneralKeyedCollection`2">
<summary>A collection of <typeparamref name="TItem"/> which can be
looked up by instances of <typeparamref name="TKey"/>.</summary>
<typeparam name="TItem">The type of the items contained in this
collection.</typeparam>
<typeparam name="TKey">The type of the keys that can be used to look
up the items.</typeparam>
</member>
<member name="M:Lucene.Net.Support.GeneralKeyedCollection`2.#ctor(System.Converter{`1,`0})">
<summary>Creates a new instance of the
<see cref="T:Lucene.Net.Support.GeneralKeyedCollection`2"/> class.</summary>
<param name="converter">The <see cref="T:System.Converter`2"/> which will convert
instances of <typeparamref name="TItem"/> to <typeparamref name="TKey"/>
when the override of <see cref="M:Lucene.Net.Support.GeneralKeyedCollection`2.GetKeyForItem(`1)"/> is called.</param>
</member>
<member name="F:Lucene.Net.Support.GeneralKeyedCollection`2.converter">
<summary>The <see cref="T:System.Converter`2"/> which will convert
instances of <typeparamref name="TItem"/> to <typeparamref name="TKey"/>
when the override of <see cref="M:Lucene.Net.Support.GeneralKeyedCollection`2.GetKeyForItem(`1)"/> is called.</summary>
</member>
<member name="M:Lucene.Net.Support.GeneralKeyedCollection`2.GetKeyForItem(`1)">
<summary>Converts an item that is added to the collection to
a key.</summary>
<param name="item">The instance of <typeparamref name="TItem"/>
to convert into an instance of <typeparamref name="TKey"/>.</param>
<returns>The instance of <typeparamref name="TKey"/> which is the
key for this item.</returns>
</member>
<member name="M:Lucene.Net.Support.GeneralKeyedCollection`2.ContainsKey(`0)">
<summary>Determines if a key for an item exists in this
collection.</summary>
<param name="key">The instance of <typeparamref name="TKey"/>
to see if it exists in this collection.</param>
<returns>True if the key exists in the collection, false otherwise.</returns>
</member>
<member name="T:Lucene.Net.Support.HashMap`2">
<summary>
A C# emulation of the <a href="http://download.oracle.com/javase/1.5.0/docs/api/java/util/HashMap.html">Java HashMap</a>.
<para>
A <see cref="T:System.Collections.Generic.Dictionary`2"/> is a close equivalent to the Java
HashMap. One difference from the Java implementation is that
the HashMap supports both null keys and null values, whereas the C# Dictionary
only supports null values, not null keys. Also, the <c>V Get(TKey)</c>
method in Java returns null if the key doesn't exist, instead of throwing
an exception. This implementation likewise does not throw an exception when a key
doesn't exist; it returns null. This class is slower than a plain
<see cref="T:System.Collections.Generic.Dictionary`2"/> because of the extra null
checks that have to be done on each access.
</para>
<para>
<b>NOTE:</b> This class works best with nullable types. default(T) is returned
when a key doesn't exist in the collection (similar to how Java returns
null). Therefore, if the Java code being ported branches on whether a key exists,
be aware that for an integer value type this class returns 0 instead of null.
</para>
<remarks>
Consider also implementing IDictionary, IEnumerable, and ICollection
like <see cref="T:System.Collections.Generic.Dictionary`2"/> does, so HashMap can be
substituted in place of the same interfaces it implements.
</remarks>
</summary>
<typeparam name="TKey">The type of keys in the dictionary</typeparam>
<typeparam name="TValue">The type of values in the dictionary</typeparam>
</member>
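The null-returning lookup described above behaves much like Python's dict.get. A minimal Python sketch of the same semantics (the class and method names here are illustrative, not part of the library):

```python
class HashMap:
    """Minimal sketch of a map whose lookup returns None (Java null,
    C# default) instead of raising when a key is absent."""

    def __init__(self):
        self._d = {}

    def put(self, key, value):
        # Unlike C#'s Dictionary, a None key is accepted.
        self._d[key] = value

    def get(self, key):
        # dict.get returns None for a missing key rather than raising,
        # mirroring the Get semantics documented above.
        return self._d.get(key)
```

As with the C# class, code that branches on "key missing" must remember that value types cannot be None.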
<member name="T:Lucene.Net.Support.HashMap`2.NullValueCollection">
<summary>
Wraps a dictionary and adds the value
represented by the null key
</summary>
</member>
<member name="T:Lucene.Net.Support.HashMap`2.NullKeyCollection">
<summary>
Wraps a dictionary's collection, adding in a
null key.
</summary>
</member>
<member name="T:Lucene.Net.Support.Number">
<summary>
A simple class for number conversions.
</summary>
</member>
<member name="F:Lucene.Net.Support.Number.MIN_RADIX">
<summary>
Min radix value.
</summary>
</member>
<member name="F:Lucene.Net.Support.Number.MAX_RADIX">
<summary>
Max radix value.
</summary>
</member>
<member name="M:Lucene.Net.Support.Number.ToString(System.Int64)">
<summary>
Converts a number to System.String.
</summary>
<param name="number"></param>
<returns></returns>
</member>
<member name="M:Lucene.Net.Support.Number.ToString(System.Single)">
<summary>
Converts a number to System.String.
</summary>
<param name="f"></param>
<returns></returns>
</member>
<member name="M:Lucene.Net.Support.Number.ToString(System.Int64,System.Int32)">
<summary>
Converts a number to System.String in the specified radix.
</summary>
<param name="i">A number to be converted.</param>
<param name="radix">A radix.</param>
<returns>A System.String representation of the number in the specified radix.</returns>
</member>
<member name="M:Lucene.Net.Support.Number.Parse(System.String,System.Int32)">
<summary>
Parses a number in the specified radix.
</summary>
<param name="s">An input System.String.</param>
<param name="radix">A radix.</param>
<returns>The parsed number in the specified radix.</returns>
</member>
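The radix conversions described above can be sketched in Python, where int(s, radix) already performs the parse direction (the helper names below are illustrative; the real methods operate on System.String and System.Int64):

```python
DIGITS = "0123456789abcdefghijklmnopqrstuvwxyz"

def parse_radix(s, radix):
    # Interpret s as a number written in the given base.
    return int(s, radix)

def to_string_radix(i, radix):
    # Inverse of parse_radix for a non-negative i: repeatedly take the
    # lowest digit, then reverse the collected digits.
    if i == 0:
        return "0"
    out = []
    while i > 0:
        out.append(DIGITS[i % radix])
        i //= radix
    return "".join(reversed(out))
```

For example, parse_radix("ff", 16) yields 255 and to_string_radix(255, 16) yields "ff".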
<member name="M:Lucene.Net.Support.Number.URShift(System.Int32,System.Int32)">
<summary>
Performs an unsigned bitwise right shift on the specified number
</summary>
<param name="number">Number to operate on</param>
<param name="bits">Number of bits to shift</param>
<returns>The resulting number from the shift operation</returns>
</member>
<member name="M:Lucene.Net.Support.Number.URShift(System.Int64,System.Int32)">
<summary>
Performs an unsigned bitwise right shift on the specified number
</summary>
<param name="number">Number to operate on</param>
<param name="bits">Number of bits to shift</param>
<returns>The resulting number from the shift operation</returns>
</member>
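These URShift overloads emulate Java's unsigned shift operator for signed values. A Python sketch of the masking technique (function names are illustrative):

```python
def urshift32(number, bits):
    # Reduce to the unsigned 32-bit representation (two's complement),
    # then shift right; this matches Java's 32-bit unsigned shift.
    return (number % 0x100000000) >> bits

def urshift64(number, bits):
    # Same idea for 64-bit values.
    return (number % 0x10000000000000000) >> bits
```

For instance, shifting -1 right by 28 in the 32-bit variant fills with zero bits from the left, yielding 15 rather than -1.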
<member name="M:Lucene.Net.Support.Number.NextSetBit(System.Collections.BitArray,System.Int32)">
<summary>
Returns the index of the first bit that is set to true that occurs
on or after the specified starting index. If no such bit exists
then -1 is returned.
</summary>
<param name="bits">The BitArray object.</param>
<param name="fromIndex">The index to start checking from (inclusive).</param>
<returns>The index of the next set bit.</returns>
</member>
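The scan that NextSetBit performs can be sketched as a simple forward loop (Python, illustrative; the real method operates on a System.Collections.BitArray):

```python
def next_set_bit(bits, from_index):
    # bits is a sequence of booleans; scan from from_index (inclusive)
    # for the first True, returning -1 if no set bit exists.
    for i in range(from_index, len(bits)):
        if bits[i]:
            return i
    return -1
```

This mirrors the documented contract: the starting index is inclusive, and -1 signals that no later bit is set.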
<member name="M:Lucene.Net.Support.Number.ToInt64(System.String)">
<summary>
Converts a System.String number to long.
</summary>
<param name="s"></param>
<returns></returns>
</member>
<member name="T:Lucene.Net.Support.OS">
<summary>
Provides platform information.
</summary>
</member>
<member name="P:Lucene.Net.Support.OS.IsUnix">
<summary>
Whether we run under a Unix platform.
</summary>
</member>
<member name="P:Lucene.Net.Support.OS.IsWindows">
<summary>
Whether we run under a supported Windows platform.
</summary>
</member>
<member name="T:Lucene.Net.Support.Single">
<summary>
A simple class for System.Single (float) conversions.
</summary>
</member>
<member name="M:Lucene.Net.Support.Single.Parse(System.String,System.Globalization.NumberStyles,System.IFormatProvider)">
<summary>
Parses a float from a string using the specified number style and format provider.
</summary>
<param name="s"></param>
<param name="style"></param>
<param name="provider"></param>
<returns></returns>
</member>
<member name="M:Lucene.Net.Support.Single.Parse(System.String,System.IFormatProvider)">
<summary>
Parses a float from a string using the specified format provider.
</summary>
<param name="s"></param>
<param name="provider"></param>
<returns></returns>
</member>
<member name="M:Lucene.Net.Support.Single.Parse(System.String,System.Globalization.NumberStyles)">
<summary>
Parses a float from a string using the specified number style.
</summary>
<param name="s"></param>
<param name="style"></param>
<returns></returns>
</member>
<member name="M:Lucene.Net.Support.Single.Parse(System.String)">
<summary>
Parses a float from a string.
</summary>
<param name="s"></param>
<returns></returns>
</member>
<member name="M:Lucene.Net.Support.Single.ToString(System.Single)">
<summary>
Converts a float to its string representation.
</summary>
<param name="f"></param>
<returns></returns>
</member>
<member name="M:Lucene.Net.Support.Single.ToString(System.Single,System.String)">
<summary>
Converts a float to its string representation using the specified format.
</summary>
<param name="f"></param>
<param name="format"></param>
<returns></returns>
</member>
<member name="M:Lucene.Net.Support.TextSupport.GetCharsFromString(System.String,System.Int32,System.Int32,System.Char[],System.Int32)">
<summary>
Copies a range of chars from a String into a specified array of chars
</summary>
<param name="sourceString">The String to get the chars from</param>
<param name="sourceStart">Index in the String at which copying starts (inclusive)</param>
<param name="sourceEnd">Index in the String at which copying stops</param>
<param name="destinationArray">The array that receives the chars</param>
<param name="destinationStart">Index in the destination array at which to start storing the chars</param>
</member>
<member name="T:Lucene.Net.Support.ThreadLock">
<summary>
Abstract base class that provides a synchronization interface
for derived lock types
</summary>
</member>
<member name="P:Lucene.Net.Support.ThreadLock.NullLock">
<summary>
A ThreadLock class that actually does no locking
Used in ParallelMultiSearcher/MultiSearcher
</summary>
</member>
<member name="P:Lucene.Net.Support.ThreadLock.MonitorLock">
<summary>
Wrapper class for the Monitor Enter/Exit methods
using the <see cref="T:Lucene.Net.Support.ThreadLock"/> interface
</summary>
</member>
<member name="T:Lucene.Net.Support.WeakDictionary`2.WeakKey`1">
<summary>
A weak reference wrapper for the hashtable keys. Whenever a key/value pair
is added to the hashtable, the key is wrapped in a WeakKey. WeakKey saves the
value of the original object's hashcode for fast comparison.
</summary>
</member>
<member name="T:Lucene.Net.Util.ArrayUtil">
<summary> Methods for manipulating arrays.</summary>
</member>
<member name="M:Lucene.Net.Util.ArrayUtil.ParseInt(System.Char[])">
<summary> Parses the string argument as if it were an int value and returns the
result. Throws NumberFormatException if the string does not represent an
int quantity.
</summary>
<param name="chars">a string representation of an int quantity.
</param>
<returns> int the value represented by the argument
</returns>
<throws> NumberFormatException if the argument could not be parsed as an int quantity. </throws>
</member>
<member name="M:Lucene.Net.Util.ArrayUtil.ParseInt(System.Char[],System.Int32,System.Int32)">
<summary> Parses a char array into an int.</summary>
<param name="chars">the character array
</param>
<param name="offset">The offset into the array
</param>
<param name="len">The length
</param>
<returns> the int
</returns>
<throws> NumberFormatException if it can't parse </throws>
</member>
<member name="M:Lucene.Net.Util.ArrayUtil.ParseInt(System.Char[],System.Int32,System.Int32,System.Int32)">
<summary> Parses the string argument as if it were an int value and returns the
result. Throws NumberFormatException if the string does not represent an
int quantity. The second argument specifies the radix to use when parsing
the value.
</summary>
<param name="chars">a string representation of an int quantity.
</param>
<param name="offset"></param>
<param name="len"></param>
<param name="radix">the base to use for conversion.
</param>
<returns> int the value represented by the argument
</returns>
<throws> NumberFormatException if the argument could not be parsed as an int quantity. </throws>
</member>
<member name="M:Lucene.Net.Util.ArrayUtil.HashCode(System.Char[],System.Int32,System.Int32)">
<summary> Returns hash of chars in range start (inclusive) to
end (inclusive)
</summary>
</member>
<member name="M:Lucene.Net.Util.ArrayUtil.HashCode(System.Byte[],System.Int32,System.Int32)">
<summary> Returns hash of bytes in range start (inclusive) to
end (inclusive)
</summary>
</member>
<member name="T:Lucene.Net.Util.AverageGuessMemoryModel">
<summary> An average, best guess, MemoryModel that should work okay on most systems.
</summary>
</member>
<member name="T:Lucene.Net.Util.MemoryModel">
<summary> Returns primitive memory sizes for estimating RAM usage.
</summary>
</member>
<member name="M:Lucene.Net.Util.MemoryModel.GetPrimitiveSize(System.Type)">
<param name="clazz">a primitive Class - bool, byte, char, short, int, long,
float, double
</param>
<returns> the size in bytes of given primitive Class
</returns>
</member>
<member name="P:Lucene.Net.Util.MemoryModel.ArraySize">
<value> size of array beyond contents </value>
</member>
<member name="P:Lucene.Net.Util.MemoryModel.ClassSize">
<value> Class size overhead </value>
</member>
<member name="P:Lucene.Net.Util.MemoryModel.ReferenceSize">
<value> size of reference </value>
</member>
<member name="T:Lucene.Net.Util.BitUtil">
<summary>A variety of highly efficient bit-twiddling routines.
</summary>
<version> $Id$
</version>
</member>
<member name="M:Lucene.Net.Util.BitUtil.Pop(System.Int64)">
<summary>Returns the number of bits set in the long </summary>
</member>
<member name="M:Lucene.Net.Util.BitUtil.Pop_array(System.Int64[],System.Int32,System.Int32)">
<summary> Returns the number of set bits in an array of longs. </summary>
</member>
<member name="M:Lucene.Net.Util.BitUtil.Pop_intersect(System.Int64[],System.Int64[],System.Int32,System.Int32)">
<summary>Returns the popcount or cardinality of the two sets after an intersection.
Neither array is modified.
</summary>
</member>
<member name="M:Lucene.Net.Util.BitUtil.Pop_union(System.Int64[],System.Int64[],System.Int32,System.Int32)">
<summary>Returns the popcount or cardinality of the union of two sets.
Neither array is modified.
</summary>
</member>
<member name="M:Lucene.Net.Util.BitUtil.Pop_andnot(System.Int64[],System.Int64[],System.Int32,System.Int32)">
<summary>Returns the popcount or cardinality of "A and not B" (A &amp; ~B).
Neither array is modified.
</summary>
</member>
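The Pop_* methods above count bits over combined sets without materializing the combination. A Python sketch of the same idea, operating word by word over arrays of nonnegative integers (illustrative names, not the library's API):

```python
def pop(x):
    # Population count (number of set bits) of a nonnegative integer.
    return bin(x).count("1")

def pop_intersect(a_words, b_words):
    # Popcount of the intersection of two bit sets stored as word arrays.
    # Neither input is modified, mirroring the documented contract.
    return sum(pop(a & b) for a, b in zip(a_words, b_words))

def pop_union(a_words, b_words):
    # Popcount of the union of the two sets.
    return sum(pop(a | b) for a, b in zip(a_words, b_words))

def pop_andnot(a_words, b_words):
    # Popcount of bits set in a but not in b.
    return sum(pop(a & ~b) for a, b in zip(a_words, b_words))
```

The combining operator is applied per word, so the full merged set never needs to be allocated.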
<member name="F:Lucene.Net.Util.BitUtil.ntzTable">
<summary>table of number of trailing zeros in a byte </summary>
</member>
<member name="M:Lucene.Net.Util.BitUtil.Ntz(System.Int64)">
<summary>Returns number of trailing zeros in a 64 bit long value. </summary>
</member>
<member name="M:Lucene.Net.Util.BitUtil.Ntz(System.Int32)">
<summary>Returns number of trailing zeros in a 32 bit int value. </summary>
</member>
<member name="M:Lucene.Net.Util.BitUtil.Ntz2(System.Int64)">
<summary>returns 0 based index of first set bit
(only works for x!=0)
<br/> This is an alternate implementation of ntz()
</summary>
</member>
<member name="M:Lucene.Net.Util.BitUtil.Ntz3(System.Int64)">
<summary>returns 0 based index of first set bit
<br/> This is an alternate implementation of ntz()
</summary>
</member>
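A simple way to picture what the Ntz variants compute: halve the value until it becomes odd, counting the steps. This Python sketch is for illustration only, not the table- or shift-based implementations the class actually uses:

```python
def ntz(x, width=64):
    # Number of trailing zeros of a width-bit value, counted by
    # halving until the value is odd; returns width when x is 0,
    # matching Java's Long.numberOfTrailingZeros convention.
    x %= 2 ** width
    if x == 0:
        return width
    n = 0
    while x % 2 == 0:
        x //= 2
        n += 1
    return n
```

The documented Ntz2/Ntz3 variants give the same answer for nonzero inputs via faster bit tricks.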
<member name="M:Lucene.Net.Util.BitUtil.IsPowerOfTwo(System.Int32)">
<summary>returns true if v is a power of two or zero</summary>
</member>
<member name="M:Lucene.Net.Util.BitUtil.IsPowerOfTwo(System.Int64)">
<summary>returns true if v is a power of two or zero</summary>
</member>
<member name="M:Lucene.Net.Util.BitUtil.NextHighestPowerOfTwo(System.Int32)">
<summary>returns the next highest power of two, or the current value if it's already a power of two or zero</summary>
</member>
<member name="M:Lucene.Net.Util.BitUtil.NextHighestPowerOfTwo(System.Int64)">
<summary>returns the next highest power of two, or the current value if it's already a power of two or zero</summary>
</member>
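The classic technique behind NextHighestPowerOfTwo is to smear the highest set bit into every lower position and then add one. A Python sketch for values up to 32 bits (the 64-bit overload needs one more smear step):

```python
def next_highest_power_of_two(v):
    # Smear the highest set bit downward so every lower bit becomes 1,
    # then add one to reach the next power of two. Returns v unchanged
    # when it is already a power of two or zero.
    v -= 1
    v |= v >> 1
    v |= v >> 2
    v |= v >> 4
    v |= v >> 8
    v |= v >> 16
    v += 1
    return v
```

For example, 5 rounds up to 8, while 8 stays 8 because the initial decrement makes powers of two map to themselves.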
<member name="T:Lucene.Net.Util.BitVector">
<summary>Optimized implementation of a vector of bits. This is more-or-less like
java.util.BitSet, but also includes the following:
<list type="bullet">
<item>a count() method, which efficiently computes the number of one bits;</item>
<item>optimized read from and write to disk;</item>
<item>inlinable get() method;</item>
<item>store and load, as bit set or d-gaps, depending on sparseness;</item>
</list>
</summary>
</member>
<member name="M:Lucene.Net.Util.BitVector.#ctor(System.Int32)">
<summary>Constructs a vector capable of holding <c>n</c> bits. </summary>
</member>
<member name="M:Lucene.Net.Util.BitVector.Set(System.Int32)">
<summary>Sets the value of <c>bit</c> to one. </summary>
</member>
<member name="M:Lucene.Net.Util.BitVector.GetAndSet(System.Int32)">
<summary>Sets the value of <c>bit</c> to true, and
returns true if bit was already set
</summary>
</member>
<member name="M:Lucene.Net.Util.BitVector.Clear(System.Int32)">
<summary>Sets the value of <c>bit</c> to zero. </summary>
</member>
<member name="M:Lucene.Net.Util.BitVector.Get(System.Int32)">
<summary>Returns <c>true</c> if <c>bit</c> is one and
<c>false</c> if it is zero.
</summary>
</member>
<member name="M:Lucene.Net.Util.BitVector.Size">
<summary>Returns the number of bits in this vector. This is also one greater than
the index of the largest valid bit.
</summary>
</member>
<member name="M:Lucene.Net.Util.BitVector.Count">
<summary>Returns the total number of one bits in this vector. This is efficiently
computed and cached, so that, if the vector is not changed, no
recomputation is done for repeated calls.
</summary>
</member>
<member name="M:Lucene.Net.Util.BitVector.GetRecomputedCount">
<summary>
For testing
</summary>
</member>
<member name="M:Lucene.Net.Util.BitVector.Write(Lucene.Net.Store.Directory,System.String)">
<summary>Writes this vector to the file <c>name</c> in Directory
<c>d</c>, in a format that can be read by the constructor
<see cref="M:Lucene.Net.Util.BitVector.#ctor(Lucene.Net.Store.Directory,System.String)"/>.
</summary>
</member>
<member name="M:Lucene.Net.Util.BitVector.WriteBits(Lucene.Net.Store.IndexOutput)">
<summary>Write as a bit set </summary>
</member>
<member name="M:Lucene.Net.Util.BitVector.WriteDgaps(Lucene.Net.Store.IndexOutput)">
<summary>Write as a d-gaps list </summary>
</member>
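The d-gaps format mentioned above stores the distances between consecutive set bits instead of the bits themselves, which is compact when the vector is sparse. A Python sketch of the idea (illustrative only; the real on-disk format also writes counts and uses variable-length integers):

```python
def write_dgaps(set_bits):
    # Record the distance from the previous set bit rather than the
    # absolute position; small gaps compress well.
    gaps = []
    last = -1
    for pos in sorted(set_bits):
        gaps.append(pos - last)
        last = pos
    return gaps

def read_dgaps(gaps):
    # Inverse: accumulate gaps back into absolute bit positions.
    positions = []
    last = -1
    for g in gaps:
        last += g
        positions.append(last)
    return positions
```

A vector with bits 3, 10, and 11 set is stored as the gaps 4, 7, 1, and reading them back restores the original positions.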
<member name="M:Lucene.Net.Util.BitVector.IsSparse">
<summary>Indicates if the bit vector is sparse and should be saved as a d-gaps list, or dense, and should be saved as a bit set. </summary>
</member>
<member name="M:Lucene.Net.Util.BitVector.#ctor(Lucene.Net.Store.Directory,System.String)">
<summary>Constructs a bit vector from the file <c>name</c> in Directory
<c>d</c>, as written by the <see cref="M:Lucene.Net.Util.BitVector.Write(Lucene.Net.Store.Directory,System.String)"/> method.
</summary>
</member>
<member name="M:Lucene.Net.Util.BitVector.ReadBits(Lucene.Net.Store.IndexInput)">
<summary>Read as a bit set </summary>
</member>
<member name="M:Lucene.Net.Util.BitVector.ReadDgaps(Lucene.Net.Store.IndexInput)">
<summary>read as a d-gaps list </summary>
</member>
<member name="M:Lucene.Net.Util.BitVector.Subset(System.Int32,System.Int32)">
<summary> Retrieve a subset of this BitVector.
</summary>
<param name="start">starting index, inclusive
</param>
<param name="end">ending index, exclusive
</param>
<returns> subset
</returns>
</member>
<member name="T:Lucene.Net.Util.Constants">
<summary> Some useful constants.</summary>
</member>
<member name="F:Lucene.Net.Util.Constants.JAVA_VERSION">
<summary>The value of <tt>System.getProperty("java.version")</tt>. *</summary>
</member>
<member name="F:Lucene.Net.Util.Constants.JAVA_1_1">
<summary>True iff this is Java version 1.1. </summary>
</member>
<member name="F:Lucene.Net.Util.Constants.JAVA_1_2">
<summary>True iff this is Java version 1.2. </summary>
</member>
<member name="F:Lucene.Net.Util.Constants.JAVA_1_3">
<summary>True iff this is Java version 1.3. </summary>
</member>
<member name="F:Lucene.Net.Util.Constants.OS_NAME">
<summary>The value of <tt>System.getProperty("os.name")</tt>. *</summary>
</member>
<member name="F:Lucene.Net.Util.Constants.LINUX">
<summary>True iff running on Linux. </summary>
</member>
<member name="F:Lucene.Net.Util.Constants.WINDOWS">
<summary>True iff running on Windows. </summary>
</member>
<member name="F:Lucene.Net.Util.Constants.SUN_OS">
<summary>True iff running on SunOS. </summary>
</member>
<member name="T:Lucene.Net.Util.DocIdBitSet">
<summary>Simple DocIdSet and DocIdSetIterator backed by a BitSet </summary>
</member>
<member name="P:Lucene.Net.Util.DocIdBitSet.IsCacheable">
<summary>This DocIdSet implementation is cacheable.</summary>
</member>
<member name="P:Lucene.Net.Util.DocIdBitSet.BitSet">
<summary> Returns the underlying BitSet. </summary>
</member>
<member name="T:Lucene.Net.Util.FieldCacheSanityChecker">
<summary> Provides methods for sanity checking that entries in the FieldCache
are not wasteful or inconsistent.
<p/>
<p/>
Lucene 2.9 introduced numerous enhancements into how the FieldCache
is used by the low levels of Lucene searching (for Sorting and
ValueSourceQueries) to improve both the speed of Sorting, as well
as the reopening of IndexReaders. But these changes have shifted the
usage of FieldCache from "top level" IndexReaders (frequently a
MultiReader or DirectoryReader) down to the leaf level SegmentReaders.
As a result, existing applications that directly access the FieldCache
may find RAM usage increase significantly when upgrading to 2.9 or
later. This class provides an API for these applications (or their
unit tests) to check at run time whether the FieldCache contains "insane"
usages of the FieldCache.
<p/>
<p/>
<b>EXPERIMENTAL API:</b> This API is considered extremely advanced and
experimental. It may be removed or altered without warning in future releases
of Lucene.
<p/>
</summary>
<seealso cref="T:Lucene.Net.Search.FieldCache">
</seealso>
<seealso cref="T:Lucene.Net.Util.FieldCacheSanityChecker.Insanity">
</seealso>
<seealso cref="T:Lucene.Net.Util.FieldCacheSanityChecker.InsanityType">
</seealso>
</member>
<member name="M:Lucene.Net.Util.FieldCacheSanityChecker.SetRamUsageEstimator(Lucene.Net.Util.RamUsageEstimator)">
<summary> If set, will be used to estimate size for all CacheEntry objects
dealt with.
</summary>
</member>
<member name="M:Lucene.Net.Util.FieldCacheSanityChecker.CheckSanity(Lucene.Net.Search.FieldCache)">
<summary> Quick and dirty convenience method</summary>
<seealso cref="M:Lucene.Net.Util.FieldCacheSanityChecker.Check(Lucene.Net.Search.CacheEntry[])">
</seealso>
</member>
<member name="M:Lucene.Net.Util.FieldCacheSanityChecker.CheckSanity(Lucene.Net.Search.CacheEntry[])">
<summary> Quick and dirty convenience method that instantiates an instance with
"good defaults" and uses it to test the CacheEntry objects
</summary>
<seealso cref="M:Lucene.Net.Util.FieldCacheSanityChecker.Check(Lucene.Net.Search.CacheEntry[])">
</seealso>
</member>
<member name="M:Lucene.Net.Util.FieldCacheSanityChecker.Check(Lucene.Net.Search.CacheEntry[])">
<summary> Tests a CacheEntry[] for indication of "insane" cache usage.
<p/>
NOTE: FieldCache CreationPlaceholder objects are ignored.
(:TODO: is this a bad idea? are we masking a real problem?)
<p/>
</summary>
</member>
<member name="M:Lucene.Net.Util.FieldCacheSanityChecker.CheckValueMismatch(Lucene.Net.Util.MapOfSets{System.Int32,Lucene.Net.Search.CacheEntry},Lucene.Net.Util.MapOfSets{Lucene.Net.Util.FieldCacheSanityChecker.ReaderField,System.Int32},System.Collections.Generic.HashSet{Lucene.Net.Util.FieldCacheSanityChecker.ReaderField})">
<summary> Internal helper method used by check that iterates over
valMismatchKeys and generates a Collection of Insanity
instances accordingly. The MapOfSets are used to populate
the Insanity objects.
</summary>
<seealso cref="F:Lucene.Net.Util.FieldCacheSanityChecker.InsanityType.VALUEMISMATCH">
</seealso>
</member>
<member name="M:Lucene.Net.Util.FieldCacheSanityChecker.CheckSubreaders(Lucene.Net.Util.MapOfSets{System.Int32,Lucene.Net.Search.CacheEntry},Lucene.Net.Util.MapOfSets{Lucene.Net.Util.FieldCacheSanityChecker.ReaderField,System.Int32})">
<summary> Internal helper method used by check that iterates over
the keys of readerFieldToValIds and generates a Collection
of Insanity instances whenever two (or more) ReaderField instances are
found that have an ancestry relationship.
</summary>
<seealso cref="F:Lucene.Net.Util.FieldCacheSanityChecker.InsanityType.SUBREADER">
</seealso>
</member>
<member name="M:Lucene.Net.Util.FieldCacheSanityChecker.GetAllDecendentReaderKeys(System.Object)">
<summary> Checks if the seed is an IndexReader, and if so walks
the hierarchy of subReaders, building up a list of the objects
returned by obj.getFieldCacheKey()
</summary>
</member>
<member name="T:Lucene.Net.Util.FieldCacheSanityChecker.ReaderField">
<summary> Simple pair object for using "readerKey + fieldName" as a Map key</summary>
</member>
<member name="T:Lucene.Net.Util.FieldCacheSanityChecker.Insanity">
<summary> Simple container for a collection of related CacheEntry objects that
in conjunction with each other represent some "insane" usage of the
FieldCache.
</summary>
</member>
<member name="M:Lucene.Net.Util.FieldCacheSanityChecker.Insanity.GetCacheEntries">
<summary> CacheEntry objects which suggest a problem</summary>
</member>
<member name="M:Lucene.Net.Util.FieldCacheSanityChecker.Insanity.ToString">
<summary> Multi-line representation of this Insanity object, starting with
the Type and Msg, followed by each CacheEntry.toString() on its
own line, prefaced by a tab character
</summary>
</member>
<member name="P:Lucene.Net.Util.FieldCacheSanityChecker.Insanity.Type">
<summary> Type of insane behavior this object represents</summary>
</member>
<member name="P:Lucene.Net.Util.FieldCacheSanityChecker.Insanity.Msg">
<summary> Description of the insane behavior</summary>
</member>
<member name="T:Lucene.Net.Util.FieldCacheSanityChecker.InsanityType">
<summary> An enumeration of the different types of "insane" behavior that
may be detected in a FieldCache.
</summary>
<seealso cref="F:Lucene.Net.Util.FieldCacheSanityChecker.InsanityType.SUBREADER">
</seealso>
<seealso cref="F:Lucene.Net.Util.FieldCacheSanityChecker.InsanityType.VALUEMISMATCH">
</seealso>
<seealso cref="F:Lucene.Net.Util.FieldCacheSanityChecker.InsanityType.EXPECTED">
</seealso>
</member>
<member name="F:Lucene.Net.Util.FieldCacheSanityChecker.InsanityType.SUBREADER">
<summary> Indicates an overlap in cache usage on a given field
in sub/super readers.
</summary>
</member>
<member name="F:Lucene.Net.Util.FieldCacheSanityChecker.InsanityType.VALUEMISMATCH">
<summary> <p/>
Indicates entries have the same reader+fieldname but
different cached values. This can happen if different datatypes,
or parsers are used -- and while it's not necessarily a bug
it's typically an indication of a possible problem.
<p/>
<p/>
NOTE: Only the reader, fieldname, and cached value are actually
tested -- if two cache entries have different parsers or datatypes but
the cached values are the same Object (== not just equal()) this method
does not consider that a red flag. This allows for subtle variations
in the way a Parser is specified (null vs DEFAULT_LONG_PARSER, etc...)
<p/>
</summary>
</member>
<member name="F:Lucene.Net.Util.FieldCacheSanityChecker.InsanityType.EXPECTED">
<summary> Indicates an expected bit of "insanity". This may be useful for
clients that wish to preserve/log information about insane usage
but indicate that it was expected.
</summary>
</member>
<member name="T:Lucene.Net.Util.IdentityDictionary`2">
<summary>
A class that mimics Java's IdentityHashMap in that it determines
object equality solely by ReferenceEquals rather than (possibly overloaded)
object.Equals().
NOTE: Java's documentation on IdentityHashMap says that it also uses
reference equality for its values. This class does not follow that behavior.
</summary>
<typeparam name="TKey">The type of the keys in the dictionary</typeparam>
<typeparam name="TValue">The type of the values in the dictionary</typeparam>
</member>
<member name="T:Lucene.Net.Util.IndexableBinaryStringTools">
<summary> Provides support for converting byte sequences to Strings and back again.
The resulting Strings preserve the original byte sequences' sort order.
The Strings are constructed using a Base 8000h encoding of the original
binary data - each char of an encoded String represents a 15-bit chunk
from the byte sequence. Base 8000h was chosen because it allows for all
lower 15 bits of char to be used without restriction; the surrogate range
[U+D800-U+DFFF] does not represent valid chars, and would require
complicated handling to avoid them and allow use of char's high bit.
Although unset bits are used as padding in the final char, the original
byte sequence could contain trailing bytes with no set bits (null bytes):
padding is indistinguishable from valid information. To overcome this
problem, a char is appended, indicating the number of encoded bytes in the
final content char.
This class's operations are defined over CharBuffers and ByteBuffers, to
allow for wrapped arrays to be reused, reducing memory allocation costs for
repeated operations. Note that this class calls array() and arrayOffset()
on the CharBuffers and ByteBuffers it uses, so only wrapped arrays may be
used. This class interprets the arrayOffset() and limit() values returned by
its input buffers as beginning and end+1 positions on the wrapped array,
respectively; similarly, on the output buffer, arrayOffset() is the first
position written to, and limit() is set to one past the final output array
position.
</summary>
</member>
<member name="M:Lucene.Net.Util.IndexableBinaryStringTools.GetEncodedLength(System.Collections.Generic.List{System.Byte})">
<summary> Returns the number of chars required to encode the given byte sequence.
</summary>
<param name="original">The byte sequence to be encoded. Must be backed by an array.
</param>
<returns> The number of chars required to encode the given byte sequence
</returns>
<throws> IllegalArgumentException If the given ByteBuffer is not backed by an array </throws>
</member>
<member name="M:Lucene.Net.Util.IndexableBinaryStringTools.GetDecodedLength(System.Collections.Generic.List{System.Char})">
<summary> Returns the number of bytes required to decode the given char sequence.
</summary>
<param name="encoded">The char sequence to be decoded. Must be backed by an array.
</param>
<returns> The number of bytes required to decode the given char sequence
</returns>
<throws> IllegalArgumentException If the given CharBuffer is not backed by an array </throws>
</member>
<member name="M:Lucene.Net.Util.IndexableBinaryStringTools.Encode(System.Collections.Generic.List{System.Byte},System.Collections.Generic.List{System.Char})">
<summary> Encodes the input byte sequence into the output char sequence. Before
calling this method, ensure that the output CharBuffer has sufficient
capacity by calling <see cref="M:Lucene.Net.Util.IndexableBinaryStringTools.GetEncodedLength(System.Collections.Generic.List{System.Byte})"/>.
</summary>
<param name="input">The byte sequence to encode
</param>
<param name="output">Where the char sequence encoding result will go. The limit
is set to one past the position of the final char.
</param>
<throws> IllegalArgumentException If either the input or the output buffer
is not backed by an array </throws>
</member>
<member name="M:Lucene.Net.Util.IndexableBinaryStringTools.Decode(System.Collections.Generic.List{System.Char},System.Collections.Generic.List{System.Byte})">
<summary> Decodes the input char sequence into the output byte sequence. Before
calling this method, ensure that the output ByteBuffer has sufficient
capacity by calling <see cref="M:Lucene.Net.Util.IndexableBinaryStringTools.GetDecodedLength(System.Collections.Generic.List{System.Char})"/>.
</summary>
<param name="input">The char sequence to decode
</param>
<param name="output">Where the byte sequence decoding result will go. The limit
is set to one past the position of the final byte.
</param>
<throws> IllegalArgumentException If either the input or the output buffer
is not backed by an array </throws>
</member>
<member name="M:Lucene.Net.Util.IndexableBinaryStringTools.Decode(System.Collections.Generic.List{System.Char})">
<summary> Decodes the given char sequence, which must have been encoded by
<see cref="M:Lucene.Net.Util.IndexableBinaryStringTools.Encode(System.Collections.Generic.List{System.Byte})"/> or
<see cref="M:Lucene.Net.Util.IndexableBinaryStringTools.Encode(System.Collections.Generic.List{System.Byte},System.Collections.Generic.List{System.Char})"/>.
</summary>
<param name="input">The char sequence to decode
</param>
<returns> A byte sequence containing the decoding result. The limit
is set to one past the position of the final byte.
</returns>
<throws> IllegalArgumentException If the input buffer is not backed by an
array
</throws>
</member>
<member name="M:Lucene.Net.Util.IndexableBinaryStringTools.Encode(System.Collections.Generic.List{System.Byte})">
<summary> Encodes the input byte sequence.
</summary>
<param name="input">The byte sequence to encode
</param>
<returns> A char sequence containing the encoding result. The limit is set
to one past the position of the final char.
</returns>
<throws> IllegalArgumentException If the input buffer is not backed by an
array
</throws>
</member>
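The Encode/Decode pair above packs raw bytes into char values so that binary data can survive being stored as an indexed String. As an illustration of the underlying idea only (not the library's actual, denser bit layout), the following toy Java codec packs 7 data bits into each output char; the class and method names are hypothetical, and the caller-supplied byte count stands in for the real GetDecodedLength bookkeeping:

```java
public final class BinaryStringToy {
    // Encode: spread each input bit across chars holding 7 data bits apiece.
    public static char[] encode(byte[] in) {
        int nbits = in.length * 8;
        char[] out = new char[(nbits + 6) / 7];
        for (int bit = 0; bit < nbits; bit++) {
            if ((in[bit >> 3] & (1 << (bit & 7))) != 0) {
                out[bit / 7] |= (char) (1 << (bit % 7));
            }
        }
        return out;
    }

    // Decode: the caller supplies the original byte count, standing in
    // for the real class's GetDecodedLength bookkeeping.
    public static byte[] decode(char[] in, int numBytes) {
        byte[] out = new byte[numBytes];
        for (int bit = 0; bit < numBytes * 8; bit++) {
            if ((in[bit / 7] & (1 << (bit % 7))) != 0) {
                out[bit >> 3] |= (byte) (1 << (bit & 7));
            }
        }
        return out;
    }
}
```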
<member name="T:Lucene.Net.Util.MapOfSets`2">
<summary> Helper class for keeping sets of objects associated with keys. <b>WARNING: THIS CLASS IS NOT THREAD SAFE</b></summary>
</member>
<member name="M:Lucene.Net.Util.MapOfSets`2.#ctor(System.Collections.Generic.IDictionary{`0,System.Collections.Generic.HashSet{`1}})">
<param name="m">the backing store for this object
</param>
</member>
<member name="M:Lucene.Net.Util.MapOfSets`2.Put(`0,`1)">
<summary> Adds val to the Set associated with key in the Map. If key is not
already in the map, a new Set will first be created.
</summary>
<returns> the size of the Set associated with key once val is added to it.
</returns>
</member>
<member name="M:Lucene.Net.Util.MapOfSets`2.PutAll(`0,System.Collections.Generic.IEnumerable{`1})">
<summary> Adds multiple vals to the Set associated with key in the Map.
If key is not already in the map, a new Set will first be created.
</summary>
<returns> the size of the Set associated with key once the vals are added to it.
</returns>
</member>
<member name="P:Lucene.Net.Util.MapOfSets`2.Map">
<value> direct access to the map backing this object. </value>
</member>
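The Put contract above can be sketched with plain collections. This `put` is a hypothetical stand-in for MapOfSets.Put, shown in Java for brevity: create the set on first use, add the value, and report the set's size afterwards.

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public final class MapOfSetsSketch {
    // Adds val to the set for key, creating the set on first use,
    // and returns the set's size afterwards -- mirroring Put's contract.
    public static <K, V> int put(Map<K, Set<V>> map, K key, V val) {
        Set<V> set = map.computeIfAbsent(key, k -> new HashSet<>());
        set.add(val);
        return set.size();
    }
}
```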
<member name="T:Lucene.Net.Util.OpenBitSet">
<summary>An "open" BitSet implementation that allows direct access to the array of words
storing the bits.
<p/>
Unlike java.util.BitSet, the fact that bits are packed into an array of longs
is part of the interface. This allows efficient implementation of other algorithms
by someone other than the author. It also allows one to efficiently implement
alternate serialization or interchange formats.
<p/>
<c>OpenBitSet</c> is faster than <c>java.util.BitSet</c> in most operations
and *much* faster at calculating cardinality of sets and results of set operations.
It can also handle sets of larger cardinality (up to 64 * 2**32-1)
<p/>
The goals of <c>OpenBitSet</c> are the fastest implementation possible, and
maximum code reuse. Extra safety and encapsulation
may always be built on top, but if that's built in, the cost can never be removed (and
hence people re-implement their own version in order to get better performance).
If you want a "safe", totally encapsulated (and slower and limited) BitSet
class, use <c>java.util.BitSet</c>.
<p/>
<h3>Performance Results</h3>
Test system: Pentium 4, Sun Java 1.5_06 -server -Xbatch -Xmx64M
<br/>BitSet size = 1,000,000
<br/>Results are java.util.BitSet time divided by OpenBitSet time.
<table border="1">
<tr>
<th></th> <th>cardinality</th> <th>intersect_count</th> <th>union</th> <th>nextSetBit</th> <th>get</th> <th>iterator</th>
</tr>
<tr>
<th>50% full</th> <td>3.36</td> <td>3.96</td> <td>1.44</td> <td>1.46</td> <td>1.99</td> <td>1.58</td>
</tr>
<tr>
<th>1% full</th> <td>3.31</td> <td>3.90</td> <td> </td> <td>1.04</td> <td> </td> <td>0.99</td>
</tr>
</table>
<br/>
Test system: AMD Opteron, 64 bit linux, Sun Java 1.5_06 -server -Xbatch -Xmx64M
<br/>BitSet size = 1,000,000
<br/>Results are java.util.BitSet time divided by OpenBitSet time.
<table border="1">
<tr>
<th></th> <th>cardinality</th> <th>intersect_count</th> <th>union</th> <th>nextSetBit</th> <th>get</th> <th>iterator</th>
</tr>
<tr>
<th>50% full</th> <td>2.50</td> <td>3.50</td> <td>1.00</td> <td>1.03</td> <td>1.12</td> <td>1.25</td>
</tr>
<tr>
<th>1% full</th> <td>2.51</td> <td>3.49</td> <td> </td> <td>1.00</td> <td> </td> <td>1.02</td>
</tr>
</table>
</summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.#ctor(System.Int64)">
<summary>Constructs an OpenBitSet large enough to hold numBits.
</summary>
<param name="numBits">
</param>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.#ctor(System.Int64[],System.Int32)">
<summary>Constructs an OpenBitSet from an existing long[].
<br/>
The first 64 bits are in long[0],
with bit index 0 at the least significant bit, and bit index 63 at the most significant.
Given a bit index,
the word containing it is long[index/64], and it is at bit number index%64 within that word.
<p/>
numWords is the number of elements in the array that contain
set bits (non-zero longs).
numWords should be &lt;= bits.length, and
any existing words in the array at position &gt;= numWords should be zero.
</summary>
</member>
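The long[] layout described above (word = index/64, bit = index%64) is the heart of every OpenBitSet operation. A minimal Java sketch of set/get under that layout, with hypothetical names:

```java
public final class BitWords {
    // Set the bit at index: pick the word with index/64 (index >> 6),
    // and the bit within it with index%64 (index & 63).
    public static void set(long[] words, long index) {
        words[(int) (index >> 6)] |= 1L << (index & 63);
    }

    public static boolean get(long[] words, long index) {
        return (words[(int) (index >> 6)] & (1L << (index & 63))) != 0;
    }
}
```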
<member name="M:Lucene.Net.Util.OpenBitSet.Capacity">
<summary>Returns the current capacity in bits (1 greater than the index of the last bit) </summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.Size">
<summary> Returns the current capacity of this set. Included for
compatibility. This is *not* equal to <see cref="M:Lucene.Net.Util.OpenBitSet.Cardinality"/>
</summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.IsEmpty">
<summary>Returns true if there are no set bits </summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.Get(System.Int32)">
<summary>Returns true or false for the specified bit index. </summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.FastGet(System.Int32)">
<summary>Returns true or false for the specified bit index.
The index should be less than the OpenBitSet size
</summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.Get(System.Int64)">
<summary>Returns true or false for the specified bit index</summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.FastGet(System.Int64)">
<summary>Returns true or false for the specified bit index.
The index should be less than the OpenBitSet size.
</summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.GetBit(System.Int32)">
<summary>returns 1 if the bit is set, 0 if not.
The index should be less than the OpenBitSet size
</summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.Set(System.Int64)">
<summary>sets a bit, expanding the set size if necessary </summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.FastSet(System.Int32)">
<summary>Sets the bit at the specified index.
The index should be less than the OpenBitSet size.
</summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.FastSet(System.Int64)">
<summary>Sets the bit at the specified index.
The index should be less than the OpenBitSet size.
</summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.Set(System.Int64,System.Int64)">
<summary>Sets a range of bits, expanding the set size if necessary
</summary>
<param name="startIndex">lower index
</param>
<param name="endIndex">one-past the last bit to set
</param>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.FastClear(System.Int32)">
<summary>clears a bit.
The index should be less than the OpenBitSet size.
</summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.FastClear(System.Int64)">
<summary>clears a bit.
The index should be less than the OpenBitSet size.
</summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.Clear(System.Int64)">
<summary>clears a bit, allowing access beyond the current set size without changing the size.</summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.Clear(System.Int32,System.Int32)">
<summary>Clears a range of bits. Clearing past the end does not change the size of the set.
</summary>
<param name="startIndex">lower index
</param>
<param name="endIndex">one-past the last bit to clear
</param>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.Clear(System.Int64,System.Int64)">
<summary>Clears a range of bits. Clearing past the end does not change the size of the set.
</summary>
<param name="startIndex">lower index
</param>
<param name="endIndex">one-past the last bit to clear
</param>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.GetAndSet(System.Int32)">
<summary>Sets a bit and returns the previous value.
The index should be less than the OpenBitSet size.
</summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.GetAndSet(System.Int64)">
<summary>Sets a bit and returns the previous value.
The index should be less than the OpenBitSet size.
</summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.FastFlip(System.Int32)">
<summary>flips a bit.
The index should be less than the OpenBitSet size.
</summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.FastFlip(System.Int64)">
<summary>flips a bit.
The index should be less than the OpenBitSet size.
</summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.Flip(System.Int64)">
<summary>flips a bit, expanding the set size if necessary </summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.FlipAndGet(System.Int32)">
<summary>flips a bit and returns the resulting bit value.
The index should be less than the OpenBitSet size.
</summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.FlipAndGet(System.Int64)">
<summary>flips a bit and returns the resulting bit value.
The index should be less than the OpenBitSet size.
</summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.Flip(System.Int64,System.Int64)">
<summary>Flips a range of bits, expanding the set size if necessary
</summary>
<param name="startIndex">lower index
</param>
<param name="endIndex">one-past the last bit to flip
</param>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.Cardinality">
<returns> the number of set bits
</returns>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.IntersectionCount(Lucene.Net.Util.OpenBitSet,Lucene.Net.Util.OpenBitSet)">
<summary>Returns the popcount or cardinality of the intersection of the two sets.
Neither set is modified.
</summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.UnionCount(Lucene.Net.Util.OpenBitSet,Lucene.Net.Util.OpenBitSet)">
<summary>Returns the popcount or cardinality of the union of the two sets.
Neither set is modified.
</summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.AndNotCount(Lucene.Net.Util.OpenBitSet,Lucene.Net.Util.OpenBitSet)">
<summary>Returns the popcount or cardinality of "a and not b"
or "intersection(a, not(b))".
Neither set is modified.
</summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.XorCount(Lucene.Net.Util.OpenBitSet,Lucene.Net.Util.OpenBitSet)">
<summary>Returns the popcount or cardinality of the exclusive-or of the two sets.
Neither set is modified.
</summary>
</member>
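The four *Count methods above all reduce to a popcount over a word-wise combination of the two long[] arrays, which is why they are so much faster than materializing the result set first. A sketch of IntersectionCount under that assumption, using Long.bitCount:

```java
public final class PopCountSketch {
    // Popcount of the intersection: AND matching words, count bits, sum.
    // Neither input is modified; trailing words of the longer array cannot
    // intersect anything, so only the common prefix is scanned.
    public static long intersectionCount(long[] a, long[] b) {
        long count = 0;
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            count += Long.bitCount(a[i] & b[i]);
        }
        return count;
    }
}
```

UnionCount, AndNotCount, and XorCount follow the same shape with `|`, `& ~`, and `^` (plus handling of the non-overlapping tail for the latter three).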
<member name="M:Lucene.Net.Util.OpenBitSet.NextSetBit(System.Int32)">
<summary>Returns the index of the first set bit starting at the index specified.
-1 is returned if there are no more set bits.
</summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.NextSetBit(System.Int64)">
<summary>Returns the index of the first set bit starting at the index specified.
-1 is returned if there are no more set bits.
</summary>
</member>
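NextSetBit needs only one shift and one count-trailing-zeros per word: mask off the bits below the starting index, then scan forward for the first non-zero word. A sketch assuming the long[] word layout above:

```java
public final class NextSetBitSketch {
    // Returns the index of the first set bit at or after index, or -1.
    public static long nextSetBit(long[] words, long index) {
        int i = (int) (index >> 6);
        if (i >= words.length) return -1;
        long word = words[i] >>> (index & 63);   // drop bits below index
        if (word != 0) return index + Long.numberOfTrailingZeros(word);
        while (++i < words.length) {
            if (words[i] != 0) {
                return ((long) i << 6) + Long.numberOfTrailingZeros(words[i]);
            }
        }
        return -1;
    }
}
```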
<member name="M:Lucene.Net.Util.OpenBitSet.Intersect(Lucene.Net.Util.OpenBitSet)">
<summary>this = this AND other </summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.Union(Lucene.Net.Util.OpenBitSet)">
<summary>this = this OR other </summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.Remove(Lucene.Net.Util.OpenBitSet)">
<summary>Remove all elements set in other. this = this AND_NOT other </summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.Xor(Lucene.Net.Util.OpenBitSet)">
<summary>this = this XOR other </summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.Intersects(Lucene.Net.Util.OpenBitSet)">
<summary>returns true if the sets have any elements in common </summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.EnsureCapacityWords(System.Int32)">
<summary>Expand the long[] with the size given as a number of words (64 bit longs).
getNumWords() is unchanged by this call.
</summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.EnsureCapacity(System.Int64)">
<summary>Ensure that the long[] is big enough to hold numBits, expanding it if necessary.
getNumWords() is unchanged by this call.
</summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.TrimTrailingZeros">
<summary>Lowers numWords, the number of words in use,
by checking for trailing zero words.
</summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.Bits2words(System.Int64)">
<summary>returns the number of 64 bit words it would take to hold numBits </summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSet.Equals(System.Object)">
<summary>returns true if both sets have the same bits set </summary>
</member>
<member name="P:Lucene.Net.Util.OpenBitSet.IsCacheable">
<summary>This DocIdSet implementation is cacheable. </summary>
</member>
<member name="P:Lucene.Net.Util.OpenBitSet.Bits">
<summary>Expert: Gets or sets the long[] storing the bits </summary>
</member>
<member name="P:Lucene.Net.Util.OpenBitSet.NumWords">
<summary>Expert: gets or sets the number of longs in the array that are in use </summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSetDISI.#ctor(Lucene.Net.Search.DocIdSetIterator,System.Int32)">
<summary>Construct an OpenBitSetDISI with its bits set
from the doc ids of the given DocIdSetIterator.
Also give a maximum size one larger than the largest doc id for which a
bit may ever be set on this OpenBitSetDISI.
</summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSetDISI.#ctor(System.Int32)">
<summary>Construct an OpenBitSetDISI with no bits set, and a given maximum size
one larger than the largest doc id for which a bit may ever be set
on this OpenBitSetDISI.
</summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSetDISI.InPlaceOr(Lucene.Net.Search.DocIdSetIterator)">
<summary> Perform an inplace OR with the doc ids from a given DocIdSetIterator,
setting the bit for each such doc id.
These doc ids should be smaller than the maximum size passed to the
constructor.
</summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSetDISI.InPlaceAnd(Lucene.Net.Search.DocIdSetIterator)">
<summary> Perform an inplace AND with the doc ids from a given DocIdSetIterator,
leaving only the bits set for which the doc ids are in common.
These doc ids should be smaller than the maximum size passed to the
constructor.
</summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSetDISI.InPlaceNot(Lucene.Net.Search.DocIdSetIterator)">
<summary> Perform an inplace NOT with the doc ids from a given DocIdSetIterator,
clearing all the bits for each such doc id.
These doc ids should be smaller than the maximum size passed to the
constructor.
</summary>
</member>
<member name="M:Lucene.Net.Util.OpenBitSetDISI.InPlaceXor(Lucene.Net.Search.DocIdSetIterator)">
<summary> Perform an inplace XOR with the doc ids from a given DocIdSetIterator,
flipping all the bits for each such doc id.
These doc ids should be smaller than the maximum size passed to the
constructor.
</summary>
</member>
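Each InPlace* operation above walks the iterator's doc ids once and applies one bit operation per id. A sketch of InPlaceOr in Java, taking a plain int iterator in place of DocIdSetIterator (names are illustrative):

```java
import java.util.PrimitiveIterator;

public final class InPlaceOrSketch {
    // OR each doc id into the bit set; ids must be < 64 * bits.length,
    // matching the "smaller than the maximum size" contract above.
    public static void inPlaceOr(long[] bits, PrimitiveIterator.OfInt docs) {
        while (docs.hasNext()) {
            int doc = docs.nextInt();
            bits[doc >> 6] |= 1L << (doc & 63);
        }
    }
}
```

InPlaceAnd is the odd one out: it must first collect the iterator's ids into a scratch set and then AND, since ids absent from the iterator clear bits.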
<member name="T:Lucene.Net.Util.OpenBitSetIterator">
<summary>An iterator to iterate over set bits in an OpenBitSet.
This is faster than nextSetBit() for iterating over the complete set of bits,
especially when the density of the bits set is high.
</summary>
</member>
<member name="F:Lucene.Net.Util.OpenBitSetIterator.arr">
<summary> The Python code that generated the bit list:
<code>
def bits2int(val):
    arr = 0
    for shift in range(8, 0, -1):
        if val &amp; 0x80:
            arr = (arr &lt;&lt; 4) | shift
        val = val &lt;&lt; 1
    return arr

def int_table():
    tbl = [hex(bits2int(val)).strip('L') for val in range(256)]
    return ','.join(tbl)
</code>
</summary>
</member>
<member name="T:Lucene.Net.Util.Cache.Cache`2">
<summary> Base class for cache implementations.</summary>
</member>
<member name="M:Lucene.Net.Util.Cache.Cache`2.SynchronizedCache(Lucene.Net.Util.Cache.Cache{`0,`1})">
<summary> Returns a thread-safe cache backed by the specified cache.
In order to guarantee thread-safety, all access to the backed cache must
be accomplished through the returned cache.
</summary>
</member>
<member name="M:Lucene.Net.Util.Cache.Cache`2.GetSynchronizedCache">
<summary> Called by <see cref="M:Lucene.Net.Util.Cache.Cache`2.SynchronizedCache(Lucene.Net.Util.Cache.Cache{`0,`1})"/>. This method
returns a <see cref="M:Lucene.Net.Util.Cache.Cache`2.SynchronizedCache(Lucene.Net.Util.Cache.Cache{`0,`1})"/> instance that wraps
this instance by default and can be overridden to return
e.g. subclasses of <see cref="M:Lucene.Net.Util.Cache.Cache`2.SynchronizedCache(Lucene.Net.Util.Cache.Cache{`0,`1})"/> or this
in case this cache is already synchronized.
</summary>
</member>
<member name="M:Lucene.Net.Util.Cache.Cache`2.Put(`0,`1)">
<summary> Puts a (key, value)-pair into the cache. </summary>
</member>
<member name="M:Lucene.Net.Util.Cache.Cache`2.Get(System.Object)">
<summary> Returns the value for the given key. </summary>
</member>
<member name="M:Lucene.Net.Util.Cache.Cache`2.ContainsKey(System.Object)">
<summary> Returns whether the given key is in this cache. </summary>
</member>
<member name="M:Lucene.Net.Util.Cache.Cache`2.Close">
<summary> Closes the cache.</summary>
</member>
<member name="T:Lucene.Net.Util.Cache.Cache`2.SynchronizedCache_Renamed_Class">
<summary> Simple Cache wrapper that synchronizes all
calls that access the cache.
</summary>
</member>
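The SynchronizedCache wrapper pattern above is straightforward: hold the backing cache and guard every call with a single mutex. A Java sketch with hypothetical names (a plain Map stands in for the Cache base class):

```java
import java.util.Map;

public final class SyncCacheSketch<K, V> {
    private final Map<K, V> backing;
    private final Object mutex = new Object();

    public SyncCacheSketch(Map<K, V> backing) {
        this.backing = backing;
    }

    // Every access goes through the same mutex, so the wrapper is only
    // safe if all callers use it instead of the backing map directly --
    // the same caveat the summary above states.
    public void put(K key, V value) {
        synchronized (mutex) { backing.put(key, value); }
    }

    public V get(K key) {
        synchronized (mutex) { return backing.get(key); }
    }
}
```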
<member name="T:Lucene.Net.Util.Cache.SimpleMapCache`2">
<summary> Simple cache implementation that uses a HashMap to store (key, value) pairs.
This cache is not synchronized, use <see cref="M:Lucene.Net.Util.Cache.Cache`2.SynchronizedCache(Lucene.Net.Util.Cache.Cache{`0,`1})"/>
if needed.
</summary>
</member>
<member name="M:Lucene.Net.Util.Cache.SimpleMapCache`2.KeySet">
<summary> Returns a Set containing all keys in this cache.</summary>
</member>
<member name="F:Lucene.Net.Util.Cache.SimpleLRUCache`2.capacity">
<summary>
The maximum number of items to cache.
</summary>
</member>
<member name="F:Lucene.Net.Util.Cache.SimpleLRUCache`2.list">
<summary>
The list to efficiently maintain the LRU state.
</summary>
</member>
<member name="F:Lucene.Net.Util.Cache.SimpleLRUCache`2.lookup">
<summary>
The dictionary to hash into any location in the list.
</summary>
</member>
<member name="F:Lucene.Net.Util.Cache.SimpleLRUCache`2.openNode">
<summary>
The node instance to use/re-use when adding an item to the cache.
</summary>
</member>
<member name="T:Lucene.Net.Util.Cache.SimpleLRUCache`2.ListValueEntry`2">
<summary>
Container to hold the key and value to aid in removal from
the <see cref="F:Lucene.Net.Util.Cache.SimpleLRUCache`2.lookup"/> dictionary when an item is removed from cache.
</summary>
</member>
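SimpleLRUCache pairs a list (recency order) with a dictionary (O(1) lookup). In Java the same behavior can be sketched with an access-ordered LinkedHashMap, whose internal linked list plays the role of the `list` field above; the capacity handling here is illustrative:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public final class LruSketch {
    // accessOrder = true moves entries to the tail on get(); evicting the
    // head (eldest) when over capacity gives LRU semantics.
    public static <K, V> Map<K, V> lruCache(int capacity) {
        return new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity;
            }
        };
    }
}
```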
<member name="T:Lucene.Net.Util.RamUsageEstimator">
<summary> Estimates the size of a given Object using a given MemoryModel for primitive
size information.
Resource Usage:
Internally uses a Map to temporarily hold a reference to every
object seen.
If checkInterned, all Strings checked will be interned, but those
that were not already interned will be released for GC when the
estimate is complete.
</summary>
</member>
<member name="M:Lucene.Net.Util.RamUsageEstimator.#ctor">
<summary> Constructs this object with an AverageGuessMemoryModel and
checkInterned = true.
</summary>
</member>
<member name="M:Lucene.Net.Util.RamUsageEstimator.#ctor(System.Boolean)">
<param name="checkInterned">check if Strings are interned and don't add to size
if they are. Defaults to true but if you know the objects you are checking
won't likely contain many interned Strings, it will be faster to turn off
intern checking.
</param>
</member>
<member name="M:Lucene.Net.Util.RamUsageEstimator.#ctor(Lucene.Net.Util.MemoryModel)">
<param name="memoryModel">MemoryModel to use for primitive object sizes.
</param>
</member>
<member name="M:Lucene.Net.Util.RamUsageEstimator.#ctor(Lucene.Net.Util.MemoryModel,System.Boolean)">
<param name="memoryModel">MemoryModel to use for primitive object sizes.
</param>
<param name="checkInterned">check if Strings are interned and don't add to size
if they are. Defaults to true but if you know the objects you are checking
won't likely contain many interned Strings, it will be faster to turn off
intern checking.
</param>
</member>
<member name="M:Lucene.Net.Util.RamUsageEstimator.HumanReadableUnits(System.Int64,System.IFormatProvider)">
<summary> Return good default units based on byte size.</summary>
</member>
<member name="T:Lucene.Net.Util.ReaderUtil">
<summary>
Common util methods for dealing with <see cref="T:Lucene.Net.Index.IndexReader"/>s.
</summary>
</member>
<member name="M:Lucene.Net.Util.ReaderUtil.GatherSubReaders(System.Collections.Generic.IList{Lucene.Net.Index.IndexReader},Lucene.Net.Index.IndexReader)">
<summary>Gathers sub-readers from reader into a List.</summary>
<param name="allSubReaders"></param>
<param name="reader"></param>
</member>
<member name="M:Lucene.Net.Util.ReaderUtil.SubReader(System.Int32,Lucene.Net.Index.IndexReader)">
<summary> Returns sub IndexReader that contains the given document id.
</summary>
<param name="doc">id of document
</param>
<param name="reader">parent reader
</param>
<returns> sub reader of parent which contains the specified doc id
</returns>
</member>
<member name="M:Lucene.Net.Util.ReaderUtil.SubReader(Lucene.Net.Index.IndexReader,System.Int32)">
<summary> Returns sub-reader subIndex from reader.
</summary>
<param name="reader">parent reader
</param>
<param name="subIndex">index of desired sub reader
</param>
<returns> the sub-reader at subIndex
</returns>
</member>
<member name="M:Lucene.Net.Util.ReaderUtil.SubIndex(System.Int32,System.Int32[])">
<summary> Returns index of the searcher/reader for document <c>n</c> in the
array used to construct this searcher/reader.
</summary>
</member>
<member name="T:Lucene.Net.Util.ScorerDocQueue">
<summary>A ScorerDocQueue maintains a partial ordering of its Scorers such that the
least Scorer can always be found in constant time. Put() and Pop()
require log(size) time. The ordering is by Scorer.doc().
</summary>
</member>
<member name="M:Lucene.Net.Util.ScorerDocQueue.#ctor(System.Int32)">
<summary>Create a ScorerDocQueue with a maximum size. </summary>
</member>
<member name="M:Lucene.Net.Util.ScorerDocQueue.Put(Lucene.Net.Search.Scorer)">
<summary> Adds a Scorer to a ScorerDocQueue in log(size) time.
If one tries to add more Scorers than maxSize,
a RuntimeException (ArrayIndexOutOfBounds) is thrown.
</summary>
</member>
<member name="M:Lucene.Net.Util.ScorerDocQueue.Insert(Lucene.Net.Search.Scorer)">
<summary> Adds a Scorer to the ScorerDocQueue in log(size) time if either
the ScorerDocQueue is not full, or not lessThan(scorer, top()).
</summary>
<param name="scorer">
</param>
<returns> true if scorer is added, false otherwise.
</returns>
</member>
<member name="M:Lucene.Net.Util.ScorerDocQueue.Top">
<summary>Returns the least Scorer of the ScorerDocQueue in constant time.
Should not be used when the queue is empty.
</summary>
</member>
<member name="M:Lucene.Net.Util.ScorerDocQueue.TopDoc">
<summary>Returns document number of the least Scorer of the ScorerDocQueue
in constant time.
Should not be used when the queue is empty.
</summary>
</member>
<member name="M:Lucene.Net.Util.ScorerDocQueue.Pop">
<summary>Removes and returns the least scorer of the ScorerDocQueue in log(size)
time.
Should not be used when the queue is empty.
</summary>
</member>
<member name="M:Lucene.Net.Util.ScorerDocQueue.PopNoResult">
<summary>Removes the least scorer of the ScorerDocQueue in log(size) time.
Should not be used when the queue is empty.
</summary>
</member>
<member name="M:Lucene.Net.Util.ScorerDocQueue.AdjustTop">
<summary>Should be called when the scorer at top changes doc() value.
Still log(n) worst case, but it's at least twice as fast to call <c>
{ pq.top().change(); pq.adjustTop(); }
</c> rather than <c>
{ o = pq.pop(); o.change(); pq.push(o); }
</c>
</summary>
</summary>
</member>
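AdjustTop saves work because only the root changed: a single sift-down restores heap order instead of a full pop-and-reinsert. A minimal int min-heap sketch of that idea (ints standing in for Scorer doc values; names are hypothetical):

```java
public final class DocHeapSketch {
    private final int[] heap;   // 1-based storage, heap[1] is the least value
    private int size;

    public DocHeapSketch(int maxSize) { heap = new int[maxSize + 1]; }

    public void put(int v) { heap[++size] = v; upHeap(); }
    public int top()       { return heap[1]; }

    // Replace the least value in place (e.g. the top scorer advanced).
    public void setTop(int v) { heap[1] = v; adjustTop(); }

    public int pop() {
        int result = heap[1];
        heap[1] = heap[size--];
        adjustTop();
        return result;
    }

    // Call after the top value changes: one log(size) sift-down,
    // roughly half the work of pop() followed by put().
    public void adjustTop() {
        int i = 1, v = heap[i];
        while (true) {
            int c = i << 1;                                // left child
            if (c > size) break;
            if (c + 1 <= size && heap[c + 1] < heap[c]) c++; // lesser child
            if (heap[c] >= v) break;
            heap[i] = heap[c];
            i = c;
        }
        heap[i] = v;
    }

    private void upHeap() {
        int i = size, v = heap[i];
        while (i > 1 && v < heap[i >> 1]) { heap[i] = heap[i >> 1]; i >>= 1; }
        heap[i] = v;
    }
}
```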
<member name="M:Lucene.Net.Util.ScorerDocQueue.Size">
<summary>Returns the number of scorers currently stored in the ScorerDocQueue. </summary>
</member>
<member name="M:Lucene.Net.Util.ScorerDocQueue.Clear">
<summary>Removes all entries from the ScorerDocQueue. </summary>
</member>
<member name="T:Lucene.Net.Util.SimpleStringInterner">
<summary> Simple lockless and memory barrier free String intern cache that is guaranteed
to return the same String instance as String.intern() does.
</summary>
</member>
<member name="T:Lucene.Net.Util.StringInterner">
<summary> Subclasses of StringInterner are required to
return the same single String object for all equal strings.
Depending on the implementation, this may not be
the same object returned as String.intern().
This StringInterner base class simply delegates to String.intern().
</summary>
</member>
<member name="M:Lucene.Net.Util.StringInterner.Intern(System.String)">
<summary>Returns a single object instance for each equal string. </summary>
</member>
<member name="M:Lucene.Net.Util.StringInterner.Intern(System.Char[],System.Int32,System.Int32)">
<summary>Returns a single object instance for each equal string. </summary>
</member>
<member name="M:Lucene.Net.Util.SimpleStringInterner.#ctor(System.Int32,System.Int32)">
<param name="tableSize"> Size of the hash table, should be a power of two.
</param>
<param name="maxChainLength"> Maximum length of each bucket, after which the oldest item inserted is dropped.
</param>
</member>
<member name="T:Lucene.Net.Util.SmallFloat">
<summary>Floating point numbers smaller than 32 bits.
</summary>
</member>
<member name="M:Lucene.Net.Util.SmallFloat.FloatToByte(System.Single,System.Int32,System.Int32)">
<summary>Converts a 32 bit float to an 8 bit float.
<br/>Values less than zero are all mapped to zero.
<br/>Values are truncated (rounded down) to the nearest 8 bit value.
<br/>Values between zero and the smallest representable value
are rounded up.
</summary>
<param name="f">the 32 bit float to be converted to an 8 bit float (byte)
</param>
<param name="numMantissaBits">the number of mantissa bits to use in the byte, with the remainder to be used in the exponent
</param>
<param name="zeroExp">the zero-point in the range of exponent values
</param>
<returns> the 8 bit float representation
</returns>
</member>
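The truncating conversion described above can be done directly on the IEEE-754 bit pattern: shift the mantissa down to numMantissaBits and rebase the exponent at zeroExp. The Java sketch below reconstructs that scheme from the documented behavior (boundary handling is a best-effort reading, not the library's exact source); it reproduces the documented byte315 extremes:

```java
public final class SmallFloatSketch {
    // Truncate a 32-bit float to an 8-bit float with numMantissaBits
    // mantissa bits; zeroExp positions the representable exponent range.
    public static byte floatToByte(float f, int numMantissaBits, int zeroExp) {
        int fzero = (63 - zeroExp) << numMantissaBits;
        int bits = Float.floatToRawIntBits(f);
        int smallfloat = bits >> (24 - numMantissaBits);
        if (smallfloat <= fzero) {
            // negatives map to 0; positive underflow rounds up to the
            // smallest representable value, as documented above
            return (bits <= 0) ? (byte) 0 : (byte) 1;
        }
        if (smallfloat >= fzero + 0x100) {
            return -1;   // overflow: clamp to the largest representable value
        }
        return (byte) (smallfloat - fzero);
    }

    public static float byteToFloat(byte b, int numMantissaBits, int zeroExp) {
        if (b == 0) return 0.0f;
        int bits = (b & 0xff) << (24 - numMantissaBits);
        bits += (63 - zeroExp) << 24;
        return Float.intBitsToFloat(bits);
    }
}
```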
<member name="M:Lucene.Net.Util.SmallFloat.ByteToFloat(System.Byte,System.Int32,System.Int32)">
<summary>Converts an 8 bit float to a 32 bit float. </summary>
</member>
<member name="M:Lucene.Net.Util.SmallFloat.FloatToByte315(System.Single)">
<summary>floatToByte(b, mantissaBits=3, zeroExponent=15)
<br/>smallest non-zero value = 5.820766E-10
<br/>largest value = 7.5161928E9
<br/>epsilon = 0.125
</summary>
</member>
<member name="M:Lucene.Net.Util.SmallFloat.Byte315ToFloat(System.Byte)">
<summary>byteToFloat(b, mantissaBits=3, zeroExponent=15) </summary>
</member>
<member name="M:Lucene.Net.Util.SmallFloat.FloatToByte52(System.Single)">
<summary>floatToByte(b, mantissaBits=5, zeroExponent=2)
<br/>smallest nonzero value = 0.033203125
<br/>largest value = 1984.0
<br/>epsilon = 0.03125
</summary>
</member>
<member name="M:Lucene.Net.Util.SmallFloat.Byte52ToFloat(System.Byte)">
<summary>byteToFloat(b, mantissaBits=5, zeroExponent=2) </summary>
</member>
<member name="T:Lucene.Net.Util.SortedVIntList">
<summary> Stores and iterates over sorted integers in compressed form in RAM. <br/>
The code for compressing the differences between ascending integers was
borrowed from <see cref="T:Lucene.Net.Store.IndexInput"/> and
<see cref="T:Lucene.Net.Store.IndexOutput"/>.<p/>
<b>NOTE:</b> this class assumes the stored integers are doc Ids (which is why it
extends <see cref="T:Lucene.Net.Search.DocIdSet"/>). Therefore its <see cref="M:Lucene.Net.Util.SortedVIntList.Iterator"/> assumes <see cref="F:Lucene.Net.Search.DocIdSetIterator.NO_MORE_DOCS"/>
can be used as a sentinel. If you intend to use
this value, then make sure it's not used during search flow.
</summary>
</member>
<member name="F:Lucene.Net.Util.SortedVIntList.BITS2VINTLIST_SIZE">
<summary>When fewer than 1 in BITS2VINTLIST_SIZE of the bits of a BitSet are set,
a SortedVIntList representing the index numbers of the set bits
will be smaller than that BitSet.
</summary>
</member>
<member name="M:Lucene.Net.Util.SortedVIntList.#ctor(System.Int32[])">
<summary> Create a SortedVIntList from all elements of an array of integers.
</summary>
<param name="sortedInts"> A sorted array of non negative integers.
</param>
</member>
<member name="M:Lucene.Net.Util.SortedVIntList.#ctor(System.Int32[],System.Int32)">
<summary> Create a SortedVIntList from an array of integers.</summary>
<param name="sortedInts"> An array of sorted non negative integers.
</param>
<param name="inputSize"> The number of integers to be used from the array.
</param>
</member>
<member name="M:Lucene.Net.Util.SortedVIntList.#ctor(System.Collections.BitArray)">
<summary> Create a SortedVIntList from a BitSet.</summary>
<param name="bits"> A bit set representing a set of integers.
</param>
</member>
<member name="M:Lucene.Net.Util.SortedVIntList.#ctor(Lucene.Net.Util.OpenBitSet)">
<summary> Create a SortedVIntList from an OpenBitSet.</summary>
<param name="bits"> A bit set representing a set of integers.
</param>
</member>
<member name="M:Lucene.Net.Util.SortedVIntList.#ctor(Lucene.Net.Search.DocIdSetIterator)">
<summary> Create a SortedVIntList.</summary>
<param name="docIdSetIterator"> An iterator providing document numbers as a set of integers.
This DocIdSetIterator is iterated completely when this constructor
is called and it must provide the integers in non
decreasing order.
</param>
</member>
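The compression borrowed from IndexInput/IndexOutput is the classic VInt scheme: store the difference to the previous integer, seven bits per byte, high bit as a continuation flag. A self-contained Java sketch of that encoding over a sorted id list:

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;

public final class VIntDeltaSketch {
    // Encode a sorted array of non-negative ints as VInt-coded deltas.
    public static byte[] encode(int[] sortedInts) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int last = 0;
        for (int v : sortedInts) {
            int delta = v - last;
            last = v;
            while ((delta & ~0x7F) != 0) {         // more than 7 bits remain
                out.write((delta & 0x7F) | 0x80);  // payload + continuation bit
                delta >>>= 7;
            }
            out.write(delta);                      // final byte, high bit clear
        }
        return out.toByteArray();
    }

    public static List<Integer> decode(byte[] bytes) {
        List<Integer> result = new ArrayList<>();
        int last = 0;
        for (int i = 0; i < bytes.length; ) {
            int delta = 0, shift = 0;
            byte b;
            do {
                b = bytes[i++];
                delta |= (b & 0x7F) << shift;
                shift += 7;
            } while ((b & 0x80) != 0);
            last += delta;
            result.add(last);
        }
        return result;
    }
}
```

Small deltas (the common case for dense doc id lists) take a single byte, which is where the size win over a BitSet comes from.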
<member name="M:Lucene.Net.Util.SortedVIntList.Iterator">
<returns> An iterator over the sorted integers.
</returns>
</member>
<member name="P:Lucene.Net.Util.SortedVIntList.Size">
<value> The total number of sorted integers. </value>
</member>
<member name="P:Lucene.Net.Util.SortedVIntList.ByteSize">
<value> The size of the byte array storing the compressed sorted integers. </value>
</member>
<member name="P:Lucene.Net.Util.SortedVIntList.IsCacheable">
<summary>This DocIdSet implementation is cacheable. </summary>
</member>
<member name="T:Lucene.Net.Util.SorterTemplate">
<summary> Borrowed from Cglib. Allows custom swap so that two arrays can be sorted
at the same time.
</summary>
</member>
<member name="T:Lucene.Net.Util.StringHelper">
<summary> Methods for manipulating strings.</summary>
</member>
<member name="F:Lucene.Net.Util.StringHelper.interner">
<summary> Expert:
The StringInterner implementation used by Lucene.
This shouldn't be changed to an incompatible implementation after other Lucene APIs have been used.
</summary>
</member>
<member name="M:Lucene.Net.Util.StringHelper.Intern(System.String)">
<summary>Return the same string object for all equal strings </summary>
</member>
<member name="M:Lucene.Net.Util.StringHelper.BytesDifference(System.Byte[],System.Int32,System.Byte[],System.Int32)">
<summary> Compares two byte[] arrays, element by element, and returns the
number of elements common to both arrays.
</summary>
<param name="bytes1">The first byte[] to compare
</param>
<param name="len1"></param>
<param name="bytes2">The second byte[] to compare
</param>
<param name="len2"></param>
<returns> The number of common elements.
</returns>
</member>
<member name="M:Lucene.Net.Util.StringHelper.StringDifference(System.String,System.String)">
<summary> Compares two strings, character by character, and returns the
first position where the two strings differ from one another.
</summary>
<param name="s1">The first string to compare
</param>
<param name="s2">The second string to compare
</param>
<returns> The first position where the two strings differ.
</returns>
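<example>
A minimal usage sketch (hypothetical values): "lucene" and "luke" share the
prefix "lu" and first differ at index 2.
<code>
int pos = Lucene.Net.Util.StringHelper.StringDifference("lucene", "luke"); // 2
</code>
</example>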
</member>
<member name="T:Lucene.Net.Util.ToStringUtils">
<summary> Helper methods to ease implementing <see cref="M:System.Object.ToString"/>.</summary>
</member>
<member name="M:Lucene.Net.Util.ToStringUtils.Boost(System.Single)">
<summary>For printing the boost: returns a "^boost" suffix when the boost is not 1.0, and an empty string otherwise. </summary>
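<example>
A hedged usage sketch (the exact numeric formatting of the suffix is an
implementation detail, so no literal output is shown):
<code>
// A non-default boost yields a "^..." suffix suitable for appending to a query's ToString().
string suffix = Lucene.Net.Util.ToStringUtils.Boost(2.0f);
// The default boost yields an empty string, so ToString() output stays clean.
string none = Lucene.Net.Util.ToStringUtils.Boost(1.0f); // ""
</code>
</example>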
</member>
<member name="T:Lucene.Net.Util.UnicodeUtil">
<summary> Class to encode a UTF-16 char[] into a UTF-8 byte[]
without always allocating a new byte[], as Java's
String.getBytes("UTF-8") does.
<p/><b>WARNING</b>: This API is new and experimental and
may suddenly change. <p/>
</summary>
</member>
<member name="M:Lucene.Net.Util.UnicodeUtil.UTF16toUTF8(System.Char[],System.Int32,Lucene.Net.Util.UnicodeUtil.UTF8Result)">
<summary>Encode characters from a char[] source, starting at
offset and stopping when the character 0xffff is seen.
Returns the number of bytes written to bytesOut.
</summary>
</member>
<member name="M:Lucene.Net.Util.UnicodeUtil.UTF16toUTF8(System.Char[],System.Int32,System.Int32,Lucene.Net.Util.UnicodeUtil.UTF8Result)">
<summary>Encode characters from a char[] source, starting at
offset for length chars. Returns the number of bytes
written to bytesOut.
</summary>
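<example>
A minimal encoding sketch (hypothetical usage; the UTF8Result field names below
are assumed to follow the Java original and reused buffers are not shown):
<code>
char[] source = "hello".ToCharArray();
var utf8 = new Lucene.Net.Util.UnicodeUtil.UTF8Result();
// Encode the whole char[] into the reusable result object instead of allocating a fresh byte[].
Lucene.Net.Util.UnicodeUtil.UTF16toUTF8(source, 0, source.Length, utf8);
// utf8.result holds the encoded bytes; utf8.length is the number of valid bytes.
</code>
</example>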
</member>
<member name="M:Lucene.Net.Util.UnicodeUtil.UTF16toUTF8(System.String,System.Int32,System.Int32,Lucene.Net.Util.UnicodeUtil.UTF8Result)">
<summary>Encode characters from this String, starting at offset
for length characters. Returns the number of bytes
written to bytesOut.
</summary>
</member>
<member name="M:Lucene.Net.Util.UnicodeUtil.UTF8toUTF16(System.Byte[],System.Int32,System.Int32,Lucene.Net.Util.UnicodeUtil.UTF16Result)">
<summary>Convert UTF-8 bytes into UTF-16 characters. If offset
is non-zero, conversion starts at that point in
utf8, re-using the results of the previous call
up to offset.
</summary>
</member>
<member name="T:Lucene.Net.Util.Version">
<summary> Used by certain classes to match version compatibility
across releases of Lucene.
<p/>
<b>WARNING</b>: When changing the version parameter
that you supply to components in Lucene, do not simply
change the version at search-time, but instead also adjust
your indexing code to match, and re-index.
</summary>
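<example>
A minimal usage sketch (assumes StandardAnalyzer from the
Lucene.Net.Analysis.Standard namespace): pinning an explicit version keeps
analysis behavior stable across library upgrades.
<code>
// Pin analysis to the 3.0 rules so a later upgrade does not silently change tokenization.
var analyzer = new Lucene.Net.Analysis.Standard.StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_30);
</code>
</example>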
</member>
<member name="F:Lucene.Net.Util.Version.LUCENE_20">
<summary>Match settings and bugs in Lucene's 2.0 release.</summary>
</member>
<member name="F:Lucene.Net.Util.Version.LUCENE_21">
<summary>Match settings and bugs in Lucene's 2.1 release. </summary>
</member>
<member name="F:Lucene.Net.Util.Version.LUCENE_22">
<summary>Match settings and bugs in Lucene's 2.2 release. </summary>
</member>
<member name="F:Lucene.Net.Util.Version.LUCENE_23">
<summary>Match settings and bugs in Lucene's 2.3 release.</summary>
</member>
<member name="F:Lucene.Net.Util.Version.LUCENE_24">
<summary>Match settings and bugs in Lucene's 2.4 release.</summary>
</member>
<member name="F:Lucene.Net.Util.Version.LUCENE_29">
<summary>Match settings and bugs in Lucene's 2.9 release.</summary>
</member>
<member name="F:Lucene.Net.Util.Version.LUCENE_30">
<summary>
Match settings and bugs in Lucene's 3.0 release.
<para>
Use this to get the latest and greatest settings, bug fixes,
etc., for Lucene.
</para>
</summary>
</member>
<member name="F:Lucene.Net.Util.Version.LUCENE_CURRENT">
<summary>
<p/><b>WARNING</b>: if you use this setting, and then
upgrade to a newer release of Lucene, sizable changes
may happen. If precise back compatibility is important
then you should instead explicitly specify an actual
version.
If you use this constant then you may need to
<b>re-index all of your documents</b> when upgrading
Lucene, as the way text is indexed may have changed.
Additionally, you may need to <b>re-test your entire
application</b> to ensure it behaves as expected, as
some defaults may have changed and may break functionality
in your application.
</summary>
</member>
</members>
</doc>