When indexer tries to insert a new URL into the database, or tries to index an existing one, it first checks whether this URL has a corresponding Server, Realm or Subnet command given in indexer.conf. URLs without a corresponding Server, Realm or Subnet command are not indexed. By default, URLs which are already in the database but have no Server/Realm/Subnet command will be deleted from the database. This may happen, for example, after removing some Server/Realm/Subnet commands from indexer.conf.
These commands have the following format:
<command> [method] [subsection] [CaseType] [MatchType] [CmpType] pattern [alias]
pattern specifies a URL, a part of a URL, or a pattern to compare.
method specifies the document action for this command. It may take one of the following values: Allow, Disallow, HrefOnly, CheckOnly, Skip, CheckMP3, CheckMP3Only. By default, the value Allow is used.
The value Allow specifies that all corresponding documents will be indexed and scanned for new links. Depending on the Content-Type, the appropriate external parser is executed if needed.
The value Disallow specifies that all corresponding documents will be ignored and deleted from the database, if they were placed into it before.
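For instance, a sketch of combining Allow and Disallow (the host name and patterns are illustrative, not taken from this text):

```
# Index the whole site, but ignore GIF and JPEG images
Server http://www.site.ext/
Realm Disallow *.gif
Realm Disallow *.jpg
```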
The value HrefOnly specifies that all corresponding documents will only be scanned for new links (not indexed). This is useful, for example, for getting new documents from a feed, when the feed page is only scanned to detect new messages for indexing:
Server HrefOnly Page http://www.site.ext/feed.xml
Server Allow Path http://www.site.ext/
The value CheckOnly specifies that all corresponding documents will be requested with an HTTP HEAD request rather than HTTP GET, i.e. only brief info about the documents (size, last modified, content type) will be fetched. This allows, for example, checking links on your site:
Server HrefOnly http://www.dataparksearch.org/
Realm CheckOnly *
These commands instruct indexer to scan all documents on the www.dataparksearch.org site and collect all links. Brief info about every document found will be requested by the HEAD method. After indexing is done, the indexer -S command will show the status of all documents from this site.
The value Skip specifies that all corresponding documents will be skipped while indexing. This is useful when you need to temporarily disable reindexing of several sites while still being able to search them. These documents will be marked as expired.
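A minimal sketch (the host name is illustrative): temporarily stop reindexing one site while keeping its documents searchable:

```
# Temporarily skip reindexing of this site; its documents stay searchable
Server Skip http://www.site.ext/
```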
The value CheckMP3 specifies that corresponding documents will be checked for MP3 tags even if their Content-Type is not equal to audio/mpeg. This is useful, for example, if a remote server supplies application/octet-stream as the Content-Type for MP3 files. If such a tag is present, these files will be indexed as MP3 files; otherwise they will be processed according to their Content-Type.
The value CheckMP3Only is equal to CheckMP3, but if an MP3 tag is not present, the document will not be processed according to its Content-Type.
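For example (the host name is illustrative), to check files for MP3 tags even when they are served with a generic Content-Type:

```
# Check documents from this server for MP3 tags even when the
# reported Content-Type is not audio/mpeg
Server CheckMP3 http://mp3.site.ext/
```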
The optional subsection parameter specifies the server's checking behavior. The subsection value must be one of the following: nofollow, page, path, site, world; the default value is path.
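A sketch of how the subsection keyword changes matching (host and paths are illustrative):

```
# path: match only documents under /docs/ on this host
Server path http://www.site.ext/docs/
# site: match any document on this host, whatever the path
Server site http://www.site.ext/
# page: match only this exact URL
Server page http://www.site.ext/faq.html
```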
When indexer looks for a "Server" command corresponding to a URL, it checks that the discovered URL starts with the URL given in the Server command argument, but without the trailing file name. For example, if Server path http://localhost/path/to/index.html is given, all URLs which have http://localhost/path/to/ at the beginning correspond to this Server command.
The following commands have the same effect except that they insert different URLs into database:
Server path http://localhost/path/to/index.html
Server path http://localhost/path/to/index
Server path http://localhost/path/to/index.cgi?q=bla
Server path http://localhost/path/to/index?q=bla
With the site subsection, indexer checks that the discovered URL has the same hostname as the URL given in the Server command. For example, Server site http://localhost/path/to/a.html will allow indexing of the whole http://localhost/ server.
If the world subsection is specified in a Server command, any URL is considered to match this Server command. See the explanation below.
The page subsection describes only the single URL given in the Server argument.
The nofollow subsection skips following links for URLs that match the pattern.
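A sketch (the host and path are illustrative): index matching pages but do not follow links found on them:

```
# Index pages under /archive/ but do not follow links found there
Server nofollow http://www.site.ext/archive/
```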
Subsection in the news:// schema
The subsection is always considered to be "site" for the news:// URL schema. This is because the news:// schema has no nested paths, unlike ftp:// or http://. Use Server news://news.server.com/ to index a whole news server, or, for example, Server news://news.server.com/udm to index all messages from the "udm" hierarchy.
The optional parameter CaseType specifies the case sensitivity for string comparison; it can take one of the following values: case for case insensitive comparison, or nocase for case sensitive comparison.
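A sketch of the parameter's placement in a command (the host name is illustrative):

```
# Case insensitive string comparison for this pattern
Realm case String http://*.site.ext/*
```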
The optional parameter CmpType specifies the type of comparison and can take two values: Regex and String. String wildcards is the default match type. You can use the ? and * signs in the pattern; they mean "one character" and "any number of characters" respectively. Use the \ character to escape these characters in your patterns. For example, if you want to index all HTTP sites in the .ru domain, use this command:

Realm http://*.ru/*
The Regex comparison type takes a regular expression as its argument. Activate the regex comparison type using the Regex keyword. For example, you can describe everything in the .ru domain using the regex comparison type:

Realm Regex ^http://.*\.ru/
The optional parameter MatchType specifies the match type. The possible values are Match and NoMatch, with Match as the default. Realm NoMatch has the reverse effect: it means that a URL which does not match the given pattern will correspond to this Realm command. For example, use this command to index everything outside the .com domain:

Realm NoMatch http://*.com/*
The optional alias argument allows very complicated URL rewriting, more powerful than the other aliasing mechanisms. Take a look at Section 3.7> for details on alias argument usage. Alias works only with the Regex comparison type and has no effect with the String type.
This is the main command of the indexer.conf file. It is used to add servers or their parts to be indexed. This command also tells indexer to insert the given URL into the database at startup.
E.g. the command Server http://localhost/ allows indexing of the whole http://localhost/ server. It also makes indexer insert the given URL into the database at startup. You can also specify some path to index a server subsection: Server http://localhost/subsection/. It likewise tells indexer to insert the given URL at startup.
Note: You can suppress indexer's behavior of adding the URL given in a Server command by using the -q indexer command line argument. This is useful when you have hundreds or thousands of Server commands and their URLs are already in the database. It allows a quicker indexer startup.
The Realm command is a more powerful means of describing a web area to be indexed. It works almost like the Server command, but takes a regular expression or string wildcards as its pattern parameter, and does not insert any URL into the database for indexing.
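For instance (the domain is illustrative), a single Realm command can cover every virtual host of one domain:

```
# Index all servers under the site.ext domain
Realm http://*.site.ext/*
```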
The Subnet command is another way to describe a web area to be indexed. It works almost like the Server command, but takes string wildcards or a network specified in CIDR presentation format as its pattern argument, which is compared against the IP address instead of the URL. In the case of the string wildcards format, the argument may have ? and * signs; they mean "one character" and "any number of characters" respectively. For example, if you want to index all HTTP sites in your local subnet, use this command:

Subnet 192.168.*.*

In the case of a network specified in CIDR presentation format, you may specify the subnet in the forms: a.b.c.d/m, a.b.c, a.b, a.
You may use the "NoMatch" optional argument. For example, if you want to index everything outside the 195.x.x.x subnet, use:
Subnet NoMatch 195.*.*.*
Indexer looks for "Server" and "Realm" commands in order of their appearance. Thus, if you want to give different parameters to, e.g., a whole server and its subsection, you should add the subsection line before the whole server's line. Imagine that you have a server subdirectory which contains news articles. Surely those articles are to be reindexed more often than the rest of the server. The following combination may be useful in such cases:
# Add subsection
Period 200000
Server http://servername/news/
# Add server
Period 600000
Server http://servername/
These commands give a different reindexing period for the /news/ subdirectory compared with the period of the server as a whole. indexer will choose the first "Server" record for http://servername/news/page1.html, as it matches and was given first.
The default behavior of indexer is to follow links that have a corresponding Server/Realm command in the indexer.conf file. It also jumps between servers if both of them are present in indexer.conf, either directly in a Server command or indirectly in a Realm command. For example, there are two Server commands:
Server http://www/
Server http://web/
When indexing http://www/page1.html, indexer WILL follow the link http://web/page2.html if the latter has been found. Note that these pages are on different servers, but BOTH of them have a corresponding Server record.
If one of the Server commands is deleted, indexer will remove all expired URLs from that server during the next reindexing.
The third scheme is very useful for running indexer -i -f url.txt. You may maintain the required servers in url.txt. When a new URL is added into url.txt, indexer will index the server of this URL during the next startup.
This command inserts the given URL into the database. This is useful for adding several entry points to one server. It has no effect if the URL is already in the database.
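A minimal sketch (the URLs are illustrative): add an extra entry point that is not linked from the start page:

```
Server http://www.site.ext/
# Extra entry point that is not reachable from the start page
URL http://www.site.ext/hidden/index.html
```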
These commands are equal to the Server, Realm, Subnet and URL commands respectively, but take their arguments from the field of the SQL table specified. In the example above, URLs are taken from the database
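An illustrative sketch of how such a command might look (the connection string, table, and field names are assumptions, not taken from this text):

```
# Take URLs from the "url" field of table "links" in database "portal"
# (connection details are hypothetical)
URLDB pgsql://user:pass@localhost/portal/links?field=url
```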
These commands are equal to the Server, Realm, Subnet and URL commands respectively, but take their arguments from the text file specified. In the example above, URLs are taken from the text file url.lst located in the /usr/local/dpsearch/etc directory, but the full path to the file can be specified as well.
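An illustrative sketch (the file name is an assumption): take entry-point URLs from a plain text file with one URL per line:

```
# Take URLs, one per line, from etc/url.lst
URLFile url.lst
```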
DataparkSearch obeys the robots.txt standard. robots.txt is a file that you place in your web server's root directory that tells search engines what pages you do not want to be indexed.
DataparkSearch also obeys the noindex meta tag, and supports the Host directive in robots.txt.
Below are the commands in the indexer.conf file related to the Robots exclusion standard.
The Robots command allows/disallows using robots.txt and <META NAME="robots" ...>. Use the value no, for example, for link validation of your server(s). The command may be used several times before a Server command and takes effect until the end of the config file or until the next Robots command. The default value is "yes".
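A sketch (the host name is illustrative): ignore robots.txt for one server only, then restore the default:

```
# Ignore robots.txt and robots meta tags for this server only
Robots no
Server http://www.site.ext/
# Restore the default for the servers that follow
Robots yes
```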
By default, robots.txt data is held in the SQL database for one week. You may change this period using the RobotsPeriod command:
For the <time> format, see the description of the Period command in Section 3.10.28>.
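For example (the value is illustrative; a plain number is taken as seconds, as with the Period command shown earlier):

```
# Cache robots.txt data for 14 days (1209600 seconds)
# instead of the default one week
RobotsPeriod 1209600
```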
Use this command to specify the default pause in seconds between consecutive fetches from the same server. This is similar to the crawl-delay command in a robots.txt file, but can be specified in the indexer.conf file on a per-server basis. If no crawl-delay value is specified in robots.txt, the value of CrawlDelay is used. If crawl-delay is specified in robots.txt, then the maximum of CrawlDelay and crawl-delay is used as the interval between consecutive fetches.
When indexer is ready to index a URL from a server for which the crawl-delay interval has not yet expired since the previous access, it waits until this period expires, provided the waiting period is less than the amount of time specified by the MaxCrawlDelay command. If the waiting period is greater than or equal to this value, the selected URL is postponed in indexing for the time remaining.
Default value is 300 seconds.
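A sketch of the pair of commands together (the values are illustrative):

```
# Wait at least 2 seconds between fetches from the same server
CrawlDelay 2
# Postpone a URL rather than wait if the remaining delay exceeds 600 s
MaxCrawlDelay 600
```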