nutch-default.xml configuration parameters explained (selected)

  1. http.max.delays


<property>
<name>http.max.delays</name>
<value>100</value>
<description>The number of times a thread will delay when trying to
fetch a page. Each time it finds that a host is busy, it will wait
fetcher.server.delay. After http.max.delays attempts, it will give
up on the page for now.</description>
</property>


This governs how long the crawler is willing to keep waiting for a busy host. Each time the fetcher finds that a host is busy, it waits for fetcher.server.delay seconds (5.0 by default, see below) before retrying, and after http.max.delays such attempts it gives up on the page for now. So when network conditions are poor it is also a good idea to give fetcher.server.delay a somewhat larger value; http.timeout is another setting related to network conditions.
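For example, on a slow or unreliable network one might raise http.max.delays and fetcher.server.delay together in nutch-site.xml. The values below are purely illustrative, not recommendations:

<property>
<name>http.max.delays</name>
<value>200</value>
<description>Example override: allow more busy-host waits before giving up on a page.</description>
</property>

<property>
<name>fetcher.server.delay</name>
<value>10.0</value>
<description>Example override: wait 10 seconds between successive requests to the same busy server.</description>
</property>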

  2. http.content.limit

<property>
<name>http.content.limit</name>
<value>65536</value>
<description>The length limit for downloaded content, in bytes.
If this value is nonnegative (>=0), content longer than it will be truncated;
otherwise, no truncation at all.
</description>
</property>


This property limits the length of the content the crawler downloads for each document. The default value of 65536 means each fetched document is cut off at about 64 KB, and anything beyond that is discarded. Search engines that need to fetch particular kinds of content in full, for example XML documents, need to change this.
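If, say, large XML documents must be fetched in full, the description above indicates that a negative value disables truncation entirely. A minimal override sketch for nutch-site.xml:

<property>
<name>http.content.limit</name>
<value>-1</value>
<description>Example override: no truncation, so large XML documents are downloaded completely.</description>
</property>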

  3. db.default.fetch.interval

<property>
<name>db.default.fetch.interval</name>
<value>30</value>
<description>The default number of days between re-fetches of a page.
</description>
</property>


This is useful when building applications that need periodic automatic re-crawling: it sets how many days pass before a page is fetched again.
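A sketch of a weekly re-crawl, assuming the override goes into nutch-site.xml:

<property>
<name>db.default.fetch.interval</name>
<value>7</value>
<description>Example override: re-fetch each page after 7 days.</description>
</property>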

  4. fetcher.server.delay

<property>
<name>fetcher.server.delay</name>
<value>5.0</value>
<description>The number of seconds the fetcher will delay between
successive requests to the same server.</description>
</property>

  5. fetcher.threads.fetch

<property>
<name>fetcher.threads.fetch</name>
<value>10</value>
<description>The number of FetcherThreads the fetcher should use.
This also determines the maximum number of requests that are
made at once (each FetcherThread handles one connection).</description>
</property>

The maximum number of fetcher threads.

  6. fetcher.threads.per.host

<property>
<name>fetcher.threads.per.host</name>
<value>1</value>
<description>This number is the maximum number of threads that
should be allowed to access a host at one time.</description>
</property>

The maximum number of threads allowed to fetch from the same host at the same time.
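A possible override that raises overall fetch parallelism while keeping the per-host limit at 1 for politeness; the thread count is illustrative only:

<property>
<name>fetcher.threads.fetch</name>
<value>20</value>
<description>Example override: 20 fetcher threads in total.</description>
</property>

<property>
<name>fetcher.threads.per.host</name>
<value>1</value>
<description>Example override: still only one simultaneous request per host.</description>
</property>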

  7. fetcher.verbose

<property>
<name>fetcher.verbose</name>
<value>false</value>
<description>If true, fetcher will log more verbosely.</description>
</property>


If true, the fetcher prints more detailed log output.

  8. parser.threads.parse

This parameter does not exist in the 0.9 version of nutch-default.xml.

<property>
<name>parser.threads.parse</name>
<value>10</value>
<description>Number of ParserThreads ParseSegment should use.</description>
</property>



The number of threads used to parse fetched documents. It should correspond to the number of fetcher threads: because the crawler's main processing classes use synchronization in many places, keeping this value consistent with the fetcher thread count helps processing (see the sketch below).
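Following that advice, a sketch that keeps the parser thread count equal to the fetcher thread count (assuming fetcher.threads.fetch was raised to 20 as in the earlier example):

<property>
<name>parser.threads.parse</name>
<value>20</value>
<description>Example override: match the number of fetcher threads.</description>
</property>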

  9. fs.default.name

Not present in 0.9.

<property>
<name>fs.default.name</name>
<value>local</value>
<description>The name of the default file system. Either the
literal string "local" or a host:port for NDFS.</description>
</property>


This is the setting for the distributed file system. The default value local means the local file system is used; a value in host:port form means the NDFS distributed file system is used. The address given here is the name server, i.e. the host and port of the machine started with bin/nutch namenode xxxx.
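A sketch of switching from the local file system to NDFS; the host name and port below are hypothetical and must match the machine on which the namenode was actually started:

<property>
<name>fs.default.name</name>
<value>namenode.example.com:9000</value>
<description>Example override: use the NDFS namenode at this host:port instead of the local file system.</description>
</property>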

  10. ndfs.name.dir

Not present in 0.9.

<property>
<name>ndfs.name.dir</name>
<value>/tmp/nutch/ndfs/name</value>
<description>Determines where on the local filesystem the NDFS name node
should store the name table.</description>
</property>



The directory where the NDFS namenode stores its data. The namenode uses this setting; alternatively, the path can also be passed as an argument when starting the namenode or datanode, which works as well.

  11. ndfs.data.dir

Not present in 0.9.

<property>
<name>ndfs.data.dir</name>
<value>/tmp/nutch/ndfs/data</value>
<description>Determines where on the local filesystem an NDFS data node
should store its blocks.</description>
</property>


The directory where an NDFS datanode stores its blocks. As with the namenode, the path can also be passed as an argument when starting the datanode, which works as well.
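A sketch that moves both NDFS directories off /tmp so the data survives reboots; the paths are hypothetical:

<property>
<name>ndfs.name.dir</name>
<value>/data/nutch/ndfs/name</value>
<description>Example override: persistent location for the namenode's name table.</description>
</property>

<property>
<name>ndfs.data.dir</name>
<value>/data/nutch/ndfs/data</value>
<description>Example override: persistent location for a datanode's blocks.</description>
</property>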

  12. indexer.max.tokens

<property>
<name>indexer.max.tokens</name>
<value>10000</value>
<description>
The maximum number of tokens that will be indexed for a single field
in a document. This limits the amount of memory required for
indexing, so that collections with very large files will not crash
the indexing process by running out of memory.

Note that this effectively truncates large documents, excluding
from the index tokens that occur further in the document. If you
know your source documents are large, be sure to set this value
high enough to accommodate the expected size. If you set it to
Integer.MAX_VALUE, then the only limit is your memory, but you
should anticipate an OutOfMemoryError.
</description>
</property>


This property limits indexing to at most 10000 tokens per field of a document. With the default unigram (single-character) tokenizer this effectively caps a document at about 10000 characters; with a non-unigram Chinese tokenizer, a single field of a single indexed document can cover more than 10000 characters' worth of text, which has an impact on memory.
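If the source documents are known to be long, the limit can be raised at the cost of memory; the value below is only illustrative:

<property>
<name>indexer.max.tokens</name>
<value>100000</value>
<description>Example override: index up to 100000 tokens per field; expect correspondingly higher memory use.</description>
</property>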

  13. indexer.mergeFactor

<property>
<name>indexer.mergeFactor</name>
<value>50</value>
<description>The factor that determines the frequency of Lucene segment
merges. This must not be less than 2, higher values increase indexing
speed but lead to increased RAM usage, and increase the number of
open file handles (which may lead to "Too many open files" errors).
NOTE: the "segments" here have nothing to do with Nutch segments, they
are a low-level data unit used by Lucene.
</description>
</property>


The merge factor, used while building the index; roughly speaking, it controls after how many documents the index is written back to storage, i.e. how often Lucene segments are merged.

  14. indexer.minMergeDocs

<property>
<name>indexer.minMergeDocs</name>
<value>50</value>
<description>This number determines the minimum number of Lucene
Documents buffered in memory between Lucene segment merges. Larger
values increase indexing speed and increase RAM usage.
</description>
</property>


This setting has a huge impact on memory usage. It is the minimum number of Lucene documents buffered in memory between segment merges. Setting it too low hurts indexing speed, and when a very large number of documents has to be indexed it can also trigger "Too many open files" errors, in which case this value needs to be adjusted. Experiments suggest that 1000 already gives fairly fast indexing; when I raised it to 10000, peak memory usage during indexing reached about 1.8 GB, indexing speed was about 25 pages/sec, and repeated indexing runs showed some slowdown. On the other hand it greatly improves query response time, so if enough memory is available a larger value is preferable.
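A sketch of that trade-off, using the 1000-document figure mentioned above rather than a general recommendation:

<property>
<name>indexer.minMergeDocs</name>
<value>1000</value>
<description>Example override: buffer 1000 documents in memory between segment merges for faster indexing.</description>
</property>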

  15. indexer.maxMergeDocs

<property>
<name>indexer.maxMergeDocs</name>
<value>2147483647</value>
<description>This number determines the maximum number of Lucene
Documents to be merged into a new Lucene segment. Larger values
increase batch indexing speed and reduce the number of Lucene segments,
which reduces the number of open file handles; however, this also
decreases incremental indexing performance.
</description>
</property>


This one probably does not need to be changed, since the default is already Integer.MAX_VALUE and it cannot get any larger.

  16. searcher.summary.context

<property>
<name>searcher.summary.context</name>
<value>5</value>
<description>
The number of context terms to display preceding and following
matching terms in a hit summary.
</description>
</property>


This one is quite useful; it was covered in an earlier article.

  17. searcher.summary.length

<property>
<name>searcher.summary.length</name>
<value>20</value>
<description>
The total number of terms to display in a hit summary.
</description>
</property>


Also covered in an earlier article.

  18. plugin.folders

<property>
<name>plugin.folders</name>
<value>plugins</value>
<description>Directories where nutch plugins are located. Each
element may be a relative or absolute path. If absolute, it is used
as is. If relative, it is searched for on the classpath.</description>
</property>


Plugin configuration: plugin.folders specifies the path(s) from which plugins are loaded.

  19. plugin.includes

<property>
<name>plugin.includes</name>
<value>protocol-http|urlfilter-regex|parse-(text|html|js)|index-basic|query-(basic|site|url)|summary-basic|scoring-opic|urlnormalizer-(pass|regex|basic)</value>
<description>Regular expression naming plugin directory names to
include. Any plugin not matching this expression is excluded.
In any case you need at least include the nutch-extensionpoints plugin. By
default Nutch includes crawling just HTML and plain text via HTTP,
and basic indexing and search plugins. In order to use HTTPS please enable
protocol-httpclient, but be aware of possible intermittent problems with the
underlying commons-httpclient library.
</description>
</property>


plugin.includes lists the plugins to be loaded (see the example below).
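For example, to fetch HTTPS sites the description above says to enable protocol-httpclient. One possible value, simply replacing protocol-http with protocol-httpclient and leaving everything else at the default, would look like this:

<property>
<name>plugin.includes</name>
<value>protocol-httpclient|urlfilter-regex|parse-(text|html|js)|index-basic|query-(basic|site|url)|summary-basic|scoring-opic|urlnormalizer-(pass|regex|basic)</value>
<description>Example override: use protocol-httpclient so that HTTPS pages can be fetched.</description>
</property>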

  20. parser.character.encoding.default

<property>
<name>parser.character.encoding.default</name>
<value>windows-1252</value>
<description>The character encoding to fall back to when no other information
is available</description>
</property>


The fallback encoding used when parsing documents whose encoding cannot be determined. windows-1252 seems to be a rather uncommon choice, and I am not very familiar with it.
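For a crawl consisting mostly of pages that declare no encoding, a different fallback may make more sense; utf-8 below is just one plausible choice, not a requirement:

<property>
<name>parser.character.encoding.default</name>
<value>utf-8</value>
<description>Example override: fall back to UTF-8 when a page gives no encoding information.</description>
</property>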

  21. parser.html.impl

<property>
<name>parser.html.impl</name>
<value>neko</value>
<description>HTML Parser implementation. Currently the following keywords
are recognized: "neko" uses NekoHTML, "tagsoup" uses TagSoup.
</description>
</property>


Specifies which parser implementation is used for HTML documents. NekoHTML is fairly powerful; a later article will introduce NekoHTML in detail, including parsing of plain text and HTML fragments.

  22. extension.clustering.hits-to-cluster

<property>
<name>extension.clustering.hits-to-cluster</name>
<value>100</value>
<description>Number of snippets retrieved for the clustering extension
if clustering extension is available and user requested results
to be clustered.</description>
</property>

Clustering: applications that need to cluster search results may use this.

  23. extension.ontology.extension-name

<property>
<name>extension.ontology.extension-name</name>
<value></value>
<description>Use the specified online ontology extension. If empty,
the first available extension will be used. The "name" here refers to an 'id'
attribute of the 'implementation' element in the plugin descriptor XML
file.</description>
</property>



This touches on artificial intelligence. I will dig into this feature step by step in later development and introduce it once I have gained some experience with it.

  24. query.url.boost

<property>
<name>query.url.boost</name>
<value>4.0</value>
<description>Used as a boost for url field in Lucene query.
</description>
</property>

<property>
<name>query.anchor.boost</name>
<value>2.0</value>
<description>Used as a boost for anchor field in Lucene query.
</description>
</property>

<property>
<name>query.title.boost</name>
<value>1.5</value>
<description>Used as a boost for title field in Lucene query.
</description>
</property>

<property>
<name>query.host.boost</name>
<value>2.0</value>
<description>Used as a boost for host field in Lucene query.
</description>
</property>

<property>
<name>query.phrase.boost</name>
<value>1.0</value>
<description>Used as a boost for phrase in Lucene query.
Multiplied by boost for field phrase is matched in.
</description>
</property>


The boost factors above, which contribute to the ranking score of search results, will be covered in a dedicated article on result ranking; they are not very important for vertical search.

  25. lang.analyze.max.length

<property>
<name>lang.analyze.max.length</name>
<value>2048</value>
<description>The maximum number of bytes of data used to identify
the language (0 means full content analysis).
The larger this value is, the better the analysis, but the
slower it is.
</description>
</property>


Related to language identification; it is used during analysis, though I have not used this setting myself.

A few more important settings are configured in nutch-site.xml.

  26. searcher.dir

<property>
<name>searcher.dir</name>
<value>crawl</value>
<description>
Path to root of crawl. This directory is searched (in
order) for either the file search-servers.txt, containing a list of
distributed search servers, or the directory "index" containing
merged indexes, or the directory "segments" containing segment
indexes.
</description>
</property>


There are two ways this works. If the directory it points to contains a search-servers.txt file, that file is handled first: every entry matching the hostname port format is treated as a distributed search server, and query requests are sent to that server. Otherwise the searcher looks for the index and segments directories, which hold the local index files. If neither is found, it reports an error.

The content of search-servers.txt is very simple, for example:

127.0.0.1 9999

Note that the query server listening on port 9999 is started with the command bin/nutch server 9999. The startup looks similar to starting a namenode, and when I first came across it I mistook it for the namenode address, which left me confused for quite a while.

The namenode and the search server do not integrate very well: there is no interface for the search server to read index files directly from NDFS, so you have to develop that yourself. If anyone knows of a method or a ready-made program that can do this directly, please tell me, because I need it; if nothing turns up I will have no choice but to write it myself. The approach I currently use to serve searches from the namenode is rather crude and not worth recommending, so I will not describe it here.
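Putting the pieces together, a minimal distributed-search sketch using the example address above: the crawl directory contains a search-servers.txt with the single line

127.0.0.1 9999

the query server on that port is started with bin/nutch server 9999 as noted above, and searcher.dir in nutch-site.xml points at that crawl directory:

<property>
<name>searcher.dir</name>
<value>crawl</value>
<description>Example: directory containing search-servers.txt (or index/segments for a purely local search).</description>
</property>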
