My requirements are fairly specific. I receive a response of up to 50 MB over the Net::SSH library. I want to hand this data to an XML library as I receive it, to keep the amount of data held in memory to a minimum. I then need to find the data inside certain tags and do something with it: in some cases sum up a bunch of values, in other cases just extract a value and write it to a file or whatever. So I need an XML parser that can work on a stream, runs fast, and uses minimal memory. The data arrives in chunks of up to 1024 bytes, so I'd like to do something like $myparser->sendData($mynewData) and have callbacks fired whenever a tag is opened or closed, similar to XML::SAX.
I don't necessarily need XPath or XSLT.
Solution
XML::Parser, which is pretty much exactly what you're asking for:
"This module provides ways to parse XML documents. It is built on top of XML::Parser::Expat, which is a lower level interface to James Clark's expat library. Each call to one of the parsing methods creates a new instance of XML::Parser::Expat which is then used to parse the document. Expat options may be provided when the XML::Parser object is created. These options are then passed on to the Expat object on each parse call. They can also be given as extra arguments to the parse methods, in which case they override options given at XML::Parser creation time."
"Expat is an event based parser. As the parser recognizes parts of the document (say the start or end tag for an XML element), then any handlers registered for that type of an event are called with suitable parameters."
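As a minimal sketch of the chunk-feeding style the question asks for: XML::Parser's parse_start method returns an XML::Parser::ExpatNB object whose parse_more method accepts the document in arbitrary pieces, playing the role of the hypothetical sendData. The <value> tag name and the two-chunk input below are made-up illustrations, not part of any real schema:

```perl
use strict;
use warnings;
use XML::Parser;

my $sum      = 0;
my $in_value = 0;  # true while we are inside a <value> element

my $parser = XML::Parser->new(
    Handlers => {
        Start => sub { my ($expat, $tag) = @_; $in_value = 1 if $tag eq 'value' },
        End   => sub { my ($expat, $tag) = @_; $in_value = 0 if $tag eq 'value' },
        Char  => sub { my ($expat, $text) = @_; $sum += $text if $in_value },
    },
);

# parse_start returns an ExpatNB object; feed it data as it arrives,
# e.g. from the Net::SSH read loop, one chunk at a time.
my $nb = $parser->parse_start;
$nb->parse_more($_) for ('<root><value>1</value>', '<value>2</value></root>');
$nb->parse_done;

print "$sum\n";  # prints 3
```

Note that Char handlers may be called more than once per element (for example when text is split across chunk boundaries), so accumulating into a buffer between Start and End is safer for non-numeric extraction.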
I've used it to parse Wikipedia XML dumps, which are several GB in size even after compression, and found that it worked very well for this. By comparison, a 50 MB file should be a piece of cake.