Abstract
This short paper gives an overview of the principles behind an algorithm that separates the core content of a web document from hyperlinked clutter such as text advertisements and long lists of syndicated references to other resources. Its advantage over other approaches is its ability to identify both loosely and tightly defined "table-like" or "list-like" hyperlink structures (from nested tables to simple, bullet-pointed lists) by operating at various levels within the DOM tree. The resulting data can then be used to extract the core content of a web document for semantic analysis or other information retrieval purposes, as well as to aid in "clipping" a web document to its bare essentials for use on hardware-limited devices such as PDAs and cell phones.
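The paper itself contains no implementation, but the core idea of detecting link-dominated "table-like" and "list-like" structures in the DOM tree can be illustrated with a short sketch. The Python snippet below is a hypothetical illustration, not the authors' algorithm: the `CONTAINER_TAGS` set, the `LINK_DENSITY_THRESHOLD` value, and all helper names are assumptions chosen for the example.

```python
# A minimal sketch (not the paper's implementation) of the underlying idea:
# walk the DOM tree and prune list- or table-like subtrees whose text is
# dominated by hyperlink anchor text. All thresholds here are illustrative.
import xml.etree.ElementTree as ET

LINK_DENSITY_THRESHOLD = 0.7   # hypothetical cut-off, not from the paper
CONTAINER_TAGS = {"ul", "ol", "table", "div"}  # assumed "list/table-like" tags

def text_len(elem):
    """Total character count of all text beneath elem."""
    return sum(len(t) for t in elem.itertext())

def link_text_len(elem):
    """Character count of the text sitting inside <a> descendants."""
    return sum(text_len(a) for a in elem.iter("a"))

def prune_link_clutter(elem):
    """Recursively remove link-dominated container subtrees."""
    for child in list(elem):
        tag = child.tag.split("}")[-1]  # strip any XHTML namespace prefix
        total = text_len(child)
        if (tag in CONTAINER_TAGS and total > 0
                and link_text_len(child) / total >= LINK_DENSITY_THRESHOLD):
            elem.remove(child)      # treat this subtree as hyperlinked clutter
        else:
            prune_link_clutter(child)

doc = ET.fromstring(
    "<body><p>Real article text goes here.</p>"
    "<ul><li><a href='/a'>Ad one</a></li><li><a href='/b'>Ad two</a></li></ul>"
    "</body>"
)
prune_link_clutter(doc)
print(ET.tostring(doc, encoding="unicode"))  # the <ul> of links is removed
```

A real implementation would need an HTML-tolerant parser and a more careful density measure at multiple DOM levels, but the sketch shows the general shape of the heuristic.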
Original language | English |
---|---|
Title of host publication | Proceedings of the HT 2005 - Sixteenth ACM Conference on Hypertext and Hypermedia 6-9 Sept. 2005, Salzburg, Austria |
Editors | Siegfried Reich, Manolis Tzagarakis |
Place of Publication | New York, NY |
Publisher | ACM |
Pages | 145-147 |
Number of pages | 3 |
ISBN (Print) | 1595931686, 9781595931689 |
Publication status | Published - Sep 2005 |
Event | 16th ACM Conference on Hypertext and Hypermedia (HT 2005), Salzburg, Austria, 6-9 Sep 2005 |
Keywords
- Content extraction
- DOM tree
- HTML document
- Hyperlink lists
- Information retrieval
- XHTML document