Current Web search engines are based on keyword search, where the relevance of a web page is determined by the number of keyword hits. Because keyword matching falls short of semantic matching, the search scope is unnecessarily broad and both precision and recall can be rather low. These problems lead to poor performance in Web information searching. In this paper, we describe WebReader, a middleware layer between the browser and the Web that automates searching and collecting information from the Web. By specifying meta-data in XML and manipulating it with XSL, WebReader provides users with a centralized, structured, and categorized means to specify and collect Web information. An experimental prototype based on XML, XSL, and Java has been developed to demonstrate the feasibility and practicality of our approach through a real-life application example.