Data search, collection and analysis

= **Search engine** =

Search engines use software programs known as robots, spiders or crawlers. A robot is a piece of software that automatically follows hyperlinks from one document to the next around the Web. When a robot discovers a new site, it sends information back to its main site to be indexed. Because Web documents are among the least static forms of publishing (i.e., they change frequently), robots also revisit and update previously catalogued sites. How quickly and comprehensively they carry out these tasks varies from one search engine to the next.
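The crawling process described above can be sketched as a breadth-first traversal of hyperlinks. This is a minimal illustration, not a real crawler: the in-memory `WEB` graph is an assumption standing in for actual HTTP fetches, and real robots also handle politeness rules, robots.txt, and re-crawl scheduling.

```python
from collections import deque

# Hypothetical in-memory "web": page URL -> list of hyperlinks it contains
# (an assumption, standing in for real HTTP fetches and link extraction).
WEB = {
    "a.example": ["b.example", "c.example"],
    "b.example": ["c.example"],
    "c.example": ["a.example", "d.example"],
    "d.example": [],
}

def crawl(seed):
    """Follow hyperlinks breadth-first from seed, indexing each new page once."""
    index = []            # pages "sent back to the main site" for indexing
    seen = {seed}
    queue = deque([seed])
    while queue:
        url = queue.popleft()
        index.append(url)            # catalogue the newly discovered page
        for link in WEB.get(url, []):
            if link not in seen:     # only enqueue pages not yet catalogued
                seen.add(link)
                queue.append(link)
    return index
```

Starting from `"a.example"`, the crawler discovers and indexes all four pages exactly once, even though the link graph contains a cycle back to the seed.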

How Google processes the Query:
The query processor has several parts: the user interface (the search box); the "engine" that evaluates queries and matches them to relevant documents; and the results formatter, which presents the matches to the user.
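The three parts of a query processor can be sketched with a toy inverted index. This is a simplified illustration under assumed data (`DOCS` is a made-up collection), not Google's actual implementation: the index maps each term to the documents containing it, the engine intersects those sets, and the formatter renders the hits.

```python
# Toy document collection (an assumption for illustration only).
DOCS = {
    1: "google search engine crawls the web",
    2: "robots follow hyperlinks around the web",
    3: "query processing and result formatting",
}

def build_index(docs):
    """Inverted index: term -> set of document ids containing that term."""
    index = {}
    for doc_id, text in docs.items():
        for term in text.split():
            index.setdefault(term, set()).add(doc_id)
    return index

def evaluate(query, index):
    """The 'engine': ids of documents matching every term in the query."""
    terms = query.lower().split()
    sets = [index.get(t, set()) for t in terms]
    return set.intersection(*sets) if sets else set()

def format_results(doc_ids, docs):
    """The results formatter: render matches as numbered lines."""
    return [f"{i}. {docs[d]}" for i, d in enumerate(sorted(doc_ids), 1)]

index = build_index(DOCS)
hits = evaluate("the web", index)
```

Here `evaluate("the web", index)` matches documents 1 and 2, since both contain every query term, and `format_results` turns those ids back into readable result lines.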



Google Search determines which documents are most relevant to a query using factors that include the popularity of the page, the position and size of the search terms within the page, and the proximity of the search terms to one another on the page.
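A toy scoring function can show how such factors might combine. This is only a sketch: the weighting scheme and the `popularity` input are assumptions for illustration, not Google's actual ranking formula.

```python
def score(doc, terms, popularity):
    """Toy relevance score: combines an assumed popularity weight with
    term position (earlier is better) and term proximity (closer is better)."""
    words = doc.lower().split()
    positions = [i for i, w in enumerate(words) if w in terms]
    if not positions:
        return 0.0
    # Position factor: occurrences near the start of the page score higher.
    position_score = sum(1.0 / (1 + p) for p in positions)
    # Proximity factor: query terms close together score higher.
    span = max(positions) - min(positions)
    proximity_score = 1.0 / (1 + span)
    return popularity * (position_score + proximity_score)
```

For example, a page whose first word matches the query gets the maximum position and proximity factors, so its score is simply twice its popularity weight in this sketch.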

**Collection:**
Collecting information is an important part of the process of continual review and improvement. It can be done by asking people to report on something that has happened to them, by observing that a change has occurred, or by using a measurement tool to record the presence or absence of a change.
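The last method, recording presence or absence of a change, can be sketched as a simple tally over collected reports. The `reports` list is hypothetical sample data, not from any real study.

```python
# Hypothetical self-reports: True means the respondent observed the change
# (made-up sample data, for illustration only).
reports = [True, True, False, True, False]

def summarize(reports):
    """Tally collected reports into presence/absence counts and a rate."""
    present = sum(reports)          # True counts as 1, False as 0
    return {
        "present": present,
        "absent": len(reports) - present,
        "rate": present / len(reports),
    }
```

Summarizing the five sample reports gives three observations of the change, two of its absence, and a 60% presence rate.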

Replacing books