According to Bing’s blog post, “Today we are introducing a new blog series we are labeling ‘Bing Search Quality Insights’ aimed at giving you deeper insight into the algorithms, trends and people behind Bing. This blog is the first in a series that will take you behind the search box for an up close view into the core of the Bing search engine.”
The results from Google’s initiative have been extremely interesting to search industry watchers and to webmasters concerned with search engine optimization.
But as opposed to Google’s updates, some of Bing’s changes may be quite subtle, and Microsoft asserts that they are the result of years of research. “Quality improvements in Bing are often subtle but often those very small changes are the result of years of research,” says Dr. Harry Shum, Corporate Vice President of Bing R&D.
“In the coming weeks and months, you will hear from members of my team on a variety of topics, from the complexities of social search and disambiguating spelling errors to whole page relevance and making search more personal. We will also put more emphasis on the ideas and projects on which we have collaborated with colleagues from Microsoft Research and academia to advance the state of the art for our industry. We hope these will not only be useful information for our blog readers, but that they will spark conversations that help us all move the search industry forward,” Shum added.
Interestingly, the first entry in Bing’s series comes from Jan Pedersen, Chief Scientist for Core Search at Bing, who discusses how Bing determines “Whole Page Relevance,” which it uses to decide not just where to rank a result on the search results page, but whether to return a plain link or an “answer.”
The post further describes how Bing “blends” results from its vertical search engines like Bing Video, Bing News, Bing Maps and Bing Images along with web listings and direct answers through a system called “Answer Ranking.”
“As with any relevance problem we start with the question of how to measure if Bing has done a good job,” states Pedersen. “We could accomplish this by simply asking human judges to gauge the output of competing blending algorithms and assess which is better. This turns out to be a difficult judgment task that produces quite noisy and unreliable results,” he said.
He continued by explaining that Bing instead evaluates how people behave on the site in the real world: “Based on how they respond to changes we make an assumption that a better blending algorithm will move people’s clicks towards the top of the page. This turns out to be the same as saying that a block of content, or answer, is well placed if it receives at least as many clicks as the equivalently sized block of content below it, or, as we say internally, if its win rate is greater than 0.5. So a good blending algorithm will promote an answer on the page upward as long as its win rate is greater than 0.5. Armed with this metric, we can run online experiments and compare the results of competing blending algorithms giving us a realistic data set.”
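The win-rate idea Pedersen describes can be sketched in a few lines. This is a purely illustrative Python sketch, not Bing’s actual code: all function names, block names, and click counts are hypothetical, and it assumes win rate is simply the fraction of clicks a block captures versus the block directly below it.

```python
def win_rate(clicks_on_block, clicks_on_block_below):
    """Fraction of clicks a block captures versus the block directly below it.

    A block is "well placed" when this exceeds 0.5, i.e. it receives at
    least as many clicks as the equivalently sized block beneath it.
    """
    total = clicks_on_block + clicks_on_block_below
    if total == 0:
        return 0.5  # no click signal: treat the placement as neutral
    return clicks_on_block / total


def promote(blocks):
    """One pass of promoting answers upward while their win rate beats 0.5.

    `blocks` is an ordered list of (name, clicks) pairs, top of page first.
    A lower block that wins against the block above it is swapped upward.
    """
    blocks = list(blocks)
    for i in range(len(blocks) - 1, 0, -1):
        above, below = blocks[i - 1], blocks[i]
        if win_rate(below[1], above[1]) > 0.5:
            blocks[i - 1], blocks[i] = below, above
    return blocks
```

For example, with hypothetical click counts `[("web results", 30), ("news answer", 70)]`, the news answer’s win rate against the block above it is 0.7, so a pass of `promote` would move it to the top.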
However, Google implements much the same thing through a system it calls “Universal Search.” Google’s system has come under intense pressure over the past two years for allegedly “favoring” Google’s own properties over its competitors, including attacks by the Microsoft-backed FairSearch.
Lastly, if you are curious about the inner workings of search engines, it is a pretty interesting read, and frankly not the kind of thing we see from Bing very often. It looks like that is changing now. You can read about that here.