Register's Googlewashing Story Overblown

I’m disappointed by the pile-on of people rising to the bait of Andrew Orlowski’s classic bit of yellow journalism (or trolling, as it’s more often called today), Google Cranks Up the Consensus Engine. If so many other people weren’t taking it seriously, I’d just ignore it. (I picked this story up via Jim Warren’s alarmed forward to Dave Farber’s IP list.)

Orlowski breathlessly “reports”: “Google this week admitted that its staff will pick and choose what appears in its search results. It’s a historic statement – and nobody has yet grasped its significance.”

Orlowski has divined this fact based on the following “evidence,” a report by TechCrunch’s Michael Arrington on comments made by Marissa Mayer at the Le Web Conference in Paris:

Mayer also talked about Google’s use of user data created by actions on Wiki search to improve search results on Google in general. For now that data is not being used to change overall search results, she said. But in the future it’s likely Google will use the data to at least make obvious changes. An example is if “thousands of people” were to knock a search result off a search page, they’d be likely to make a change.

While I agree that, if true, Google’s manipulation of search results would be a serious problem, I don’t see any evidence in this comment of a change in Google’s approach to search. I fail to see how tuning Google’s algorithms based on the input of thousands of people about which search results they prefer is different from Google’s initial algorithms like PageRank, in which Google weights links from sites differently based on a calculated value that reflects, guess what, the opinions of the thousands of people linking to each of those sites in turn.
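To make that concrete, here is a minimal sketch of the published PageRank idea (a toy power iteration in Python over an invented three-page link graph, not Google’s production system). Each page’s score is literally an aggregate of the votes cast by the humans who chose to link to it:

    # Toy PageRank: rank flows along links, so linking is voting.
    DAMPING = 0.85  # the damping factor from the original PageRank paper

    def pagerank(links, iterations=50):
        """links maps each page to the list of pages it links to."""
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - DAMPING) / len(pages) for p in pages}
            for page, outlinks in links.items():
                for target in outlinks:
                    # Each link passes a share of this page's rank to its
                    # target: a human editorial choice, expressed as a link.
                    new_rank[target] += DAMPING * rank[page] / len(outlinks)
            rank = new_rank
        return rank

    # Invented toy graph: "a" is the most linked-to page, so it ranks highest.
    print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))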

The idea that Google’s algorithms are somehow magically neutral to human values misses their point entirely. What distinguished Google from its peers in 1998 was precisely that it exploited an additional layer of implicit human values as expressed by link behavior, rather than relying on purely mechanistic analysis of the text contained on pages.

Google is always tuning their algorithms to produce what they consider to be better results. What makes a better result? More people click on it.

There’s a feedback loop here that has always guided Google. Google’s algorithms have never been purely mechanistic. They are an attempt to capture the flow of human meaning that is expressed via choices like linking, clicking on search results, and perhaps in the future, gasp, whether people using the wikified version of the search engine de-value certain links.
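To be concrete about what folding such signals into ranking might look like, here is a hedged sketch (entirely my own illustration; the threshold, names, and data are invented, and none of this is Google’s actual code):

    # Illustrative re-ranking from aggregate user feedback. The threshold
    # echoes Mayer's "thousands of people" remark; the value is invented.
    VOTE_THRESHOLD = 1000

    def rerank(results, removals):
        """results: URLs in algorithmic order; removals maps a URL to the
        number of users who voted it off their results page."""
        def demoted(url):
            # Act only once enough independent users agree; below the
            # threshold, the algorithmic ordering stands untouched.
            return removals.get(url, 0) >= VOTE_THRESHOLD
        # Stable sort: demoted results sink; everything else keeps its order.
        return sorted(results, key=demoted)

    print(rerank(["a.com", "b.com", "c.com"], {"a.com": 2500}))
    # -> ['b.com', 'c.com', 'a.com']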

This is not to say that Google’s search quality team doesn’t make human interventions from time to time. In fact, search for O’Reilly (my name) and you’ll see one of them: an unusual split page, with the organic results dominated by yours truly and my namesake Bill O’Reilly, and the second half of the page given over to the Fortune 500 company O’Reilly Auto Parts.

Why? Because Google’s algorithms pushed O’Reilly Auto Parts off the first page in favor of lots more Tim O’Reilly and Bill O’Reilly links, and Google judged, based on search behavior, that folks looking for O’Reilly Auto Parts were going away frustrated. Google uses direct human intervention when it believes that there is no easy way to accomplish the same goal by tuning the algorithms to solve the general case of providing the best results.

(I should note that my only inside knowledge of this subject comes from a few conversations with Peter Norvig, plus a few attempts to persuade Google to give more prominence to book search results, which failed due to the resistance of the search quality team to mucking with the algorithms in ways that they don’t consider justified by the goal of providing greater search satisfaction.)

Even if Google were to become manipulative for their own benefit in the way Orlowski implies, I don’t think we have to worry. They’d soon start losing share to someone who gives better results.

P.S. Speaking of the dark underbelly of editorial bias, consider this: Orlowski doesn’t even bother to link to his source, the Techcrunch article. There’s only one external link in his piece, and it’s done in such a way as to minimize the search engine value of the link (i.e. with no key search terms in the anchor text.) Orlowski either doesn’t understand how search engines work, or he understands them all too well, and is trying not to lead anyone away from his own site. A good lesson in how human judgment can be applied to search results: consider the source.

Comments
  • http://webandlife.blogspot.com Andy Wong

    Google’s operational model is scalable collective intelligence, implemented technically through algorithms. Any attempt to hand-pick results is a leak in the scalability of that model, one that would never end, leading to the collapse of the model’s scalability. I doubt the executives of Google could fail to see that and would dare to stray from the core.

  • http://basiscraft.com Thomas Lord

    I think you and Orlowski complement one another in making a common case that, in any event, the gaming of search is a really childish, dumb game played by people with too much money. A few well-considered NPOs hosting open-platform map-reduce engines backed by crawlers would kill Google pretty quickly, and de-value results gaming on that scale (while enabling it at a smaller scale of numbers of users).

    -t

  • peter cowan

    Orlowski himself states in that article that the Telegraph is in the business of publishing stories about topics that are already being written about heavily on the internet in order to secure more inbound links from Google, so I think the latter is the case: “he understands [how search engines work] all too well”.

  • http://blog.pint.org.uk Tony Kennick

    For a lot of Brits in the tech field, Orlowski’s tendency away from balanced reporting is a known irritant. While the Register’s irreverence was a key factor in its popularity in the early days, their childishness is beginning to irritate, particularly his.
    http://userscripts.org/scripts/show/38411

  • http://epeus.blogspot.com Kevin Marks

    When Orlowski was trolling about ‘Googlewashing’ five years ago, several of us pointed out his errors and deceit:
    http://epeus.blogspot.com/2003/04/googlewash-hogwash.html

    Misleadingly selective quotation without linking to sources is a key part of his method.

  • http://www.mymeemz.com Alex Tolley

    I’d be more interested in some analysis of this WSJ story concerning Google trying (with others) to negotiate “fast lanes” and thus moving away from net neutrality. This seems to me to be a far more important story for the development of the internet, at least in the US.

    http://online.wsj.com/article/SB122929270127905065.html

  • Steve

    Lots of ad hominem attacks on Orlowski here – but like Tim, all seem to ignore the point that Marissa Mayer made at Le Web:

    Google is now making explicit editorial judgements on what Google search results should look like.

    Google is encouraging people to game the system (like Digg), and then will pick the winners. No more neutrality.

    That’s a very big change indeed.

  • Matthieu Catillon

    Google and all the search engines available so far are very low-tech alternatives to relevancy.

    One can’t expect anything worse than ranking based on popularity. The reason why we rank on popularity is just because we don’t know how to rank based on relevancy. That would require AI that is not yet available even in the military… we would then need to set up a powerful enough algorithm to dig into a big enough index.

    Popularity means the lowest common denominator… this is exactly what Google is offering us right now. Did you ever try the advanced search features? Pathetic… almost the same as basic / standard search.

    I agree with you guys, I read the WSJ story today.

    Same goals, same results: the lowest common denominator. You will watch and read what we tell you to…

    Go back to the library, get rid of your Internet access and buy a real newspaper. It seems to be the only sure way to reach quality content in the years ahead…

    I hope we don’t end up with negative IQ too fast.

    Good luck All…

  • reinko

    This is a weird article.

    My own brother has worked for Google (not on the Google payroll, but indirectly), he could work from home, and this is what he did:

    He downloaded, or got by email, lists of search items; for every search item he processed, he viewed a lot of the web pages that turned up.

    He had to rate them on a five rank scale from ‘very good pic’ to ‘not rateable’.

    These are verified facts; why would my brother lie and make up work like that?

    It was only after Google got the big IPO bag of money that they started doing this…

  • http://tim.oreilly.com Tim O'Reilly

    Steve -

    I don’t see what you claim anywhere in Marissa’s comments. All she is saying is that they will consider this data set. If you know anything at all about Google, you understand that they have lots and lots of data sets that they mine to give better results.

    reinko -

    Google didn’t used to do image search. It’s hard to use some of the techniques they use for web pages. So they use other data sources. Nothing new, nothing that suggests nefarious aims.

    I’m not saying that Google will never go down that path, just that there’s not a shred of evidence in Marissa’s comments that this is a turning point of any kind.

    It’s sensationalism for the hope of traffic, nothing more.

  • Steve

    Is that Kevin Marks, Google employee, smearing a journalist?

    Maybe the story tells us something Google doesn’t want us to hear!

    I am suspicious of people who shoot the messenger – they must have a lousy argument.

  • Steve

    “All she is saying is that they will consider this data set. “

    No, Mayer is saying Google will ultimately consider what is acceptable or not. I am glad this debate is finally taking place, it is good for our democracy, but you seem blind to this.

    Should I trust someone who has personally profited from Google stock? Could you declare your current financial interest in GOOG, Tim?

  • http://tim.oreilly.com Tim O'Reilly

    Steve -

    Google has always considered what is acceptable or not. All I’m saying is that there is no story here. Even where their algorithms are fully automated, those algorithms are designed by a human to give results that maximize user utility, as demonstrated by clicks.

    I have no financial interest in Google. I just think that this is a non-story. There’s a bit of a witch hunt going on about Google right now, which just strikes me as counterproductive.

    I bash people when I think they deserve it, and I praise people when I think they deserve it. Unlike the Register, I don’t do either in pursuit of page views.

  • http://tim.oreilly.com Tim O'Reilly

    Alex, re the WSJ story on Google and Network Neutrality, see David Isenberg’s analysis:

    http://isen.com/blog/2008/12/bogus-wsj-story-on-net-neutrality.html

    David’s “Rise of the Stupid Network” is to network neutrality what my “What is Web 2.0?” is to Web 2.0, one of the seminal documents that helped people understand what was going on. When he says the story is bogus, he’s likely right.

    Meanwhile, see Lessig’s complaint about the story:

    http://lessig.org/blog/2008/12/the_madeup_dramas_of_the_wall.html

    In short, more googlebashing, not googlewashing.

    Also see Google’s own response:

    http://googlepublicpolicy.blogspot.com/2008/12/net-neutrality-and-benefits-of-caching.html

  • http://www.mymeemz.com Alex Tolley

    Tim,
    Thanks so much for posting the links that undermine the WSJ story. I should have been more skeptical given that it was on the Opinion pages.

    Seems like the story was pretty much completely bogus, and possibly written as part of their long-standing position decrying net neutrality in favor of carriers having unregulated control.

    Alex

  • Falafulu Fisi

    Tim O’Reilly said…
    I fail to see how tuning Google’s algorithms based on the input of thousands of people about which search results they prefer is different from Google’s initial algorithms like PageRank, in which Google weights links from sites differently based on a calculated value that reflects, guess what, the opinions of the thousands of people linking to each of those sites in turn.

    Tim, the Google PageRank algorithm (popularity vote) only works on a 2-dimensional (rows & columns) dataset (the in-bound & out-bound links), i.e., 2 variables only. I read on the internet that they (Google) have now included LSI (Latent Semantic Indexing) in their search engine, i.e., concept searching. LSI also works on a 2D dataset (a word-by-document frequency matrix), exactly as PageRank does, and I don’t know whether Google’s inclusion of LSI on top of PageRank combines the two into one algorithm, or runs 2 separate algorithms whose results are combined at the end to spit out search results that are both popular (PageRank) and relevant (concept).

    Now, there are new search techniques involving higher dimensions, 3D, 4D, 5D, 6D, etc., based on Tensor Calculus (multi-dimensional algebra). Tensors are not new, since Einstein used them in 1916 to develop his Theory of General Relativity (TGR), but only recently did analysts realize their potential for data analysis, which is why they are becoming a hot research topic in the data-analysis community today. This is why science-fiction commentators over the years have associated terms like hyperspace/multidimension with the universe, because TGR deals with multiple dimensions.

    The LSI algorithm has been tensorized (only 3D, i.e., 3 variables, at this stage), but there is no doubt that higher-dimensional versions of LSI will emerge from the research community in the future.

    I haven’t seen in the literature whether PageRank has been tensorized yet, but I can’t rule out that Google is secretly working on a Tensor PageRank version, with such an algorithm applicable to the scenario you described above. See, people’s search results can be added as a 3rd dimension (a 3rd variable) on top of the 2D PageRank (2 variables, the inbound & outbound link weights). Or perhaps Google is developing a hybrid tensor PageRank/LSI, who knows? I suspect that Google is working on some tensor algorithms, either by extending the 2D PageRank into higher dimensions (3D, 4D, 5D, etc.) or by combining PageRank with others, such as LSI, to form hybrids; such a hybrid would be a 4D tensor algorithm with 4 variables: the inbound & outbound link weights, and the term and document frequencies.

    Anyway, it remains to be seen if indeed Google is working on some Tensor type of algorithms.
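    For readers unfamiliar with LSI, here is a minimal textbook sketch of concept search via truncated SVD; the toy corpus and the choice of k=2 are invented for illustration, and this is the generic technique, not anything known about Google’s implementation:

        import numpy as np

        # Toy corpus: two documents about link analysis, two about tensors.
        docs = ["google ranks pages by links",
                "pagerank weights inbound links",
                "tensors generalize matrices to higher dimensions",
                "einstein used tensor calculus"]
        vocab = sorted({w for d in docs for w in d.split()})

        # The 2D word-by-document frequency matrix described above.
        A = np.array([[d.split().count(w) for d in docs] for w in vocab], float)

        # Truncated SVD: keep only the top-k latent "concepts".
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        k = 2
        doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # documents in concept space

        def search(query):
            counts = np.array([query.split().count(w) for w in vocab], float)
            q = counts @ U[:, :k]  # fold the query into the concept space
            sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1)
                                   * (np.linalg.norm(q) + 1e-9))
            return docs[int(np.argmax(sims))]

        print(search("tensor algebra"))  # matches one of the tensor documents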

  • Falafulu Fisi

    For search engine developers out there, here are some papers on the application of tensors in information search & retrieval.

    Papers on Tensors and Their Use in Information Retrieval

    Tensor methods are applicable in any domain of data analysis that involves 2D or higher-dimensional datasets. I have seen 3D tensor publications describing their use in the speech-separation problem and also in imaging, such as the list of papers shown below.

    Nonnegative Tensor Factorization

  • http://piccolbo.blogspot.com/2008/08/google-returns-more-adsense-rich.html AP

    I think this view that Google will serve the users’ interest is very naive. Like any business, they seek profit, which requires a compromise between user satisfaction and, for instance, advertiser satisfaction. Moreover, users might not know what is best for them: their knowledge is imperfect (perfect markets are only an abstraction). I think I have collected (see http://piccolbo.blogspot.com/2008/08/google-returns-more-adsense-rich.html) solid evidence that either Google is enriching its results with AdSense-running sites, which bring profit to them, or Microsoft Live is doing the opposite. Not in a blatant way that people might rise up in arms against, but over many searches the verdict is clear. I’d love for people to take my data apart and show where the error is, or find alternative explanations.

  • http://tim.oreilly.com Tim O'Reilly

    AP -

    I haven’t looked at your data, but I don’t dispute your position that Google likely balances results for both user and advertiser satisfaction. But note that that’s a very different assertion from the claim that there’s some kind of watershed moment in Google’s behavior demonstrated by Marissa’s comment about how they might use data from the SearchWiki experiment.

    Google is doing the same thing that they’ve been doing for the past ten years, continually finding new data and new algorithms to optimize their business.

    I’m sure that they will be using their local search data, and their mobile search data in new ways too.

    This story is a bit like saying “Linus Torvalds doesn’t take all proposed Linux kernel modifications: A watershed moment demonstrating the failure of open source!” It’s plain stupid.

  • Reinko

    To Tim O’Reilly:

    What I wrote had nothing to do with image searching.

    What I wrote was a first-hand report from my brother, whose name is irrelevant; he ranked websites given a list of search items.

    What does that have to do with image search?

  • http://tim.oreilly.com Tim O'Reilly

    Reinko:

    I took your comments to be about image search because you said he was asked to rate ‘pics’ – here’s your line:

    “He had to rate them on a five rank scale from ‘very good pic’ to ‘not rateable’.”

    Even if these were just web pages, and not pictures, all this says is that Google has a mechanism for human cross-checking of algorithmic results.

    Peter Norvig told me that this is how they originally came up with the split screen approach for some searches. The example he originally used was that a search for “Glacier Bay” based on the purely algorithmic results didn’t put the national park on the first page, because results were dominated by pages from the Glacier Bay Faucet Company, a plumbing vendor.

    Google has always used human testing to verify the result of their algorithms. This doesn’t mean that they are hand selecting, censoring, or otherwise deviously manipulating results.

  • http://basiscraft.com Thomas Lord

    It’s a rich man’s game, people’s search results, browsing habits, and surveillance data.

    There’s no technical necessity to Google at all: it’s a side effect of the monopolization of search platforms. The game could be made to go away but not without retracting some greed plays made by a few along with all of the alliances those few have now built.

    Investors could pony up some real money, kill Google with an open platform, and probably see a modest return. Or, they could put that same real money in some bets aimed at reinforcing and leveraging Google for a higher return (unless someone else undertakes to kill Google).

    Betting on Google is generally assessed as the safe bet while killing it is probably off the charts risk-wise, and so money follows money. The relative risk assessments aren’t really grounded in reality any more than investment with Madoff or in Lehman was grounded in reality but, while it’s generally regarded as the safe bet, it therefore is — at least for a time.

    The Bush administration notoriously had important departments (e.g. FCC, EPA) “suppressing science” for political and economic gain for a small group. Google’s perception among the financial elite and their society of support is no different. There are too many holes in the theories of Google’s technical necessity, natural monopolies, and “value adds” for them to hold up to scrutiny but scrutiny won’t come because “everyone [in those societies] knows” what a safe and sure bet Google is and how “they must be right”.

    -t

  • Reinko

    To Tim,

    Ok ok I understand, but it had nothing to do with pictures but with search results given a search item.

    There is much more fun to it: it was all shrouded in a cloud of secrecy; my brother worked for what here in the Netherlands is called an ‘uitzend buro’.

    These are companies that hire people on a temporary basis.

    Everybody knew it was work for Google but you were never allowed to say the ‘Google’ word.

    The facts are that Google indirectly hired people here in the Netherlands to add a bit of ‘human rating’ into the process, while in the meantime playing the media game with their ‘superior math’ and ‘superior algorithms’ for search items.

    It simply lays the axe at the original root of their fame.

    There are all kinds of reasons for some human involvement in what pops up as search results; it is logical that Google has a business interest in keeping its results ‘clean of garbage’.

    But hiring people in other nations to do actual ranking of websites or web pages is simply from another dimension…

  • Falafulu Fisi

    Matthieu Catillon said…

    The reason why we rank on popularity is just because we don’t know how to rank based on relevancy.

    Matthieu, you’re out of date. Relevancy (or concept search) has been around for about two decades. See my comment above on Latent Semantic Indexing (LSI). Of course all algorithms improve over time, and LSI is no different. There are other algorithms that are more robust than LSI in terms of precision & recall, and there are also recent variants of LSI that improve on the original.

  • Robert Long

    The Register has a general policy of not linking out on negative stories, a policy I generally accept as rational if irritating sometimes. After all, what’s the point in criticising someone by driving traffic to their site and boosting their appearance on Google for free?

    In the broader picture, however, I agree that this is no watershed moment. That moment – when Google ceased to be about quality searches and started to be about… well, whatever the heck it is that Google is about – is years gone now. PageRank was a dumb idea based on an idealism that was never justified.

    Every time I search for something and get back a badly-written Wikipedia page as the first result and an expertly written, clear explanation from an authoritative source on page 2, I curse the humans at Google who refuse to admit to their mistakes.

    Google needs something very drastic done to their search engine and it obviously isn’t to allow fads to influence the results even MORE than they do today. I want much, much less weight put on what other people are searching for and basically no weight put on cross-links between sites. Coupled with Google’s database, of course.

    And there’s the rub – the barrier to entry in this market is now so huge that although it is trivial to design a better engine than Google’s, the chance of such an engine surviving financially long enough to compile its database to the point where it can compete is about zero. Google can do whatever they like to search results, and there’s not much we as consumers can do about it. I don’t think that they’re evil or any of that nonsense, but I do believe that their search engine is crap and they have absolutely no incentive to stop being crap.

    It is a very bad state of affairs, IMO.

  • http://tim.oreilly.com Tim O'Reilly

    Robert Long:

    The link whose absence I was noting was to TechCrunch, who supposedly “broke” the story the Register was finding so alarming, so your argument doesn’t hold up.

    I find your “trivial to design a better engine” argument to be invalidated by the failure of Microsoft, Yahoo!, Ask, Amazon A9, and a host of startups to do so, despite lots of effort and investment. This is a hard problem, especially in the face of massive gaming of search results.

    What’s more, I generally find Google results pretty darn good. Not perfect, but way better than the alternatives. I also find Wikipedia generally belongs on the first page of search results, contrary to your experience. It’s often a very useful resource.

    But as they say, your mileage may vary. In any event, my point remains: the Register story is a non-story.

  • Simon Watts

    I think the Orlowski POV is interesting because he highlights the fact that a claim of traffic neutrality actually exists; it is a most peculiar claim, if you really think about it.

    I personally do find myself resorting to Google News, and feel a certain nagging worry about my dependency. Like any claim of neutrality, it has to be flawed in the final analysis. I often think of a water-supply metaphor – like John Chisum, the Google owners may be capable of doing no evil, but there will always be the possibility of an evil foreman damming or contaminating the water that goes to the smallholders downstream.

    I think we don’t need the stinkin’ badges of a promise of neutrality, but a full on range war.

  • Reinko

    To Robert Long:

    Google is a perfect search machine as long as you understand its limitations.

    Speaking for myself: when I search for my own name inside the USA, my website always pops up at number one.

    When I do the same thing in my home country, the Netherlands, I always lose out to ‘Reinkos’ who have been dead for over 100 years.

    You can lament that, but why should you?

    It only proves that Google search is cooked; if you know it is cooked, you do the ‘reverse cooking thing’ to get the info you want…

    By the way, want to buy some Google stock at 600 US$ apiece? I am willing to sell thousands of them to you…

  • http://www.FloatingBones.com FloatingBones

    There was also Chris Rhoads’s article today in the WSJ (http://online.wsj.com/article/SB122929270127905065.html) about Google’s “fast tracking” of its web content. Rhoads questions whether Google is still interested in “net neutrality,” which is a nonsensical claim. Google’s initiative is actually to start hanging caching edge machines off of ISPs, just as Akamai, Limelight, and Level 3 are doing.

    Edge machines are win-win: Google wins because their users are able to download bandwidth-intensive content (e.g., YouTube videos) with a far lower-latency loop to the source of the data. ISPs win because they are pulling far less data from the backbone when multiple users on that ISP are downloading the same YouTube video.

    Edge computing has nothing to do with “net neutrality”. Actually, it obviates one of the complaints that ISPs have: that users’ increasing video downloads are driving up their costs.
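    A toy model of the caching argument (names and numbers made up purely for illustration, not a description of Google’s infrastructure): the first request for a video crosses the backbone once, and every repeat request from that ISP’s users is then served locally.

        backbone_fetches = 0  # how many times we pulled data over the backbone
        edge_cache = {}       # the cache sitting inside the ISP's network

        def fetch_from_backbone(video_id):
            global backbone_fetches
            backbone_fetches += 1
            return "bytes of " + video_id

        def serve(video_id):
            if video_id not in edge_cache:  # miss: pull once over the backbone
                edge_cache[video_id] = fetch_from_backbone(video_id)
            return edge_cache[video_id]     # hit: traffic stays inside the ISP

        for _ in range(1000):               # 1,000 users watch the same video
            serve("popular-video")
        print(backbone_fetches)             # -> 1, not 1000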

    I’ll draft a letter to the WSJ about this. Tim: I recommend that you do the same.

    Do you have a paper published on your site anywhere discussing the pros and cons of net neutrality?

  • Falafulu Fisi

    Robert Long said…
    I do believe that their search engine is crap and they have absolutely no incentive to stop being crap

    So, you don’t use Google then? Why is it crap? Have you benchmarked Google search against others and found that Google’s retrieval precision & recall are inferior? I would be interested to see your benchmark results if you want to back up your assertion that Google is crap. Or perhaps you are just whinging about something you have no clue about? I think it is the latter.

  • bob

    That was a classic Reg piece. This is a classic O’Reilly piece. Balanced reporting is boring.

  • http://tim.oreilly.com Tim O'Reilly

    Got an interesting clarification of my point from a source at Google, who pointed out that “human intervention” is an ambiguous term. When people hear that term, they imagine that a person specifies that “a result for the specific query Q should be the url U.” You can call this a “direct intervention.” My Google source says they never do that.

    As I’ve argued in this piece, and in the comments above, another type of human intervention is for Google to analyze results, decide they could be better, and modify their algorithms in a way that changes the results for some class of queries. You could call this an “algorithmic intervention.” Google does that all the time.

    My source generally agreed with my assessment above, but was concerned that my post implied that Google did a direct intervention on a search for “oreilly.” He said that the result I cite is an example of an algorithmic intervention: Google decided that there was a broad class of queries with multiple interpretations that should have more diverse results. So they found an algorithmic way to identify the multiple interpretations and make sure they show results from each one.

    In that sense, it isn’t different from other search engines that group results. If I recall, A9 did this. Google doesn’t do this for every search, but only, I assume, when there is a sufficient diversity of important results.

    I like this distinction. To be clear, I don’t think that this is an example of Google’s morals, but of their pragmatism. Direct intervention would be prohibitively expensive and unmaintainable! Algorithmic intervention is the essence of who they are. They are always refining their algorithms to produce better results. But as I’ve pointed out, “better” is still a human judgment. Anyone who thinks otherwise completely fails to understand both Google and the nature of search algorithms.
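    To illustrate the shape of such an algorithmic intervention, here is a guess at the mechanics (the interpretation labels and data are invented; Google has not described its actual mechanism): group each result by its inferred interpretation, then interleave the groups so every interpretation gets first-page exposure.

        from itertools import chain, zip_longest

        def diversify(results):
            """results: (url, inferred_interpretation) pairs in algorithmic
            order. Round-robin across interpretations, preserving the
            algorithmic order within each group."""
            groups = {}
            for url, interpretation in results:
                groups.setdefault(interpretation, []).append(url)
            interleaved = zip_longest(*groups.values())
            return [u for u in chain.from_iterable(interleaved) if u is not None]

        print(diversify([("oreilly.com", "publisher"),
                         ("billoreilly.com", "pundit"),
                         ("radar.oreilly.com", "publisher"),
                         ("oreillyauto.com", "auto parts")]))
        # -> ['oreilly.com', 'billoreilly.com', 'oreillyauto.com', 'radar.oreilly.com']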

    BTW, speaking of algorithmic intervention :-), I was amused to be interviewed by Fox News about this story. I assume (with no basis — it truly is an assumption) that the reason they heard about the story was because of a Google Alert on “o’reilly”, which they no doubt run for Bill, not for me.

  • http://basiscraft.com Thomas Lord

    Um… human and algorithmic intervention are not so separate. How does Google identify a class of searches for which an algorithmic change is to be done other than by the human examination of a small number of examples and the impact of the algorithm change on those examples? And, how often does the “algorithm change” amount to special treatment for this or that human-selected search term or combination of search terms?

    It’s also really wrong here that people are trying to “refute” the complaint by saying “this is nothing new or all that different from what others do”. No, it isn’t, on both counts. That’s why Orlowski wrote “admits”. He is trying to get a basic concept through thick skulls, pointing out that Google has made a problem of its own creation even more blatantly apparent than it was before.

    This whole theory of “human vs. algorithmic” intervention in search results is made up and sold to you by your buddies inside Google. Technically, it has no meaningful basis in what they’re doing. It’s disappointing that you forward the idea not only uncritically but approvingly.

    -t

  • Steve

    “Got an interesting clarification of my point from a source at Google, who pointed out that “human intervention” is an ambiguous term”

    That’s not a clarification, that’s an obfuscation.

    Marissa Mayer was very clear: social search may play a large role, with Google’s human editors ultimately deciding what is acceptable or not.

    Tim, you’re spinning like a political spin doctor: changing the subject, splitting semantic hairs, smearing the source, etc. – but there is no ambiguity here.

    Thomas Lord is right. There are some “thick skulls” who do not want to see the consequences of this potential policy change.

  • http://www.miraclestudios.in Miracle Studios

    I see a lot of flaws in Google making search like “DIGG,” where the search results would be enhanced on the basis of the number of votes cast in favour.

    This will give rise to “BOGUS voting,” and we will start seeing the results.

  • http://tim.oreilly.com Tim O'Reilly

    Steve,

    This will be my last reply to the conspiracy theorists. Anyone who has followed Google with any intelligence knows that this does not represent any change to Google’s policies. They have always used human intervention to test and tune their algorithms. Don’t trust me – go to any SEO or other search engine expert.

    All you guys are doing is demonstrating your ignorance.

    Miracle Studios -

    There is no evidence that Google is considering a digg-like approach to search results.

  • http://basiscraft.com Thomas Lord

    (Steve:)

    So falleth a false Brand of intellectualism.

    -t

  • Matthieu Catillon

    Falafulu Fisi wrote…

    “Relevancy (or concept search) has been around for around 2 decades. See my comment above on Latent Semantic Indexing (LSI)”

    So turn those algorithms on! We need them. Every time I try “advanced search” I am not happy with the results.

    Same with the index. We should be able to better target the parts of the index we are interested in. Sometimes it feels like the “advanced search” panel is just a toy, you know, like a fake for kids. The idea is great, we just need it to actually work.

    I am a G addict (started using their Alpha in July 1998) and try most of their new products. But I might migrate to a different “system” if G doesn’t make its tools’ power really available to us.

  • http://www.raycreationsindia.com Ray Creations

    Well, whatever algorithm Google uses to generate relevant search results has from time to time come under the scanner. However, it is true that Google’s search results are superior to any other search engine’s, so whatever changes they make to their algorithm will presumably be for the best.