The Dark Side of Google


This mechanism illustrates how the 'technological magic' and 'objectivity' of Google's ranking are in fact connected to the 'underground' of the Net, and partly grounded in less savory practices. Other, perfectly legitimate, practices that exploit Google's approach to indexing have also been documented. An example is Search Engine Optimization (SEO), a suite of operations designed to push a website up in the search returns. The promise of the #1 position is often hawked by spam sent from improbable addresses by automated programs, advertising stupendous effects: 'We register your site with 910 search engines, registries and web-catalogs! We bring your site into pole position on Google and Yahoo! Try Us! No risk, just US$299 instead of US$349! – one shot, no obligations!'. Confronted with these practices, Google of course plays the transparency card: 'nothing can guarantee that your site will appear as #1 on Google'.28 Mathematically speaking, because PageRank[TM] is based on the analysis of links, the database must be fully connected; in other words, search operations can only take place within a circumscribed, albeit extremely vast, space. That means there is always a road leading from one indexed web page to another indexed web page – in Google's universe, that is. Searches will therefore tend to be functional, avoiding 'broken links' and returns that differ substantially from what has been archived in the 'cache memory' as much as possible. The ensuing problem is that users are falsely led to believe that the Internet is a closed world, entirely made up of transparent links, without secret paths or preferential trajectories, because it seems that any given query will always return a 'correct' response.
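The link-analysis idea described above can be sketched in a few lines of code. What follows is a deliberately minimal, hypothetical simplification – Google's actual system is vastly more complex – in which each page's score is shared out among the pages it links to, with a damping factor modeling the chance of jumping to a random page. The tiny three-page graph is invented for illustration.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power-iteration sketch of link-based ranking.

    links: dict mapping each page to the list of pages it links to.
    Returns a dict of scores that sum (approximately) to 1.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if outgoing:
                share = rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += damping * share
            else:
                # A dangling page spreads its score over the whole graph.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

# A tiny closed universe of indexed pages: every page is reachable by links.
graph = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a", "b"],
}
scores = pagerank(graph)
# Page "c" attracts the most link weight, so it ends up ranked highest.
```

Note the premise the text identifies: the computation only makes sense over a connected graph of links. A page missing from `graph` simply has no score at all.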
This is the consequence of the 'Googolian' view of the Internet as consisting exclusively of the journeys its spider makes as it jumps from one link to the next. If a web page is not linked from anywhere, it will never appear to a user, because Google's spider has had no opportunity to find, weigh and index it. But this by no means implies that 'data islands' do not exist on the Net! Dynamic sites are a good example of such islands, since their functionality depends entirely on the choices their users make. Typical examples are the websites of railways and airlines: filling in the appropriate form returns timetables, onward connections, the fastest itineraries and so on, for any destination in real time. Google's system cannot formulate these form queries and therefore does not index the results the sites generate dynamically. Only a human can overcome this hurdle. The only solution Google can provide is to redirect the user to the rail companies' or airlines' own sites when an itinerary, timetable or destination is asked for. The very same thing happens on social networking platforms such as Facebook, LinkedIn or Twitter whenever users do not share their profiles with the entire Internet.
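The blind spot described here follows directly from how a link-following spider works. The sketch below (with invented page names) runs a breadth-first crawl over explicit links only: a dynamically generated page that is reachable solely through a form query has no inbound link, so the crawl never visits it.

```python
from collections import deque

# Hypothetical link structure: the form page is linked from the homepage,
# but the timetable page its query would generate is linked from nowhere.
links = {
    "home": ["news", "search-form"],
    "news": ["home"],
    "search-form": [],
    "timetable?from=A&to=B": [],  # a 'data island' behind the form
}

def crawl(start, links):
    """Breadth-first traversal following only explicit links."""
    seen, queue = {start}, deque([start])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return seen

indexed = crawl("home", links)
# The dynamically generated timetable page is never reached, hence never indexed.
```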

28. Email received at info@ippolita.net in May 2005. Google's position on SEO: http://www.google.it/intl/it/webmaster/seo.html. For further technical details, cf. http://www.googlerank.com/.

