Deep linking

Deep linking, on the World Wide Web, is the practice of creating a hyperlink that points to a specific page or image on another website, rather than to that website's main or home page. Such links are called deep links.

Example

The link http://en.wikipedia.org/wiki/Deep_linking is an example of a deep link. The URL contains all the information needed to point to a particular item, in this case the English Wikipedia article on deep linking, instead of the Wikipedia home page at http://www.wikipedia.org/.
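To make the anatomy of such a URL concrete, here is a minimal sketch in TypeScript using the standard WHATWG URL API (available in modern browsers and in Node.js); it simply splits the example link into the parts a client uses to reach the exact resource:

    // Split the example deep link into the components a client needs
    // in order to fetch this exact page rather than the home page.
    const deepLink = new URL("http://en.wikipedia.org/wiki/Deep_linking");

    console.log(deepLink.protocol); // "http:"               (how to talk to the server)
    console.log(deepLink.hostname); // "en.wikipedia.org"    (which server to contact)
    console.log(deepLink.pathname); // "/wiki/Deep_linking"  (which page on that server)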

Deep linking and HTTP

The technology behind the World Wide Web, the Hypertext Transfer Protocol (HTTP), does not make any distinction between "deep" links and any other links; all links are functionally equal. This is intentional: one of the design goals of the Web is to allow authors to link to any published document on another site. The possibility of so-called "deep" linking is therefore built into the Web technology of HTTP and URLs by default, and while a site can attempt to restrict deep links, doing so requires extra effort. According to the World Wide Web Consortium Technical Architecture Group, "any attempt to forbid the practice of deep linking is based on a misunderstanding of the technology, and threatens to undermine the functioning of the Web as a whole". One way to prevent deep linking is to configure the web server to check the referring URL using a rewrite engine.[1]
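As an illustration of that referrer check, the sketch below implements the same idea in TypeScript on Node.js's built-in http module, guarding a hypothetical site at example.com. Real deployments usually express this in web server configuration instead (for example with Apache's mod_rewrite), and since the Referer header can be spoofed or omitted, this is a deterrent rather than a true access control:

    import * as http from "node:http";

    // Minimal sketch of referrer checking for a hypothetical site,
    // http://example.com. Requests that arrive by following a link on
    // another site carry that site's URL in the Referer header.
    const server = http.createServer((req, res) => {
      const referer = req.headers.referer ?? "";

      // Direct visits have no Referer; on-site navigation has one that
      // starts with our own origin. Anything else is a cross-site link.
      const allowed = referer === "" || referer.startsWith("http://example.com/");

      if (!allowed) {
        res.writeHead(403, { "Content-Type": "text/plain" });
        res.end("Deep links from other sites are not accepted here.\n");
        return;
      }
      res.writeHead(200, { "Content-Type": "text/plain" });
      res.end("Here is the requested page.\n");
    });

    server.listen(8080);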

Usage

Some commercial websites object to other sites making deep links into their content, either because doing so bypasses advertising on their main pages, because it passes off their content as that of the linker, or, as with The Wall Street Journal, because they charge users for permanently valid links. Deep linking has sometimes led to legal action, as in the 1997 case of Ticketmaster versus Microsoft, in which Microsoft deep-linked to Ticketmaster's site from its Sidewalk service. That case was settled when Microsoft and Ticketmaster arranged a licensing agreement. Ticketmaster later filed a similar case against Tickets.com, and the judge in this case ruled that such linking was legal as long as it was clear to whom the linked pages belonged.[2] The court also concluded that URLs themselves were not copyrightable, writing: "A URL is simply an address, open to the public, like the street address of a building, which, if known, can enable the user to reach the building. There is nothing sufficiently original to make the URL a copyrightable item, especially the way it is used. There appear to be no cases holding the URLs to be subject to copyright. On principle, they should not be."

Deep linking and rich web technologies

Websites built on rich web technologies such as Adobe Flash and AJAX often do not support deep linking. This can cause usability problems for visitors. For example, visitors may be unable to save bookmarks to individual pages or states of the site, the web browser's forward and back buttons may not work as expected, and the browser's refresh button may return the user to the initial page.

However, this is not a fundamental limitation of these technologies. Well-known techniques, and libraries such as SWFAddress, allow website creators using Flash or AJAX to provide deep linking to pages and states within their sites.[3][4]
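The common technique, and the general approach behind libraries such as SWFAddress, is to mirror the application's current state into the URL fragment so that each state gets a bookmarkable address. The browser-side sketch below assumes a hypothetical renderView function standing in for the site's real display logic:

    // Keep the current view in location.hash so each state of the page
    // has its own bookmarkable URL, e.g. http://example.com/#/gallery.
    // renderView is a hypothetical stand-in for the app's display code.
    function renderView(name: string): void {
      document.body.textContent = "Showing view: " + name;
    }

    // Read the fragment and show the matching view, defaulting to "home".
    function applyFragment(): void {
      const name = window.location.hash.replace(/^#\/?/, "") || "home";
      renderView(name);
    }

    window.addEventListener("hashchange", applyFragment); // back/forward buttons
    window.addEventListener("load", applyFragment);       // first load or bookmark

Modern applications often achieve the same effect with the History API (history.pushState), which produces ordinary-looking URLs; the fragment approach shown here is the older and more broadly compatible one.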

Criticism

Many critics charge that sites objecting to deep links simply want to establish policies that will "license" such links to the highest bidder. They argue that links are a fundamental part of "user-oriented" web browsing. Probably the earliest legal case arising out of deep linking was the 1996 Scottish case of Shetland Times v. Shetland News, in which the Times accused the News of appropriating stories on the Times' website as its own.

Critics also say that the term "deep linking" is unnecessary: hyperlinking was always intended to reach pages other than a publisher's front page, making deep linking nothing other than ordinary hyperlinking.

Some of those who find no fault with deep linking do find fault with inline linking, the practice of embedding media from another website directly within one's own pages. Inline linking causes browsers to request the media directly from the original web server, consuming the content owner's network bandwidth without any benefit to them. This is often described as stealing bandwidth.

Court rulings

In early 2006, in a case between the search engine Bixee.com and the job site Naukri.com, the Delhi High Court in India prohibited Bixee.com from deep linking to Naukri.com.[5]

In December 2006, a Texas court ruled that linking by a motocross website to videos on a Texas-based motocross video production website did not constitute fair use, and it subsequently issued an injunction.[6] This case, SFX Motor Sports Inc. v. Davis, was not published in official reports, but is available at 2006 WL 3616983.

In a February 2006 ruling, the Danish Maritime and Commercial Court (Copenhagen) found that systematic crawling, indexing, and deep linking by the portal site ofir.dk to the real estate site Home.dk did not conflict with Danish law or the database directive of the European Union. The court even stated that search engines are desirable for the functioning of the Internet today, and that anyone who publishes information on the Internet must assume, and accept, that search engines deep link to individual pages of their website.[7]

Opt out

Website owners wishing to prevent search engines from deep linking can use the existing Robots Exclusion Standard (a /robots.txt file) to specify whether or not they want their content indexed. Some feel that content owners who fail to provide a /robots.txt file imply that they do not object to deep linking, whether by search engines or by others who might link to their content. Others believe that content owners may be unaware of the Robots Exclusion Standard or may not use robots.txt for other reasons. Deep linking is also practiced outside the search-engine context, so some participants in this debate question the relevance of the Robots Exclusion Standard to controversies about deep linking. The Robots Exclusion Standard does not programmatically enforce its directives, so it does not prevent search engines or others who ignore the convention from deep linking.
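For illustration, a minimal /robots.txt in the format the Robots Exclusion Standard defines is shown below; the site and paths are hypothetical, and the directives are purely advisory:

    # Served from http://example.com/robots.txt (hypothetical site).
    # "User-agent: *" addresses all crawlers; each "Disallow" asks them
    # not to crawl, and therefore not to index, the named path prefix.
    User-agent: *
    Disallow: /drafts/
    Disallow: /internal/

    # An empty Disallow value would instead permit crawling everything:
    # User-agent: *
    # Disallow:

Compliant crawlers fetch this file before crawling a site and skip the disallowed paths, but nothing in HTTP obliges a client to honor it.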

References


  1. [citation unavailable]
  2. [citation unavailable]
  3. [citation unavailable]
  4. [citation unavailable]
  5. [citation unavailable]
  6. [citation unavailable]
  7. [citation unavailable]


