Tuesday, October 14, 2008

Naver
Naver (Hangul: 네이버) is the most popular search portal in South Korea. Launched in June 1999, it was the first portal in Korea to use its own proprietary search engine. Among Naver's innovations was "Comprehensive Search", launched in 2000, which provides results from multiple categories on a single page and was possibly later benchmarked [1] by Google for its "Universal Search". Naver has since remained in the lead in the development of Korean search services, adding offerings such as "Knowledge Search", launched in 2002, which was later benchmarked [2] by Yahoo! for its Yahoo! Answers.

History
Naver was incorporated in June 1999, launching the first South Korean search portal to use an internally developed search engine. In August 2000, it launched the "Comprehensive Search" service, which lets users get a variety of results for a search query on a single page, organized by type: blogs, websites, images, cafes, and so on. This was five years before Google launched a similar offering with its "Universal Search."
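To illustrate the idea, here is a minimal sketch of that kind of federated, type-grouped search: one query is run against several category indexes, and the hits come back grouped by category. The category names and sample index contents are hypothetical, not Naver's actual data or code.

```python
# A minimal sketch of "comprehensive" search: run one query against several
# category indexes and present the hits grouped by type on a single page.
# The categories and documents below are made-up illustrations.

SAMPLE_INDEXES = {
    "blogs":    ["my trip to Seoul", "cooking kimchi at home"],
    "images":   ["Seoul skyline photo", "kimchi close-up"],
    "cafes":    ["Seoul travel cafe", "home cooking cafe"],
    "websites": ["official Seoul tourism site"],
}

def comprehensive_search(query):
    """Return matching documents from every category, keyed by category."""
    results = {}
    for category, documents in SAMPLE_INDEXES.items():
        hits = [doc for doc in documents if query.lower() in doc.lower()]
        if hits:
            results[category] = hits
    return results

for category, hits in comprehensive_search("seoul").items():
    print(category, "->", hits)
```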
In July 2000, Naver merged with Hangame, South Korea's first online game portal, and in 2001 the combined company changed its name to NHN, for Next Human Network. The combination of the country's number-one search engine and number-one game portal has allowed NHN to remain South Korea's largest Internet company, with the top market capitalization among companies listed on KOSDAQ.[5]
In Naver's early days, there was a relative dearth of webpages in the Korean language. To fill this void, Naver became an early pioneer of user-generated content with its "Knowledge Search" service, launched in 2002. In Knowledge Search, users pose questions on any subject and select among answers provided by other users, awarding points to the users who provide the best answers. Knowledge Search launched three years before Yahoo!'s similar "Yahoo! Answers" service, and it now boasts a database of over 80 million answer pages.
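The question-answer-points flow described above can be modeled roughly as follows. This is a toy sketch: the class names, point award, and sample data are hypothetical illustrations, not Naver's actual schema.

```python
# A toy model of a Knowledge-Search-style flow: users post questions, others
# answer, and the asker picks a best answer, awarding points to its author.

from dataclasses import dataclass, field

@dataclass
class Answer:
    author: str
    text: str

@dataclass
class Question:
    asker: str
    text: str
    answers: list = field(default_factory=list)

points = {}  # running point totals per user

def select_best(question, index, award=10):
    """Mark one answer as best and credit its author with points."""
    best = question.answers[index]
    points[best.author] = points.get(best.author, 0) + award
    return best

q = Question("user_a", "What does Hangul mean?")
q.answers.append(Answer("user_b", "It is the Korean writing system."))
select_best(q, 0)
print(points)  # {'user_b': 10}
```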
Over the years, Naver has continued to expand its offerings, adding a blog service in 2003, local information search and book search services in 2004, and desktop search in 2005. From 2005 to 2007, it expanded its multimedia search services.
Web directory
A web directory or link directory is a directory on the World Wide Web. It specializes in linking to other web sites and categorizing those links.
A web directory is not a search engine and does not display lists of web pages based on keywords; instead, it lists web sites by category and subcategory. The categorization is usually based on the whole web site rather than on one page or a set of keywords, and sites are often limited to inclusion in only one or two categories. Web directories often allow site owners to submit their sites directly for inclusion and have editors review submissions for fitness.
Some directories are very general in scope and list websites across a wide range of categories, regions, and languages, but there are also a large number of niche directories that focus on restricted regions, single languages, or specialist sectors. One common type of niche directory is the shopping directory, which specializes in listing retail e-commerce sites.
Examples of well-known general web directories are the Yahoo! Directory and the Open Directory Project (ODP). The ODP is significant due to its extensive categorization, its large number of listings, and its free availability for use by other directories and search engines.[1]
Debate over the quality of directories and databases continues, as search engines use the ODP's content without real integration, and some experiment with clustering. There have been many attempts to make directory development easier, whether through script-driven "links for all" submission sites or through any number of available PHP portals and programs. More recently, social software techniques have spawned new efforts at categorization, with Amazon.com adding tagging to its product pages.
Directories have various types of listings, often depending on the price paid for inclusion:
• Free submission - there is no charge for the review and listing of the site
• Reciprocal link - a link back to the directory must be added somewhere on the submitted site in order to be listed
• Paid submission - a one-time or recurring fee is charged for reviewing and listing the submitted link
• Nofollow - the link carries a rel="nofollow" attribute, meaning search engines give it no ranking credit (see the sketch after this list)
• Featured listing - the link is given a premium position in a category (or multiple categories) or in other sections of the directory, such as the homepage
• Bid for position - sites are ordered according to bids
• Affiliate links - the directory earns a commission for customers it refers to the listed websites
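As referenced in the nofollow item above, here is a rough sketch of how a crawler might treat rel="nofollow" links on a directory page: the links are still seen, but they are set aside rather than credited. The sample HTML and the record-but-ignore policy are illustrative assumptions, not any particular engine's actual behavior.

```python
# A sketch of a crawler that separates rel="nofollow" links from ordinary
# ones while scanning a page. Nofollow links are recorded but would pass
# no ranking credit. The sample page below is hypothetical.

from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.followed = []
        self.ignored = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        href = attrs.get("href")
        if href is None:
            return
        rel = attrs.get("rel") or ""
        if "nofollow" in rel:
            self.ignored.append(href)   # seen, but given no ranking credit
        else:
            self.followed.append(href)

page = '<a href="http://a.example/">A</a> <a rel="nofollow" href="http://b.example/">B</a>'
collector = LinkCollector()
collector.feed(page)
print("followed:", collector.followed)
print("ignored:", collector.ignored)
```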
1. ^ Paul Festa (December 27, 1999), Web search results still have human touch, CNET News.com, retrieved September 18, 2007
Dot-com bubble
The "dot-com bubble" (or sometimes the "I.T. bubble") was a speculative bubble covering roughly 1995–2001 (with a climax on March 10, 2000 with the NASDAQ peaking at 5132.52) during which stock markets in Western nations saw their value increase rapidly from growth in the new Internet sector and related fields. The period was marked by the founding (and, in many cases, spectacular failure) of a group of new Internet-based companies commonly referred to as dot-coms. A combination of rapidly increasing stock prices, individual speculation in stocks, and widely available venture capital created an exuberant environment in which many of these businesses dismissed standard business models, focusing on increasing market share at the expense of the bottom line. The bursting of the dot-com bubble marked the beginning of a relatively mild yet rather lengthy early 2000s recession in the developed world
Venture capitalists saw record-setting rises in the stock valuations of dot-com companies and therefore moved faster and with less caution than usual, choosing to mitigate risk by starting many contenders and letting the market decide which would succeed. The low interest rates of 1998–99 helped increase the amounts of start-up capital available. Although a number of the new entrepreneurs had realistic plans and administrative ability, most lacked these characteristics but were still able to sell their ideas to investors because of the novelty of the dot-com concept.
A canonical "dot-com" company's business model relied on harnessing network effects by operating at a sustained net loss to build market share (or mind share). These companies expected that they could build enough brand awareness to charge profitable rates for their services later. The motto "get big fast" reflected this strategy.[1] During the loss period the companies relied on venture capital and especially initial public offerings of stock to pay their expenses. The novelty of these stocks, combined with the difficulty of valuing the companies, sent many stocks to dizzying heights and made the initial controllers of the company wildly rich on paper.
Historically, the dot-com boom can be seen as similar to a number of other technology-inspired booms of the past including railroads in the 1840s, automobiles and radio in the 1920s, transistor electronics in the 1950s, computer time-sharing in the 1960s, and home computers and biotechnology in the early 1980s.
Web portal
A web portal is a site that functions as a point of access to information on the World Wide Web, presenting information from diverse sources in a unified way. Aside from a standard search engine, web portals offer other services such as e-mail, news, stock prices, infotainment, and various other features. Portals provide a way for enterprises to offer a consistent look and feel, with access control and procedures for multiple applications that would otherwise have been entirely separate entities. An example of a web portal is Yahoo!.
Two broad categorizations of portals are horizontal portals (e.g. Yahoo!) and vertical portals, or "vortals", which focus on one functional area (e.g. salesforce.com).
A personal portal is a site on the World Wide Web that typically provides personalized capabilities to its visitors, offering a pathway to other content. It is designed to use distributed applications and varying numbers and types of middleware and hardware to provide services from a number of different sources. Business portals, in addition, are designed to support collaboration in the workplace. A further business-driven requirement is that portal content be able to work on multiple platforms, such as personal computers, personal digital assistants (PDAs), and mobile phones.
A personal or web portal can be integrated with many forum systems.
It is often necessary to have a centralized application with access to the various other applications within an enterprise, so that information can be shared across them. Users with different roles accessing different applications likewise prefer a single point of access to all of them over the Internet; they want to personalize the applications and have coupled applications coordinated. Above all, administrators want a single place from which to administer all the applications. Portals achieve all of this. Because applications share information through the portal, communication between the various types of users improves. Portals can also support event-driven campaigns. Below is a list of the advantages of using portals:
• Intelligent integration and access to enterprise content, applications and processes
• Improved communication and collaboration among customers, partners, and employees
• Unified, real-time access to information held in disparate systems
• Personalized user modification and maintenance of the website presentation
In the late 1990s, the web portal was a hot commodity. After the proliferation of web browsers in the mid-1990s, many companies tried to build or acquire a portal, to have a piece of the Internet market. The web portal gained special attention because it was, for many users, the starting point of their web browser.
The 1990s were a time of innovation for the concept of corporate web portals. Many companies began to offer tools to help webmasters manage their data, applications and information more easily, and through personalized views. Some portal solutions today are able to integrate legacy applications and other portal objects, and can handle thousands of user requests.
Today’s corporate portals are sprouting new value-added capabilities for businesses. Capabilities such as managing workflows, increasing collaboration between work groups, and allowing content creators to self-publish their information are lifting the burden off already strapped IT departments.
In addition, most portal solutions today, if architected correctly, can allow internal and external access to specific corporate information using secure authentication or single sign-on.
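One common way to make that kind of cross-application access work is with signed tokens: one application authenticates the user and issues a token, and any sibling application that shares the signing secret can verify it without asking the user to log in again. The sketch below is a minimal illustration with a made-up secret and token format, not any specific vendor's single sign-on mechanism.

```python
# A minimal sketch of token-style single sign-on between portal applications.
# The shared secret and the "user|expiry|signature" format are hypothetical.

import hashlib
import hmac
import time

SHARED_SECRET = b"portal-wide-secret"  # assumed to be known to every app

def issue_token(user, ttl_seconds=3600):
    """Sign 'user|expiry' so sibling applications can trust the login."""
    expiry = str(int(time.time()) + ttl_seconds)
    payload = f"{user}|{expiry}"
    signature = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{signature}"

def verify_token(token):
    """Accept the token only if the signature matches and it has not expired."""
    payload, _, signature = token.rpartition("|")
    expected = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return False
    expiry = int(payload.rsplit("|", 1)[1])
    return time.time() < expiry

token = issue_token("alice")
print(verify_token(token))  # True while the token is still valid
```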
The JSR 168 standard emerged around 2001. Java Specification Request (JSR) 168 allows the interoperability of portlets across different portal platforms, letting portal developers, administrators and consumers integrate standards-based portals and portlets across a variety of vendor solutions.
Microsoft's SharePoint Portal Server line of products has been gaining popularity among corporations for building their portals, partly due to its tight integration with the rest of the Microsoft Office products. Research by Forrester Research in 2004 showed that Microsoft was the vendor of choice for companies looking for portal server software.[1]
In response to Microsoft's strong presence in the portal market, other portal vendors are being acquired or are challenging Microsoft's offering. Oracle Corporation released WebCenter Suite, a product similar to SharePoint, in 2007; it includes a full line of collaboration tools (blogs, wikis, team spaces, calendaring, email, etc.).
In addition, the popularity of content aggregation is growing, and portal solutions will continue to evolve significantly over the next few years. The Gartner Group predicts that generation 8 portals will expand on the enterprise mash-up concept of delivering a variety of information, tools, applications and access points through a single mechanism.
With the increase in user-generated content, disparate data silos, and file formats, information architects and taxonomists will be required to give users the ability to tag (classify) the data. This will ultimately cause a ripple effect in which users also generate ad hoc navigation and information flows.
Guruji.com
Guruji.com is an Indian Internet search engine focused on providing better search results to Indian consumers by leveraging proprietary algorithms and data in the Indian context.
The name Guruji.com derives from "guru", the Sanskrit word for teacher; the search engine aims to give its "students" (its users) what they require. Guruji.com returns pages from India as search results, making it a search engine by Indians, for Indians. Guruji.com was launched to the world on October 16, 2006.
Meta element
Meta elements are HTML or XHTML elements used to provide structured metadata about a web page. Such elements must be placed as tags in the head section of an HTML or XHTML document. Meta elements can be used to specify a page description, keywords, and any other metadata not provided through the other head elements and attributes.
The meta element has four valid attributes: content, http-equiv, name and scheme. Of these, only content is a required attribute.
Meta elements provide information about a given webpage, most often to help search engines categorize them correctly. They are inserted into the HTML document, but are often not directly visible to a user visiting the site.
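As a concrete illustration, the sketch below reads the name/content pairs out of a page's head section, the way a simple crawler or indexing tool might. The sample document is hypothetical.

```python
# A sketch of extracting meta elements from a page using Python's standard
# html.parser. The PAGE document below is a made-up example.

from html.parser import HTMLParser

PAGE = """<html><head>
<meta name="description" content="A short summary of the page.">
<meta name="keywords" content="metadata, example, html">
<meta http-equiv="content-type" content="text/html; charset=utf-8">
</head><body>...</body></html>"""

class MetaReader(HTMLParser):
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        key = attrs.get("name") or attrs.get("http-equiv")
        if key and "content" in attrs:
            self.meta[key.lower()] = attrs["content"]

reader = MetaReader()
reader.feed(PAGE)
print(reader.meta["description"])  # A short summary of the page.
```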
They have been the focus of a field of marketing research known as search engine optimization (SEO), in which different methods are explored to give a site a higher ranking in search engines. In the mid-to-late 1990s, search engines relied on meta data to classify web pages correctly, and webmasters quickly learned the commercial significance of having the right meta elements, which frequently led to a high ranking in the search engines and thus high traffic to the web site.
As search engine traffic achieved greater significance in online marketing plans, consultants were brought in who were well versed in how search engines perceive a web site. These consultants used a variety of techniques (legitimate and otherwise) to improve ranking for their clients.
Meta elements have significantly less effect on search engine results pages today than they did in the 1990s, and their utility has decreased dramatically as search engine robots have become more sophisticated. This is due in part to the nearly endless repetition of keywords within meta elements (keyword stuffing) and to attempts by unscrupulous website placement consultants to manipulate (spamdexing) or otherwise circumvent search engine ranking algorithms.
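One crude signal against keyword stuffing is repetition inside the keywords list itself: natural keyword lists rarely repeat a single term over and over. The heuristic below is purely illustrative, with an arbitrary threshold; it is not any engine's actual algorithm.

```python
# An illustrative keyword-stuffing check: flag a keywords string when one
# token accounts for an outsized share of the list. The 0.25 threshold is
# an arbitrary, made-up cutoff.

from collections import Counter

def looks_stuffed(keyword_string, threshold=0.25):
    tokens = [t.strip().lower() for t in keyword_string.split(",") if t.strip()]
    if not tokens:
        return False
    top_count = Counter(tokens).most_common(1)[0][1]
    return top_count / len(tokens) > threshold

print(looks_stuffed("shoes, shoes, shoes, shoes, boots"))  # True
print(looks_stuffed("shoes, boots, sandals, sneakers"))    # False
```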
While search engine optimization can improve search engine ranking, consumers of such services should be careful to employ only reputable providers. Given the extraordinary competition and technological craftsmanship required for top search engine placement, the implication of the term "search engine optimization" has deteriorated over the last decade. Where it once implied bringing a website to the top of a search engine's results page, for the average consumer it now implies a relationship with keyword spamming or optimizing a site's internal search engine for improved performance.
Major search engine robots are more likely to quantify such extant factors as the volume of incoming links from related websites, quantity and quality of content, technical precision of source code, spelling, functional versus broken hyperlinks, volume and consistency of searches and/or viewer traffic, time within the website, page views, revisits, click-throughs, technical user-features, uniqueness, redundancy, relevance, advertising revenue yield, freshness, geography, language, and other intrinsic characteristics.
