Friday, 28 November 2008
How search engines work
Web search engines work by storing information about many web pages, which they retrieve from the WWW itself. These pages are retrieved by a Web crawler (sometimes also known as a spider), an automated Web browser which follows every link it sees; pages can be excluded by means of robots.txt. The contents of each page are then analyzed to determine how it should be indexed (for example, words are extracted from the titles, headings, or special fields called meta tags), and data about the page are stored in an index database for use in later queries.
Some search engines, such as Google, store all or part of the source page (referred to as a cache) as well as information about the web pages, whereas others, such as AltaVista, store every word of every page they find. The cached page always holds the text that was actually indexed, so it can be very useful when the content of the live page has been updated and the search terms no longer appear in it. This problem is a mild form of linkrot, and returning the cached copy satisfies the principle of least astonishment, since users expect the search terms to be on the pages returned to them. Beyond increasing search relevance, cached pages may also preserve data that is no longer available elsewhere.
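As a rough sketch of the crawling and indexing steps just described (the seed URL, page limit and regex tokenizer are illustrative assumptions; a real crawler also honours robots.txt, crawl delays and duplicate detection):

    # Minimal crawl-and-index sketch: fetch pages, record a cached copy,
    # extract words into an inverted index, and follow the links found.
    import re
    import urllib.request
    from collections import defaultdict
    from html.parser import HTMLParser

    class LinkExtractor(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(seed, max_pages=10):
        index = defaultdict(set)      # word -> set of URLs (the index database)
        cache = {}                    # URL -> raw HTML (the "cached page")
        queue, seen = [seed], set()
        while queue and len(seen) < max_pages:
            url = queue.pop(0)
            if url in seen:
                continue
            seen.add(url)
            try:
                html = urllib.request.urlopen(url, timeout=5).read().decode("utf-8", "ignore")
            except Exception:
                continue
            cache[url] = html
            for word in re.findall(r"[a-z0-9]+", html.lower()):
                index[word].add(url)
            extractor = LinkExtractor()
            extractor.feed(html)
            queue.extend(link for link in extractor.links if link.startswith("http"))
        return index, cache

    # index, cache = crawl("http://example.com")   # hypothetical seed URL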
When a user enters a query into a search engine (typically by using keywords), the engine examines its index and provides a listing of the best-matching web pages according to its criteria, usually with a short summary containing the document's title and sometimes parts of the text. Most search engines support the Boolean operators AND, OR and NOT to further specify the search query, and some provide an advanced feature called proximity search, which allows users to define the distance between keywords.
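Continuing that sketch, the Boolean operators map naturally onto set operations over the inverted index; the function below is a hypothetical illustration, not any particular engine's API:

    # Sketch of evaluating a Boolean query against the inverted index built above.
    # `index` maps each word to the set of URLs containing it.
    def boolean_query(index, must=(), may=(), must_not=()):
        """AND terms in `must`, OR terms in `may`, NOT terms in `must_not`."""
        results = set().union(*index.values()) if index else set()
        for term in must:                                   # AND: intersect
            results &= index.get(term, set())
        if may:                                             # OR: union of the optional terms
            results &= set().union(*(index.get(t, set()) for t in may))
        for term in must_not:                               # NOT: exclude
            results -= index.get(term, set())
        return results

    # e.g. pages containing "virtual" AND "machine" but NOT "sale":
    # boolean_query(index, must=["virtual", "machine"], must_not=["sale"])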
The usefulness of a search engine depends on the relevance of the result set it gives back. While there may be millions of webpages that include a particular word or phrase, some pages may be more relevant, popular, or authoritative than others. Most search engines employ methods to rank the results to provide the "best" results first. How a search engine decides which pages are the best matches, and what order the results should be shown in, varies widely from one engine to another. The methods also change over time as Internet usage changes and new techniques evolve.
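As one purely illustrative ranking signal, the pages matched above could be ordered by how often the query terms occur relative to page length; production engines combine many such signals, including link analysis and freshness:

    # Toy ranking sketch, reusing the `cache` from the crawler sketch:
    # score each matching URL by query-term frequency, normalised by page length.
    from collections import Counter
    import re

    def rank(cache, matches, query_terms):
        scores = {}
        for url in matches:
            words = Counter(re.findall(r"[a-z0-9]+", cache[url].lower()))
            total = sum(words.values()) or 1
            scores[url] = sum(words[t] for t in query_terms) / total
        return sorted(matches, key=scores.get, reverse=True)

    # ordered = rank(cache, boolean_query(index, must=["ubuntu"]), ["ubuntu"])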
Most Web search engines are commercial ventures supported by advertising revenue and, as a result, some allow advertisers to pay to have their listings ranked higher in search results. Search engines that do not accept payment for placement in their results instead make money by running search-related ads alongside the regular results; the engine earns revenue every time someone clicks on one of these ads.
Revenue in the web search portals industry is projected to grow in 2008 by 13.4 percent, with broadband connections expected to rise by 15.1 percent. Between 2008 and 2012, industry revenue is projected to rise by 56 percent as Internet penetration still has some way to go to reach full saturation in American households. Furthermore, broadband services are projected to account for an ever increasing share of domestic Internet users, rising to 118.7 million by 2012, with an increasing share accounted for by fiber-optic and high speed cable lines.[8]
Google first launched in 1998. It has one of the largest databases, covering blogs, wikis, and websites, and it also returns PDF files that can be downloaded. Google does not offer special airline searches, and its searches are not case sensitive.
Saturday, 22 November 2008
What is Kubuntu?
Kubuntu is an official derivative of the Ubuntu Linux desktop operating system that provides the KDE desktop. It is part of the Ubuntu project, and all of its packages come from the same archives as Ubuntu.
It is even possible to install KDE on an existing Ubuntu system to obtain a "Kubuntu"; the other way round, GNOME can be installed on Kubuntu, so there is no real restriction on the desktop environment you use. Kubuntu simply ships with KDE preinstalled and preconfigured, and without GNOME.
From the Kubuntu page on the Ubuntu Wiki pages "The Kubuntu project aims to be to KDE what Ubuntu is to GNOME: a great integrated distro with all the great features of Ubuntu, but based on the KDE desktop. Kubuntu is released regularly and predictably; a new release is made with a release of a new KDE Version."
Kubuntu means "towards humanity" in Bemba.
Releases
The first Kubuntu release was published on April 8, 2005. It included KDE 3.4 and a selection of the most useful KDE programs not in KDE itself, including amaroK, Kaffeine and Gwenview. Both Live CDs and Install CDs for x86, PowerPC and AMD64 platforms are available. There are also daily builds of the CDs.
Ubuntu 8.04
Ubuntu Desktop Edition
With Ubuntu Desktop Edition you can surf the web, read email, create documents and spreadsheets, edit images and much more. Ubuntu has a fast and easy graphical installer right on the Desktop CD. On a typical computer the installation should take you less than 25 minutes.
Desktop Tour
The fastest way to see Ubuntu is to take the tour
Desktop simplicity
When you start your system for the first time you'll see a desktop that is clean and tidy, with no desktop icons and a default theme that is easy on the eye.
Ubuntu 'Just Works'
We've done all the hard work for you. Once Ubuntu is installed, all the basics are in place so that your system will be immediately usable.
A complete office productivity suite
OpenOffice.org offers a user interface and feature set similar to other office suites, and includes all the key desktop applications you need, such as:
Word processor - for anything from writing a quick letter to producing an entire book.
Spreadsheet - a tool to calculate, analyse, and present your data in numerical reports or charts.
Presentation - an easy, and powerful tool for creating effective multimedia presentations.
Edit and share files in other formats
Easily open, edit and share files with friends who use Microsoft Office, WordPerfect, KOffice or StarOffice.
Quick and easy updates
The task bar contains an update area where we'll notify you when there are updates available for your system, from simple security fixes to a complete version upgrade. The update facility enables you to keep your system up-to-date with just a few clicks of your mouse.
A vast library of free software
Need more software? Simply choose from thousands of software packages in the Ubuntu catalogue, all available to download and install at the click of a button. And it's all completely free!
Help and support
You'll be able to find help using the desktop browser or online. If you have a question about using Ubuntu, you can bet someone else has already asked it. Our community has developed a range of documentation that may contain the answer to your question, or give you ideas about where to look.
This is also where you'll get access to free support from the Ubuntu community in the chat and mailing lists in many languages. Alternatively, you can purchase professional support from the Canonical Global Support Services Team, or local providers.
Ubuntu in your local language
Ubuntu aims to be usable by as many people as possible, which is why we include the very best localisation and accessibility infrastructure that the free software community has to offer.
You can download Ubuntu, or request a free CD from Canonical.
System Requirements
Ubuntu is available for PC, 64-bit PC and Intel-based Mac architectures. At least 256 MB of RAM is required to run the alternate install CD (384 MB of RAM is required to use the live CD based installer). Installation requires at least 4 GB of disk space.
Monday, 17 November 2008
devices
In 2007, L-com Connectivity Products, a leading manufacturer of cable assemblies, connectors, and other connectivity devices for over 25 years, acquired HyperLink Technologies of Boca Raton, FL. HyperLink Technologies is a high-quality manufacturer of antennas, amplifiers, and other wireless connectivity equipment. This acquisition strengthens L-com’s product offering, thereby creating a “one-stop source” for users of all connectivity equipment, wired or wireless.
This modem router uses Wireless G technology and a high-speed ADSL Modem to provide an all-in-one solution for connecting to the internet, e-mail and VoIP. Connect your PCs via the built-in Router and 4-port Switch to share the Internet throughout your household while advanced firewall and security features protect your PCs and your data. £39.99
Thursday, 13 November 2008
Computer Keyboard
$69.99
Advertisement ID : 749364
Ads Classification : For Sale
Location : Makati City, Metro Manila
Regular Price : P 300.00
Now Only : P 200.00
Save : P 100.00
Condition : 2nd Hand (Used)
Warranty : Personal Warranty
Computer Keyboard Reviews and Buying Guide:
One of the most basic components of a computer is the keyboard. Anyone who uses a computer eventually gets a "feel" for their keyboard and knows just how far certain keys are from their hands. Good typists can type over 100 words a minute, and 10-key experts can enter numbers and digits faster than you might think. Beyond the letters, numbers, and symbols you find on a computer keyboard, there are countless other functions and shortcuts: keys and combinations such as Alt-Tab, Esc, Shift, Ctrl, and the up, down and sideways arrows all help us navigate documents, web pages and software programs on a day-to-day basis. Many keyboards are slightly different, leaving an end user to learn a new routine when switching jobs or computers.
Tuesday, 11 November 2008
A virtual machine was originally defined by Popek and Goldberg as "an efficient, isolated duplicate of a real machine". Current use includes virtual machines which have no direct correspondence to any real hardware.[1]
Example: A program written in Java receives services from the Java Runtime Environment software by issuing commands from which the expected result is returned by the Java software. By providing these services to the program, the Java software is acting as a "virtual machine", taking the place of the operating system or hardware for which the program would ordinarily have to be written specifically.
Virtual machines are separated into two major categories, based on their use and degree of correspondence to any real machine. A system virtual machine provides a complete system platform which supports the execution of a complete operating system (OS). In contrast, a process virtual machine is designed to run a single program, which means that it supports a single process. An essential characteristic of a virtual machine is that the software running inside is limited to the resources and abstractions provided by the virtual machine -- it cannot break out of its virtual world.
System virtual machines
System virtual machines (sometimes called hardware virtual machines) allow the sharing of the underlying physical machine resources between different virtual machines, each running its own operating system. The software layer providing the virtualization is called a virtual machine monitor or hypervisor. A hypervisor can run on bare hardware (Type 1 or native VM) or on top of an operating system (Type 2 or hosted VM).
The main advantages of system VMs are:
* multiple OS environments can co-exist on the same computer, in strong isolation from each other
* the virtual machine can provide an instruction set architecture (ISA) that is somewhat different from that of the real machine
Multiple VMs each running their own operating system (called guest operating systems) are frequently used in server consolidation, where different services that used to run on individual machines in order to avoid interference are instead run in separate VMs on the same physical machine. This use is frequently called quality-of-service isolation (QoS isolation).
The desire to run multiple operating systems was the original motivation for virtual machines, as it allowed time-sharing a single computer between several single-tasking OSes.
The guest OSes do not have to be all the same, making it possible to run different OSes on the same computer (e.g., Microsoft Windows and Linux, or older versions of an OS in order to support software that has not yet been ported to the latest version). The use of virtual machines to support different guest OSes is becoming popular in embedded systems; a typical use is to support a real-time operating system at the same time as a high-level OS such as Linux or Windows.
Another use is to sandbox an OS that is not trusted, possibly because it is a system under development. Virtual machines have other advantages for OS development, including better debugging access and faster reboots.[2]
Alternative techniques such as Solaris Zones provide a level of isolation within a single operating system. The isolation is not as complete as a VM's: a kernel exploit in one zone affects all zones, whereas a kernel exploit in a VM does not affect other VMs on the same host. Zones are not virtual machines but an example of "operating-system virtualization", which also covers other "virtual environments" (also called "virtual servers") such as Virtuozzo, FreeBSD jails, Linux-VServer, chroot jails, and OpenVZ. These provide some form of encapsulation of processes within an operating system. They have the advantage of being more resource-efficient than full virtualization; the disadvantage is that they can only run a single operating system kernel at a single version/patch level, so they cannot, for example, be used to run two applications on the same hardware when one supports only a newer OS version and the other only an older one.
Process virtual machines
A process VM, sometimes called an application virtual machine, runs as a normal application inside an OS and supports a single process. It is created when that process is started and destroyed when it exits. Its purpose is to provide a platform-independent programming environment that abstracts away details of the underlying hardware or operating system, and allows a program to execute in the same way on any platform.
A process VM provides a high-level abstraction — that of a high-level programming language (compared to the low-level ISA abstraction of the system VM). Process VMs are implemented using an interpreter; performance comparable to compiled programming languages is achieved by the use of just-in-time compilation.
This type of VM has become popular with the Java programming language, which is implemented using the Java virtual machine. Another example is the .NET Framework, which runs on a VM called the Common Language Runtime.
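CPython is itself a process VM of this kind: Python source is compiled to bytecode which the interpreter then executes, and the standard library's dis module makes that bytecode visible. A minimal illustration (the exact opcodes printed vary between Python versions):

    # Disassemble a tiny function to show the bytecode run by the Python VM.
    import dis

    def add(a, b):
        return a + b

    dis.dis(add)
    # Typical output (version-dependent): LOAD_FAST a, LOAD_FAST b,
    # BINARY_ADD (or BINARY_OP on newer versions), RETURN_VALUE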
A special case of process VMs are systems that abstract over the communication mechanisms of a (potentially heterogeneous) computer cluster. Such a VM does not consist of a single process, but one process per physical machine in the cluster. They are designed to ease the task of programming parallel applications by letting the programmer focus on algorithms rather than the communication mechanisms provided by the interconnect and the OS. They do not hide the fact that communication takes place, and as such do not attempt to present the cluster as a single parallel machine.
Unlike other process VMs, these systems do not provide a specific programming language, but are embedded in an existing language; typically such a system provides bindings for several languages (e.g., C and FORTRAN). Examples are PVM (Parallel Virtual Machine) and MPI (Message Passing Interface). They are not strictly virtual machines, as the applications running on top still have access to all OS services, and are therefore not confined to the system model provided by the "VM".
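MPI is usually called from C or Fortran, but the same message-passing style can be sketched through the mpi4py Python bindings (an assumption here, not something the text mentions); the script would be launched under an MPI runner, e.g. mpirun -n 2 python demo.py:

    # Each process learns its rank in the communicator and exchanges a message;
    # communication between processes is explicit, exactly as described above.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        comm.send({"greeting": "hello from rank 0"}, dest=1)
    elif rank == 1:
        data = comm.recv(source=0)
        print("rank 1 received:", data)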
Techniques
Emulation of the underlying raw hardware (native execution)
[Screenshot: VMware Workstation running Ubuntu, on Windows Vista]
This approach is described as full virtualization of the hardware, and can be implemented using a Type 1 or Type 2 hypervisor. (A Type 1 hypervisor runs directly on the hardware; a Type 2 hypervisor runs on another operating system, such as Linux). Each virtual machine can run any operating system supported by the underlying hardware. Users can thus run two or more different "guest" operating systems simultaneously, in separate "private" virtual computers.
The pioneer system using this concept was IBM's CP-40, the first (1967) version of IBM's CP/CMS (1967-1972) and the precursor to IBM's VM family (1972-present). With the VM architecture, most users run a relatively simple interactive computing single-user operating system, CMS, as a "guest" on top of the VM control program (VM-CP). This approach kept the CMS design simple, as if it were running alone; the control program quietly provides multitasking and resource management services "behind the scenes". In addition to CMS, VM users can run any of the other IBM operating systems, such as MVS or z/OS. z/VM is the current version of VM, and is used to support hundreds or thousands of virtual machines on a given mainframe. Some installations use Linux for zSeries to run Web servers, where Linux runs as the operating system within many virtual machines.
Full virtualization is particularly helpful in operating system development, when experimental new code can be run at the same time as older, more stable, versions, each in a separate virtual machine. The process can even be recursive: IBM debugged new versions of its virtual machine operating system, VM, in a virtual machine running under an older version of VM, and even used this technique to simulate new hardware.[3]
The standard x86 processor architecture as used in modern PCs does not actually meet the Popek and Goldberg virtualization requirements. Notably, there is no execution mode where all sensitive machine instructions always trap, which would allow per-instruction virtualization.
Despite these limitations, several software packages have managed to provide virtualization on the x86 architecture, even though dynamic recompilation of privileged code, as first implemented by VMware, incurs some performance overhead compared to a VM running on a natively virtualizable architecture such as the IBM System/370 or Motorola MC68020. Several other packages, such as Virtual PC, VirtualBox, Parallels Workstation and Virtual Iron, now also implement virtualization on x86 hardware.
Intel and AMD have since introduced features in their x86 processors (Intel VT-x and AMD-V) to enable virtualization in hardware.
Emulation of a non-native system
Virtual machines can also perform the role of an emulator, allowing software applications and operating systems written for another computer processor architecture to be run.
Some virtual machines emulate hardware that only exists as a detailed specification. For example:
* One of the first was the p-code machine specification, which allowed programmers to write Pascal programs that would run on any computer running virtual machine software that correctly implemented the specification.
* The specification of the Java virtual machine.
* The Common Language Infrastructure virtual machine at the heart of the Microsoft .NET initiative.
* Open Firmware allows plug-in hardware to include boot-time diagnostics, configuration code, and device drivers that will run on any kind of CPU.
This technique allows diverse computers to run any software written to that specification; only the virtual machine software itself must be written separately for each type of computer on which it runs.
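A toy interpreter makes the idea concrete: the made-up stack-machine instruction set below stands in for p-code or Java bytecode, and only the interpreter itself would need rewriting for each kind of host computer:

    # Minimal stack-machine interpreter for a made-up instruction set.
    def run(program):
        stack = []
        for op, *args in program:
            if op == "PUSH":
                stack.append(args[0])
            elif op == "ADD":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "MUL":
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
            elif op == "PRINT":
                print(stack.pop())

    # (2 + 3) * 4 -> prints 20, on any machine that runs the interpreter
    run([("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",), ("PRINT",)])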
Operating system-level virtualization
Operating system-level virtualization is a server virtualization technology that virtualizes servers at the operating system (kernel) layer. It can be thought of as partitioning: a single physical server is sliced into multiple small partitions (also called virtual environments (VE), virtual private servers (VPS), guests, zones, etc.); each such partition looks and feels like a real server from the point of view of its users.
For example, Solaris Zones supports multiple guest OSes running under the same OS (such as Solaris 10). All guests have to use the same kernel level and cannot run as different OS versions. Solaris Zones also requires that the host OS be a version of Solaris; operating systems from other manufacturers are not supported.
Another example is AIX, which provides the same technique under the name Micro-Partitioning.
The operating system-level architecture has low overhead, which helps to maximize efficient use of server resources. The virtualization adds only negligible overhead and allows hundreds of virtual private servers to run on a single physical server. In contrast, approaches such as full virtualization (like VMware) and paravirtualization (like Xen or UML) cannot achieve the same level of density, because of the overhead of running multiple kernels. On the other hand, operating system-level virtualization cannot run different operating systems (i.e. different kernels), although different libraries, distributions, etc. are possible.