Wednesday, 20 May 2009

Linux





Linux (commonly pronounced /ˈlɪnəks/[4]) is a generic term referring to Unix-like computer operating systems based on the Linux kernel. Their development is one of the most prominent examples of free and open source software collaboration; typically all the underlying source code can be used, freely modified, and redistributed by anyone under the terms of the GNU GPL[5] and other free licenses.

Linux is predominantly known for its use in servers, although it is installed on a wide variety of computer hardware, ranging from embedded devices and mobile phones to supercomputers.[6] Linux distributions, installed on both desktop and laptop computers, have become increasingly commonplace in recent years, owing largely to the popular Ubuntu distribution and to the emergence of netbooks.[7][8]

The name "Linux" (Linus-linux.ogg listen ) comes from the Linux kernel, originally written in 1991 by Linus Torvalds. The rest of the system, including utilities and libraries, usually comes from the GNU operating system announced in 1983 by Richard Stallman. The GNU contribution is the basis for the Free Software Foundation's preferred name GNU/Linux.[9][10]


Windows 7





Windows 7 (formerly codenamed Blackcomb and Vienna) is an upcoming version of Microsoft Windows, a series of operating systems produced by Microsoft for use on personal computers, including home and business desktops, laptops, tablet PCs, netbooks and media center PCs.[1] Microsoft has stated that it plans to release Windows 7 "in time for the holiday season" of 2009,[2] less than three years after the general availability of its predecessor, Windows Vista. Its server counterpart, Windows Server 2008 R2, is slated for release around the same time. The exact date is expected to be October 15, 2009.[3]

Unlike its predecessor, Windows 7 is intended to be an incremental upgrade to the Windows line, with the goal of being compatible with applications and hardware with which Windows Vista is already compatible.[4] Presentations given by the company in 2008 have focused on multi-touch support, a redesigned Windows Shell with a new taskbar, a home networking system called HomeGroup,[5] and performance improvements. Some applications that have been included with prior releases of Microsoft Windows, including Windows Calendar, Windows Mail, Windows Movie Maker, and Windows Photo Gallery, will not be included in Windows 7; some will instead be offered separately as part of the freeware Windows Live Essentials suite.[6]


Microsoft Windows




Microsoft Windows is a series of software operating systems and graphical user interfaces produced by Microsoft. Microsoft first introduced an operating environment named Windows in November 1985 as an add-on to MS-DOS in response to the growing interest in graphical user interfaces (GUIs).[1] Microsoft Windows came to dominate the world's personal computer market, overtaking Mac OS, which had been introduced previously. At the 2004 IDC Directions conference, it was stated that Windows had approximately 90% of the client operating system market.[2] The most recent client version of Windows is Windows Vista; the most recent server version is Windows Server 2008. Vista's successor, Windows 7 (currently a public release candidate), is slated to be released prior to the 2009 holiday season.

USB flash drive





A USB flash drive consists of a NAND-type flash memory data storage device integrated with a USB (universal serial bus) interface. USB flash drives are typically removable and rewritable, much smaller than a floppy disk (1 to 4 inches or 2.5 to 10 cm), and most USB flash drives weigh less than an ounce (28 g).[1] Storage capacities typically range from 64 MB to 128 GB,[2] with steady improvements in size and price per gigabyte. Some allow 1 million write or erase cycles[3][4] and have 10-year data retention,[5] and they connect via USB 1.1 or USB 2.0 interfaces.


USB flash drives offer potential advantages over other portable storage devices, particularly the floppy disk. They have a more compact shape, operate faster, hold much more data, have a more durable design, and operate more reliably due to their lack of moving parts. Additionally, it has become increasingly common for computers to be sold without floppy disk drives. USB ports, on the other hand, appear on almost every current mainstream PC and laptop. These types of drives use the USB mass storage standard, supported natively by modern operating systems such as Windows, Mac OS X, Linux, and other Unix-like systems. USB drives with USB 2.0 support can also operate faster than an optical disc drive, while storing a larger amount of data in a much smaller space.

Nothing actually moves in a flash drive: the term drive persists because computers read and write flash-drive data using the same system commands as for a mechanical disk drive, with the storage appearing to the computer operating system and user interface as just another drive.[4]
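
To make the "just another drive" point concrete, here is a minimal sketch (assuming a Linux system with sysfs; device names such as sdb are illustrative) that lists the block devices the kernel marks as removable, which is how a plugged-in flash drive typically shows up:

```python
# Minimal sketch: list removable block devices (e.g. USB flash drives) on Linux
# by reading the "removable" flag that sysfs exposes for each block device.
# Paths and device names are illustrative; a real system may differ.
import os

SYS_BLOCK = "/sys/block"

def removable_drives():
    drives = []
    for name in sorted(os.listdir(SYS_BLOCK)):
        flag_path = os.path.join(SYS_BLOCK, name, "removable")
        try:
            with open(flag_path) as f:
                if f.read().strip() == "1":
                    drives.append(name)   # e.g. "sdb" for a flash drive
        except OSError:
            continue                      # entry without a removable flag
    return drives

if __name__ == "__main__":
    for drive in removable_drives():
        print(f"/dev/{drive} is removable; the OS treats it like any other drive")
```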

A flash drive consists of a small printed circuit board protected inside a plastic, metal, or rubberized case, robust enough for carrying with no additional protection—in a pocket or on a key chain, for example. The USB connector is protected by a removable cap or by retracting into the body of the drive, although it is not likely to be damaged if exposed (though it may damage other items, for example a bag it is placed in). Most flash drives use a standard type-A USB connector, allowing them to be plugged into a port on a personal computer, but drives for other interfaces also exist.


Universal Serial Bus or USB




In information technology, Universal Serial Bus (USB) is a serial bus standard to connect devices to a host computer. USB was designed to allow many peripherals to be connected using a single standardized interface socket and to improve plug and play capabilities by allowing hot swapping; that is, by allowing devices to be connected and disconnected without rebooting the computer or turning off the device. Other convenient features include providing power to low-consumption devices, eliminating the need for an external power supply; and allowing many devices to be used without requiring manufacturer-specific device drivers to be installed.
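
As a rough illustration of plug-and-play enumeration, the following sketch (assuming a Linux host; the sysfs layout is the kernel's, but any device names it prints are hypothetical) walks /sys/bus/usb/devices and reports the vendor and product IDs of whatever is currently attached; running it before and after plugging a device in shows hot swapping at work:

```python
# Sketch: enumerate currently connected USB devices on Linux by walking sysfs.
# Each device directory exposes idVendor/idProduct (and often a product string).
import os

USB_DEVICES = "/sys/bus/usb/devices"

def list_usb_devices():
    for entry in sorted(os.listdir(USB_DEVICES)):
        path = os.path.join(USB_DEVICES, entry)
        vid_file = os.path.join(path, "idVendor")
        pid_file = os.path.join(path, "idProduct")
        if not (os.path.isfile(vid_file) and os.path.isfile(pid_file)):
            continue                      # skip interface entries, keep devices
        with open(vid_file) as f:
            vid = f.read().strip()
        with open(pid_file) as f:
            pid = f.read().strip()
        product = ""
        prod_file = os.path.join(path, "product")
        if os.path.isfile(prod_file):
            with open(prod_file) as f:
                product = f.read().strip()
        print(f"{entry}: {vid}:{pid} {product}")

if __name__ == "__main__":
    list_usb_devices()
```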

USB is intended to replace many varieties of serial and parallel ports. USB can connect computer peripherals such as mice, keyboards, PDAs, gamepads and joysticks, scanners, digital cameras, printers, personal media players, flash drives, and external hard drives. For many of those devices, USB has become the standard connection method. USB was designed for personal computers, but it has become commonplace on other devices such as PDAs and video game consoles, and as a power cord between a device and an AC adapter plugged into a wall plug for charging. As of 2008, there are about 2 billion USB devices sold per year, and about 6 billion total sold to date.[1]

The design of USB is standardized by the USB Implementers Forum (USB-IF), an industry standards body incorporating leading companies from the computer and electronics industries. Notable members have included Agere (now merged with LSI Corporation), Apple Inc., Hewlett-Packard, Intel, NEC, and Microsoft.





Image scanner





In computing, a scanner is a device that optically scans images, printed text, handwriting, or an object, and converts it to a digital image. Common examples found in offices are variations of the desktop (or flatbed) scanner where the document is placed on a glass window for scanning. Hand-held scanners, where the device is moved by hand, have evolved from text scanning "wands" to 3D scanners used for industrial design, reverse engineering, test and measurement, orthotics, gaming and other applications. Mechanically driven scanners that move the document are typically used for large-format documents, where a flatbed design would be impractical.

Modern scanners typically use a charge-coupled device (CCD) or a Contact Image Sensor (CIS) as the image sensor, whereas older drum scanners use a photomultiplier tube as the image sensor. A rotary scanner, used for high-speed document scanning, is another type of drum scanner, using a CCD array instead of a photomultiplier. Other types of scanners are planetary scanners, which take photographs of books and documents, and 3D scanners, for producing three-dimensional models of objects.

Another category of scanner is digital camera scanners, which are based on the concept of reprographic cameras. Due to increasing resolution and new features such as anti-shake, digital cameras have become an attractive alternative to regular scanners. While still having disadvantages compared to traditional scanners (such as distortion, reflections, shadows, and low contrast), digital cameras offer advantages such as speed, portability, and gentle digitizing of thick documents without damaging the book spine. New scanning technologies are combining 3D scanners with digital cameras to create full-color, photo-realistic 3D models of objects.

In the biomedical research area, detection devices for DNA microarrays are called scanners as well. These scanners are high-resolution systems (up to 1 µm/pixel), similar to microscopes. The detection is done via CCD or a photomultiplier tube (PMT).


Printer (computing)





In computing, a printer is a peripheral which produces a hard copy (permanent human-readable text and/or graphics) of documents stored in electronic form, usually on physical print media such as paper or transparencies. Many printers are primarily used as local peripherals, and are attached by a printer cable or, in most newer printers, a USB cable to a computer which serves as a document source. Some printers, commonly known as network printers, have built-in network interfaces (typically wireless or Ethernet), and can serve as a hardcopy device for any user on the network. Individual printers are often designed to support both local and network connected users at the same time.
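
As an illustration of how a network printer can accept jobs directly over the network, here is a minimal sketch that sends plain text to TCP port 9100, the common "raw"/JetDirect convention; the hostname is a placeholder, and not every printer enables this port:

```python
# Sketch: send a plain-text job to a network printer that accepts raw data on
# TCP port 9100 (a common convention, often called JetDirect or "raw" printing).
# The hostname is a placeholder; not every printer supports this mode.
import socket

PRINTER_HOST = "printer.example.local"   # illustrative address
PRINTER_PORT = 9100

def print_text(text: str) -> None:
    with socket.create_connection((PRINTER_HOST, PRINTER_PORT), timeout=10) as sock:
        sock.sendall(text.encode("ascii", errors="replace"))
        sock.sendall(b"\x0c")             # form feed to eject the page

if __name__ == "__main__":
    print_text("Hello from the network!\n")
```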

In addition, a few modern printers can directly interface to electronic media such as memory sticks or memory cards, or to image capture devices such as digital cameras and scanners; some printers are combined with a scanner and/or fax machine in a single unit, and can function as photocopiers. Printers that include non-printing features are sometimes called multifunction printers (MFP), multi-function devices (MFD), or all-in-one (AIO) printers. Most MFPs include printing, scanning, and copying among their features. A virtual printer is a piece of computer software whose user interface and API resemble those of a printer driver, but which is not connected with a physical printer.

Printers are designed for low-volume, short-turnaround print jobs, requiring virtually no setup time to achieve a hard copy of a given document. However, printers are generally slow devices (30 pages per minute is considered fast, and many inexpensive consumer printers are far slower than that), and the cost per page is relatively high. The printing press remains the machine of choice for high-volume, professional publishing. However, as printers have improved in quality and performance, many jobs which used to be done by professional print shops are now done by users on local printers; see desktop publishing. The world's first computer printer was a 19th-century mechanically driven apparatus invented by Charles Babbage for his Difference Engine.[1]


Computer data storage





Computer data storage, often called storage or memory, refers to computer components, devices, and recording media that retain digital data used for computing for some interval of time. Computer data storage provides one of the core functions of the modern computer, that of information retention. It is one of the fundamental components of all modern computers, and coupled with a central processing unit (CPU, a processor), implements the basic computer model used since the 1940s.

In contemporary usage, memory usually refers to a form of semiconductor storage known as random access memory (RAM) and sometimes other forms of fast but temporary storage. Similarly, storage today more commonly refers to mass storage - optical discs, forms of magnetic storage like hard disks, and other types slower than RAM, but of a more permanent nature. Historically, memory and storage were respectively called primary storage and secondary storage.
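
To make the speed gap behind this distinction tangible, the following rough micro-benchmark (illustrative only; absolute numbers vary enormously between machines) appends small records to an in-memory list and then appends the same records to a file while forcing each one out to the storage device:

```python
# Rough illustration of the memory/storage distinction: appending records to an
# in-memory list versus appending them to a file and forcing each one to disk
# with fsync. Only the relative gap is of interest here.
import os
import tempfile
import time

RECORD = b"x" * 512
COUNT = 2000

# "Memory": keep records in a Python list (RAM only, volatile).
t0 = time.perf_counter()
in_memory = []
for _ in range(COUNT):
    in_memory.append(RECORD)
t_mem = time.perf_counter() - t0

# "Storage": persist each record, asking the OS to push it to the device.
fd, path = tempfile.mkstemp()
t0 = time.perf_counter()
with os.fdopen(fd, "wb") as f:
    for _ in range(COUNT):
        f.write(RECORD)
        f.flush()
        os.fsync(f.fileno())   # force the write through to the storage device
t_disk = time.perf_counter() - t0
os.remove(path)

print(f"RAM appends:     {t_mem:.4f} s")
print(f"fsync'd appends: {t_disk:.4f} s (persistent, but far slower)")
```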

The contemporary distinctions are helpful, because they are also fundamental to the architecture of computers in general. The distinctions also reflect an important and significant technical difference between memory and mass storage devices, which has been blurred by the historical usage of the term storage. Nevertheless, this article uses the traditional nomenclature.


Expansion card




An expansion card (also expansion board, adapter card or accessory card) in computing is a printed circuit board that can be inserted into an expansion slot of a computer motherboard to add additional functionality to a computer system. One edge of the expansion card holds the contacts (the edge connector) that fit exactly into the slot. They establish the electrical contact between the electronics (mostly integrated circuits) on the card and on the motherboard.

Connectors mounted on the bracket allow the connection of external devices to the card. Depending on the form factor of the motherboard and case, around one to seven expansion cards can be added to a computer system. In the case of a backplane system, up to 19 expansion cards can be installed. There are also other factors involved in expansion card capacity. For example, some expansion cards, such as some nVidia GeForce FX and newer GeForce graphics cards, need two slots, and a space is often left to aid cooling on some high-end cards.

Some cards are "low-profile" cards, meaning that they are shorter than standard cards and will fit in a lower height computer chassis. (There is a "low profile PCI card" standard [1] that specifies a much smaller bracket and board area). The group of expansion cards that are used for external connectivity, such as a network, SAN or modem card, are commonly referred to as input/output cards (or I/O cards).

The primary purpose of an expansion card is to provide or expand on features not offered by the motherboard. For example, the original IBM PC did not provide graphics or hard drive capability as the technology for providing that on the motherboard did not exist. In that case, a graphics expansion card and an ST-506 hard disk controller card provided graphics capability and hard drive interface respectively.

In the case of expansion of on-board capability, a motherboard may provide a single serial RS232 port or Ethernet port. An expansion card can be installed to offer multiple RS232 ports or multiple and higher-bandwidth Ethernet ports. In this case, the motherboard provides basic functionality but the expansion card offers additional or enhanced ports.
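
For instance, a sketch along these lines (assuming the third-party pyserial package is installed) would list every serial port the operating system currently exposes, whether it comes from the motherboard or from an add-in card:

```python
# Sketch: list the serial (RS232 or USB-serial) ports the OS currently exposes,
# whether they come from the motherboard or from an add-in expansion card.
# Assumes the third-party "pyserial" package (pip install pyserial) is available.
from serial.tools import list_ports

for port in list_ports.comports():
    # port.device is e.g. "COM3" on Windows or "/dev/ttyS0" / "/dev/ttyUSB0" on Linux
    print(f"{port.device}: {port.description}")
```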




Microprocessor




A microprocessor incorporates most or all of the functions of a central processing unit (CPU) on a single integrated circuit (IC).[1] The first microprocessors emerged in the early 1970s and were used for electronic calculators, using binary-coded decimal (BCD) arithmetic on 4-bit words. Other embedded uses of 4- and 8-bit microprocessors, such as terminals, printers, and various kinds of automation, followed rather quickly. Affordable 8-bit microprocessors with 16-bit addressing also led to the first general-purpose microcomputers in the mid-1970s.

Computer processors were for a long period constructed out of small- and medium-scale ICs containing the equivalent of a few to a few hundred transistors. The integration of the whole CPU onto a single VLSI chip therefore greatly reduced the cost of processing capacity. From their humble beginnings, continued increases in microprocessor capacity have rendered other forms of computers almost completely obsolete (see history of computing hardware), with one or more microprocessors serving as the processing element in everything from the smallest embedded systems and handheld devices to the largest mainframes and supercomputers.

Since the early 1970s, the increase in capacity of microprocessors has been known to generally follow Moore's Law, which suggests that the complexity of an integrated circuit, with respect to minimum component cost, doubles every two years.[2] In the late 1990s, and in the high-performance microprocessor segment, heat generation (TDP), due to switching losses, static current leakage, and other factors, emerged as a leading developmental constraint.[3]
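
A back-of-the-envelope calculation shows what the doubling rule implies. The 1971 starting point (about 2,300 transistors, the widely cited figure for the Intel 4004) is used for illustration, and later rows are what the rule alone would predict rather than counts for real parts:

```python
# Back-of-the-envelope Moore's-law projection: transistor count doubling every
# two years. The 1971 starting point (2,300 transistors, the commonly cited
# Intel 4004 figure) is illustrative; later rows are projections, not real chips.
start_year, start_transistors = 1971, 2_300

for year in range(start_year, 2010, 6):
    doublings = (year - start_year) / 2
    projected = start_transistors * 2 ** doublings
    print(f"{year}: ~{projected:,.0f} transistors")
```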




Computer software



Computer software, or just software, is a general term used to describe a collection of computer programs, procedures and documentation that perform some task on a computer system.[1]

The term includes:

  • Application software, such as word processors, which performs productive tasks for users.
  • Firmware, which is software resident in electrically programmable memory devices on mainboards or other integrated hardware carriers.
  • Middleware, which controls and coordinates distributed systems.
  • System software, such as operating systems, which interfaces with hardware to provide the necessary services for application software.
  • Software testing, a domain independent of development and programming. It consists of various methods to test a software product and declare it fit before it is launched for use by an individual or a group. Modern testers run many tests of functionality, performance and appearance, using tools such as QTP and LoadRunner and techniques such as black-box testing, to check the developed code against a checklist of requirements; a minimal example of such an automated test appears after this list. ISTQB is a certification that is in demand for engineers who want to pursue a career in testing.[2]
  • Testware, an umbrella term for all the utilities and application software that serve in combination for testing a software package and that may optionally contribute to operational purposes. As such, testware is not a standing configuration but merely a working environment for application software or subsets thereof.
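
As promised above, here is a minimal sketch of the kind of automated functional test the software-testing item describes, using Python's built-in unittest module; the function under test is invented purely for illustration:

```python
# Minimal functional-test sketch using Python's built-in unittest module.
# The function under test (word_count) is invented purely for illustration.
import unittest

def word_count(text: str) -> int:
    """Count whitespace-separated words in a string."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    def test_simple_sentence(self):
        self.assertEqual(word_count("free and open source software"), 5)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

if __name__ == "__main__":
    unittest.main()
```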

Software includes websites, programs, video games, and other products written in programming languages such as C and C++.


Central processing unit or CPU





A central processing unit (CPU) or processor is an electronic circuit that can execute computer programs. The term has been in use in the computer industry at least since the early 1960s (Weik 1961). The form, design and implementation of CPUs have changed dramatically since the earliest examples, but their fundamental operation has remained much the same.
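
That unchanged fundamental operation is the fetch-decode-execute cycle. The toy machine below (a three-instruction design invented for this sketch, not any real instruction set) illustrates the loop:

```python
# Toy illustration of the fetch-decode-execute cycle that CPUs still follow.
# The three-instruction machine is invented for this sketch only.
def run(program):
    acc = 0                                  # a single accumulator register
    pc = 0                                   # program counter
    while pc < len(program):
        opcode, operand = program[pc]        # fetch
        pc += 1
        if opcode == "LOAD":                 # decode + execute
            acc = operand
        elif opcode == "ADD":
            acc += operand
        elif opcode == "PRINT":
            print(acc)
        else:
            raise ValueError(f"unknown opcode {opcode!r}")
    return acc

run([("LOAD", 2), ("ADD", 3), ("PRINT", None)])   # prints 5
```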

Early CPUs were custom-designed as a part of a larger, sometimes one-of-a-kind, computer. However, this costly method of designing custom CPUs for a particular application has largely given way to the development of mass-produced processors that are made for one or many purposes. This standardization trend generally began in the era of discrete transistor mainframes and minicomputers and has rapidly accelerated with the popularization of the integrated circuit (IC). The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of these digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in everything from automobiles to cell phones to children's toys.


CPU socket




A CPU socket or CPU slot is an electrical component that attaches to a printed circuit board (PCB) and is designed to house a CPU (also called a microprocessor). It is a special type of IC socket designed for very high pin counts. A CPU socket provides many functions, including providing a physical structure to support the CPU, providing support for a heatsink, facilitating ease of replacement (as well as reducing cost), and most importantly forming an electrical interface both with the CPU and the PCB. CPU sockets can be found on the motherboard of most desktop and server computers, particularly those based on the Intel x86 architecture.

Accelerated Graphics Port or AGP

An AGP slot (maroon) and two PCI slots



The Accelerated Graphics Port (also called Advanced Graphics Port, often shortened to AGP) is a high-speed point-to-point channel for attaching a video card to a computer's motherboard, primarily to assist in the acceleration of 3D computer graphics. Since 2004, AGP has been progressively phased out in favor of PCI Express. However, as of mid-2008, new AGP cards and motherboards are still available for purchase, though OEM driver support is minimal.[1]

PCI Express





PCI Express (Peripheral Component Interconnect Express), officially abbreviated as PCIe, is a computer expansion card standard designed to replace the older PCI, PCI-X, and AGP standards. Introduced by Intel in 2004, PCIe (or PCI-E, as it is commonly called) is the latest standard for expansion cards available on mainstream personal computers.

PCI Express is used in consumer, server, and industrial applications, both as a motherboard-level interconnect (to link motherboard-mounted peripherals) and as an expansion card interface for add-in boards. A key difference between PCIe and earlier PC buses is a topology based on point-to-point serial links, rather than a shared parallel bus architecture.
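
The point-to-point serial design is visible from software: on a Linux system with a reasonably recent kernel, each PCIe device reports the link speed and width it negotiated. The sketch below reads those sysfs attributes and skips devices (such as conventional PCI ones) that have no link to report:

```python
# Sketch: show the negotiated PCIe link speed and width for each device, using
# the sysfs attributes Linux exposes on reasonably recent kernels. Devices
# without a PCIe link (e.g. conventional PCI) lack the files or return an
# error when read, and are skipped.
import glob
import os

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    speed_file = os.path.join(dev, "current_link_speed")
    width_file = os.path.join(dev, "current_link_width")
    if not (os.path.isfile(speed_file) and os.path.isfile(width_file)):
        continue
    try:
        with open(speed_file) as f:
            speed = f.read().strip()
        with open(width_file) as f:
            width = f.read().strip()
    except OSError:
        continue                      # no PCIe link to report for this device
    print(f"{os.path.basename(dev)}: x{width} at {speed}")
```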

The PCIe electrical interface is also used in a variety of other standards, most notably the ExpressCard laptop expansion card interface.


Conventional PCI





Conventional PCI (often shortened to PCI) is a computer bus for attaching hardware devices in a computer. These devices can take either the form of an integrated circuit fitted onto the motherboard itself, called a planar device in the PCI specification, or an expansion card that fits into a socket. The name PCI is an initialism formed from Peripheral Component Interconnect. The PCI Local Bus is common in modern PCs, where it has displaced ISA and VESA Local Bus as the standard expansion bus, and it also appears in many other computer types. Despite the availability of faster interfaces such as PCI-X and PCI Express, conventional PCI remains a very common interface.

The PCI specification covers the physical size of the bus (including wire spacing), electrical characteristics, bus timing, and protocols. The specification can be purchased from the PCI Special Interest Group (PCI-SIG).

Typical PCI cards used in PCs include: network cards, sound cards, modems, extra ports such as USB or serial, TV tuner cards and disk controllers. Historically, video cards were typically PCI devices, but growing bandwidth requirements soon outgrew the capabilities of PCI. PCI video cards remain available for supporting extra monitors and for upgrading PCs that do not have any AGP or PCI Express slots.

Many devices traditionally provided on expansion cards are now commonly integrated onto the motherboard itself, meaning that modern PCs often have no cards fitted. However, PCI is still used for certain specialized cards, although many tasks traditionally performed by expansion cards may now be performed equally well by USB devices.


Advanced Micro Devices or AMD





Advanced Micro Devices, Inc. (AMD) (NYSE: AMD) is an American multinational semiconductor company based in Sunnyvale, California, that develops computer processors and related technologies for commercial and consumer markets. Its main products include microprocessors, motherboard chipsets, embedded processors and graphics processors for servers, workstations and personal computers, and processor technologies for handheld devices, digital television, automobiles, game consoles, and other embedded systems applications.

AMD is the second-largest global supplier of microprocessors based on the x86 architecture after Intel Corporation, and the third-largest supplier of graphics processing units, behind Intel and Nvidia. It also owns 21 percent of Spansion, a supplier of non-volatile flash memory. In 2007, AMD ranked eleventh among semiconductor manufacturers in terms of revenue.[2]


Intel Corporation





Intel (NASDAQ: INTC; SEHK: 4335) is the world's largest semiconductor company and the inventor of the x86 series of microprocessors, the processors found in most personal computers. Intel was founded on July 18, 1968, as Integrated Electronics Corporation and is based in Santa Clara, California, USA. Intel also makes motherboard chipsets, network cards and ICs, flash memory, graphic chips, embedded processors, and other devices related to communications and computing. Founded by semiconductor pioneers Robert Noyce and Gordon Moore, and widely associated with the executive leadership and vision of Andrew Grove, Intel combines advanced chip design capability with a leading-edge manufacturing capability. Originally known primarily to engineers and technologists, Intel's successful "Intel Inside" advertising campaign of the 1990s made it and its Pentium processor household names.

Intel was an early developer of SRAM and DRAM memory chips, and this represented the majority of its business until the early 1980s. While Intel created the first commercial microprocessor chip in 1971, it was not until the success of the personal computer (PC) that this became their primary business. During the 1990s, Intel invested heavily in new microprocessor designs fostering the rapid growth of the PC industry. During this period Intel became the dominant supplier of microprocessors for PCs, and was known for aggressive and sometimes controversial tactics in defense of its market position, particularly against AMD, as well as a struggle with Microsoft for control over the direction of the PC industry.[3][4] The 2009 rankings of the world's 100 most powerful brands published by Millward Brown Optimor showed the company's brand value rising 4 places – from number 27 to number 23.[5]

In addition to its work in semiconductors, Intel has begun research in electrical transmission and generation.[6][7]


Asus





ASUSTeK Computer Incorporated (ASUS) (traditional Chinese: 華碩電腦股份有限公司; pinyin: Huáshuò Diànnǎo Gǔfèn Yǒuxiàn Gōngsī), a Taiwanese multinational company, produces motherboards, graphics cards, optical drives, PDAs, computer monitors, notebook computers, servers, networking products, mobile phones, computer cases, computer components, and computer cooling systems. Commonly called by its brand name ASUS (pronounced ah-SOOS and commonly mispronounced as AY-sus), it has listings on both the London Stock Exchange (LSE: ASKD) and the Taiwan Stock Exchange (TSE: 2357). Since 2006, 35.0% of PCs sold have used an ASUS motherboard,[2] and the company's 2008 revenues reached US$22.9 billion.[1]

ASUS appears in BusinessWeek’s "InfoTech 100" and "Asia’s Top 10 IT Companies" rankings. It ranks number one in quality and service according to the Wall Street Journal Asia, and it leads the IT Hardware category of the 2008 Taiwan Top 10 Global Brands survey with a total brand value of US$1.324 billion.[3]


Motherboard




A motherboard is the central printed circuit board (PCB) in some complex electronic systems, such as modern personal computers. The motherboard is sometimes alternatively known as the mainboard, system board, or, on Apple computers, the logic board.[1] It is also sometimes casually shortened to mobo.[2]

Blu-ray ROM




Blu-ray Disc (also known as Blu-ray or BD) is an optical disc storage medium designed by Sony to supersede the standard DVD format. Its main uses are high-definition video and data storage, with up to 50 GB per dual-layer disc. The disc has the same physical dimensions as standard DVDs and CDs.

The name Blu-ray Disc derives from the blue laser used to read the disc. While a standard DVD uses a 650 nanometre red laser, Blu-ray uses a shorter wavelength, a 405 nm blue laser, and allows for almost six times more data storage than on a DVD.
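
The "almost six times" figure follows directly from the published layer capacities, 4.7/8.5 GB for single/dual-layer DVD versus 25/50 GB for Blu-ray; these are commonly cited specifications quoted here for illustration rather than taken from the text above:

```python
# Capacity ratio behind the "almost six times" claim, using commonly cited
# specifications (not figures from the text above).
dvd_single, dvd_dual = 4.7, 8.5       # GB per DVD disc (single / dual layer)
bd_single, bd_dual = 25.0, 50.0       # GB per Blu-ray disc (single / dual layer)

print(f"single layer: {bd_single / dvd_single:.1f}x more data")   # ~5.3x
print(f"dual layer:   {bd_dual / dvd_dual:.1f}x more data")       # ~5.9x

# The shorter 405 nm laser (vs 650 nm) can be focused to a smaller spot,
# which is the main reason data can be packed that much more densely.
print(f"wavelength ratio: {650 / 405:.2f}")                        # ~1.60
```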

During the format war over high-definition optical discs, Blu-ray competed with the HD DVD format. Toshiba, the main company supporting HD DVD, ceded in February 2008 and the format war ended.[2]

Blu-ray Disc is developed by the Blu-ray Disc Association, a group representing makers of consumer electronics, computer hardware, and motion pictures. As of January 2009, more than 890 Blu-ray disc titles are available in Australia, 720 in Japan, 1,140 in the United Kingdom, and 1,500 in the United States.[3][4][5]



DVD ROM



DVD, also known as "Digital Versatile Disc" or "Digital Video Disc," is an optical disc storage media format. Its main uses are video and data storage. DVDs are of the same dimensions as compact discs (CDs) but store more than six times as much data.

Variations of the term DVD often describe the way data is stored on the discs: DVD-ROM (read-only memory) has data that can only be read and not written; DVD-R and DVD+R can record data only once and then function as a DVD-ROM; DVD-RW, DVD+RW and DVD-RAM can both record and erase data multiple times. The wavelength used by standard DVD lasers is 650 nm,[1] and thus the light has a red color.

DVD-Video and DVD-Audio discs respectively refer to properly formatted and structured video and audio content. Other types of DVDs, including those with video content, may be referred to as DVD-Data discs.

As next-generation, more advanced high-definition optical formats such as Blu-ray Disc also use a disc identical in some aspects, the original DVD is occasionally given the retronym SD DVD (for standard definition).[2][3] However, the trademarked HD DVD discs have been discontinued since Blu-ray absorbed their market share.