
Monday, March 10, 2008

List of Intel Pentium 4 microprocessors


The Pentium 4 microprocessor from Intel is a seventh-generation CPU targeted at the consumer market.

Pentium 4

"Willamette" (180 nm)
All models support: MMX, SSE, SSE2
Family 15 model 1


Model Number Frequency L2-Cache Front Side Bus Multiplier Voltage TDP Socket Release Date Part Number(s)
Pentium 4 1.3 1300 MHz 256 KiB 400 MT/s 13× 1.70/1.75 V 48.9/51.6 W Socket 423 January 3, 2001 80528PC013G0K, YD80528PC013G0K
Pentium 4 1.4 1400 MHz 256 KiB 400 MT/s 14× 1.70/1.75 V 51.8/54.7 W Socket 423 November 20, 2000 80528PC017G0K, YD80528PC017G0K
Pentium 4 1.4 1400 MHz 256 KiB 400 MT/s 14× 1.75 V 55.3 W Socket 478 September 2001 RK80531PC017G0K
Pentium 4 1.5 1500 MHz 256 KiB 400 MT/s 15× 1.70/1.75 V 54.7/57.8 W Socket 423 November 20, 2000 80528PC021G0K, YD80528PC021G0K, RN80528PC021G0K
Pentium 4 1.5 1500 MHz 256 KiB 400 MT/s 15× 1.75 V 57.9 W Socket 478 August 2001 RK80531PC021G0K, RK80531PC021256
Pentium 4 1.6 1600 MHz 256 KiB 400 MT/s 16× 1.75 V 61 W Socket 423 July 2, 2001 YD80528PC025G0K, RN80528PC025G0K
Pentium 4 1.6 1600 MHz 256 KiB 400 MT/s 16× 1.75 V 60.8 W Socket 478 August 2001 RK80531PC025G0K, RK80531PC025256
Pentium 4 1.7 1700 MHz 256 KiB 400 MT/s 17× 1.75 V 64 W Socket 423 April 23, 2001 YD80528PC029G0K, RN80528PC029G0K
Pentium 4 1.7 1700 MHz 256 KiB 400 MT/s 17× 1.75 V 63.5 W Socket 478 August 2001 RK80531PC029G0K, RK80531PC029256
Pentium 4 1.8 1800 MHz 256 KiB 400 MT/s 18× 1.75 V 66.7 W Socket 423 July 2, 2001 YD80528PC033G0K, RN80528PC033G0K
Pentium 4 1.8 1800 MHz 256 KiB 400 MT/s 18× 1.75 V 66.1 W Socket 478 August 2001 RK80531PC033G0K, RK80531PC033256
Pentium 4 1.9 1900 MHz 256 KiB 400 MT/s 19× 1.75 V 69.2 W Socket 423 August 2001 RN80528PC037G0K
Pentium 4 1.9 1900 MHz 256 KiB 400 MT/s 19× 1.75 V 72.8 W Socket 478 August 26, 2001 RK80531PC037G0K, RK80531PC037256
Pentium 4 2.0 2000 MHz 256 KiB 400 MT/s 20× 1.75 V 71.8 W Socket 423 August 2001 RN80528PC041G0K
Pentium 4 2.0 2000 MHz 256 KiB 400 MT/s 20× 1.75 V 75.3 W Socket 478 August 26, 2001 RK80531PC041G0K


Northwood" (130 nm)

Family 15 model 2
All models support: MMX, SSE, SSE2
Hyper-Threading: supported by Pentium 4 3.06


Model Number sSpec number Core Stepping Frequency L2-Cache Front Side Bus Multiplier Voltage TDP Socket Release Date Part Number(s)
Pentium 4 1.6A SL62S B0 1600 MHz 512 KiB 400 MT/s 16× 1.475 V 38 W Socket 478 January 2002 RK80534PC025512
Pentium 4 1.6A SL668 B0 1600 MHz 512 KiB 400 MT/s 16× 1.5 V 46.8 W Socket 478 January 2002 BX80532PC1600D
Pentium 4 1.8A SL68Q B0 1800 MHz 512 KiB 400 MT/s 18× 1.475/1.525 V 49.6 W Socket 478 January 2002 RK80532PC033512
Pentium 4 2.0A SL66R B0 2000 MHz 512 KiB 400 MT/s 20× 1.5 V 52.4 W Socket 478 January 7, 2002 RK80532PC041512, BX80532PC2000D
Pentium 4 2.2 SL66S B0 2200 MHz 512 KiB 400 MT/s 22× 1.5 V 55.1 W Socket 478 January 7, 2002 RK80532PC049512
Pentium 4 2.26 2266 MHz 512 KiB 533 MT/s 17× 1.475/1.525 V 58 W Socket 478 May 6, 2002 RK80532PE051512
Pentium 4 2.4 SL6GS 2400 MHz 512 KiB 400 MT/s 24× 1.475/1.525 V 59.8 W Socket 478 April 2, 2002 RK80532PC056512
Pentium 4 2.4B SL6PC, SL6RZ C1, D1 2400 MHz 512 KiB 533 MT/s 18× 1.475/1.525 V 59.8 W Socket 478 May 6, 2002 RK80532PE056512
Pentium 4 2.5 SL6PN 2500 MHz 512 KiB 400 MT/s 25× 1.475/1.525 V 61 W Socket 478 August 25, 2002 RK80532PC060512
Pentium 4 2.53 SL6D8 2533 MHz 512 KiB 533 MT/s 19× 1.475/1.525 V 61.5 W Socket 478 May 6, 2002 RK80532PE061512
Pentium 4 2.6 SL6SB 2600 MHz 512 KiB 400 MT/s 26× 1.475/1.525 V 62.6 W Socket 478 August 25, 2002 RK80532PC064512
Pentium 4 2.66 SL6PE D1 2667 MHz 512 KiB 533 MT/s 20× 1.475/1.525 V 66.1 W Socket 478 August 25, 2002 RK80532PE067512
Pentium 4 2.8 SL6PF 2800 MHz 512 KiB 533 MT/s 21× 1.525 V 68.4 W Socket 478 August 25, 2002 RK80532PE072512
Pentium 4 2.8 SL7EY 2800 MHz 512 KiB 400 MT/s 28× 1.475/1.525 V 68.4 W Socket 478 November 2002 RK80532PC072512
Pentium 4 3.06 SL6SM C1 3066 MHz 512 KiB 533 MT/s 23× 1.55 V 81.8 W Socket 478 November 2002 RK80532PE083512

Thursday, March 6, 2008

XScale


The XScale, a microprocessor core, is Marvell's (formerly Intel's) implementation of the fifth generation of the ARM architecture, and consists of several distinct families: IXP, IXC, IOP, PXA and CE (see more below). The PXA family was sold to Marvell Technology Group in June 2006.
The XScale architecture is based on the ARMv5TE ISA without the floating point instructions. XScale uses a seven-stage integer and an eight-stage memory superpipelined RISC architecture. It is the successor to the Intel StrongARM line of microprocessors and microcontrollers, which Intel acquired from DEC's Digital Semiconductor division as the side-effect of a lawsuit between the two companies. Intel used the StrongARM to replace their ailing line of outdated RISC processors, the i860 and i960.
All the generations of XScale are 32-bit ARMv5TE processors manufactured with a 0.18-µm process and have a 32-KiB data cache and a 32-KiB instruction cache (this would be called a 64-KiB Level 1 cache on other processors). They also all have a 2-KiB mini-data cache.

Processor families

The XScale core is used in a number of microcontroller families manufactured by Intel and Marvell, notably:
Application Processors (with the prefix PXA). There are four generations of XScale Application Processors, described below: PXA210/PXA25x, PXA26x, PXA27x, and PXA3xx.
I/O Processors (with the prefix IOP)
Network Processors (with the prefix IXP)
Control Plane Processors (with the prefix IXC).
Consumer Electronics Processors (with the prefix CE).
There are also standalone processors: the 80200 and 80219 (targeted primarily at PCI applications).

PXA210/PXA25x

The PXA210 was Intel's entry-level XScale targeted at mobile phone applications. It was released with the PXA250 in February 2002 and comes clocked at 133 MHz and 200 MHz.
The PXA25x family consists of the PXA250 and PXA255. The PXA250 was Intel's first generation of XScale processors. There was a choice of three clock speeds: 200 MHz, 300 MHz and 400 MHz. It came out in February 2002. In March 2003, the revision C0 of the PXA250 was renamed to PXA255. The main differences were a doubled bus speed (100 MHz to 200 MHz) for faster data transfer, lower core voltage (only 1.3 V at 400 MHz) for lower power consumption and writeback functionality for the data cache, the lack of which had severely impaired performance on the PXA250.

PXA26x

The PXA26x family consists of the PXA260 and PXA261-PXA263. The PXA260 is a stand-alone processor clocked at the same frequency as the PXA25x, but features a TPBGA package which is about 53% smaller than the PXA25x's PBGA package. The PXA261-PXA263 are the same as the PXA260 but have Intel StrataFlash memory stacked on top of the processor in the same package; 16 MiB of 16-bit memory in the PXA261, 32 MiB of 16-bit memory in the PXA262 and 32 MiB of 32-bit memory in the PXA263. The PXA26x family was released in March 2003.

PXA27x

The PXA27x family (code-named Bulverde) consists of the PXA270 and PXA271-PXA272 processors. This revision is a major update to the XScale family of processors. The PXA270 is available at four clock speeds: 312 MHz, 416 MHz, 520 MHz and 624 MHz, and is a stand-alone processor with no packaged memory. The PXA271 can be clocked to 312 MHz or 416 MHz and has 32 MiB of 16-bit stacked StrataFlash memory and 32 MiB of 16-bit SDRAM in the same package. The PXA272 can be clocked to 312 MHz, 416 MHz or 520 MHz and has 64 MiB of 32-bit stacked StrataFlash memory.

PXA3xx Monahans

In August 2005, Intel announced the successor to Bulverde, codenamed Monahans, and demonstrated it playing back high-definition video on a PDA screen. The new processor was shown clocked at 1.25 GHz, but Intel said it offered only a 25% increase in performance (800 MIPS for the 624-MHz PXA270 versus 1000 MIPS for the 1.25-GHz Monahans). An announced successor to the 2700G graphics processor, codenamed Stanwood, has since been canceled; some of its features are integrated into Monahans. For extra graphics capabilities, Intel recommends third-party chips such as the NVIDIA GoForce family.
In November 2006, Marvell Semiconductor officially introduced the Monahans family as the Marvell PXA320, PXA300, and PXA310. The PXA320 is currently shipping in high volume and is scalable up to 806 MHz. The PXA300 and PXA310 deliver performance "scalable to 624 MHz" and are software-compatible with the PXA320.

IXC1100

The IXC1100 processor features clock speeds at 266, 400, and 533 MHz, a 133-MHz bus, 32 KiB of instruction cache, 32 KiB of data cache, and 2 KiB of mini-data cache. It is also designed for low power consumption, using 2.4 W at 533 MHz. The chip comes in the 35-mm PBGA package.

IXP network processor


The XScale core is used in the second generation of Intel's IXP network processor line; the first generation used StrongARM cores. The IXP family ranges from parts aimed at small and medium office network applications (IXP4XX) to high-performance network processors such as the IXP2850, capable of sustaining up to OC-192 line rates. In IXP4XX devices the XScale core serves as both a control-plane and data-plane processor, providing system control as well as data processing. In IXP2XXX devices the XScale typically provides control-plane functionality only, with data processing performed by the microengines; examples of such control-plane tasks include routing table updates, microengine control, and memory management.

Wednesday, March 5, 2008

Multi-core processors


A multi-core CPU (or chip-level multiprocessor, CMP) combines two or more independent cores into a single package composed of a single integrated circuit (IC), called a die, or more dies packaged together. A dual-core processor contains two cores and a quad-core processor contains four cores. A multi-core microprocessor implements multiprocessing in a single physical package. A processor with all cores on a single die is called a monolithic processor. Cores in a multicore device may share a single coherent cache at the highest on-device cache level (e.g. L2 for the Intel Core 2) or may have separate caches (e.g. current AMD dual-core processors). The processors also share the same interconnect to the rest of the system. Each "core" independently implements optimizations such as superscalar execution, pipelining, and multithreading. A system with N cores is effective when it is presented with N or more threads concurrently. The most commercially significant (or at least the most 'obvious') multi-core processors are those used in computers (primarily from Intel & AMD) and game consoles (e.g., the Cell processor in the PS3). In this context, "multi" typically means a relatively small number of cores. However, the technology is widely used in other technology areas, especially those of embedded processors, such as network processors and digital signal processors, and in GPUs.

Terminology

There is some discrepancy in the semantics by which the terms "multi-core" and "dual-core" are defined. Most commonly they are used to refer to some sort of central processing unit (CPU), but are sometimes also applied to DSPs and SoCs. Additionally, some use these terms only to refer to multi-core microprocessors that are manufactured on the same integrated circuit die. These people generally refer to separate microprocessor dies in the same package by another name, such as "multi-chip module", "double core", or even "twin core". This article uses both the terms "multi-core" and "dual-core" to reference microelectronic CPUs manufactured on the same integrated circuit, unless otherwise noted.
A dual-core processor is a single chip that contains two distinct processors or "execution cores" in the same integrated circuit.
"Multi Core" refers to - two or more CPUs working together on one single chip (like AMD Athlon X2 or Intel Core Duo) in contrast to DUAL CPU, which refers to two separate CPUs working together.

Development

While manufacturing technology continues to improve, reducing the size of individual gates, the physical limits of semiconductor-based microelectronics have become a major design concern. These physical limitations can cause significant heat dissipation and data synchronization problems. The demand for more capable microprocessors leads CPU designers to use various methods of increasing performance. Some instruction-level parallelism (ILP) methods, such as superscalar pipelining, are suitable for many applications but are inefficient for others that contain difficult-to-predict code. Many applications are better suited to thread-level parallelism (TLP) methods, and using multiple independent CPUs is one common way to increase a system's overall TLP. A combination of increased available die area due to refined manufacturing processes and the demand for increased TLP is the logic behind the creation of multi-core CPUs.
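As a minimal, self-contained sketch of thread-level parallelism (illustrative only; every name in it is made up for the example, and it is not tied to any processor discussed here), the following C++ program asks the standard library how many hardware threads are available and splits an independent workload across that many threads:

    // build: g++ -std=c++17 -O2 -pthread tlp_example.cpp
    #include <cmath>
    #include <iostream>
    #include <thread>
    #include <vector>

    int main() {
        // Ask the standard library how many hardware threads the machine exposes.
        unsigned cores = std::thread::hardware_concurrency();
        if (cores == 0) cores = 2;  // the value may be unavailable; fall back to 2

        const long long n = 50'000'000;          // total amount of independent work
        std::vector<double> partial(cores, 0.0); // one result slot per thread, no sharing
        std::vector<std::thread> workers;

        // Thread-level parallelism: each thread handles its own slice of the range.
        for (unsigned t = 0; t < cores; ++t) {
            workers.emplace_back([&partial, t, cores, n] {
                double sum = 0.0;
                for (long long i = t; i < n; i += cores) sum += std::sqrt((double)i);
                partial[t] = sum;
            });
        }
        for (auto& w : workers) w.join();

        double total = 0.0;
        for (double p : partial) total += p;
        std::cout << "threads: " << cores << "  total: " << total << '\n';
    }

On a multi-core CPU the slices genuinely run at the same time; on a single core the same program still works, but the threads merely interleave.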

Commercial incentives

Several business motives drive the development of dual-core architectures. Since symmetric multiprocessing (SMP) designs have been long implemented using discrete CPUs, the issues regarding implementing the architecture and supporting it in software are well known. Additionally, utilizing a proven processing core design (e.g. Freescale's e600 core) without architectural changes reduces design risk significantly. Finally, the terminology "dual-core" (and other multiples) lends itself to marketing efforts.
Additionally, for general-purpose processors, much of the motivation for multi-core designs comes from the greatly diminished returns of raising the operating frequency (frequency scaling). System performance no longer gains as much from frequency increases as it once did, for reasons usually summarized as the memory wall and the ILP wall. The memory wall refers to the growing gap between processor and memory speeds, which pushes cache sizes up to mask memory latency; this helps only as long as memory bandwidth is not the bottleneck. The ILP wall refers to the increasing difficulty of finding enough parallelism in a single instruction stream to keep a high-performance core busy. Finally, the often-cited power wall refers to the trend of power consumption roughly doubling with each doubling of operating frequency (and even that is containable to a mere doubling only if the processor is also made smaller). The power wall poses manufacturing, system design and deployment problems that are hard to justify against the diminished performance gains caused by the memory wall and the ILP wall. Together, these three walls motivate multi-core processors.
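The power wall can be made concrete with the textbook first-order approximation for dynamic power in CMOS logic (a standard formula, quoted here as background rather than taken from this article), where \alpha is the switching activity factor, C the switched capacitance, V the supply voltage and f the clock frequency:

    P_{\text{dynamic}} \approx \alpha \, C \, V^{2} \, f

Because raising f in practice also requires raising V, power grows faster than linearly with frequency, which is why two cores at a moderate clock can deliver more throughput per watt than one core pushed to a much higher clock.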
In order to continue delivering regular performance improvements for general-purpose processors, manufacturers such as Intel and AMD have turned to multi-core designs, accepting higher manufacturing costs in exchange for higher performance in some applications and systems.
Multi-core architectures are being developed, but so are the alternatives. An especially strong contender for established markets is to integrate more peripheral functions into the chip.

Advantages

The proximity of multiple CPU cores on the same die allows the cache coherency circuitry to operate at a much higher clock rate than is possible if the signals have to travel off-chip. Combining equivalent CPUs on a single die significantly improves the performance of cache snoop (bus snooping) operations. Put simply, signals between the cores travel shorter distances and therefore degrade less. These higher-quality signals allow more data to be sent in a given time period, since individual signals can be shorter and do not need to be repeated as often.
Assuming that the die fits physically into the package, multi-core CPU designs require much less printed circuit board (PCB) space than multi-chip SMP designs. Also, a dual-core processor uses slightly less power than two coupled single-core processors, principally because less power is needed to drive signals that stay on-chip and because the smaller silicon process geometry allows the cores to operate at lower voltages; keeping signals on-die also reduces latency. Furthermore, the cores share some circuitry, such as the L2 cache and the interface to the front side bus (FSB). In terms of competing uses for the available silicon die area, a multi-core design can reuse proven CPU core designs and so produce a product with a lower risk of design error than devising a new, wider core, and adding more cache suffers from diminishing returns.

Disadvantages

In addition to operating system (OS) support, adjustments to existing software are required to make full use of the computing resources provided by multi-core processors. The ability of multi-core processors to increase application performance also depends on the use of multiple threads within applications. The situation is improving: for example, the American PC game developer Valve Corporation has stated that it will use multi-core optimizations for the next version of its Source engine, shipped with Half-Life 2: Episode Two, the next installment of its Half-Life franchise, and Crytek is developing similar technologies for CryENGINE2, which powers its game Crysis. Emergent Game Technologies' Gamebryo engine includes Floodgate technology, which simplifies multi-core development across game platforms. See Dynamic Acceleration Technology for the Santa Rosa platform for an example of a technique that improves single-thread performance on dual-core processors.
Integrating multiple cores on a chip drives production yields down, and such chips are more difficult to manage thermally than lower-density single-core designs. Intel has partially countered the yield problem by building its quad-core products from two dual-core dies (each with a unified L2 cache) in a single package, so any two working dual-core dies can be used, as opposed to producing four cores on a single die and requiring all four to work. From an architectural point of view, single-CPU designs may ultimately make better use of the silicon surface area than multiprocessing cores, so a development commitment to this architecture may carry the risk of obsolescence. Finally, raw processing power is not the only constraint on system performance: two cores sharing the same system bus and memory bandwidth limit the real-world performance advantage. If a single core is close to being memory-bandwidth limited, going to dual-core might only give a 30% to 70% improvement; if memory bandwidth is not a problem, a 90% improvement can be expected. It is even possible for an application that used two CPUs to run faster on one dual-core chip if communication between the CPUs was the limiting factor, which would count as more than a 100% improvement.
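One standard way to see where such percentages come from is Amdahl's law (a textbook bound, quoted here as background rather than taken from this article): if only a fraction p of the work can run in parallel on n cores, the speedup is limited to

    S(n) = \frac{1}{(1 - p) + p/n}

For example, p = 0.8 and n = 2 give S = 1/(0.2 + 0.4) \approx 1.67, i.e. roughly a 67% improvement, squarely inside the 30% to 70% range quoted above for workloads that do not scale perfectly.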

Hardware trend

The general trend in processor development has been from multi-core to many-core: from dual-, quad-, eight-core chips to ones with tens or even hundreds of cores; see manycore processing unit. In addition, multi-core chips mixed with simultaneous multithreading, memory-on-chip, and special-purpose "heterogeneous" cores promise further performance and efficiency gains, especially in processing multimedia, recognition and networking applications. There is also a trend of improving energy efficiency by focusing on performance-per-watt with advanced fine-grain or ultra fine-grain power management and dynamic voltage and frequency scaling (DVFS).

Software impact

Software benefits from multicore architectures where code can be executed in parallel. Under most common operating systems this requires code to execute in separate threads or processes. Each application running on a system runs in its own process so multiple applications will benefit from multicore architectures. Each application may also have multiple threads but, in most cases, it must be specifically written to utilize multiple threads. Operating system software also tends to run many threads as a part of its normal operation. Running virtual machines will benefit from adoption of multiple core architectures since each virtual machine runs independently of others and can be executed in parallel.
Most application software is not written to use multiple concurrent threads intensively because of the challenge of doing so. A frequent pattern in multithreaded application design is where a single thread does the intensive work while other threads do much less. For example, a virus scan application may create a new thread for the scan process, while the GUI thread waits for commands from the user (e.g. cancel the scan). In such cases, multicore architecture is of little benefit for the application itself due to the single thread doing all heavy lifting and the inability to balance the work evenly across multiple cores. Programming truly multithreaded code often requires complex co-ordination of threads and can easily introduce subtle and difficult-to-find bugs due to the interleaving of processing on data shared between threads (thread-safety). Consequently, such code is much more difficult to debug than single-threaded code when it breaks. There has been a perceived lack of motivation for writing consumer-level threaded applications because of the relative rarity of consumer-level multiprocessor hardware. Although threaded applications incur little additional performance penalty on single-processor machines, the extra overhead of development has been difficult to justify due to the preponderance of single-processor machines.
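A minimal C++ sketch of the pattern just described (a hypothetical "scan", not any real product's code; all names are illustrative): the heavy work runs on one background thread while the main thread stays free to react to the user, so a second core helps responsiveness but cannot speed up the scan itself:

    #include <atomic>
    #include <chrono>
    #include <iostream>
    #include <string>
    #include <thread>

    int main() {
        std::atomic<bool> cancelled{false};

        // Background "scan" thread: does the heavy work, periodically checking for cancellation.
        std::thread scanner([&] {
            for (int file = 0; file < 1000 && !cancelled; ++file) {
                std::this_thread::sleep_for(std::chrono::milliseconds(5));  // stand-in for real scanning work
            }
        });

        // Main ("GUI") thread: stays responsive, waiting for a user command.
        std::string command;
        std::cout << "type 'cancel' to stop the scan: ";
        std::getline(std::cin, command);
        if (command == "cancel") cancelled = true;

        scanner.join();
        std::cout << (cancelled ? "scan cancelled\n" : "scan finished\n");
    }

With this design only the scanner thread can keep a core busy; a second core is left free for the user interface, which is exactly the "little benefit to the application itself" situation described above.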

Tuesday, March 4, 2008

AMD Athlon


Athlon is the brand name applied to a series of different x86 processors designed and manufactured by AMD. The original Athlon, or Athlon Classic, was the first seventh-generation x86 processor and, in a first, retained the initial performance lead it had over Intel's competing processors for a significant period of time. AMD has continued the Athlon name with the Athlon 64, an eighth-generation processor featuring AMD64 (originally called x86-64) technology.
The Athlon made its debut on June 23, 1999. Athlon was the ancient Greek word for "Champion/trophy of the games".

Background

AMD ex-CEO and founder Jerry Sanders developed strategic partnerships during the late 1990s to improve AMD's presence in the PC market based on the success of the K6 architecture. One major partnership announced in 1998 paired AMD with semiconductor giant Motorola. In the announcement, Sanders referred to the partnership as creating a "virtual gorilla" that would enable AMD to compete with Intel on fabrication capacity while limiting AMD's financial outlay for new facilities. This partnership also helped to co-develop copper-based semiconductor technology, which would become a cornerstone of the K7 production process.
In August 1999, AMD released the Athlon (K7) processor. Notably, the design team was led by Dirk Meyer, one of the lead engineers on the DEC Alpha project. Jerry Sanders had approached many of the engineering staff to work for AMD as DEC wound the project down, and brought in a near-complete team of engineering experts; the balance of the Athlon design team comprised AMD K5 and K6 veterans. By working with Motorola, AMD was able to refine copper interconnect manufacturing to the production stage about one year before Intel. The revised process permitted 180-nanometer processor production, and the accompanying die shrink resulted in lower power consumption, permitting AMD to increase Athlon clock speeds to the 1 GHz range. AMD found processor yields on the new process exceeded expectations, and delivered high-speed chips in volume in March 2000.

General architecture

Internally, the Athlon is a fully seventh generation x86 processor, the first of its kind. Like the AMD K5 and K6, the Athlon is a RISC microprocessor which decodes x86 instructions into its own internal instructions at runtime. The CPU is an out-of-order design, again like previous post-5x86 AMD CPUs. The Athlon utilizes the DEC Alpha EV6 bus architecture with double data rate (DDR) technology. This means that at 100 MHz the Athlon front side bus actually transfers at a rate similar to a 200 MHz single data rate bus (referred to as 200 MT/s), which was superior to the method used on Intel's Pentium III (with SDR bus speeds of 100 and 133 MHz).
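To spell out the arithmetic (the 64-bit width of the EV6 bus is a standard figure, stated here as background rather than taken from this article): 100 MHz clock × 2 transfers per clock = 200 MT/s, and at 8 bytes per transfer that is 200 × 8 = 1600 MB/s of peak front-side-bus bandwidth, versus 100 × 8 = 800 MB/s for a 100 MHz single-data-rate bus of the same width.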
AMD designed the CPU with more robust x86 instruction decoding capabilities than that of K6, to enhance its ability to keep more data in-flight at once. Athlon's CISC to RISC decoder triplet could potentially decode 6 x86 operations per clock, although this was somewhat unlikely in real-world use. The critical branch predictor unit, essential to keeping the pipeline busy, was enhanced compared to what was onboard the K6. Deeper pipelining with more stages allowed higher clock speeds to be attained. Whereas the AMD K6-III+ topped out at 570 MHz due to its short pipeline, even when built on the 180 nm process, the Athlon was capable of going much higher.
AMD ended its long-time handicap in x87 floating-point performance by designing an impressive super-pipelined, out-of-order, triple-issue floating point unit.[3] Each of its three units was tailored to calculate an optimal type of instruction, with some redundancy, and having separate units made it possible to operate on more than one floating-point instruction at once. This FPU was a huge step forward for AMD: while the K6 FPU had looked anemic compared with the Intel P6 FPU, with the Athlon this was no longer the case. The 3DNow! floating-point SIMD technology, again present, received some revisions and a name change to "Enhanced 3DNow!"; additions included DSP instructions and an implementation of the extended-MMX subset of Intel SSE.
CPU caching on the Athlon consisted of the typical two levels. The Athlon was the first x86 processor with a 128 KiB split level 1 cache: a 2-way associative (later 16-way) cache separated into 2×64 KiB for data and instructions (Harvard architecture). This cache was double the size of the K6's already large 2×32 KiB cache, and quadruple the size of the Pentium II and III's 2×16 KiB L1 cache. The initial Athlon (Slot A, later renamed Athlon Classic) used 512 KiB of level 2 cache separate from the CPU, on the processor cartridge board, running at 50% to 33% of core speed. This was done because the 250 nm manufacturing process was too large to allow for on-die cache while maintaining a cost-effective die size. Later Athlon CPUs, afforded greater transistor budgets by the smaller 180 nm and 130 nm process nodes, moved to on-die L2 cache running at the full CPU clock speed.

Athlon Classic

Athlon Classic launched on June 23, 1999. It showed superior performance compared to the reigning champion, Pentium III, in every benchmark.
The Athlon Classic is a cartridge-based processor. The design, called Slot A, was quite similar to Intel's Slot 1 cartridge used for the Pentium II and Pentium III; it actually used the same mechanical slot part as the competing Intel CPUs (allowing motherboard manufacturers to save on costs), but reversed "upside-down" to prevent users from inserting the wrong CPU, as the two were completely signal-incompatible. The cartridge allowed the use of higher-speed cache memory than could be placed on the motherboard at the time. Like the Pentium II and the "Katmai"-core Pentium III, the Athlon Classic used a 512 KiB secondary cache. This cache, again like its competitors', ran at a fraction of the core clock rate and had its own 64-bit bus, called a "backside bus", which allowed concurrent front side bus and cache accesses. Initially the L2 cache ran at half of the CPU clock speed, on Athlon CPUs up to 700 MHz. Faster Slot A processors had to compromise on cache clock speed and ran it at 2/5 (up to 850 MHz) or 1/3 (up to 1 GHz) of the core clock. The SRAM available at the time was incapable of matching the Athlon's clock scalability, due both to cache chip technology limitations and to the electrical and latency complications of running an external cache at such high speeds.
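Working out the resulting cache clocks from the fractions above: 700 MHz × 1/2 = 350 MHz, 850 MHz × 2/5 = 340 MHz, and 1000 MHz × 1/3 ≈ 333 MHz; in other words, the external L2 SRAM always ran in roughly the same 330 to 350 MHz band, which is consistent with the cache-chip limitations described above.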

Monday, March 3, 2008

Windows XP



Windows XP is a line of operating systems developed by Microsoft for use on general-purpose computer systems, including home and business desktops, notebook computers, and media centers. The name "XP" stands for eXPerience. It was codenamed "Whistler", after Whistler, British Columbia, as many Microsoft employees skied at the Whistler-Blackcomb ski resort during its development. Windows XP is the successor to both Windows 2000 Professional and Windows Me, and is the first consumer-oriented operating system produced by Microsoft to be built on the Windows NT kernel (version 5.1) and architecture. Windows XP was first released on October 25, 2001, and over 400 million copies were in use in January 2006, according to an estimate that month by an IDC analyst. It is succeeded by Windows Vista, which was released to volume license customers on November 8, 2006, and worldwide to the general public on January 30, 2007.
The most common editions of the operating system are Windows XP Home Edition, which is targeted at home users, and Windows XP Professional, which has additional features such as support for Windows Server domains and two physical processors, and is targeted at power users and business clients. Windows XP Media Center Edition has additional multimedia features enhancing the ability to record and watch TV shows, view DVD movies, and listen to music. Windows XP Tablet PC Edition is designed to run the ink-aware Tablet PC platform. Two separate 64-bit versions of Windows XP were also released: Windows XP 64-bit Edition for IA-64 (Itanium) processors and Windows XP Professional x64 Edition for x86-64.
Windows XP is known for its improved stability and efficiency over the 9x versions of Microsoft Windows. It presents a significantly redesigned graphical user interface, a change Microsoft promoted as more user-friendly than previous versions of Windows. New software management capabilities were introduced to avoid the "DLL hell" that plagued older consumer-oriented 9x versions of Windows. It is also the first version of Windows to use product activation to combat software piracy, a restriction that did not sit well with some users and privacy advocates. Windows XP has also been criticized by some users for security vulnerabilities, tight integration of applications such as Internet Explorer 6 and Windows Media Player, and for aspects of its default user interface. Later releases with Service Pack 2 and Internet Explorer 7 addressed some of these concerns.

Editions

The two major editions are Windows XP Home Edition, designed for home users, and Windows XP Professional, designed for business and power-users. Other builds of Windows XP include those built for specialized hardware and limited-feature versions sold in Europe and select developing economies.

Windows XP for specialized hardware


Microsoft has also customized Windows XP to suit different markets. Six different versions of Windows XP for specific hardware were designed, two of them specifically for 64-bit processors.

System requirements

System requirements for the Windows XP Home and Professional editions are as follows:

Processor: 233 MHz minimum; 300 MHz or higher recommended
Memory: 64 MB RAM minimum (may limit performance and some features); 128 MB RAM or higher recommended
Video adapter and monitor: Super VGA (800 x 600) minimum; Super VGA (800 x 600) or higher resolution recommended
Free hard disk space: 1.5 GB minimum; 1.5 GB or more recommended
Drives: CD-ROM minimum; CD-ROM or better recommended
Devices: keyboard and mouse
Others: sound card, speakers, and headphones
In addition to the Windows XP system requirements, Service Pack 2 requires an additional 1.8 GB of free hard disk space during installation.

Service packs

Microsoft occasionally releases service packs for its Windows operating systems to fix problems and add features. Each service pack is a superset of all previous service packs and patches so that only the latest service pack needs to be installed, and also includes new revisions. Older patches need not be removed before application of the most recent one.

Service Pack 1

Service Pack 1 (SP1) for Windows XP was released on September 9, 2002. It contains post-RTM security fixes and hot-fixes, compatibility updates, optional .NET Framework support, enabling technologies for new devices such as Tablet PCs, and a new Windows Messenger 4.7 version. The most notable new features were USB 2.0 support, and a Set Program Access and Defaults utility that aimed at hiding various middleware products. Users can control the default application for activities such as web browsing and instant messaging, as well as hide access to some of Microsoft's bundled programs. This utility was first brought into the older Windows 2000 operating system with its Service Pack 3. The Microsoft Java Virtual Machine, which was not in the RTM version, appeared in this service pack.
On February 3, 2003, Microsoft released Service Pack 1 (SP1) again as Service Pack 1a (SP1a). This release removed Microsoft's Java virtual machine as a result of a lawsuit with Sun Microsystems.

Service Pack 2


Windows Security Center was added in Service Pack 2.
Service Pack 2 (SP2) (codenamed "Springboard") was released on August 6, 2004 after several delays, with a special emphasis on security. Unlike the previous service packs, SP2 adds new functionality to Windows XP, including an enhanced firewall, improved Wi-Fi support, such as WPA encryption compatibility, with a wizard utility, a pop-up ad blocker for Internet Explorer 6, and Bluetooth support. Security enhancements include a major revision to the included firewall which was renamed to Windows Firewall and is enabled by default, advanced memory protection that takes advantage of the NX bit that is incorporated into newer processors to stop some forms of buffer overflow attacks, and removal of raw socket support (which supposedly limits the damage done by zombie machines). Additionally, security-related improvements were made to e-mail and web browsing. Windows XP Service Pack 2 includes the Windows Security Center, which provides a general overview of security on the system, including the state of anti-virus software, Windows Update, and the new Windows Firewall. Third-party anti-virus and firewall applications can interface with the new Security Center.
On August 10, 2007, Microsoft announced a minor update to Service Pack 2, called Service Pack 2c (SP2c). The update addresses the dwindling number of available product keys for Windows XP. It is only available to system builders from their distributors, for the Windows XP Professional and Windows XP Professional N operating systems. SP2c was released in September 2007.

Service Pack 3

Windows XP Service Pack 3 (SP3) is currently in development. As of January 2008, Microsoft's web site indicates a "preliminary" release date to be in the first half of 2008. A feature set overview has been posted by Microsoft and details new features available separately as standalone updates to Windows XP, as well as features backported from Windows Vista, such as black hole router detection, Network Access Protection and Windows Imaging Component.
Microsoft has begun a beta test of SP3. According to a file released with the official beta, and relayed onto the internet, there are a total of 1,073 fixes in SP3.
This update allows Windows XP to be installed without a product key and to be run until the end of the 30-day activation period without one.
On December 4, 2007, Microsoft released build 3264 of a release candidate of SP3 to both TechNet and MSDN Subscribers. On December 18, 2007, this version was made publicly available via Microsoft Download Center. The latest release of SP3 is Release Candidate 2, which was released to the private beta-testing group through its connect website on February 6, 2008, with a build number of 3300. On 19 February 2008 build number 3311 of SP3 Release Candidate 2 was released for public beta testing. In order to be able to download and install SP3 Release Candidate 2 via Windows Update or Microsoft Update, a script must be installed and any earlier version of SP3 must first be removed. SP3 Release Candidate 2 can also be downloaded by way of the Microsoft Download Center.

Saturday, March 1, 2008

Computer virus

A computer virus is a computer program that can copy itself and infect a computer without permission or knowledge of the user. However, the term "virus" is commonly used, albeit erroneously, to refer to many different types of malware programs. The original virus may modify the copies, or the copies may modify themselves, as occurs in a metamorphic virus. A virus can only spread from one computer to another when its host is taken to the uninfected computer, for instance by a user sending it over a network or the Internet, or by carrying it on a removable medium such as a floppy disk, CD, or USB drive. Meanwhile viruses can spread to other computers by infecting files on a network file system or a file system that is accessed by another computer. Viruses are sometimes confused with computer worms and Trojan horses. A worm can spread itself to other computers without needing to be transferred as part of a host, and a Trojan horse is a file that appears harmless until executed.
Most personal computers are now connected to the Internet and to local area networks, facilitating the spread of malicious code. Today's viruses may also take advantage of network services such as the World Wide Web, e-mail, and file sharing systems to spread, blurring the line between viruses and worms. Furthermore, some sources use an alternative terminology in which a virus is any form of self-replicating malware.
Some viruses are programmed to damage the computer by damaging programs, deleting files, or reformatting the hard disk. Others are not designed to do any damage, but simply replicate themselves and perhaps make their presence known by presenting text, video, or audio messages. Even these benign viruses can create problems for the computer user. They typically take up computer memory used by legitimate programs. As a result, they often cause erratic behavior and can result in system crashes. In addition, many viruses are bug-ridden, and these bugs may lead to system crashes and data loss.

History

The Creeper virus was first detected on ARPANET, the forerunner of the Internet, in the early 1970s. It propagated via the TENEX operating system and could use any connected modem to dial out to remote computers and infect them. It would display the message "I'M THE CREEPER: CATCH ME IF YOU CAN." It is rumored that the Reaper program, which appeared shortly afterwards and sought out and deleted copies of the Creeper, may have been written by the creator of the Creeper in a fit of regret.
A program called "Elk Cloner" is commonly credited[attribution needed] with being the first computer virus to appear "in the wild" — that is, outside the single computer or lab where it was created, but that claim is false. See the Timeline of notable computer viruses and worms for other earlier viruses. It was however the first virus to infect computers "in the home". Written in 1982 by Richard Skrenta, it attached itself to the Apple DOS 3.3 operating system and spread by floppy disk. This virus was originally a joke, created by a high school student and put onto a game. The disk could only be used 49 times. The game was set to play, but release the virus on the 50th time of starting the game.The first PC virus in the wild was a boot sector virus called (c)Brain[, created in 1986 by the Farooq Alvi Brothers, operating out of Lahore, Pakistan. The brothers reportedly created the virus to deter pirated copies of software they had written. However, analysts have claimed that the Ashar virus, a variant of Brain, possibly predated it based on code within the virus.
Before computer networks became widespread, most viruses spread on removable media, particularly floppy disks. In the early days of the personal computer, many users regularly exchanged information and programs on floppies. Some viruses spread by infecting programs stored on these disks, while others installed themselves into the disk boot sector, ensuring that they would be run when the user booted the computer from the disk.
Traditional computer viruses emerged in the 1980s, driven by the spread of personal computers and the resultant increase in BBS and modem use, and software sharing. Bulletin board driven software sharing contributed directly to the spread of Trojan horse programs, and viruses were written to infect popularly traded software. Shareware and bootleg software were equally common vectors for viruses on BBSes. Within the "pirate scene" of hobbyists trading illicit copies of retail software, traders in a hurry to obtain the latest applications and games were easy targets for viruses.
Since the mid-1990s, macro viruses have become common. Most of these viruses are written in the scripting languages for Microsoft programs such as Word and Excel. These viruses spread in Microsoft Office by infecting documents and spreadsheets. Since Word and Excel were also available for Mac OS, most of these viruses were able to spread on Macintosh computers as well. Most of these viruses did not have the ability to send infected e-mail. Those viruses which did spread through e-mail took advantage of the Microsoft Outlook COM interface.
Macro viruses pose unique problems for detection software. For example, some versions of Microsoft Word allowed macros to replicate themselves with additional blank lines. The virus behaved identically but would be misidentified as a new virus. In another example, if two macro viruses simultaneously infect a document, the combination of the two, if also self-replicating, can appear as a "mating" of the two and would likely be detected as a virus unique from the "parents".
A virus may also send a web address link as an instant message to all the contacts on an infected machine. If the recipient, thinking the link is from a friend (a trusted source) follows the link to the website, the virus hosted at the site may be able to infect this new computer and continue propagating.
The newest species of the virus family is the cross-site scripting virus. This kind of virus emerged from research and was academically demonstrated in 2005. It uses cross-site scripting vulnerabilities to propagate. Since 2005 there have been multiple instances of cross-site scripting viruses in the wild; the most notable sites affected have been MySpace and Yahoo.

Replication strategies

In order to replicate itself, a virus must be permitted to execute code and write to memory. For this reason, many viruses attach themselves to executable files that may be part of legitimate programs. If a user tries to start an infected program, the virus' code may be executed first. Viruses can be divided into two types, on the basis of their behavior when they are executed. Nonresident viruses immediately search for other hosts that can be infected, infect these targets, and finally transfer control to the application program they infected. Resident viruses do not search for hosts when they are started. Instead, a resident virus loads itself into memory on execution and transfers control to the host program. The virus stays active in the background and infects new hosts when those files are accessed by other programs or the operating system itself.

Nonresident viruses


Nonresident viruses can be thought of as consisting of a finder module and a replication module. The finder module is responsible for finding new files to infect. For each new executable file the finder module encounters, it calls the replication module to infect that file.

Resident viruses

Resident viruses contain a replication module that is similar to the one that is employed by nonresident viruses. However, this module is not called by a finder module. Instead, the virus loads the replication module into memory when it is executed and ensures that this module is executed each time the operating system is called to perform a certain operation. For example, the replication module can be called each time the operating system executes a file. In this case, the virus infects every suitable program that is executed on the computer.
Resident viruses are sometimes subdivided into a category of fast infectors and a category of slow infectors. Fast infectors are designed to infect as many files as possible. For instance, a fast infector can infect every potential host file that is accessed. This poses a special problem to anti-virus software, since a virus scanner will access every potential host file on a computer when it performs a system-wide scan. If the virus scanner fails to notice that such a virus is present in memory, the virus can "piggy-back" on the virus scanner and in this way infect all files that are scanned. Fast infectors rely on their fast infection rate to spread. The disadvantage of this method is that infecting many files may make detection more likely, because the virus may slow down a computer or perform many suspicious actions that can be noticed by anti-virus software. Slow infectors, on the other hand, are designed to infect hosts infrequently. For instance, some slow infectors only infect files when they are copied. Slow infectors are designed to avoid detection by limiting their actions: they are less likely to slow down a computer noticeably, and will at most infrequently trigger anti-virus software that detects suspicious behavior by programs. The slow infector approach does not seem very successful, however.

Methods to avoid detection

In order to avoid detection by users, some viruses employ different kinds of deception. Some old viruses, especially on the MS-DOS platform, make sure that the "last modified" date of a host file stays the same when the file is infected by the virus. This approach does not fool anti-virus software, however, especially software that maintains and dates cyclic redundancy checks (CRCs) of file contents. Some viruses can infect files without increasing their sizes or damaging the files; they accomplish this by overwriting unused areas of executable files and are called cavity viruses. For example, the CIH virus, or Chernobyl virus, infects Portable Executable files; because those files have many empty gaps, the virus, which was 1 KB in length, did not add to the size of the file. Some viruses try to avoid detection by killing the tasks associated with antivirus software before it can detect them. As computers and operating systems grow larger and more complex, old hiding techniques need to be updated or replaced. Defending a computer against viruses may demand that a file system migrate towards detailed and explicit permissions for every kind of file access.

Avoiding bait files and other undesirable hosts

A virus needs to infect hosts in order to spread further. In some cases, it might be a bad idea to infect a host program. For example, many anti-virus programs perform an integrity check of their own code. Infecting such programs will therefore increase the likelihood that the virus is detected. For this reason, some viruses are programmed not to infect programs that are known to be part of anti-virus software. Another type of host that viruses sometimes avoid is bait files. Bait files (or goat files) are files that are specially created by anti-virus software, or by anti-virus professionals themselves, to be infected by a virus. These files can be created for various reasons, all of which are related to the detection of the virus:
Anti-virus professionals can use bait files to take a sample of a virus (i.e. a copy of a program file that is infected by the virus). It is more practical to store and exchange a small, infected bait file than a large application program that has been infected by the virus.
Anti-virus professionals can use bait files to study the behavior of a virus and evaluate detection methods. This is especially useful when the virus is polymorphic. In this case, the virus can be made to infect a large number of bait files, and the infected files can be used to test whether a virus scanner detects all versions of the virus.
Some anti-virus software employs bait files that are accessed regularly. When these files are modified, the anti-virus software warns the user that a virus is probably active on the system.
Since bait files are used to detect the virus, or to make detection possible, a virus can benefit from not infecting them. Viruses typically do this by avoiding suspicious programs, such as small program files or programs that contain certain patterns of 'garbage instructions'. A related strategy to make baiting difficult is sparse infection. Sometimes, sparse infectors do not infect a host file that would be a suitable candidate for infection in other circumstances. For example, a virus can decide on a random basis whether to infect a file or not, or a virus can only infect host files on particular days of the week.

Friday, February 29, 2008

Firewall

A firewall is a dedicated appliance, or software running on another computer, which inspects network traffic passing through it, and denies or permits passage based on a set of rules.

Function

A firewall's basic task is to regulate some of the flow of traffic between computer networks of different trust levels. Typical examples are the Internet which is a zone with no trust and an internal network which is a zone of higher trust. A zone with an intermediate trust level, situated between the Internet and a trusted internal network, is often referred to as a "perimeter network" or Demilitarized zone (DMZ).
A firewall's function within a network is similar to that of firewalls with fire doors in building construction. In the former case, it is used to prevent network intrusion into the private network; in the latter, it is intended to contain and delay a structural fire spreading to adjacent structures.
Without proper configuration, a firewall can often become worthless. Standard security practices dictate a "default-deny" firewall ruleset, in which the only network connections which are allowed are the ones that have been explicitly allowed. Unfortunately, such a configuration requires detailed understanding of the network applications and endpoints required for the organization's day-to-day operation. Many businesses lack such understanding, and therefore implement a "default-allow" ruleset, in which all traffic is allowed unless it has been specifically blocked. This configuration makes inadvertent network connections and system compromise much more likely.

History

The term "firewall" originally meant a wall to confine a fire or potential fire within a building, c.f. firewall (construction). Later uses refer to similar structures, such as the metal sheet separating the engine compartment of a vehicle or aircraft from the passenger compartment.
Firewall technology emerged in the late 1980s when the Internet was a fairly new technology in terms of its global use and connectivity. The original idea was formed in response to a number of major internet security breaches, which occurred in the late 1980s.

First generation - packet filters

The first paper published on firewall technology was in 1988, when engineers from Digital Equipment Corporation (DEC) developed filter systems known as packet filter firewalls. This fairly basic system was the first generation of what would become a highly evolved and technical internet security feature. At AT&T Bell Labs, Bill Cheswick and Steve Bellovin were continuing their research in packet filtering and developed a working model for their own company based upon their original first generation architecture.
Packet filters act by inspecting the "packets" which represent the basic unit of data transfer between computers on the Internet. If a packet matches the packet filter's set of rules, the packet filter will drop (silently discard) the packet, or reject it (discard it, and send "error responses" to the source).
This type of packet filtering pays no attention to whether a packet is part of an existing stream of traffic (it stores no information on connection "state"). Instead, it filters each packet based only on information contained in the packet itself (most commonly using a combination of the packet's source and destination address, its protocol, and, for TCP and UDP traffic, which comprises most internet communication, the port number).
Because TCP and UDP traffic by convention uses well known ports for particular types of traffic, a "stateless" packet filter can distinguish between, and thus control, those types of traffic (such as web browsing, remote printing, email transmission, file transfer), unless the machines on each side of the packet filter are both using the same non-standard ports.
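To make the idea concrete, here is a deliberately simplified C++ sketch of a stateless packet filter (illustrative only; the rule format, field names and the default-deny fallback are choices made for the example, not a description of any particular firewall):

    #include <cstdint>
    #include <iostream>
    #include <vector>

    enum class Action { Accept, Drop };

    struct Packet { std::uint32_t src_ip, dst_ip; std::uint8_t proto; std::uint16_t dst_port; };

    // One rule: in this toy model a value of 0 in a field means "match anything".
    struct Rule { std::uint32_t src_ip, dst_ip; std::uint8_t proto; std::uint16_t dst_port; Action action; };

    Action filter(const Packet& p, const std::vector<Rule>& rules) {
        for (const Rule& r : rules) {
            bool match = (r.src_ip == 0 || r.src_ip == p.src_ip) &&
                         (r.dst_ip == 0 || r.dst_ip == p.dst_ip) &&
                         (r.proto  == 0 || r.proto  == p.proto) &&
                         (r.dst_port == 0 || r.dst_port == p.dst_port);
            if (match) return r.action;   // first matching rule wins
        }
        return Action::Drop;              // default-deny: nothing matched
    }

    int main() {
        // Allow TCP (protocol 6) to port 80 from anywhere; everything else is dropped.
        std::vector<Rule> rules = {{0, 0, 6, 80, Action::Accept}};
        Packet web{0x0A000001, 0xC0A80001, 6, 80};  // 10.0.0.1 -> 192.168.0.1, port 80
        Packet ssh{0x0A000001, 0xC0A80001, 6, 22};  // same hosts, port 22
        std::cout << (filter(web, rules) == Action::Accept ? "web: accept\n" : "web: drop\n");
        std::cout << (filter(ssh, rules) == Action::Accept ? "ssh: accept\n" : "ssh: drop\n");
    }

Note that the decision uses only the fields of the packet at hand, which is exactly the "stateless" property described above: the filter has no memory of previous packets.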

Second generation - "stateful" filters

From 1989 to 1990, three colleagues from AT&T Bell Laboratories, Dave Presotto, Janardan Sharma, and Kshitij Nigam, developed the second generation of firewalls, calling them circuit-level firewalls.
Second-generation firewalls do not simply examine each packet in isolation, without regard to its place in the packet stream, as their predecessors did; rather, they keep track of the connection each packet belongs to. This technology is generally referred to as a 'stateful' firewall, as it maintains records of all connections passing through the firewall and is able to determine whether a packet is the start of a new connection or part of an existing one. Though there is still a set of static rules in such a firewall, the state of a connection can in itself be one of the criteria which trigger specific rules. This type of firewall can help prevent attacks which exploit existing connections, or certain denial-of-service attacks.
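A correspondingly small sketch of connection tracking (again purely illustrative; the 5-tuple key, the std::set state table and the port-80 rule are assumptions made for the example): packets that match an entry in the state table pass immediately, and only packets that start a new connection are checked against the static rules:

    #include <cstdint>
    #include <iostream>
    #include <set>
    #include <tuple>

    // 5-tuple identifying a connection: src ip, dst ip, protocol, src port, dst port.
    using FiveTuple = std::tuple<std::uint32_t, std::uint32_t, std::uint8_t, std::uint16_t, std::uint16_t>;

    struct StatefulFirewall {
        std::set<FiveTuple> connections;  // the "state table"

        bool allow_new(const FiveTuple& t) {
            // Stand-in for the static rule check: here, only new connections to port 80 are allowed.
            return std::get<4>(t) == 80;
        }

        bool handle(const FiveTuple& t) {
            if (connections.count(t)) return true;  // part of an existing, already-approved connection
            if (allow_new(t)) {                     // evaluate rules only for new connections
                connections.insert(t);
                return true;
            }
            return false;
        }
    };

    int main() {
        StatefulFirewall fw;
        FiveTuple web{1, 2, 6, 40000, 80};
        std::cout << fw.handle(web) << ' ' << fw.handle(web) << '\n';  // 1 1: accepted, then tracked
        FiveTuple ssh{1, 2, 6, 40001, 22};
        std::cout << fw.handle(ssh) << '\n';                           // 0: rejected by the rules
    }

A real stateful firewall also times out and removes entries, and inspects TCP flags to decide what counts as the start or end of a connection; those details are omitted here for brevity.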

Third generation - application layer

Publications by Gene Spafford of Purdue University, Bill Cheswick at AT&T Laboratories, and Marcus Ranum described a third-generation firewall known as an application layer firewall, also known as a proxy-based firewall. Marcus Ranum's work on the technology spearheaded the creation of the first commercial product, released by DEC under the name DEC SEAL. DEC's first major sale was on June 13, 1991, to a chemical company based on the East Coast of the USA. The key benefit of application layer filtering is that it can "understand" certain applications and protocols (such as File Transfer Protocol, DNS, or web browsing), and it can detect whether an unwanted protocol is being sneaked through on a non-standard port or whether a protocol is being abused in a known harmful way.

Subsequent developments

In 1992, Bob Braden and Annette DeSchon at the University of Southern California (USC) were refining the concept of a firewall. Their product, known as "Visas", was the first system to have a visual integration interface with colours and icons, which could be easily implemented on and accessed from a computer operating system such as Microsoft's Windows or Apple's MacOS. In 1994 an Israeli company called Check Point Software Technologies built this into readily available software known as FireWall-1.
The existing deep packet inspection functionality of modern firewalls can be shared by Intrusion-prevention systems (IPS).
Currently, the Middlebox Communication Working Group of the Internet Engineering Task Force (IETF) is working on standardizing protocols for managing firewalls and other middleboxes.

Types

There are several classifications of firewalls depending on where the communication takes place, where the communication is intercepted, and the state that is being tracked.

Network layer and packet filters

Network layer firewalls, also called packet filters, operate at a relatively low level of the TCP/IP protocol stack, not allowing packets to pass through the firewall unless they match the established rule set. The firewall administrator may define the rules; or default rules may apply. The term packet filter originated in the context of BSD operating systems.
Network layer firewalls generally fall into two sub-categories, stateful and stateless. Stateful firewalls maintain context about active sessions, and use that "state information" to speed up packet processing. Any existing network connection can be described by several properties, including source and destination IP address, UDP or TCP ports, and the current stage of the connection's lifetime (including session initiation, handshaking, data transfer, or connection completion). If a packet does not match an existing connection, it will be evaluated according to the ruleset for new connections. If a packet matches an existing connection based on comparison with the firewall's state table, it will be allowed to pass without further processing.
Stateless firewalls have packet-filtering capabilities, but cannot make more complex decisions on what stage communications between hosts have reached.
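The difference between the two sub-categories can be sketched in a few lines: a stateful filter first consults a table of known connections and only falls back to the static rules for packets that start something new. The 5-tuple key, the single allow rule, and the lack of any time-out handling below are simplifying assumptions:

    # Minimal sketch of stateful inspection (illustrative only).
    state_table = set()  # known connections, keyed by a simple 5-tuple

    def is_new_connection_allowed(five_tuple):
        # stand-in for the static ruleset; here, only new TCP port-80 connections pass
        proto, src_ip, src_port, dst_ip, dst_port = five_tuple
        return proto == "tcp" and dst_port == 80

    def handle_packet(five_tuple):
        if five_tuple in state_table:
            return "pass"                  # part of an existing connection
        if is_new_connection_allowed(five_tuple):
            state_table.add(five_tuple)    # record the new connection
            return "pass"
        return "drop"

    conn = ("tcp", "192.0.2.10", 51000, "203.0.113.5", 80)
    print(handle_packet(conn))  # pass (new connection allowed by the ruleset)
    print(handle_packet(conn))  # pass (found in the state table)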

Application-layer

Application-layer firewalls work on the application level of the TCP/IP stack (i.e., all browser traffic, or all telnet or ftp traffic), and may intercept all packets traveling to or from an application. They block other packets (usually dropping them without acknowledgement to the sender). In principle, application firewalls can prevent all unwanted outside traffic from reaching protected machines.
By inspecting all packets for improper content, firewalls can restrict or even prevent outright the spread of networked computer worms and trojans. In practice, however, this becomes so complex and so difficult to attempt (given the variety of applications and the diversity of content each may allow in its packet traffic) that comprehensive firewall design does not generally attempt this approach.
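A hedged sketch of the payload-inspection idea described above: rather than trusting the port number, the filter looks at the start of the payload to decide whether a disallowed protocol is being sneaked through on a non-standard port. The signature list and allowed-port list are invented for illustration:

    # Illustrative sketch of application-layer filtering.
    HTTP_METHODS = (b"GET ", b"POST ", b"HEAD ", b"PUT ")

    def looks_like_http(payload):
        return payload.startswith(HTTP_METHODS)

    def inspect(dst_port, payload):
        if looks_like_http(payload) and dst_port not in (80, 8080):
            return "drop: HTTP on a non-standard port"
        return "pass"

    print(inspect(80,   b"GET /index.html HTTP/1.1\r\n"))  # pass
    print(inspect(6667, b"GET /index.html HTTP/1.1\r\n"))  # drop: HTTP on a non-standard port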
The XML firewall exemplifies a more recent kind of application-layer firewall.
Companies such as Secure Computing (www.securecomputing.com) are major manufacturers of application-layer firewalls.

Proxies

A proxy device (running either on dedicated hardware or as software on a general-purpose machine) may act as a firewall by responding to input packets (connection requests, for example) in the manner of an application, whilst blocking other packets.
Proxies make tampering with an internal system from the external network more difficult and misuse of one internal system would not necessarily cause a security breach exploitable from outside the firewall (as long as the application proxy remains intact and properly configured). Conversely, intruders may hijack a publicly-reachable system and use it as a proxy for their own purposes; the proxy then masquerades as that system to other internal machines. While use of internal address spaces enhances security, crackers may still employ methods such as IP spoofing to attempt to pass packets to a target network.
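The following sketch shows the basic relay behaviour of such a proxy for a single connection; the listening address, the upstream server address, and the one-request/one-reply simplification are all assumptions made for brevity:

    # Minimal sketch of a proxy relaying one client connection to an internal
    # server, so external peers only ever see the proxy itself.
    import socket

    LISTEN_ADDR = ("0.0.0.0", 8080)      # where the proxy accepts connections
    UPSTREAM_ADDR = ("10.0.0.5", 80)     # the protected internal server

    def serve_once():
        with socket.create_server(LISTEN_ADDR) as listener:
            client, _ = listener.accept()
            with client, socket.create_connection(UPSTREAM_ADDR) as upstream:
                request = client.recv(4096)   # read one request from the client
                upstream.sendall(request)     # forward it to the internal server
                reply = upstream.recv(4096)   # read one reply
                client.sendall(reply)         # relay it back to the client

    if __name__ == "__main__":
        serve_once()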

Network address translation

Firewalls often have network address translation (NAT) functionality, and the hosts protected behind a firewall commonly have addresses in the "private address range" defined in RFC 1918; this also helps hide the true addresses of protected hosts. Originally, the NAT function was developed to address the limited number of IPv4 routable addresses that could be assigned to companies or individuals, as well as to reduce the number, and therefore cost, of public addresses needed for every computer in an organization. Hiding the addresses of protected devices has become an increasingly important defense against network reconnaissance.
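A minimal sketch of the source-NAT bookkeeping described above: outbound packets from private addresses are rewritten to the firewall's public address, a translation entry is recorded, and only inbound traffic that matches an existing entry can be mapped back. The addresses and the port-allocation scheme are illustrative:

    # Minimal sketch of source NAT with a translation table.
    PUBLIC_IP = "198.51.100.1"
    nat_table = {}        # (private_ip, private_port) -> public_port
    reverse_table = {}    # public_port -> (private_ip, private_port)
    next_port = 40000

    def translate_outbound(private_ip, private_port):
        global next_port
        key = (private_ip, private_port)
        if key not in nat_table:
            nat_table[key] = next_port
            reverse_table[next_port] = key
            next_port += 1
        return PUBLIC_IP, nat_table[key]

    def translate_inbound(public_port):
        return reverse_table.get(public_port)  # None if no mapping exists

    print(translate_outbound("192.168.1.10", 51234))  # ('198.51.100.1', 40000)
    print(translate_inbound(40000))                    # ('192.168.1.10', 51234)
    print(translate_inbound(40001))                    # None: unsolicited inbound traffic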

Thursday, February 28, 2008

Internet security

In the computer industry, Internet security refers to techniques for ensuring that data stored in a computer cannot be read or compromised by any individuals without authorization. Most security measures involve data encryption and passwords. Data encryption is the translation of data into a form that is unintelligible without a deciphering mechanism. A password is a secret word or phrase that gives a user access to a particular program or system.
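As a small illustration of the password side of this, a system can avoid storing the secret itself by keeping only a salted, iterated hash and comparing login attempts against it. This sketch uses Python's standard library; the iteration count and salt length are arbitrary example values:

    # Sketch of storing and checking a password without keeping it in clear text.
    import hashlib, hmac, os

    def hash_password(password, salt=None):
        """Return (salt, digest); the digest is a salted, iterated hash, not the password."""
        salt = salt or os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return salt, digest

    def verify_password(password, salt, expected):
        _, digest = hash_password(password, salt)
        return hmac.compare_digest(digest, expected)   # constant-time comparison

    salt, stored = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, stored))  # True
    print(verify_password("guess", salt, stored))                         # False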

Routers

Network Address Translation (NAT) typically has the effect of preventing connections from being established inbound into a computer, whilst permitting connections out. For a small home network, software NAT can be used on the computer with the Internet connection, providing similar behaviour to a router and similar levels of security, but for a lower cost and lower complexity.

Firewalls

A firewall blocks all "roads and cars" except those through authorized ports on your computer, thus restricting unfettered access. A stateful firewall is a more secure form of firewall, and system administrators often combine a proxy firewall with a packet-filtering firewall to create a highly secure system. Most home users use a software firewall. These firewalls can create a log file that records all connection details (including connection attempts) to and from the PC.
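The kind of log file mentioned above might contain one line per decision, along the lines of the following sketch; the format and field names are assumptions, not any product's actual log layout:

    # Sketch of a simple firewall connection log.
    import datetime

    def log_connection(logfile, action, protocol, src, dst, dst_port):
        timestamp = datetime.datetime.now().isoformat(timespec="seconds")
        logfile.write(f"{timestamp} {action.upper()} {protocol} {src} -> {dst}:{dst_port}\n")

    with open("firewall.log", "a") as logfile:
        log_connection(logfile, "allow", "tcp", "192.168.1.10", "203.0.113.5", 443)
        log_connection(logfile, "deny",  "tcp", "203.0.113.77", "192.168.1.10", 23)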

Anti-virus

Some people or companies with malicious intentions write programs like computer viruses, worms, trojan horses and spyware. These programs are all characterised as unwanted software that installs itself on your computer through deception.
Trojan horses are simply programs that conceal their true purpose or include a hidden functionality that a user would not want.
Worms are characterised by having the ability to replicate themselves and viruses are similar except that they achieve this by adding their code onto third party software. Once a virus or worm has infected a computer, it would typically infect other programs (in the case of viruses) and other computers.
Viruses also slow down system performance and cause strange system behavior and in many cases do serious harm to computers, either as deliberate, malicious damage or as unintentional side effects.
In order to prevent damage by viruses and worms, users typically install antivirus software, which runs in the background on the computer, detecting any suspicious software and preventing it from running.
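One common detection technique, signature scanning, can be sketched as hashing a file and comparing the result against a database of known-bad hashes; the single entry below is a placeholder, and real products combine this with many other methods:

    # Sketch of hash-based signature scanning.
    import hashlib

    KNOWN_BAD_SHA256 = {
        "0000000000000000000000000000000000000000000000000000000000000000",  # hypothetical signature
    }

    def sha256_of_file(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):   # hash the file in chunks
                h.update(chunk)
        return h.hexdigest()

    def is_known_malware(path):
        return sha256_of_file(path) in KNOWN_BAD_SHA256

    # demo on a throwaway file; real scanners watch files as they are written or run
    with open("sample.bin", "wb") as f:
        f.write(b"harmless test data")
    print(is_known_malware("sample.bin"))   # False: its hash is not in the signature set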
Some malware that can be classified as trojans with a limited payload are not detected by most antivirus software and may require the use of other software designed to detect other classes of malware, including spyware.

Anti-spyware


Spyware is software that runs on a computer without the explicit permission of its user. It often gathers private information from a user's computer and sends this data over the Internet back to the software manufacturer. Adware is software that runs on a computer without the owner's consent, much like spyware. However, instead of taking information, it typically runs in the background and displays random or targeted pop-up advertisements. In many cases, this slows the computer down and may also cause software conflicts.

Browser choice

Internet Explorer is currently the most widely used web browser in the world, making it the prime target for phishing and many other possible attacks.

Wednesday, February 27, 2008

Call centre



A call centre or call center is a centralised office used for the purpose of receiving and transmitting a large volume of requests by telephone. A call centre is operated by a company to administer incoming product support or information inquiries from consumers. Outgoing calls for telemarketing, clientele, and debt collection are also made. In addition to a call centre, collective handling of letters, faxes, and e-mails at one location is known as a contact centre.

A call centre is often operated through an extensive open workspace for call centre agents, with work stations that include a computer for each agent, a telephone set/headset connected to a telecom switch, and one or more supervisor stations. It can be independently operated or networked with additional centres, often linked to a corporate computer network, including mainframes, microcomputers and LANs. Increasingly, the voice and data pathways into the centre are linked through a set of new technologies called computer telephony integration (CTI).

Most major businesses use call centres to interact with their customers. Examples include utility companies, mail order catalogue firms, and customer support for computer hardware and software. Some businesses even service internal functions through call centres. Examples of this include help desks and sales support.

Mathematical theory

A call centre can be seen from an operational point of view as a queueing network. The simplest call centre, consisting of a single type of customer and statistically identical servers, can be viewed as a single queue. Queueing theory is a branch of mathematics in which models of such queueing systems have been developed. These models, in turn, are used to support workforce planning and management, for example by helping answer the following common staffing question: given a service level, as determined by management, what is the least number of telephone agents required to achieve it? (Prevalent examples of service levels are: at least 80% of callers are answered within 20 seconds; or no more than 3% of customers hang up due to impatience before being served.)

Queueing models also provide qualitative insight, for example identifying the circumstances under which economies of scale prevail, namely that a single large call centre is more effective at answering calls than several (distributed) smaller ones; or that cross-selling is beneficial; or that a call centre should be quality-driven or efficiency-driven or, most likely, both Quality and Efficiency Driven (abbreviated to QED). Recently, queueing models have also been used for planning and operating skills-based routing of calls within a call centre, which entails the analysis of systems with multi-type customers and multi-skilled agents.

Call centre operations have been supported by mathematical models beyond queueing, with operations research, which considers a wide range of optimisation problems, being very relevant: for example, for forecasting of calls, for determining shift structures, and even for analysing customers' impatience while waiting to be served by an agent.
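The staffing question posed above is commonly answered with the Erlang C queueing formula. The sketch below finds the smallest number of agents meeting an 80%-within-20-seconds target; the arrival rate and handling time are made-up inputs, and the model's usual assumptions (Poisson arrivals, exponential service times, no abandonment) apply:

    # Sketch of the Erlang C staffing calculation.
    from math import exp

    def erlang_c(n_agents, offered_load):
        """Probability that an arriving call has to wait, via the stable Erlang B recurrence."""
        a = offered_load
        if n_agents <= a:
            return 1.0                       # too few agents: the queue grows without bound
        b = 1.0
        for k in range(1, n_agents + 1):
            b = a * b / (k + a * b)          # Erlang B blocking probability for k servers
        return n_agents * b / (n_agents - a + a * b)

    def service_level(n_agents, calls_per_sec, mean_handle_sec, target_sec):
        a = calls_per_sec * mean_handle_sec  # offered load in Erlangs
        p_wait = erlang_c(n_agents, a)
        return 1.0 - p_wait * exp(-(n_agents - a) * target_sec / mean_handle_sec)

    def agents_needed(calls_per_sec, mean_handle_sec, target_sec, target_level):
        n = 1
        while service_level(n, calls_per_sec, mean_handle_sec, target_sec) < target_level:
            n += 1
        return n

    # e.g. 600 calls per hour, 3-minute average handling time, 80%-in-20-seconds target
    print(agents_needed(600 / 3600, mean_handle_sec=180, target_sec=20, target_level=0.80))

The recurrence form of Erlang B is used instead of the textbook closed form because the a^n/n! term in the latter overflows floating point for realistic offered loads.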

Administration of call centres


The centralisation of call management aims to improve a company's operations and reduce costs, while providing a standardised, streamlined, uniform service for consumers. To accommodate large customer bases, large warehouses are often converted to office space to host all call centre operations under one roof.

Call centre staff can be monitored for quality control, level of proficiency, and customer service by computer technology that manages, measures and monitors the performance and activities of the workers. Typical contact centre operations focus on the discipline areas of workforce management, queue management, quality monitoring, and reporting. Reporting in a call centre can be further broken down into real-time reporting and historical reporting. The types of information collected for a group of call centre agents can include: agents logged in, agents ready to take calls, agents available to take calls, agents in wrap-up mode, average call duration, average call duration including wrap-up time, longest duration agent available, longest duration call in queue, number of calls in queue, number of calls offered, number of calls abandoned, average speed to answer, average speed to abandon, and service level, calculated as the percentage of calls answered within a certain time period.

Many call centres use workforce management software, which uses historical information coupled with projected need to generate automated schedules that meet anticipated staffing requirements. The relatively high cost of personnel and office space, the need for a large workforce, and challenges around attrition, hiring and managing that workforce influence outsourcing in the call centre industry.
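Two of the reporting metrics listed above, service level and average speed to answer, reduce to simple arithmetic over the call records. The record format, and the convention of counting abandoned calls against the service level, are assumptions in this sketch:

    # Sketch of computing service level and average speed of answer from raw call records.
    calls = [
        # (seconds waited before answer, answered?)
        (8, True), (14, True), (25, True), (40, False), (5, True), (19, True),
    ]

    THRESHOLD_SEC = 20

    answered = [wait for wait, was_answered in calls if was_answered]
    within_threshold = sum(1 for wait in answered if wait <= THRESHOLD_SEC)

    service_level = within_threshold / len(calls) * 100       # % of all offered calls
    average_speed_of_answer = sum(answered) / len(answered)   # seconds, answered calls only

    print(f"Service level: {service_level:.0f}% answered within {THRESHOLD_SEC}s")
    print(f"Average speed of answer: {average_speed_of_answer:.1f}s")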

Technology

Call centres use a wide variety of different technologies to allow them to manage large volumes of work. These technologies facilitate queueing and processing of calls, maintaining consistent work flow for agents and creating other business cost savings.

Patents

There are a large number of patents covering various aspects of call centre operation, automation, and technology. One of the early inventors in this field, Ronald A. Katz, personally holds over 50 patents covering inventions related to toll free numbers, automated attendant, automated call distribution, voice response unit, computer telephone integration and speech recognition.

Varieties of call centres

Some variations of call centre models are listed below:
Remote Agents – An alternative to housing all agents in a central facility is to use remote agents. These agents work from home and use internet technologies to connect.
Temporary Agents – Temporary agents who are called upon if demand increases more rapidly than planned.
Virtual Call centres – Virtual Call centres are created using many smaller centres in different locations and connecting them to one another. There are two methods used to route traffic around call centres: pre-delivery and post-delivery. Pre-delivery involves using an external switch to route the calls to the appropriate centre and post-delivery enables call centres to route a call they've received to another call centre.
Contact centres – Deal with more media than telephony alone including Email, Web Callback and internet Chat.

Tuesday, February 26, 2008

Network card


A network card, network adapter, LAN Adapter or NIC (network interface card) is a piece of computer hardware designed to allow computers to communicate over a computer network. It is both an OSI layer 1 (physical layer) and layer 2 (data link layer) device, as it provides physical access to a networking medium and provides a low-level addressing system through the use of MAC addresses. It allows users to connect to each other either by using cables or wirelessly.
Although other network technologies exist, Ethernet has achieved near-ubiquity since the mid-1990s. Every Ethernet network card has a unique 48-bit serial number called a MAC address, which is stored in ROM carried on the card. Every computer on an Ethernet network must have a card with a unique MAC address. No two cards ever manufactured share the same address. This is accomplished by the Institute of Electrical and Electronics Engineers (IEEE), which is responsible for assigning unique MAC addresses to the vendors of network interface controllers.

Whereas network cards used to be expansion cards that plugged into a computer bus, the low cost and ubiquity of the Ethernet standard means that most newer computers have a network interface built into the motherboard. These either have Ethernet capabilities integrated into the motherboard chipset, or implemented via a low-cost dedicated Ethernet chip, connected through the PCI bus (or the newer PCI Express bus). A separate network card is not required unless multiple interfaces are needed or some other type of network is used. Newer motherboards may even have dual network (Ethernet) interfaces built in.

The card implements the electronic circuitry required to communicate using a specific physical layer and data link layer standard such as Ethernet or token ring. This provides a base for a full network protocol stack, allowing communication among small groups of computers on the same LAN and large-scale network communications through routable protocols, such as IP.
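To make the structure of the 48-bit address concrete, the sketch below splits a MAC address into the IEEE-assigned vendor part (the organisationally unique identifier, OUI) and the vendor-chosen card identifier; the example address is made up:

    # Sketch of splitting a MAC address into its OUI and device parts.
    def split_mac(mac):
        octets = mac.lower().replace("-", ":").split(":")
        if len(octets) != 6 or not all(len(o) == 2 for o in octets):
            raise ValueError("expected six two-digit hex octets")
        oui = ":".join(octets[:3])       # vendor (OUI) part, assigned by the IEEE
        device = ":".join(octets[3:])    # identifier chosen by the vendor per card
        return oui, device

    print(split_mac("00-1A-2B-3C-4D-5E"))  # ('00:1a:2b', '3c:4d:5e')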
There are four techniques used to transfer data; a NIC may use one or more of them:
Polling is where the microprocessor examines the status of the peripheral under program control.
Programmed I/O is where the microprocessor alerts the designated peripheral by applying its address to the system's address bus.
Interrupt-driven I/O is where the peripheral alerts the microprocessor that it's ready to transfer data.
DMA is where the intelligent peripheral assumes control of the system bus to access memory directly. This removes load from the CPU but requires a separate processor on the card.
A network card typically has a twisted pair, BNC, or AUI socket where the network cable is connected, and a few LEDs to inform the user whether the network is active and whether data is being transmitted on it. Network cards are typically available in 10/100/1000 Mbit/s varieties, meaning they can support a transfer rate of 10, 100, or 1000 megabits per second.

Monday, February 25, 2008

Microsoft



Microsoft Corporation, or often just MS, is an American multinational computer technology corporation with 79,000 employees in 102 countries and global annual revenue of US$51.12 billion as of 2007. It develops, manufactures, licenses, and supports a wide range of software products for computing devices. Headquartered in Redmond, Washington, USA, its best selling products are the Microsoft Windows operating system and the Microsoft Office suite of productivity software. These products have prominent positions in the desktop computer market, with market share estimates as high as 90% or more as of 2003 for Microsoft Office and 2006 for Microsoft Windows. One of Bill Gates' key visions is "to get a workstation running our software onto every desk and eventually in every home".
Founded to develop and sell BASIC interpreters for the Altair 8800, Microsoft rose to dominate the home computer operating system market with MS-DOS in the mid-1980s. The company released an initial public offering (IPO) in the stock market, which, due to the ensuing rise of the stock price, has made four billionaires and an estimated 12,000 millionaires from Microsoft employees. Throughout its history the company has been the target of criticism for various reasons, including monopolistic business practices—both the U.S. Justice Department and the European Commission, among others, brought Microsoft to court for antitrust violations and software bundling.
Microsoft has footholds in other markets besides operating systems and office suites, with assets such as the MSNBC cable television network, the MSN Internet portal, and the Microsoft Encarta multimedia encyclopedia. The company also markets both computer hardware products such as the Microsoft mouse and home entertainment products such as the Xbox, Xbox 360, Zune and MSN TV. Known for what is generally described as a developer-centric business culture, Microsoft has historically given customer support over Usenet newsgroups and the World Wide Web, and awards Microsoft MVP status to volunteers who are deemed helpful in assisting the company's customers. The company's official website is one of the most visited on the Internet, receiving more than 2.4 million unique page views per day according to Alexa.com, who ranked the site 18th amongst all websites for traffic rank on September 12, 2007.

History

1975–1985: Founding

Following the launch of the Altair 8800, Bill Gates called the creators of the new microcomputer, Micro Instrumentation and Telemetry Systems (MITS), offering to demonstrate an implementation of the BASIC programming language for the system. After the demonstration, MITS agreed to distribute Altair BASIC. Gates left Harvard University, moved to Albuquerque, New Mexico where MITS was located, and founded Microsoft there. The company's first international office was founded on November 1, 1978, in Japan, entitled "ASCII Microsoft" (now called "Microsoft Japan"). On January 1, 1979, the company moved from Albuquerque to a new home in Bellevue, Washington. Steve Ballmer joined the company on June 11, 1980, and later succeeded Bill Gates as CEO.
DOS (Disk Operating System) was the operating system that brought the company its first real success. On August 12, 1981, after negotiations with Digital Research failed, IBM awarded a contract to Microsoft to provide an operating system for the upcoming IBM Personal Computer (PC). For this deal, Microsoft purchased a CP/M clone called 86-DOS from Seattle Computer Products, which IBM renamed PC-DOS. Later, the market saw a flood of IBM PC clones after Columbia Data Products successfully cloned the IBM BIOS, and by aggressively marketing MS-DOS to manufacturers of IBM-PC clones, Microsoft rose from a small player to one of the major software vendors in the home computer industry. The company expanded into new markets with the release of the Microsoft Mouse in 1983, as well as a publishing division named Microsoft Press.

1985–1995: OS/2 and Windows

In August 1985, Microsoft and IBM partnered in the development of a different operating system called OS/2. On November 20, 1985, Microsoft released its first retail version of Microsoft Windows, originally a graphical extension for its MS-DOS operating system. On March 13, 1986, the company went public with an IPO at an initial offering price of $21.00, closing the first day of trading at US$28.00. In 1987, Microsoft eventually released its first version of OS/2 to OEMs.

In 1989, Microsoft introduced its flagship office suite, Microsoft Office. This was a bundle of separate office productivity applications, such as Microsoft Word and Microsoft Excel. On May 22, 1990 Microsoft launched Windows 3.0. The new version of Microsoft's operating system boasted such new features as streamlined user interface graphics and improved protected mode capability for the Intel 386 processor; it sold over 100,000 copies in two weeks. Windows at the time generated more revenue for Microsoft than OS/2, and the company decided to move more resources from OS/2 to Windows. In the ensuing years, the popularity of OS/2 declined, and Windows quickly became the favored PC platform.

During the transition from MS-DOS to Windows, the success of Microsoft's product Microsoft Office allowed the company to gain ground on application-software competitors, such as WordPerfect and Lotus 1-2-3. According to The Register, Novell, an owner of WordPerfect for a time, alleged that Microsoft used its inside knowledge of the DOS and Windows kernels and of undocumented Application Programming Interface features to make Office perform better than its competitors. Eventually, Microsoft Office became the dominant business suite, with a market share far exceeding that of its competitors.
In 1993, Microsoft released Windows NT 3.1, a business operating system with the Windows 3.1 user interface but an entirely different kernel. In 1995, Microsoft released Windows 95, a new version of the company's flagship operating system which featured a completely new user interface, including a novel start button; more than a million copies of Microsoft Windows 95 were sold in the first four days after its release. The company also released its web browser, Internet Explorer, with the Windows 95 Plus! Pack in August 1995 and subsequent Windows versions.

1995–2005: Internet and legal issues


In the mid-90s, Microsoft began to expand its product line into computer networking and the World Wide Web. On August 24, 1995, it launched a major online service, MSN (Microsoft Network), as a direct competitor to AOL. MSN became an umbrella service for Microsoft's online services. The company continued to branch out into new markets in 1996, starting with a joint venture with NBC to create a new 24/7 cable news station, MSNBC. Microsoft entered the personal digital assistant (PDA) market in November with Windows CE 1.0, a new built-from-scratch version of their flagship operating system, specifically designed to run on low-memory, low-performance machines, such as handhelds and other small computers. Later in 1997, Internet Explorer 4.0 was released for both Mac OS and Windows, marking the beginning of the takeover of the browser market from rival Netscape. In October, the Justice Department filed a motion in the Federal District Court in which they stated that Microsoft had violated an agreement signed in 1994, and asked the court to stop the bundling of Internet Explorer with Windows.
The year 1998 was significant in Microsoft's history, with Bill Gates appointing Steve Ballmer as president of Microsoft but remaining as Chair and CEO himself. The company released Windows 98, an update to Windows 95 that incorporated a number of Internet-focused features and support for new types of devices. On April 3, 2000, a judgment was handed down in the case of United States v. Microsoft, calling the company an "abusive monopoly" and forcing the company to split into two separate units. Part of this ruling was later overturned by a federal appeals court, and eventually settled with the U.S. Department of Justice in 2001.
In 2001, Microsoft released Windows XP, the first version that encompassed the features of both its business and home product lines. XP introduced a new graphical user interface, the first such change since Windows 95. Later, with the release of the Xbox Microsoft entered the multi-billion-dollar game console market dominated by Sony and Nintendo. Microsoft encountered more turmoil in March 2004 when antitrust legal action was brought against it by the European Union for abusing its market dominance (see European Union Microsoft antitrust case), eventually resulting in a judgement to produce new versions of its Windows XP platform—called Windows XP Home Edition N and Windows XP Professional N—that did not include its Windows Media Player.

2006–present: Vista and other transitions

In 2006, Bill Gates announced a two-year transition out of his role as Chief Software Architect, which would be taken over by Ray Ozzie, while planning to remain the company's chairman, head of the Board of Directors and an adviser on key projects. As of December 2007, Windows Vista, released in January 2007, is Microsoft's latest operating system. Microsoft Office 2007 was released at the same time; its "Ribbon" user interface is a significant departure from its predecessors. On February 1, 2008, Microsoft made an unsolicited bid to purchase the fully diluted outstanding shares of Yahoo for up to $44.6 billion, though this offer was rejected on February 10. Microsoft is not privately haggling with Yahoo over the software maker's rejected $31-per-share buyout offer for the Internet pioneer, Bill Gates said on February 19, 2008. On February 21, 2008, Microsoft said it would share more information about its products and technology, wanting to make it easier for developers to create software that works with its products.

Product divisions


To be more precise in tracking performance of each unit and delegating responsibility, Microsoft reorganized into seven core business groups—each an independent financial entity—in April 2002. Later, on September 20, 2005, Microsoft announced a rationalization of its original seven business groups into the three core divisions that exist today: the Windows Client, MSN and Server and Tool groups were merged into the Microsoft Platform Products and Services Division; the Information Worker and Microsoft Business Solutions groups were merged into the Microsoft Business Division; and the Mobile and Embedded Devices and Home and Entertainment groups were merged into the Microsoft Entertainment and Devices Division.

Platform Products and Services Division

This division produces Microsoft's flagship product, the Windows operating system. It has been produced in many versions, including Windows 3.1, Windows 95, Windows 98, Windows 2000, Windows Me, Windows Server 2003, Windows XP and Windows Vista. Almost all IBM compatible personal computers come with Windows preinstalled. The current desktop version of Windows is Windows Vista. The online service MSN, the cable television station MSNBC and the Microsoft online magazine Slate are all part of this division. (Slate was acquired by The Washington Post on December 21, 2004.) At the end of 1997, Microsoft acquired Hotmail, the most popular webmail service, which it rebranded as "MSN Hotmail". In 1999, Microsoft introduced MSN Messenger, an instant messaging client, to compete with the popular AOL Instant Messenger. Along with Windows Vista, MSN Messenger became Windows Live Messenger.
Microsoft Visual Studio is the company's set of programming tools and compilers. The software product is GUI-oriented and links easily with the Windows APIs, but must be specially configured if used with non-Microsoft libraries. The current version is Visual Studio 2008. The previous version, Visual Studio 2005, was a major improvement over its predecessor, Visual Studio .NET 2003, named after the .NET initiative, a Microsoft marketing initiative covering a number of technologies. Microsoft's definition of .NET continues to evolve. As of 2004, .NET aims to ease the development of Microsoft Windows-based applications that use the Internet, by deploying a new Microsoft communications system, Indigo (now renamed Windows Communication Foundation). This is intended to address some issues introduced by Microsoft's earlier DLL design, which made it difficult, even impossible in some situations, to install and manage multiple versions of complex software packages on the same system (see DLL hell), and to provide a more consistent development platform for all Windows applications (see Common Language Infrastructure). In addition, the company established a set of certification programs to recognize individuals who have expertise in its software and solutions. Similar to offerings from Cisco, Sun Microsystems, Novell, IBM, and Oracle Corporation, these tests are designed to identify a minimal set of proficiencies in a specific role; this includes developers ("Microsoft Certified Solution Developer"), system/network analysts ("Microsoft Certified Systems Engineer"), trainers ("Microsoft Certified Trainers") and administrators ("Microsoft Certified Systems Administrator" and "Microsoft Certified Database Administrator").
Microsoft offers a suite of server software, entitled Windows Server System. Windows Server 2003, an operating system for network servers, is the core of the Windows Server System line. Another server product, Systems Management Server, is a collection of tools providing remote-control abilities, patch management, software distribution and a hardware/software inventory. Other server products include:
Microsoft SQL Server, a relational database management system;
Microsoft Exchange Server, for certain business-oriented e-mail features;
Small Business Server, for messaging and other small business-oriented features; and
Microsoft BizTalk Server, for business process management and integration.

Business Division

The Microsoft Business Division produces Microsoft Office, which is the company's line of office software. The software product includes Word (a word processor), Access (a personal relational database application), Excel (a spreadsheet program), Outlook (Windows-only groupware, frequently used with Exchange Server), PowerPoint (presentation software), and Publisher (desktop publishing software). A number of other products were added later with the release of Office 2003, including Visio, Project, MapPoint, InfoPath and OneNote.

The division also focuses on developing financial and business management software for companies. These include products formerly produced by the Business Solutions Group, which was created in April 2001 with the acquisition of Great Plains. Subsequently, Navision was acquired to provide a similar entry into the European market, resulting in the planned release of Microsoft Dynamics NAV in 2006. The group also markets Axapta and Solomon, catering to similar markets, which are scheduled to be combined with the Navision and Great Plains lines into a common platform called Microsoft Dynamics.

Entertainment and Devices Division


Microsoft has attempted to expand the Windows brand into many other markets, with products such as Windows CE for PDAs and its "Windows-powered" Smartphone products. Microsoft initially entered the mobile market through Windows CE for handheld devices, which today has developed into Windows Mobile 6. The focus of the operating system is on devices where the OS may not directly be visible to the end user, in particular, appliances and cars. The company produces MSN TV, formerly WebTV, a television-based Internet appliance. Microsoft used to sell a set-top Digital Video Recorder (DVR) called the UltimateTV, which allowed users to record up to 35 hours of television programming from a direct-to-home satellite television provider DirecTV. This was the main competition in the UK for British Sky Broadcasting's (BSkyB) SKY + service, owned by Rupert Murdoch. UltimateTV has since been discontinued, with DirecTV instead opting to market DVRs from TiVo Inc. before later switching to their own DVR brand.
Microsoft sells computer games that run on Windows PCs, including titles such as Age of Empires, Halo and the Microsoft Flight Simulator series. It produces a line of reference works that include encyclopedias and atlases, under the name Encarta. Microsoft Zone hosts free premium and retail games where players can compete against each other and in tournaments. Microsoft entered the multi-billion-dollar game console market dominated by Sony and Nintendo in late 2001, with the release of the Xbox. The company develops and publishes its own video games for this console, with the help of its Microsoft Game Studios subsidiary, in addition to third-party Xbox video game publishers such as Electronic Arts and Activision, who pay a license fee to publish games for the system. The Xbox also has a successor in the Xbox 360, released on November 22, 2005 in North America and other countries. With the Xbox 360, Microsoft hopes to compensate for the losses incurred with the original Xbox. However, Microsoft made some decisions considered controversial in the video gaming community, such as releasing the console with high failure rates, selling two different versions of the system (one without a hard drive), and providing limited backward compatibility with only particular Xbox titles. In addition to the Xbox line of products, Microsoft also markets a number of other computing-related hardware products, including mice, keyboards, joysticks, and gamepads, along with other game controllers, the production of which is outsourced in most cases. On November 15, 2007, Microsoft announced the purchase of Musiwave, Openwave's mobile phone music sales business.

Business culture

Microsoft has often been described as having a developer-centric business culture. A great deal of time and money is spent each year on recruiting young university-trained software developers and on keeping them in the company. For example, while many software companies often place an entry-level software developer in a cubicle desk within a large office space filled with other cubicles, Microsoft assigns a private or semiprivate closed office to every developer or pair of developers. In addition, key decision makers at every level are either developers or former developers. In a sense, the software developers at Microsoft are considered the "stars" of the company in the same way that the sales staff at IBM are considered the "stars" of their company.

Within Microsoft the expression "eating our own dog food" is used to describe the policy of using the latest Microsoft products inside the company in an effort to test them in "real-world" situations. Only prerelease and beta versions of products are considered dog food. This is usually shortened to just "dogfood" and is used as noun, verb, and adjective. The company is also known for its hiring process, dubbed the "Microsoft interview", which is notorious for off-the-wall questions such as "Why is a manhole cover round?" and is often mimicked in other organizations, although these types of questions are rarer now than they were in the past. For fun, Microsoft also hosts the Microsoft Puzzle Hunt, an annual puzzle hunt (a live puzzle game where teams compete to solve a series of puzzles) held at the Redmond campus.

As of 2006, Microsoft employees, not including Bill Gates, have given over $2.5 billion to non-profit organizations worldwide, making Microsoft the worldwide top company in per-employee donations. In January 2007, the Harris Interactive/The Wall Street Journal Reputation Quotient survey concluded that Microsoft had the world's best corporate reputation, citing strong financial performance, vision and leadership, workplace environment rankings, and the charitable deeds of the Bill & Melinda Gates Foundation.

User culture

Technical reference for developers and articles for various Microsoft magazines such as Microsoft Systems Journal (or MSJ) are available through the Microsoft Developer Network, often called MSDN. MSDN also offers subscriptions for companies and individuals, and the more expensive subscriptions usually offer access to pre-release beta versions of Microsoft software. In recent years, Microsoft launched a community site for developers and users, entitled Channel9, which provides many modern features such as a wiki and an Internet forum. Another community site that provides daily videocasts and other services, On10.net, launched on March 3, 2006.
Most free technical support available through Microsoft is provided through online Usenet newsgroups (in the early days it was also provided on CompuServe). There are several of these newsgroups for nearly every product Microsoft provides, and often they are monitored by Microsoft employees. People who are helpful on the newsgroups can be elected by other peers or Microsoft employees for Microsoft Most Valuable Professional (MVP) status, which entitles people to a sort of special social status, in addition to possibilities for awards and other benefits.
By 2005, the city of Seattle in the state of Washington had 2,500 users who owned smartphone and desktop computer versions of the JamBayes Traffic Forecasting Service, developed by researchers at Microsoft and the University of Washington. Kleiner Perkins Caufield & Byers, Sequoia Capital, Skymoon Ventures, Crescendo Ventures, ZenShin Capital Partners, Artis Capital, Gold Hill Capital, and several individuals gave Dash USD$45 million for Dash Express which Wired News says "learns from its users". "If a Dash owner is moving 5 miles per hour in a 45 mph (72 km/h) zone, Dash servers will realize he's in traffic and warn other Dash drivers to choose faster routes".

Corporate structure

The company is run by a Board of Directors consisting of ten people, made up of mostly company outsiders (as is customary for publicly traded companies). Current members of the board of directors are: Steve Ballmer, James Cash, Jr., Dina Dublon, Bill Gates, Raymond Gilmartin, Reed Hastings, David Marquardt, Charles Noski, Helmut Panke, and Jon Shirley. The ten board members are elected every year at the annual shareholders' meeting, and those who do not get a majority of votes must submit a resignation to the board, which will subsequently choose whether or not to accept the resignation. There are five committees within the board which oversee more specific matters. These committees include the Audit Committee, which handles accounting issues with the company including auditing and reporting; the Compensation Committee, which approves compensation for the CEO and other employees of the company; the Finance Committee, which handles financial matters such as proposing mergers and acquisitions; the Governance and Nominating Committee, which handles various corporate matters including nomination of the board; and the Antitrust Compliance Committee, which attempts to prevent company practices from violating antitrust laws.
There are several other aspects to the corporate structure of Microsoft. For worldwide matters there is the Executive Team, made up of sixteen company officers across the globe, which is charged with various duties including making sure employees understand Microsoft's culture of business. The sixteen officers of the Executive Team include the Chairman and Chief Software Architect, the CEO, the General Counsel and Secretary, the CFO, senior and group vice presidents from the business units, the CEO of the Europe, the Middle East and Africa regions; and the heads of Worldwide Sales, Marketing and Services; Human Resources; and Corporate Marketing. In addition to the Executive Team there is also the Corporate Staff Council, which handles all major staff functions of the company, including approving corporate policies. The Corporate Staff Council is made up of employees from the Law and Corporate Affairs, Finance, Human Resources, Corporate Marketing, and Advanced Strategy and Policy groups at Microsoft. Other Executive Officers include the Presidents and Vice Presidents of the various product divisions, leaders of the marketing section, and the CTO, among others.

Stock

When the company debuted its IPO on March 13, 1986, the stock price was US$21. By the close of the first trading day, the stock had closed at $28, equivalent to 9.7 cents when adjusted for the company's first nine splits. The initial close and ensuing rise in subsequent years earned several Microsoft employees millions. The stock price peaked in 1999 at around US$119 (US$60.928 adjusting for splits). While the company has had nine stock splits, the first of which was on September 18, 1987, it did not start offering a dividend until January 16, 2003. The dividend for the 2003 fiscal year was eight cents per share, followed by a dividend of sixteen cents per share the subsequent year. The company switched from yearly to quarterly dividends in 2005, at eight cents a share per quarter with a special one-time payout of three dollars per share for the second quarter of the fiscal year.
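The 9.7-cent figure follows from dividing the $28 first-day close by the cumulative factor of the nine splits; the individual split ratios in this sketch are assumptions chosen to be consistent with that figure:

    # Sketch of the split adjustment behind the "9.7 cents" figure.
    splits = [2, 2, 1.5, 1.5, 2, 2, 2, 2, 2]   # assumed ratios of the nine splits

    cumulative_factor = 1
    for ratio in splits:
        cumulative_factor *= ratio              # 288 shares today per original share

    adjusted_close = 28.00 / cumulative_factor
    print(f"{adjusted_close:.3f}")              # about 0.097, i.e. roughly 9.7 cents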
Around 2003 the stock price began a slow descent. Despite the company's ninth split on February 2, 2003 and subsequent increases in dividend payouts, the price of Microsoft's stock continued to fall for the next several years.

Diversity

In 2005, Microsoft received a 100% rating in the Corporate Equality Index from the Human Rights Campaign, a ranking of companies by how progressive the organization deems their policies concerning LGBT (lesbian, gay, bisexual and transgender) employees. Partly through the work of the Gay and Lesbian Employees at Microsoft (GLEAM) group, Microsoft added gender expression to its anti-discrimination policies in April 2005, and the Human Rights Campaign upgraded Microsoft's Corporate Equality Index from its 86% rating in 2004 to its current 100% rating.
In April 2005, Microsoft received wide criticism for withdrawing support from Washington state's H.B. 1515 bill that would have extended the state's current anti-discrimination laws to people with alternate sexual orientations. Microsoft was accused of bowing to pressure from local evangelical pastor Ken Hutcherson who met with a senior Microsoft executive and threatened a national boycott of Microsoft's products. Microsoft also revealed they were paying evangelical conservative Ralph Reed's company Century Strategies a $20,000 monthly fee. Over 2,000 employees signed a petition asking Microsoft to reinstate support for the bill. Under harsh criticism from both outside and inside the company's walls, Microsoft decided to support the bill again in May 2005.
Microsoft hires many foreign workers as well as domestic ones, and is an outspoken opponent of the cap on H1B visas, which allow companies in the United States to employ certain foreign workers. Bill Gates claims the cap on H1B visas makes it difficult to hire employees for the company, stating, "I'd certainly get rid of the H1B cap."