
Monday, March 10, 2008

List of Intel Pentium 4 microprocessors


The Pentium 4 microprocessor from Intel is a seventh-generation CPU targeted at the consumer market.

Pentium 4

"Willamette" (180 nm)
All models support: MMX, SSE, SSE2
Family 15 model 1


Model Number Frequency L2 Cache Front Side Bus Multiplier Voltage TDP Socket Release Date Part Number(s)
Pentium 4 1.3 1300 MHz 256 KiB 400 MT/s 13× 1.70/1.75 V 48.9/51.6 W Socket 423 January 3, 2001 80528PC013G0K, YD80528PC013G0K
Pentium 4 1.4 1400 MHz 256 KiB 400 MT/s 14× 1.70/1.75 V 51.8/54.7 W Socket 423 November 20, 2000 80528PC017G0K, YD80528PC017G0K
Pentium 4 1.4 1400 MHz 256 KiB 400 MT/s 14× 1.75 V 55.3 W Socket 478 September 2001 RK80531PC017G0K
Pentium 4 1.5 1500 MHz 256 KiB 400 MT/s 15× 1.70/1.75 V 54.7/57.8 W Socket 423 November 20, 2000 80528PC021G0K, YD80528PC021G0K, RN80528PC021G0K
Pentium 4 1.5 1500 MHz 256 KiB 400 MT/s 15× 1.75 V 57.9 W Socket 478 August 2001 RK80531PC021G0K, RK80531PC021256
Pentium 4 1.6 1600 MHz 256 KiB 400 MT/s 16× 1.75 V 61 W Socket 423 July 2, 2001 YD80528PC025G0K, RN80528PC025G0K
Pentium 4 1.6 1600 MHz 256 KiB 400 MT/s 16× 1.75 V 60.8 W Socket 478 August 2001 RK80531PC025G0K, RK80531PC025256
Pentium 4 1.7 1700 MHz 256 KiB 400 MT/s 17× 1.75 V 64 W Socket 423 April 23, 2001 YD80528PC029G0K, RN80528PC029G0K
Pentium 4 1.7 1700 MHz 256 KiB 400 MT/s 17× 1.75 V 63.5 W Socket 478 August 2001 RK80531PC029G0K, RK80531PC029256
Pentium 4 1.8 1800 MHz 256 KiB 400 MT/s 18× 1.75 V 66.7 W Socket 423 July 2, 2001 YD80528PC033G0K, RN80528PC033G0K
Pentium 4 1.8 1800 MHz 256 KiB 400 MT/s 18× 1.75 V 66.1 W Socket 478 August 2001 RK80531PC033G0K, RK80531PC033256
Pentium 4 1.9 1900 MHz 256 KiB 400 MT/s 19× 1.75 V 69.2 W Socket 423 August 2001 RN80528PC037G0K
Pentium 4 1.9 1900 MHz 256 KiB 400 MT/s 19× 1.75 V 72.8 W Socket 478 August 26, 2001 RK80531PC037G0K, RK80531PC037256
Pentium 4 2.0 2000 MHz 256 KiB 400 MT/s 20× 1.75 V 71.8 W Socket 423 August 2001 RN80528PC041G0K
Pentium 4 2.0 2000 MHz 256 KiB 400 MT/s 20× 1.75 V 75.3 W Socket 478 August 26, 2001 RK80531PC041G0K
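The clock speeds in the table are the product of the bus base clock and the multiplier: the 400 MT/s front side bus is a quad-pumped 100 MHz clock, so the multiplier applies to the 100 MHz base. A quick sketch of that arithmetic (Python for illustration; the names are ours, not Intel's):

```python
# Willamette core clocks: FSB base clock (MHz) times the multiplier.
# The "400 MT/s" front side bus is a quad-pumped 100 MHz clock,
# so the multiplier applies to the 100 MHz base, not to 400.

FSB_TRANSFER_RATE = 400   # MT/s, quad-pumped
PUMP_FACTOR = 4           # transfers per clock on the Pentium 4 bus
base_clock = FSB_TRANSFER_RATE // PUMP_FACTOR  # 100 MHz

def core_frequency(multiplier: int) -> int:
    """Core frequency in MHz for a given bus multiplier."""
    return base_clock * multiplier

for mult in (13, 15, 20):
    print(f"{mult}x -> {core_frequency(mult)} MHz")  # 1300, 1500, 2000 MHz
```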


"Northwood" (130 nm)

Family 15 model 2
All models support: MMX, SSE, SSE2
Hyper-Threading: supported by Pentium 4 3.06


Model Number sSpec Number Core Stepping Frequency L2 Cache Front Side Bus Multiplier Voltage TDP Socket Release Date Part Number(s)
Pentium 4 1.6A SL62S B0 1600 MHz 512 KiB 400 MT/s 16× 1.475 V 38 W Socket 478 January 2002 RK80534PC025512
Pentium 4 1.6A SL668 B0 1600 MHz 512 KiB 400 MT/s 16× 1.5 V 46.8 W Socket 478 January 2002 BX80532PC1600D
Pentium 4 1.8A SL68Q B0 1800 MHz 512 KiB 400 MT/s 18× 1.475/1.525 V 49.6 W Socket 478 January 2002 RK80532PC033512
Pentium 4 2.0A SL66R B0 2000 MHz 512 KiB 400 MT/s 20× 1.5 V 52.4 W Socket 478 January 7, 2002 RK80532PC041512, BX80532PC2000D
Pentium 4 2.2 SL66S B0 2200 MHz 512 KiB 400 MT/s 22× 1.5 V 55.1 W Socket 478 January 7, 2002 RK80532PC049512
Pentium 4 2.26 2266 MHz 512 KiB 533 MT/s 17× 1.475/1.525 V 58 W Socket 478 May 6, 2002 RK80532PE051512
Pentium 4 2.4 SL6GS 2400 MHz 512 KiB 400 MT/s 24× 1.475/1.525 V 59.8 W Socket 478 April 2, 2002 RK80532PC056512
Pentium 4 2.4B SL6PC, SL6RZ C1, D1 2400 MHz 512 KiB 533 MT/s 18× 1.475/1.525 V 59.8 W Socket 478 May 6, 2002 RK80532PE056512
Pentium 4 2.5 SL6PN 2500 MHz 512 KiB 400 MT/s 25× 1.475/1.525 V 61 W Socket 478 August 25, 2002 RK80532PC060512
Pentium 4 2.53 SL6D8 2533 MHz 512 KiB 533 MT/s 19× 1.475/1.525 V 61.5 W Socket 478 May 6, 2002 RK80532PE061512
Pentium 4 2.6 SL6SB 2600 MHz 512 KiB 400 MT/s 26× 1.475/1.525 V 62.6 W Socket 478 August 25, 2002 RK80532PC064512
Pentium 4 2.66 SL6PE D1 2667 MHz 512 KiB 533 MT/s 20× 1.475/1.525 V 66.1 W Socket 478 August 25, 2002 RK80532PE067512
Pentium 4 2.8 SL6PF 2800 MHz 512 KiB 533 MT/s 21× 1.525 V 68.4 W Socket 478 August 25, 2002 RK80532PE072512
Pentium 4 2.8 SL7EY 2800 MHz 512 KiB 400 MT/s 28× 1.475/1.525 V 68.4 W Socket 478 November 2002 RK80532PC072512
Pentium 4 3.06 SL6SM C1 3066 MHz 512 KiB 533 MT/s 23× 1.55 V 81.8 W Socket 478 November 2002 RK80532PE083512

Thursday, March 6, 2008

XScale


The XScale, a microprocessor core, is Marvell's (formerly Intel's) implementation of the fifth generation of the ARM architecture, and consists of several distinct families: IXP, IXC, IOP, PXA and CE (see more below). The PXA family was sold to Marvell Technology Group in June 2006.
The XScale architecture is based on the ARMv5TE ISA without the floating point instructions. XScale uses a seven-stage integer and an eight-stage memory superpipelined RISC architecture. It is the successor to the Intel StrongARM line of microprocessors and microcontrollers, which Intel acquired from DEC's Digital Semiconductor division as the side-effect of a lawsuit between the two companies. Intel used the StrongARM to replace their ailing line of outdated RISC processors, the i860 and i960.
All the generations of XScale are 32-bit ARMv5TE processors manufactured with a 0.18-µm process and have a 32-KiB data cache and a 32-KiB instruction cache (this would be called a 64-KiB Level 1 cache on other processors). They also all have a 2-KiB mini-data cache.

Processor families

The XScale core is used in a number of microcontroller families manufactured by Intel and Marvell, notably:
Application Processors (with the prefix PXA). There are four generations of XScale Application Processors, described below: PXA210/PXA25x, PXA26x, PXA27x, and PXA3xx.
I/O Processors (with the prefix IOP)
Network Processors (with the prefix IXP)
Control Plane Processors (with the prefix IXC).
Consumer Electronics Processors (with the prefix CE).
There are also standalone processors: the 80200 and 80219 (targeted primarily at PCI applications).

PXA210/PXA25x

The PXA210 was Intel's entry-level XScale, targeted at mobile phone applications. It was released alongside the PXA250 in February 2002, clocked at 133 MHz and 200 MHz.
The PXA25x family consists of the PXA250 and PXA255. The PXA250 was Intel's first generation of XScale processors. There was a choice of three clock speeds: 200 MHz, 300 MHz and 400 MHz. It came out in February 2002. In March 2003, the revision C0 of the PXA250 was renamed to PXA255. The main differences were a doubled bus speed (100 MHz to 200 MHz) for faster data transfer, lower core voltage (only 1.3 V at 400 MHz) for lower power consumption and writeback functionality for the data cache, the lack of which had severely impaired performance on the PXA250.

PXA26x

The PXA26x family consists of the PXA260 and PXA261-PXA263. The PXA260 is a stand-alone processor clocked at the same frequency as the PXA25x, but features a TPBGA package which is about 53% smaller than the PXA25x's PBGA package. The PXA261-PXA263 are the same as the PXA260 but have Intel StrataFlash memory stacked on top of the processor in the same package; 16 MiB of 16-bit memory in the PXA261, 32 MiB of 16-bit memory in the PXA262 and 32 MiB of 32-bit memory in the PXA263. The PXA26x family was released in March 2003.

PXA27x

The PXA27x family (code-named Bulverde) consists of the PXA270 and PXA271-PXA272 processors. This revision is a huge update to the XScale family of processors. The PXA270 is clocked in four different speeds: 312 MHz, 416 MHz, 520 MHz and 624 MHz and is a stand-alone processor with no packaged memory. The PXA271 can be clocked to 312 MHz or 416 MHz and has 32 MiB of 16-bit stacked StrataFlash memory and 32 MiB of 16-bit SDRAM in the same package. The PXA272 can be clocked to 312 MHz, 416 MHz or 520 MHz and has 64 MiB of 32-bit stacked StrataFlash memory.

PXA3xx Monahans

In August 2005, Intel announced the successor to Bulverde, codenamed Monahans, and demonstrated it playing back high-definition video on a PDA screen. The new processor was shown clocked at 1.25 GHz, but Intel said it offered only a 25% increase in performance (800 MIPS for the 624-MHz PXA270 vs. 1000 MIPS for the 1.25-GHz Monahans). An announced successor to the 2700G graphics processor, code-named Stanwood, has since been canceled, and some of its features were integrated into Monahans. For extra graphics capabilities, Intel recommends third-party chips such as the NVIDIA GoForce family.
In November 2006, Marvell Semiconductor officially introduced the Monahans family as the Marvell PXA320, PXA300, and PXA310. The PXA320 is currently shipping in high volume and is scalable up to 806 MHz. The PXA300 and PXA310 deliver performance "scalable to 624 MHz" and are software-compatible with the PXA320.

IXC1100

The IXC1100 processor features clock speeds at 266, 400, and 533 MHz, a 133-MHz bus, 32 KiB of instruction cache, 32 KiB of data cache, and 2 KiB of mini-data cache. It is also designed for low power consumption, using 2.4 W at 533 MHz. The chip comes in the 35-mm PBGA package.

IXP network processor


The XScale core is used in the second generation of Intel's IXP network processor line; the first generation used StrongARM cores. The IXP family ranges from devices aimed at small and medium office network applications (IXP4XX) to high-performance network processors such as the IXP2850, capable of sustaining OC-192 line rates. In IXP4XX devices the XScale core serves as both a control-plane and data-plane processor, providing system control as well as data processing. In IXP2XXX devices the XScale typically provides control-plane functionality only, with data processing handled by the microengines; examples of such control-plane tasks include routing table updates, microengine control, and memory management.

Wednesday, March 5, 2008

Multi-core processors


A multi-core CPU (or chip-level multiprocessor, CMP) combines two or more independent cores into a single package composed of a single integrated circuit (IC), called a die, or more dies packaged together. A dual-core processor contains two cores and a quad-core processor contains four cores. A multi-core microprocessor implements multiprocessing in a single physical package. A processor with all cores on a single die is called a monolithic processor. Cores in a multi-core device may share a single coherent cache at the highest on-device cache level (e.g. L2 for the Intel Core 2) or may have separate caches (e.g. current AMD dual-core processors). The processors also share the same interconnect to the rest of the system. Each "core" independently implements optimizations such as superscalar execution, pipelining, and multithreading.

A system with N cores is effective when it is presented with N or more threads concurrently. The most commercially significant (or at least the most obvious) multi-core processors are those used in computers (primarily from Intel and AMD) and game consoles (e.g., the Cell processor in the PS3). In this context, "multi" typically means a relatively small number of cores. However, the technology is widely used in other areas, especially embedded processors such as network processors and digital signal processors, and in GPUs.
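The point that a system with N cores needs at least N runnable threads can be sketched in code: split a workload into one chunk per core and run the chunks concurrently. This is an illustrative sketch (the function names are ours); for CPU-bound Python code a process pool would be used instead of threads because of the GIL, but the partitioning logic is the same.

```python
# Divide a workload into one chunk per available core and process the
# chunks concurrently, so every core has a thread to run.
import os
from concurrent.futures import ThreadPoolExecutor

def chunk_sum(chunk):
    return sum(chunk)

def parallel_sum(data, n_workers=None):
    n = n_workers or os.cpu_count() or 1
    size = -(-len(data) // n)          # ceiling division: chunk length
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n) as pool:
        return sum(pool.map(chunk_sum, chunks))

print(parallel_sum(range(1, 101)))     # 5050
```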

Terminology

There is some discrepancy in the semantics by which the terms "multi-core" and "dual-core" are defined. Most commonly they are used to refer to some sort of central processing unit (CPU), but are sometimes also applied to DSPs and SoCs. Additionally, some use these terms only to refer to multi-core microprocessors that are manufactured on the same integrated circuit die. These people generally refer to separate microprocessor dies in the same package by another name, such as "multi-chip module", "double core", or even "twin core". This article uses both the terms "multi-core" and "dual-core" to reference microelectronic CPUs manufactured on the same integrated circuit, unless otherwise noted.
A dual-core processor is a single chip that contains two distinct processors or "execution cores" in the same integrated circuit.
"Multi-core" refers to two or more CPUs working together on a single chip (such as the AMD Athlon X2 or Intel Core Duo), in contrast to "dual CPU", which refers to two separate CPUs working together.

Development

While manufacturing technology continues to improve, reducing the size of single gates, physical limits of semiconductor-based microelectronics have become a major design concern. Some effects of these physical limitations can cause significant heat dissipation and data synchronization problems. The demand for more capable microprocessors causes CPU designers to use various methods of increasing performance. Some instruction-level parallelism (ILP) methods like superscalar pipelining are suitable for many applications, but are inefficient for others that tend to contain difficult-to-predict code. Many applications are better suited to thread level parallelism (TLP) methods, and multiple independent CPUs is one common method used to increase a system's overall TLP. A combination of increased available space due to refined manufacturing processes and the demand for increased TLP is the logic behind the creation of multi-core CPUs.

Commercial incentives

Several business motives drive the development of dual-core architectures. Since symmetric multiprocessing (SMP) designs have been long implemented using discrete CPUs, the issues regarding implementing the architecture and supporting it in software are well known. Additionally, utilizing a proven processing core design (e.g. Freescale's e600 core) without architectural changes reduces design risk significantly. Finally, the terminology "dual-core" (and other multiples) lends itself to marketing efforts.
Additionally, for general-purpose processors, much of the motivation for multi-core processors comes from greatly diminished gains in processor performance from increasing the operating frequency (frequency scaling). The memory wall and the ILP wall explain why system performance has not gained as much from continued frequency increases as it once did. The memory wall refers to the growing gap between processor and memory speeds, which pushes cache sizes larger to mask memory latency; this helps only to the extent that memory bandwidth is not the bottleneck. The ILP wall refers to the increasing difficulty of finding enough parallelism in a single process's instruction stream to keep a high-performance core busy. Finally, the often-cited power wall refers to the trend of power consumption doubling with each doubling of operating frequency (and it can be held to merely doubling only if the processor is also made smaller). The power wall poses manufacturing, system-design, and deployment problems that are hard to justify given the diminished performance gains caused by the memory wall and the ILP wall. Together, these three walls motivate multi-core processors.
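The power wall can be made concrete with the standard first-order CMOS dynamic-power approximation P = C·V²·f (a textbook model, not a figure from this article; the capacitance and voltage values below are illustrative):

```python
# Dynamic switching power: P = C * V^2 * f.
# Doubling frequency at constant voltage doubles power; since higher
# frequency historically also required higher voltage, power grew faster
# than linearly in practice.

def dynamic_power(c: float, v: float, f: float) -> float:
    """Switching power in watts: capacitance (F) * voltage^2 * frequency (Hz)."""
    return c * v * v * f

p1 = dynamic_power(1e-9, 1.5, 2.0e9)   # 2 GHz at 1.5 V
p2 = dynamic_power(1e-9, 1.5, 4.0e9)   # 4 GHz, same voltage: 2x the power
p3 = dynamic_power(1e-9, 1.8, 4.0e9)   # 4 GHz needing 1.8 V: 2.88x the power
print(p2 / p1, round(p3 / p1, 2))
```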
In order to continue delivering regular performance improvements for general-purpose processors, manufacturers such as Intel and AMD have turned to multi-core designs, sacrificing lower manufacturing costs for higher performance in some applications and systems.
Multi-core architectures are being developed, but so are the alternatives. An especially strong contender for established markets is to integrate more peripheral functions into the chip.

Advantages

The proximity of multiple CPU cores on the same die allows the cache coherency circuitry to operate at a much higher clock rate than is possible if the signals have to travel off-chip. Combining equivalent CPUs on a single die significantly improves the performance of cache snoop (alternative: Bus snooping) operations. Put simply, this means that signals between different CPUs travel shorter distances, and therefore those signals degrade less. These higher quality signals allow more data to be sent in a given time period since individual signals can be shorter and do not need to be repeated as often.
Assuming the die fits the package, multi-core CPU designs require much less printed circuit board (PCB) space than multi-chip SMP designs. Also, a dual-core processor uses slightly less power than two coupled single-core processors, principally because driving signals off-chip requires extra power and because the smaller silicon process geometry allows the cores to operate at lower voltages; the short on-die interconnect also reduces latency. Furthermore, the cores share some circuitry, such as the L2 cache and the interface to the front side bus (FSB). In terms of competing uses for the available silicon die area, a multi-core design can reuse proven CPU core library designs and so produces a product with a lower risk of design error than devising a new, wider core design. Also, adding more cache suffers from diminishing returns.

Disadvantages

In addition to operating system (OS) support, adjustments to existing software are required to take full advantage of the computing resources provided by multi-core processors. The ability of multi-core processors to increase application performance also depends on the use of multiple threads within applications. The situation is improving: for example, the American PC game developer Valve Corporation has stated that it will use multi-core optimizations for the version of its Source engine shipped with Half-Life 2: Episode Two, the next installment of its Half-Life franchise, and Crytek is developing similar technologies for CryENGINE2, which powers its game Crysis. Emergent Game Technologies' Gamebryo engine includes Floodgate technology, which simplifies multi-core development across game platforms. See Dynamic Acceleration Technology in the Santa Rosa platform for an example of a technique that improves single-thread performance on dual-core processors.
Integration of a multi-core chip drives production yields down, and multi-core chips are more difficult to manage thermally than lower-density single-chip designs. Intel has partially countered the yield problem by creating its quad-core designs from two dual-core dies packaged together, so that any two working dual-core dies can be used, as opposed to fabricating four cores on a single die and requiring all four to work. From an architectural point of view, single-CPU designs may ultimately make better use of the silicon surface area than multiprocessing cores, so a development commitment to this architecture carries a risk of obsolescence. Finally, raw processing power is not the only constraint on system performance: two processing cores sharing the same system bus and memory bandwidth limit the real-world performance advantage. If a single core is close to being memory-bandwidth limited, going to dual-core might give only a 30% to 70% improvement; if memory bandwidth is not a problem, a 90% improvement can be expected. It would even be possible for an application that used two CPUs to run faster on one dual-core processor, if communication between the CPUs was the limiting factor; that would count as more than a 100% improvement.
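The yield advantage of pairing two dual-core dies can be sketched with a simple independent-per-core yield model (an illustrative model with made-up numbers, not Intel's actual yield data):

```python
# With per-core yield p, a monolithic quad-core die needs all four cores
# good (p^4), while the two-die approach only needs dual-core dies with
# both cores good (p^2 each), and any two good dies can be paired.

def monolithic_quad_yield(p: float) -> float:
    """Fraction of monolithic quad-core dies with all four cores working."""
    return p ** 4

def dual_die_yield(p: float) -> float:
    """Fraction of dual-core dies with both cores working; two such dies
    are paired into one quad-core package."""
    return p ** 2

p = 0.9
print(round(monolithic_quad_yield(p), 4))  # 0.6561 of quad dies usable
print(round(dual_die_yield(p), 4))         # 0.81 of dual dies usable
```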

Hardware trend

The general trend in processor development has been from multi-core to many-core: from dual-, quad-, eight-core chips to ones with tens or even hundreds of cores; see manycore processing unit. In addition, multi-core chips mixed with simultaneous multithreading, memory-on-chip, and special-purpose "heterogeneous" cores promise further performance and efficiency gains, especially in processing multimedia, recognition and networking applications. There is also a trend of improving energy efficiency by focusing on performance-per-watt with advanced fine-grain or ultra fine-grain power management and dynamic voltage and frequency scaling (DVFS).

Software impact

Software benefits from multicore architectures where code can be executed in parallel. Under most common operating systems this requires code to execute in separate threads or processes. Each application running on a system runs in its own process so multiple applications will benefit from multicore architectures. Each application may also have multiple threads but, in most cases, it must be specifically written to utilize multiple threads. Operating system software also tends to run many threads as a part of its normal operation. Running virtual machines will benefit from adoption of multiple core architectures since each virtual machine runs independently of others and can be executed in parallel.
Most application software is not written to use multiple concurrent threads intensively because of the challenge of doing so. A frequent pattern in multithreaded application design is where a single thread does the intensive work while other threads do much less. For example, a virus scan application may create a new thread for the scan process, while the GUI thread waits for commands from the user (e.g. cancel the scan). In such cases, multicore architecture is of little benefit for the application itself due to the single thread doing all heavy lifting and the inability to balance the work evenly across multiple cores. Programming truly multithreaded code often requires complex co-ordination of threads and can easily introduce subtle and difficult-to-find bugs due to the interleaving of processing on data shared between threads (thread-safety). Consequently, such code is much more difficult to debug than single-threaded code when it breaks. There has been a perceived lack of motivation for writing consumer-level threaded applications because of the relative rarity of consumer-level multiprocessor hardware. Although threaded applications incur little additional performance penalty on single-processor machines, the extra overhead of development has been difficult to justify due to the preponderance of single-processor machines.
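The thread-safety hazard described above can be shown in miniature: two threads incrementing a shared counter. Without a lock, the read-modify-write sequence can interleave and lose updates; the lock serializes it. This is a generic sketch, not code from any engine mentioned above:

```python
# Two threads incrementing a shared counter. The lock makes the
# read-modify-write atomic; removing it can silently lose updates.
import threading

counter = 0
lock = threading.Lock()

def safe_increment(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:              # serialize the read-modify-write
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(100_000,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 200000 with the lock; without it, often less
```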

Tuesday, March 4, 2008

AMD Athlon


Athlon is the brand name applied to a series of different x86 processors designed and manufactured by AMD. The original Athlon, or Athlon Classic, was the first seventh-generation x86 processor and, in a first, retained the initial performance lead it had over Intel's competing processors for a significant period of time. AMD has continued the Athlon name with the Athlon 64, an eighth-generation processor featuring AMD64 (later renamed x86-64) technology.
The Athlon made its debut on June 23, 1999. Athlon was the ancient Greek word for "Champion/trophy of the games".

Background

AMD ex-CEO and founder Jerry Sanders developed strategic partnerships during the late 1990s to improve AMD's presence in the PC market based on the success of the K6 architecture. One major partnership announced in 1998 paired AMD with semiconductor giant Motorola. In the announcement, Sanders referred to the partnership as creating a "virtual gorilla" that would enable AMD to compete with Intel on fabrication capacity while limiting AMD's financial outlay for new facilities. This partnership also helped to co-develop copper-based semiconductor technology, which would become a cornerstone of the K7 production process.
In August 1999, AMD released the Athlon (K7) processor. Notably, the design team was led by Dirk Meyer, one of the lead engineers on the DEC Alpha project. Jerry Sanders had approached many of the engineering staff to work for AMD as DEC wound the project down, bringing in a near-complete team of engineering experts; the balance of the Athlon design team comprised AMD K5 and K6 veterans. By working with Motorola, AMD was able to refine copper interconnect manufacturing to the production stage about one year before Intel. The revised process permitted 180-nanometer processor production, and the accompanying die-shrink resulted in lower power consumption, permitting AMD to increase Athlon clock speeds to the 1 GHz range. AMD found processor yields on the new process exceeded expectations, and delivered high-speed chips in volume in March 2000.

General architecture

Internally, the Athlon is a fully seventh generation x86 processor, the first of its kind. Like the AMD K5 and K6, the Athlon is a RISC microprocessor which decodes x86 instructions into its own internal instructions at runtime. The CPU is an out-of-order design, again like previous post-5x86 AMD CPUs. The Athlon utilizes the DEC Alpha EV6 bus architecture with double data rate (DDR) technology. This means that at 100 MHz the Athlon front side bus actually transfers at a rate similar to a 200 MHz single data rate bus (referred to as 200 MT/s), which was superior to the method used on Intel's Pentium III (with SDR bus speeds of 100 and 133 MHz).
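The bus comparison above is simple arithmetic: the EV6 bus transfers data on both clock edges (double data rate), while the Pentium III's SDR bus transfers once per clock. A sketch (names are ours, for illustration):

```python
# Effective bus transfer rate: base clock times transfers per clock.
# The Athlon's EV6 bus is DDR (2 transfers/clock); the Pentium III's
# GTL+ bus is SDR (1 transfer/clock).

def transfer_rate(base_clock_mhz: int, transfers_per_clock: int) -> int:
    """Effective bus rate in MT/s."""
    return base_clock_mhz * transfers_per_clock

athlon_ddr_100 = transfer_rate(100, 2)   # 200 MT/s from a 100 MHz clock
p3_sdr_133 = transfer_rate(133, 1)       # 133 MT/s from a 133 MHz clock
print(athlon_ddr_100, p3_sdr_133)
```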
AMD designed the CPU with more robust x86 instruction decoding capabilities than that of K6, to enhance its ability to keep more data in-flight at once. Athlon's CISC to RISC decoder triplet could potentially decode 6 x86 operations per clock, although this was somewhat unlikely in real-world use. The critical branch predictor unit, essential to keeping the pipeline busy, was enhanced compared to what was onboard the K6. Deeper pipelining with more stages allowed higher clock speeds to be attained. Whereas the AMD K6-III+ topped out at 570 MHz due to its short pipeline, even when built on the 180 nm process, the Athlon was capable of going much higher.
AMD ended its long-standing handicap in x87 floating-point performance by designing a super-pipelined, out-of-order, triple-issue floating point unit. Each of its three units was tailored to calculate a particular class of instructions, with some redundancy; having separate units made it possible to operate on more than one floating point instruction at once. This FPU was a huge step forward for AMD: while the K6 FPU had looked anemic compared to the Intel P6 FPU, with the Athlon this was no longer the case. The 3DNow! floating point SIMD technology, again present, received some revisions and a name change to "Enhanced 3DNow!". Additions included DSP instructions and an implementation of the extended-MMX subset of Intel SSE.
CPU caching on the Athlon consisted of the typical two levels. The Athlon was the first x86 processor with a 128 KiB split level 1 cache: a 2-way associative (later 16-way) cache separated into 2×64 KiB for data and instructions (Harvard architecture). This cache was double the size of the K6's already large 2×32 KiB cache, and quadruple the size of the Pentium II and III's 2×16 KiB L1 cache. The initial Athlon (Slot A, later renamed Athlon Classic) used 512 KiB of level 2 cache separate from the CPU, on the processor cartridge board, running at 50% to 33% of the core speed. This was done because the 250 nm manufacturing process was too large to allow for on-die cache while maintaining a cost-effective die size. Later Athlon CPUs, afforded greater transistor budgets by the smaller 180 nm and 130 nm process nodes, moved to on-die L2 cache running at the full CPU clock speed.

Athlon Classic

Athlon Classic launched on June 23, 1999. It showed superior performance compared to the reigning champion, Pentium III, in every benchmark.
Athlon Classic is a cartridge-based processor. The design, called Slot A, was quite similar to Intel's Slot 1 cartridge used for the Pentium II and Pentium III; it actually used the same mechanical slot as the competing Intel CPUs (allowing motherboard manufacturers to save on costs), but reversed "upside-down" to prevent users from inserting the wrong CPU, as the two were completely signal-incompatible. The cartridge allowed the use of higher-speed cache memory than could be placed on the motherboard. Like the Pentium II and the "Katmai"-core Pentium III, the Athlon Classic used a 512 KiB secondary cache. This cache, again like its competitors', ran at a fraction of the core clock rate and had its own 64-bit bus, called a "backside bus", that allowed concurrent front side bus and cache accesses. Initially the L2 cache ran at half the CPU clock speed, on Athlon CPUs up to 700 MHz. Faster Slot-A processors were forced to compromise on cache clock speed, running it at 2/5 (up to 850 MHz) or 1/3 (up to 1 GHz) of the core clock. The SRAM available at the time was incapable of matching the Athlon's clock scalability, due both to cache chip technology limitations and to the electrical and latency complications of running an external cache at such high speed.
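The cache-clock divisors above can be turned into a small worked example (the thresholds are taken from the paragraph; the function itself is ours, for illustration):

```python
# Slot-A Athlon L2 cache clock as a fraction of the core clock:
# 1/2 up to 700 MHz, 2/5 up to 850 MHz, 1/3 up to 1 GHz.
from fractions import Fraction

def l2_clock(core_mhz: int) -> float:
    if core_mhz <= 700:
        divisor = Fraction(1, 2)
    elif core_mhz <= 850:
        divisor = Fraction(2, 5)
    else:
        divisor = Fraction(1, 3)
    return float(core_mhz * divisor)

for core in (700, 850, 1000):
    print(core, "MHz core ->", l2_clock(core), "MHz L2")
```

Note how the divisor drops as the core clock rises: the external SRAM's absolute clock barely increases even as the core gets much faster.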

Monday, March 3, 2008

Windows XP



Windows XP is a line of operating systems developed by Microsoft for use on general-purpose computer systems, including home and business desktops, notebook computers, and media centers. The name "XP" stands for eXPerience. It was codenamed "Whistler", after Whistler, British Columbia, as many Microsoft employees skied at the Whistler-Blackcomb ski resort during its development. Windows XP is the successor to both Windows 2000 Professional and Windows Me, and is the first consumer-oriented operating system produced by Microsoft to be built on the Windows NT kernel (version 5.1) and architecture. Windows XP was first released on October 25, 2001, and over 400 million copies were in use in January 2006, according to an estimate that month by an IDC analyst. It is succeeded by Windows Vista, which was released to volume license customers on November 8, 2006, and worldwide to the general public on January 30, 2007.

The most common editions of the operating system are Windows XP Home Edition, which is targeted at home users, and Windows XP Professional, which has additional features such as support for Windows Server domains and two physical processors, and is targeted at power users and business clients. Windows XP Media Center Edition has additional multimedia features enhancing the ability to record and watch TV shows, view DVD movies, and listen to music. Windows XP Tablet PC Edition is designed to run the ink-aware Tablet PC platform. Two separate 64-bit versions of Windows XP were also released: Windows XP 64-bit Edition for IA-64 (Itanium) processors and Windows XP Professional x64 Edition for x86-64.

Windows XP is known for its improved stability and efficiency over the 9x versions of Microsoft Windows. It presents a significantly redesigned graphical user interface, a change Microsoft promoted as more user-friendly than previous versions of Windows. New software management capabilities were introduced to avoid the "DLL hell" that plagued older consumer-oriented 9x versions of Windows. It is also the first version of Windows to use product activation to combat software piracy, a restriction that did not sit well with some users and privacy advocates. Windows XP has also been criticized by some users for security vulnerabilities, tight integration of applications such as Internet Explorer 6 and Windows Media Player, and for aspects of its default user interface. Later releases with Service Pack 2 and Internet Explorer 7 addressed some of these concerns.

Editions

The two major editions are Windows XP Home Edition, designed for home users, and Windows XP Professional, designed for business and power-users. Other builds of Windows XP include those built for specialized hardware and limited-feature versions sold in Europe and select developing economies.

Windows XP for specialized hardware


Microsoft has also customized Windows XP to suit different markets. Six different versions of Windows XP for specific hardware were designed, two of them specifically for 64-bit processors.

System requirements

System requirements for the Windows XP Home and Professional editions are as follows:

Minimum / Recommended
Processor: 233 MHz / 300 MHz or higher
Memory: 64 MB RAM (may limit performance and some features) / 128 MB RAM or higher
Video adapter and monitor: Super VGA (800 × 600) / Super VGA (800 × 600) or higher resolution
Free hard drive space: 1.5 GB / 1.5 GB or higher
Drives: CD-ROM / CD-ROM or better
Devices: keyboard and mouse / keyboard and mouse
Others: sound card, speakers, and headphones / sound card, speakers, and headphones
In addition to the Windows XP system requirements, Service Pack 2 requires an additional 1.8 GB of free hard disk space during installation.

Service packs

Microsoft occasionally releases service packs for its Windows operating systems to fix problems and add features. Each service pack is a superset of all previous service packs and patches, so only the latest service pack needs to be installed; older patches need not be removed before it is applied. Each service pack also includes new fixes and revisions.

Service Pack 1

Service Pack 1 (SP1) for Windows XP was released on September 9, 2002. It contains post-RTM security fixes and hot-fixes, compatibility updates, optional .NET Framework support, enabling technologies for new devices such as Tablet PCs, and a new Windows Messenger 4.7 version. The most notable new features were USB 2.0 support and a Set Program Access and Defaults utility aimed at hiding various middleware products. Users can control the default application for activities such as web browsing and instant messaging, as well as hide access to some of Microsoft's bundled programs. This utility was first brought to the older Windows 2000 operating system with its Service Pack 3. The Microsoft Java Virtual Machine, which was not in the RTM version, appeared in this service pack.
On February 3, 2003, Microsoft released Service Pack 1 (SP1) again as Service Pack 1a (SP1a). This release removed Microsoft's Java virtual machine as a result of a lawsuit with Sun Microsystems.

Service Pack 2


Windows Security Center was added in Service Pack 2.
Service Pack 2 (SP2), codenamed "Springboard", was released on August 6, 2004 after several delays, with a special emphasis on security. Unlike previous service packs, SP2 adds new functionality to Windows XP, including an enhanced firewall, improved Wi-Fi support with WPA encryption compatibility and a setup wizard, a pop-up ad blocker for Internet Explorer 6, and Bluetooth support. Security enhancements include a major revision to the included firewall, renamed Windows Firewall and enabled by default; advanced memory protection that uses the NX bit incorporated into newer processors to stop some forms of buffer overflow attacks; and removal of raw socket support (which supposedly limits the damage done by zombie machines). Additionally, security-related improvements were made to e-mail and web browsing. SP2 also includes the Windows Security Center, which provides a general overview of security on the system, including the state of anti-virus software, Windows Update, and the new Windows Firewall; third-party anti-virus and firewall applications can interface with the Security Center.
On August 10, 2007, Microsoft announced a minor update to Service Pack 2, called Service Pack 2c (SP2c). The update addresses the dwindling number of available product keys for Windows XP. It was made available only to system builders through their distributors, for the Windows XP Professional and Windows XP Professional N operating systems. SP2c was released in September 2007.

Service Pack 3

Windows XP Service Pack 3 (SP3) is currently in development. As of January 2008, Microsoft's web site indicates a "preliminary" release date in the first half of 2008. A feature-set overview posted by Microsoft details new features available separately as standalone updates to Windows XP, as well as features backported from Windows Vista, such as black hole router detection, Network Access Protection, and Windows Imaging Component.
Microsoft has begun a beta test of SP3. According to a file released with the official beta, and relayed onto the internet, there are a total of 1,073 fixes in SP3.
This update allows Windows to be installed without a product key and run until the end of the 30-day activation period.
On December 4, 2007, Microsoft released build 3264, a release candidate of SP3, to TechNet and MSDN subscribers. On December 18, 2007, this version was made publicly available via the Microsoft Download Center. The latest release of SP3 is Release Candidate 2, which was released to the private beta-testing group through Microsoft's Connect website on February 6, 2008, with a build number of 3300. On February 19, 2008, build 3311 of SP3 Release Candidate 2 was released for public beta testing. In order to download and install SP3 Release Candidate 2 via Windows Update or Microsoft Update, a script must be installed and any earlier version of SP3 must first be removed. SP3 Release Candidate 2 can also be downloaded from the Microsoft Download Center.

Saturday, March 1, 2008

Computer virus

A computer virus is a computer program that can copy itself and infect a computer without the permission or knowledge of the user. However, the term "virus" is commonly used, albeit erroneously, to refer to many different types of malware. The original virus may modify the copies, or the copies may modify themselves, as occurs in a metamorphic virus. A virus can only spread from one computer to another when its host is taken to the uninfected computer, for instance by a user sending it over a network or the Internet, or by carrying it on a removable medium such as a floppy disk, CD, or USB drive. In addition, viruses can spread to other computers by infecting files on a network file system or a file system that is accessed by another computer. Viruses are sometimes confused with computer worms and Trojan horses: a worm can spread itself to other computers without needing to be transferred as part of a host, while a Trojan horse is a file that appears harmless until executed.
Most personal computers are now connected to the Internet and to local area networks, facilitating the spread of malicious code. Today's viruses may also take advantage of network services such as the World Wide Web, e-mail, and file sharing systems to spread, blurring the line between viruses and worms. Furthermore, some sources use an alternative terminology in which a virus is any form of self-replicating malware.
Some viruses are programmed to damage the computer by damaging programs, deleting files, or reformatting the hard disk. Others are not designed to do any damage, but simply replicate themselves and perhaps make their presence known by presenting text, video, or audio messages. Even these benign viruses can create problems for the computer user. They typically take up computer memory used by legitimate programs. As a result, they often cause erratic behavior and can result in system crashes. In addition, many viruses are bug-ridden, and these bugs may lead to system crashes and data loss.

History

The Creeper virus was first detected on ARPANET, the forerunner of the Internet, in the early 1970s. It propagated via the TENEX operating system and could use any connected modem to dial out to remote computers and infect them. It would display the message "I'M THE CREEPER : CATCH ME IF YOU CAN." The Reaper program, which appeared shortly afterwards and sought out and deleted copies of the Creeper, is rumored to have been written by the Creeper's creator in a fit of regret.
A program called "Elk Cloner" is commonly credited with being the first computer virus to appear "in the wild" (that is, outside the single computer or lab where it was created), but that claim is false; see the Timeline of notable computer viruses and worms for earlier viruses. It was, however, the first virus to infect computers "in the home". Written in 1982 by Richard Skrenta, it attached itself to the Apple DOS 3.3 operating system and spread by floppy disk. The virus was originally a joke, created by a high school student and put onto a game: the game would play normally, but the virus was released on the 50th time the game was started. The first PC virus in the wild was a boot sector virus called (c)Brain, created in 1986 by the Farooq Alvi Brothers, operating out of Lahore, Pakistan. The brothers reportedly created the virus to deter pirated copies of software they had written. However, analysts have claimed that the Ashar virus, a variant of Brain, possibly predated it, based on code within the virus.
Before computer networks became widespread, most viruses spread on removable media, particularly floppy disks. In the early days of the personal computer, many users regularly exchanged information and programs on floppies. Some viruses spread by infecting programs stored on these disks, while others installed themselves into the disk boot sector, ensuring that they would be run when the user booted the computer from the disk.
Traditional computer viruses emerged in the 1980s, driven by the spread of personal computers and the resultant increase in BBS and modem use and software sharing. Bulletin-board-driven software sharing contributed directly to the spread of Trojan horse programs, and viruses were written to infect popularly traded software. Shareware and bootleg software were equally common vectors for viruses on BBSs. Within the "pirate scene" of hobbyists trading illicit copies of retail software, traders in a hurry to obtain the latest applications and games were easy targets for viruses.
Since the mid-1990s, macro viruses have become common. Most of these viruses are written in the scripting languages for Microsoft programs such as Word and Excel. These viruses spread in Microsoft Office by infecting documents and spreadsheets. Since Word and Excel were also available for Mac OS, most of these viruses were able to spread on Macintosh computers as well. Most of these viruses did not have the ability to send infected e-mail. Those viruses which did spread through e-mail took advantage of the Microsoft Outlook COM interface.
Macro viruses pose unique problems for detection software. For example, some versions of Microsoft Word allowed macros to replicate themselves with additional blank lines. The virus behaved identically but would be misidentified as a new virus. In another example, if two macro viruses simultaneously infect a document, the combination of the two, if also self-replicating, can appear as a "mating" of the two and would likely be detected as a virus unique from the "parents".
A virus may also send a web address link as an instant message to all the contacts on an infected machine. If the recipient, thinking the link is from a friend (a trusted source) follows the link to the website, the virus hosted at the site may be able to infect this new computer and continue propagating.
The newest species of the virus family is the cross-site scripting virus. The virus emerged from research and was academically demonstrated in 2005. It uses cross-site scripting vulnerabilities to propagate. Since 2005 there have been multiple instances of cross-site scripting viruses in the wild; the most notable sites affected have been MySpace and Yahoo.

Replication strategies

In order to replicate itself, a virus must be permitted to execute code and write to memory. For this reason, many viruses attach themselves to executable files that may be part of legitimate programs. If a user tries to start an infected program, the virus' code may be executed first. Viruses can be divided into two types, on the basis of their behavior when they are executed. Nonresident viruses immediately search for other hosts that can be infected, infect these targets, and finally transfer control to the application program they infected. Resident viruses do not search for hosts when they are started. Instead, a resident virus loads itself into memory on execution and transfers control to the host program. The virus stays active in the background and infects new hosts when those files are accessed by other programs or the operating system itself.

Nonresident viruses


Nonresident viruses can be thought of as consisting of a finder module and a replication module. The finder module is responsible for finding new files to infect. For each new executable file the finder module encounters, it calls the replication module to infect that file.

Resident viruses

Resident viruses contain a replication module that is similar to the one that is employed by nonresident viruses. However, this module is not called by a finder module. Instead, the virus loads the replication module into memory when it is executed and ensures that this module is executed each time the operating system is called to perform a certain operation. For example, the replication module can be called each time the operating system executes a file. In this case, the virus infects every suitable program that is executed on the computer.
Resident viruses are sometimes subdivided into a category of fast infectors and a category of slow infectors. Fast infectors are designed to infect as many files as possible. For instance, a fast infector can infect every potential host file that is accessed. This poses a special problem to anti-virus software, since a virus scanner will access every potential host file on a computer when it performs a system-wide scan. If the virus scanner fails to notice that such a virus is present in memory, the virus can "piggy-back" on the virus scanner and in this way infect all files that are scanned. Fast infectors rely on their fast infection rate to spread. The disadvantage of this method is that infecting many files may make detection more likely, because the virus may slow down a computer or perform many suspicious actions that can be noticed by anti-virus software. Slow infectors, on the other hand, are designed to infect hosts infrequently. For instance, some slow infectors only infect files when they are copied. Slow infectors are designed to avoid detection by limiting their actions: they are less likely to slow down a computer noticeably, and will at most infrequently trigger anti-virus software that detects suspicious behavior by programs. The slow infector approach does not seem very successful, however.

Methods to avoid detection

In order to avoid detection by users, some viruses employ different kinds of deception. Some old viruses, especially on the MS-DOS platform, make sure that the "last modified" date of a host file stays the same when the file is infected. This approach does not fool anti-virus software, however, especially software that maintains and dates cyclic redundancy checks (CRCs) of file contents to detect changes. Some viruses can infect files without increasing their sizes or damaging the files; they accomplish this by overwriting unused areas of executable files and are called cavity viruses. For example, the CIH virus, or Chernobyl virus, infects Portable Executable files; because those files contain many empty gaps, the virus, which was 1 KB in length, did not add to the size of the file. Some viruses try to avoid detection by killing the tasks associated with anti-virus software before it can detect them. As computers and operating systems grow larger and more complex, old hiding techniques need to be updated or replaced. Defending a computer against viruses may demand that a file system migrate towards detailed and explicit permissions for every kind of file access.
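The checksum-based change detection mentioned above can be illustrated with CRC-32. This is a minimal sketch of the idea (the file name and contents here are invented), not how any particular anti-virus product works.

```python
import zlib

def crc32_of(data: bytes) -> int:
    """Compute the CRC-32 checksum of a file's contents."""
    return zlib.crc32(data) & 0xFFFFFFFF

# Baseline scan: record a checksum of each monitored file's contents.
baseline = {"host_program.exe": crc32_of(b"original program bytes")}

# Later scan: an infection that modifies the file changes the checksum,
# even if it preserves the file's "last modified" date, and even if a
# cavity virus keeps the file's size unchanged.
current = crc32_of(b"original program bytes" + b"appended payload marker")
modified = current != baseline["host_program.exe"]
```

A scanner that stores such checksums at a known-clean point in time can flag any file whose current checksum no longer matches its recorded one.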

Avoiding bait files and other undesirable hosts

A virus needs to infect hosts in order to spread further. In some cases, it might be a bad idea to infect a host program. For example, many anti-virus programs perform an integrity check of their own code; infecting such programs will therefore increase the likelihood that the virus is detected. For this reason, some viruses are programmed not to infect programs that are known to be part of anti-virus software.

Another type of host that viruses sometimes avoid is bait files. Bait files (or goat files) are files that are specially created by anti-virus software, or by anti-virus professionals themselves, to be infected by a virus. These files can be created for various reasons, all of which are related to the detection of the virus. Anti-virus professionals can use bait files to take a sample of a virus (i.e. a copy of a program file that is infected by the virus); it is more practical to store and exchange a small, infected bait file than a large application program that has been infected. They can also use bait files to study the behavior of a virus and evaluate detection methods. This is especially useful when the virus is polymorphic: the virus can be made to infect a large number of bait files, and the infected files can be used to test whether a virus scanner detects all versions of the virus. Some anti-virus software employs bait files that are accessed regularly; when these files are modified, the software warns the user that a virus is probably active on the system. Since bait files are used to detect the virus, or to make detection possible, a virus can benefit from not infecting them. Viruses typically do this by avoiding suspicious files, such as small program files or programs that contain certain patterns of 'garbage instructions'. A related strategy to make baiting difficult is sparse infection.
Sometimes, sparse infectors do not infect a host file that would be a suitable candidate for infection in other circumstances. For example, a virus can decide on a random basis whether to infect a file or not, or a virus can only infect host files on particular days of the week.

Friday, February 29, 2008

Firewall

A firewall is a dedicated appliance, or software running on another computer, which inspects network traffic passing through it, and denies or permits passage based on a set of rules.

Function

A firewall's basic task is to regulate some of the flow of traffic between computer networks of different trust levels. Typical examples are the Internet which is a zone with no trust and an internal network which is a zone of higher trust. A zone with an intermediate trust level, situated between the Internet and a trusted internal network, is often referred to as a "perimeter network" or Demilitarized zone (DMZ).
A firewall's function within a network is similar to that of firewalls and fire doors in building construction: in the former case, it is used to prevent network intrusion into the private network; in the latter, it is intended to contain and delay a structural fire, stopping it from spreading to adjacent structures.
Without proper configuration, a firewall can often become worthless. Standard security practices dictate a "default-deny" firewall ruleset, in which the only network connections which are allowed are the ones that have been explicitly allowed. Unfortunately, such a configuration requires detailed understanding of the network applications and endpoints required for the organization's day-to-day operation. Many businesses lack such understanding, and therefore implement a "default-allow" ruleset, in which all traffic is allowed unless it has been specifically blocked. This configuration makes inadvertent network connections and system compromise much more likely.

History

The term "firewall" originally meant a wall to confine a fire or potential fire within a building, c.f. firewall (construction). Later uses refer to similar structures, such as the metal sheet separating the engine compartment of a vehicle or aircraft from the passenger compartment.
Firewall technology emerged in the late 1980s when the Internet was a fairly new technology in terms of its global use and connectivity. The original idea was formed in response to a number of major internet security breaches, which occurred in the late 1980s.

First generation - packet filters

The first paper published on firewall technology was in 1988, when engineers from Digital Equipment Corporation (DEC) developed filter systems known as packet filter firewalls. This fairly basic system was the first generation of what would become a highly evolved and technical internet security feature. At AT&T Bell Labs, Bill Cheswick and Steve Bellovin were continuing their research in packet filtering and developed a working model for their own company based upon their original first generation architecture.
Packet filters act by inspecting the "packets" which represent the basic unit of data transfer between computers on the Internet. If a packet matches the packet filter's set of rules, the packet filter will drop (silently discard) the packet, or reject it (discard it, and send "error responses" to the source).
This type of packet filtering pays no attention to whether a packet is part of an existing stream of traffic (it stores no information on connection "state"). Instead, it filters each packet based only on information contained in the packet itself (most commonly using a combination of the packet's source and destination address, its protocol, and, for TCP and UDP traffic, which comprises most internet communication, the port number).
Because TCP and UDP traffic by convention uses well known ports for particular types of traffic, a "stateless" packet filter can distinguish between, and thus control, those types of traffic (such as web browsing, remote printing, email transmission, file transfer), unless the machines on each side of the packet filter are both using the same non-standard ports.
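Port-based stateless filtering of this kind can be sketched as follows. This is a minimal illustration under simplified assumptions (a tiny allow list and invented field names), combined with the "default-deny" posture described earlier; it is not a real firewall implementation.

```python
from typing import NamedTuple

class Packet(NamedTuple):
    src: str     # source IP address
    dst: str     # destination IP address
    proto: str   # "tcp" or "udp"
    dport: int   # destination port

# Default-deny ruleset: only traffic explicitly allowed below passes.
ALLOW_RULES = [
    {"proto": "tcp", "dport": 80},   # web browsing
    {"proto": "tcp", "dport": 25},   # email transmission
]

def permit(packet: Packet) -> bool:
    """Stateless filtering: each packet is judged on its own fields alone,
    with no memory of any earlier packet or connection."""
    for rule in ALLOW_RULES:
        if all(getattr(packet, field) == value for field, value in rule.items()):
            return True
    return False  # default deny
```

Because the decision uses only the packet's own header fields, the filter distinguishes traffic types by their conventional ports, and is blind to traffic that deliberately uses a non-standard port.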

Second generation - "stateful" filters

From 1989 to 1990, three colleagues at AT&T Bell Laboratories, Dave Presotto, Janardan Sharma, and Kshitij Nigam, developed the second generation of firewalls, calling them circuit-level firewalls.
Second-generation firewalls do not simply examine the contents of each packet on an individual basis, without regard to its placement within the packet series, as their predecessors had done; rather, they compare certain key parts of the packet against a database of trusted information. This technology is generally referred to as a 'stateful firewall', as it maintains records of all connections passing through the firewall and is able to determine whether a packet is the start of a new connection or part of an existing connection. Though there is still a set of static rules in such a firewall, the state of a connection can in itself be one of the criteria which trigger specific rules. This type of firewall can help prevent attacks which exploit existing connections, or certain denial-of-service attacks.
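The connection-tracking idea can be sketched roughly like this; the state table and rule callback are simplified assumptions for illustration, not any vendor's design.

```python
# State table of connections the firewall has already accepted.
connections = set()  # entries: (src, dst, proto, port)

def stateful_permit(src, dst, proto, port, new_conn_allowed):
    """Pass packets belonging to a known connection without re-checking
    the static rules; evaluate only new connections against the ruleset."""
    key = (src, dst, proto, port)
    if key in connections:            # part of an existing connection
        return True
    if new_conn_allowed(key):         # static ruleset decides for new ones
        connections.add(key)
        return True
    return False
```

With a rule permitting only new TCP connections to port 80, the first packet of such a connection is admitted and recorded; subsequent packets on the same connection then pass via the state table without the rule being consulted again.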

Third generation - application layer

Publications by Gene Spafford of Purdue University, Bill Cheswick at AT&T Laboratories, and Marcus Ranum described a third-generation firewall known as an application layer firewall, also known as a proxy-based firewall. Marcus Ranum's work on the technology spearheaded the creation of the first commercial product, released by DEC under the name DEC SEAL. DEC's first major sale was on June 13, 1991, to a chemical company based on the East Coast of the USA. The key benefit of application layer filtering is that it can "understand" certain applications and protocols (such as File Transfer Protocol, DNS, or web browsing), and it can detect whether an unwanted protocol is being sneaked through on a non-standard port or whether a protocol is being abused in a known harmful way.
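The idea of detecting a protocol sneaked through on a non-standard port can be sketched as follows. The signature list and port set here are simplified assumptions; real application-layer firewalls use full protocol parsers rather than a prefix check.

```python
# Crude signatures of an HTTP request line.
HTTP_METHODS = (b"GET ", b"POST ", b"HEAD ", b"PUT ", b"DELETE ")

def looks_like_http(payload: bytes) -> bool:
    """Payload inspection: does the data resemble an HTTP request?"""
    return payload.startswith(HTTP_METHODS)

def sneaked_http(dport: int, payload: bytes) -> bool:
    """Flag HTTP traffic appearing on a port not conventionally
    assigned to web browsing."""
    return looks_like_http(payload) and dport not in (80, 8080)
```

A port-only (stateless) filter would accept such traffic as long as the port was allowed; inspecting the payload is what lets an application-layer firewall notice the mismatch.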

Subsequent developments

In 1992, Bob Braden and Annette DeSchon at the University of Southern California (USC) were refining the concept of a firewall. Their product, known as "Visas", was the first system to have a visual integration interface with colours and icons, which could be easily implemented on and accessed from a computer operating system such as Microsoft's Windows or Apple's MacOS. In 1994 an Israeli company called Check Point Software Technologies built this into readily available software known as FireWall-1.
The existing deep packet inspection functionality of modern firewalls can be shared by Intrusion-prevention systems (IPS).
Currently, the Middlebox Communication Working Group of the Internet Engineering Task Force (IETF) is working on standardizing protocols for managing firewalls and other middleboxes.

Types

There are several classifications of firewalls depending on where the communication is taking place, where the communication is intercepted and the state that is being traced.

Network layer and packet filters

Network layer firewalls, also called packet filters, operate at a relatively low level of the TCP/IP protocol stack, not allowing packets to pass through the firewall unless they match the established rule set. The firewall administrator may define the rules; or default rules may apply. The term packet filter originated in the context of BSD operating systems.
Network layer firewalls generally fall into two sub-categories, stateful and stateless. Stateful firewalls maintain context about active sessions, and use that "state information" to speed up packet processing. Any existing network connection can be described by several properties, including source and destination IP address, UDP or TCP ports, and the current stage of the connection's lifetime (including session initiation, handshaking, data transfer, or connection completion). If a packet does not match an existing connection, it will be evaluated according to the ruleset for new connections. If a packet matches an existing connection based on comparison with the firewall's state table, it will be allowed to pass without further processing.
Stateless firewalls have packet-filtering capabilities, but cannot make more complex decisions on what stage communications between hosts have reached.

Application-layer

Application-layer firewalls work on the application level of the TCP/IP stack (i.e., all browser traffic, or all telnet or ftp traffic), and may intercept all packets traveling to or from an application. They block other packets (usually dropping them without acknowledgement to the sender). In principle, application firewalls can prevent all unwanted outside traffic from reaching protected machines.
By inspecting all packets for improper content, firewalls can restrict or outright prevent the spread of networked computer worms and Trojans. In practice, however, this becomes so complex and so difficult to attempt (given the variety of applications and the diversity of content each may allow in its packet traffic) that comprehensive firewall design does not generally attempt this approach.
The XML firewall exemplifies a more recent kind of application-layer firewall.
Companies such as Secure Computing (www.securecomputing.com) are major manufacturers of application-layer firewalls.

Proxies

A proxy device (running either on dedicated hardware or as software on a general-purpose machine) may act as a firewall by responding to input packets (connection requests, for example) in the manner of an application, whilst blocking other packets.
Proxies make tampering with an internal system from the external network more difficult and misuse of one internal system would not necessarily cause a security breach exploitable from outside the firewall (as long as the application proxy remains intact and properly configured). Conversely, intruders may hijack a publicly-reachable system and use it as a proxy for their own purposes; the proxy then masquerades as that system to other internal machines. While use of internal address spaces enhances security, crackers may still employ methods such as IP spoofing to attempt to pass packets to a target network.

Network address translation

Firewalls often have network address translation (NAT) functionality, and the hosts protected behind a firewall commonly have addresses in the "private address range" defined in RFC 1918. Firewalls often use this functionality to hide the true addresses of protected hosts. Originally, the NAT function was developed to address the limited number of routable IPv4 addresses that could be used or assigned to companies or individuals, and to reduce the number, and therefore the cost, of public addresses needed for every computer in an organization. Hiding the addresses of protected devices has since become an increasingly important defense against network reconnaissance.
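The address-hiding mapping can be sketched as a simple translation table. This is an illustrative model under simplified assumptions (one public address drawn from the RFC 5737 documentation range, and a naive sequential port allocator), not a real NAT implementation.

```python
import itertools

PUBLIC_IP = "203.0.113.1"    # documentation address, standing in for the firewall's public IP
nat_table = {}               # (private_ip, private_port) -> public source port
reverse_table = {}           # public source port -> (private_ip, private_port)
port_pool = itertools.count(40000)  # naive allocator for public ports

def translate_outbound(private_ip, private_port):
    """Rewrite an outbound packet's source to the firewall's public address,
    remembering the mapping so replies can be routed back."""
    key = (private_ip, private_port)
    if key not in nat_table:
        public_port = next(port_pool)
        nat_table[key] = public_port
        reverse_table[public_port] = key
    return PUBLIC_IP, nat_table[key]

def translate_inbound(public_port):
    """Map a reply arriving at a public port back to the hidden private host,
    or None if no such mapping exists."""
    return reverse_table.get(public_port)
```

From the outside, every connection appears to originate from the firewall's single public address; the per-connection port mapping is the only way replies find their way back to the correct private host.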