Tuesday, 25 January 2011

virtual keyboard

INTRODUCTION
Virtual Keyboard is just another example of today’s computer trend of ‘smaller and faster’. Computing is no longer limited to desktops and laptops; it has found its way into mobile devices like palmtops and even cell phones. But what has not changed for the last 50-odd years is the input device, the good old QWERTY keyboard, and virtual keyboard technology is the latest development.
The new virtual keyboard technology uses sensor technology and artificial intelligence to let users work on any surface as if it were a keyboard. Virtual keyboards let you easily create multilingual text content on almost any existing platform and output it directly to PDAs or even web pages. Being a small, handy, well-designed and easy-to-use application, the virtual keyboard is a perfect solution for cross-platform multilingual text input. The main features are: platform-independent multilingual support for keyboard text input, built-in language layouts and settings, support for copy/paste and similar operations just as in a regular text editor, preservation of existing system language settings, an easy and user-friendly interface and design, and small file size.
QWERTY KEYBOARDS 
Inside the keyboard
The processor in a keyboard has to understand several things that are important to the utility of the keyboard, such as:
· Position of the key in the key matrix.
· The amount of bounce and how to filter it.
· The speed at which to transmit the typematics.
(Figure: the microprocessor and controller circuitry of a keyboard.)
The key matrix is the grid of circuits underneath the keys. In all keyboards except for capacitive ones, each circuit is broken at the point below a specific key. Pressing the key bridges the gap in the circuit, allowing a tiny amount of current to flow through. The processor monitors the key matrix for signs of continuity at any point on the grid. When it finds a circuit that is closed, it compares the location of that circuit on the key matrix to the character map in its ROM. The character map is basically a comparison chart for the processor that tells it what the key at x,y coordinates in the key matrix represents. If more than one key is pressed at the same time, the processor checks to see if that combination of keys has a designation in the character map. For example, pressing the ‘a’ key by itself would result in a small letter "a" being sent to the computer. If you press and hold down the Shift key while pressing the ‘a’ key, the processor compares that combination with the character map and produces a capital letter "A."
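A minimal sketch of this scan-and-lookup idea is given below. It is illustrative only, not firmware from any real keyboard controller; the matrix size, the positions in CHARACTER_MAP and the read_row callback are all assumptions.

    # Illustrative sketch of key-matrix scanning and character-map lookup.
    # The 8x16 matrix size, the positions in CHARACTER_MAP and the read_row
    # callback are assumptions, not values from a real keyboard.

    CHARACTER_MAP = {            # (row, col) -> (unshifted, shifted)
        (2, 3): ("a", "A"),
        (2, 4): ("s", "S"),
    }
    SHIFT_POSITION = (4, 0)      # assumed matrix position of the Shift key

    def scan_matrix(read_row, rows=8):
        """Return the set of (row, col) positions whose circuit is closed.
        read_row(r) is assumed to return one boolean per column."""
        pressed = set()
        for row in range(rows):
            for col, closed in enumerate(read_row(row)):
                if closed:
                    pressed.add((row, col))
        return pressed

    def decode(pressed):
        """Translate closed positions into characters, honouring Shift."""
        shifted = SHIFT_POSITION in pressed
        chars = []
        for pos in pressed - {SHIFT_POSITION}:
            if pos in CHARACTER_MAP:
                plain, caps = CHARACTER_MAP[pos]
                chars.append(caps if shifted else plain)
        return chars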
A different character map provided by the computer can supersede the character map in the keyboard. This is done quite often in languages whose characters do not have English equivalents. Also, there are utilities for changing the character map from the traditional QWERTY to DVORAK or another custom version.
Keyboards rely on switches that cause a change in the current flowing through the circuits in the keyboard. When the key presses the keyswitch against the circuit, there is usually a small amount of vibration between the surfaces, known as bounce. The processor in a keyboard recognizes that this very rapid switching on and off is not caused by you pressing the key repeatedly. Therefore, it filters the tiny fluctuations out of the signal and treats them as a single keypress.
If you continue to hold down a key, the processor determines that you wish to send that character repeatedly to the computer. This is known as typematics. In this process, the delay before repetition begins and the repeat rate can normally be set in software, with the rate typically ranging from 30 characters per second (cps) down to as few as two cps.
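To make the two mechanisms concrete, here is a rough sketch of debounce filtering and typematic repeat for one key in a polling loop. The timing constants (5 ms debounce, 500 ms repeat delay, 10 cps repeat rate) are assumptions for illustration, not values from any particular controller.

    # Sketch of debounce filtering and typematic repeat for one key.
    # All timing constants are illustrative assumptions.

    DEBOUNCE_MS     = 5     # ignore raw transitions shorter than this
    REPEAT_DELAY_MS = 500   # wait this long before auto-repeat starts
    REPEAT_RATE_CPS = 10    # repeat rate while held (2-30 cps is the typical range)

    class KeyState:
        def __init__(self):
            self.stable = False        # debounced state of the key
            self.last_agree_ms = 0     # last time the raw reading matched `stable`
            self.held_since_ms = None
            self.last_repeat_ms = None

        def update(self, raw_closed, now_ms, emit):
            """Call regularly (e.g. every millisecond) with the raw switch reading."""
            if raw_closed == self.stable:
                self.last_agree_ms = now_ms
            elif now_ms - self.last_agree_ms >= DEBOUNCE_MS:
                # The new state has persisted long enough: accept it.
                self.stable = raw_closed
                self.last_agree_ms = now_ms
                if raw_closed:
                    emit()                         # single debounced keypress
                    self.held_since_ms = now_ms
                    self.last_repeat_ms = now_ms
                else:
                    self.held_since_ms = None
            # Typematic: keep sending the character while the key stays down.
            if self.stable and self.held_since_ms is not None:
                if (now_ms - self.held_since_ms >= REPEAT_DELAY_MS and
                        now_ms - self.last_repeat_ms >= 1000 / REPEAT_RATE_CPS):
                    emit()
                    self.last_repeat_ms = now_ms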
DIFFERENT TYPES 
Keyboards have changed very little in layout since their introduction. In fact, the most common change has simply been the natural evolution of adding more keys that provide additional functionality.
The most common keyboards are:
· 101-key Enhanced keyboard
· 104-key Windows keyboard
· 82-key Apple standard keyboard
· 108-key Apple Extended keyboard
Portable computers such as laptops quite often have custom keyboards that have slightly different key arrangements than a standard keyboard. Also, many system manufacturers add specialty buttons to the standard layout. A typical keyboard has four basic types of keys:
· Typing keys
· Numeric keypad
· Function keys
· Control keys
The typing keys are the section of the keyboard that contains the letter keys, generally laid out in the same style that was common for typewriters. The numeric keypad is a part of the natural evolution mentioned previously. Since a large part of the data was numbers, a set of 17 keys was added to the keyboard. These keys are laid out in the same configuration used by most adding machines and calculators, to facilitate the transition to the computer for clerks accustomed to these other machines. In 1986, IBM extended the basic keyboard with the addition of function and control keys. The function keys, arranged in a line across the top of the keyboard, could be assigned specific commands by the current application or the operating system. Control keys provided cursor and screen control. Four keys arranged in an inverted T formation between the typing keys and numeric keypad allow the user to move the cursor on the display in small increments.
Keyboard Technologies 
Keyboards use a variety of switch technologies. It is interesting to note that we generally like to have some audible and tactile response to our typing on a keyboard. We want to hear the keys "click" as we type, and we want the keys to feel firm and spring back quickly as we press them. Let's take a look at these different technologies:
· Rubber dome mechanical
· Capacitive non-mechanical
· Metal contact mechanical
· Membrane mechanical
· Foam element mechanical
From the Keyboard to the Computer
As you type, the processor in the keyboard is analyzing the key matrix and determining what characters to send to the computer. It maintains these characters in a buffer of memory that is usually about 16 bytes large. It then sends the data in a stream to the computer via some type of connection.
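The buffering step can be pictured with the small sketch below. Only the 16-byte size comes from the text above; the class name and the send callback are invented for illustration.

    # Sketch of the keyboard's small output buffer. Only the 16-byte size
    # comes from the text; everything else is illustrative.
    from collections import deque

    class KeyboardBuffer:
        def __init__(self, size=16):
            self.size = size
            self.fifo = deque()

        def push(self, scan_code):
            if len(self.fifo) >= self.size:
                return False            # buffer full: the keystroke is lost
            self.fifo.append(scan_code)
            return True

        def drain(self, send):
            # Stream the buffered codes to the computer over the connection
            # (PS/2, USB, ...); `send` stands in for that link.
            while self.fifo:
                send(self.fifo.popleft())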
The most common keyboard connectors are:
· 5-pin DIN (Deutsche Industrie Norm) connector
· 6-pin IBM PS/2 mini-DIN connector
· 4-pin USB (Universal Serial Bus) connector
· Internal connector (for laptops)
Normal DIN connectors are rarely used anymore. Most computers use the mini-DIN PS/2 connector; but an increasing number of new systems are dropping the PS/2 connectors in favor of USB. No matter which type of connector is used, two principal elements are sent through the connecting cable. The first is power for the keyboard. Keyboards require a small amount of power, typically about 5 volts, in order to function. The cable also carries the data from the keyboard to the computer. The other end of the cable connects to a port that is monitored by the computer's keyboard controller.
This is an integrated circuit (IC) whose job is to process all of the data that comes from the keyboard and forward it to the operating system.
Difficulties and alternatives 
It is now recognized that it is important to be correctly seated while using a computer. A comfortable working position will help with concentration, quality of work, and reduce the risk of long-term problems. This is important for all who use computers, and especially so for those with disabilities.
The increased repetitive motions and awkward postures attributed to the use of computer keyboards have resulted in a rise in cumulative trauma disorders (CTDs), which are generally considered to be the most costly and severe disorders occurring in the office. Lawsuits for arm, wrist, and hand injuries have been filed against keyboard manufacturers, alleging that keyboarding equipment is defectively designed and that manufacturers fail to provide adequate warnings about proper use to avoid injury.
As early as 1926, Klockenberg described how the keyboard layout required the typist to assume body postures that were unnatural, uncomfortable and fatiguing. For example, standard keyboard design forces operators to place their hands in a flat, palm-down position called forearm pronation. The compact, linear key arrangement also causes some typists to place their wrists in a position that is skewed towards the little fingers, called ulnar deviation. These awkward postures result in static muscle loading, increased muscular energy expenditure, reduced muscular waste removal, and eventual discomfort or injury. Researchers also noted that typing on the QWERTY keyboard is poorly distributed between the hands and fingers, causing the weaker ring and little fingers to be overworked.
Alternatives 
When a standard keyboard does not meet the needs of the user, several alternatives can be found. Keyboards come in a variety of sizes with different layouts. The four alternatives described below are considered "plug and play" keyboards, as they require no special interface. Just plug them into the existing keyboard port and use them.
Ergonomic Keyboards: 
These keyboards are designed to ensure safe and comfortable computer use by providing additional supports to prevent repetitive muscular injuries. Many offer flexible positioning options (Comfort Keyboard), while others use "wells" for support (ergonomic), or chords instead of keys (BAT Keyboard), or require minimal finger/hand movements (Data Hand).
Compact or Reduced Keyboards: 
These keyboards are designed with keys in closely arranged order. These compact or reduced keyboards offer options for students with a limited range of motion in their hands or arms and can be accessed with head or mouth pointers. Examples of these are TASH mini keyboards (WinMini, MacMini), or the Magic Wand Keyboard; both provide for keyboard and mouse control.
Enlarged Keyboards: 
These keyboards are a larger version of the standard keyboard, in whole or in part. Larger keys may provide an easier target, as fewer key choices with clear key labels can provide a successful input method for many. The IntelliKeys keyboard is one example; it comes with 6 keyboard overlays and varying key layout designs and can be further customized with the use of Overlay Maker software.
Portable Keyboards : 
The last type of keyboard is one which addresses the portability needs of individuals with disabilities. A portable keyboard is one which can be used as a note-taker when battery-powered and then connected to a computer to download the information. The AlphaSmart is an example of a portable keyboard. It connects to Apple, Macintosh, and IBM computers and can be used as the computer keyboard while it is connected to the computer.
VIRTUAL KEYBOARD TECHNOLOGY 
Virtual Keyboard is just another example of today’s computer trend of "smaller and faster". Computing is no longer limited to desktops and laptops; it has found its way into mobile devices like palmtops and even cell phones. But what has not changed for the last 50-odd years is the input device, the good old QWERTY keyboard. Alternatives came in the form of handwriting recognition, speech recognition, abcd input (for SMS in cell phones) etc. But they all lack the accuracy and convenience of a full-blown keyboard. Speech input has an added issue of privacy. Even folded keyboards for PDAs are yet to catch on. Thus a new generation of virtual input devices is now being paraded, which could drastically change the way we type.
Virtual Keyboard uses sensor technology and artificial intelligence to let users work on any surface as if it were a keyboard. Virtual Devices have developed a flashlight-size gadget that projects an image of a keyboard on any surface and lets people input data by typing on the image.
This system comprises three modules:
1. The sensor module,
2. IR-light source and
3. The pattern projector.
The device detects movement when fingers are pressed down. Those movements are measured, and the device accurately determines the intended keystrokes and translates them into text. There is also a set of clips that fit onto the hand and try to sense the motion of the fingers and the hands (wrists) and translate them into keystrokes. The translation process also uses artificial intelligence. Once the keystroke has been decoded, it is sent to the portable device either by cable or wirelessly. The Virtual Keyboard uses light to project a full-sized computer keyboard onto almost any surface, and disappears when not in use. Used with smart phones and PDAs, the VKey™ provides a practical way to do email, word processing and spreadsheet tasks, allowing the user to leave the laptop computer at home. VKey™ technology has many applications in various high-tech and industrial sectors. These include data entry and control-panel applications in hazardous and harsh environments and in medical markets.
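The overall sensing-to-keystroke flow can be sketched as below. This is a conceptual illustration only, not the vendor's algorithm; the key rectangles in KEY_LAYOUT, the sensor event format and the pause hook are all assumptions.

    # Conceptual sketch of the pipeline: detect a fingertip press, map it to a
    # key of the projected layout, and forward the keystroke to the device.
    # Geometry, sensor interface and transport are assumptions.

    KEY_LAYOUT = {                 # projected key rectangles in mm: (x0, y0, x1, y1)
        "Q": (0.0, 0.0, 18.0, 18.0),
        "W": (19.0, 0.0, 37.0, 18.0),
        # ... remaining keys of the projected QWERTY image
    }

    def keystroke_from_contact(x_mm, y_mm):
        """Return the key whose projected rectangle contains the press, if any."""
        for key, (x0, y0, x1, y1) in KEY_LAYOUT.items():
            if x0 <= x_mm <= x1 and y0 <= y_mm <= y1:
                return key
        return None                # press landed outside the projected keyboard

    def run(sensor_events, send_to_device, paused=lambda: False):
        for x_mm, y_mm in sensor_events:   # stream of detected fingertip presses
            if paused():                   # the pause function mentioned above
                continue
            key = keystroke_from_contact(x_mm, y_mm)
            if key is not None:
                send_to_device(key)        # over cable, Bluetooth or infrared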
Projection keyboards or virtual keyboards claim to provide the convenience of compactness with the advantages of a full-blown QWERTY keyboard. An interesting use of such keyboards would be in sterile environments where silence or low noise is essential, like operating theaters. The advantage of such a system is that you do not need a surface for typing; you can even type in plain air. The company's Virtual Keyboard is designed for anyone who has become frustrated with trying to put information into a handheld but doesn't want to carry a notebook computer around. There is also provision for a pause function to avoid translating extraneous hand movements, so that users can stop to eat, drink, etc.
It is also a superior desktop computer keyboard, featuring touch-typing that is dramatically easier to learn and leaving one hand free for the mouse or phone. Combination key presses ("chords") of five main and two extra control keys allow users to type at 25-60 words per minute, with possibly greater speeds achieved through the use of abbreviation-expansion software. Most users will find memorizing the chords easy and fun with the included typing tutorial. The scanner can keep up with the fastest typist, scanning the projected area over 50 times a second. The keyboard does not demand a lot of force, easing strain on wrists and digits. Virtual keyboards solve the problem of sore thumbs caused by typing on the tiny keyboards of gadgets like PDAs and cell phones. They are meant to meet the needs of mobile computer users struggling with cumbersome, tiny, or nonexistent keyboards, and they might help to prevent repetitive strain injuries (RSI).
An infrared adapter allows PC usage without any driver software being necessary. The standard coin-sized lithium battery lasts about eight months before needing to be replaced.
The Virtual Keyboard uses an extremely durable material which is also extremely easy to clean. The Virtual Keyboard is not restricted to the QWERTY touch-typing paradigm; adjustments can be made to the software to fit other touch-typing paradigms as well, such as the DVORAK keyboard. It will work with all types of Bluetooth-enabled devices such as PDAs and smart phones, as well as wearable computers. Applications include computer/PDA input, gaming control, TV remote control, and musical applications. Thus virtual keyboards will make typing easier, faster, and almost a pleasure.
VIRTUAL DEVICES 
Just as every conventional loudspeaker can also be used as a microphone, some input devices have a complementary form in which they can also be displays. However, just as few loudspeakers are used as microphones (so few, in fact, that most people forget, if they ever knew, that this is possible), very few input devices incorporate this duality into their design. Force-feedback devices are one exception. With them, the "display" is felt rather than seen. Touch screens and other direct input devices appear to have this property, but in fact this is appearance only, since their input/output duality is accomplished by designing two separate technologies into one integrated package. The acoustic analogy would be integrating a microphone and speaker into one package, a bit like a telephone handset, rather than using the same transducer for both the microphone and speaker functions. It is interesting to note that this is not the case with force-feedback devices, since with them the same motors that generate the force output also serve as the encoders that capture the actions of the user.
Recently a new class of device has started to emerge which is conceptually rooted in exploiting this input/output duality. They can be called Projection/Vision systems, and/or Projection/Scanning or Projection/Camera technologies. In the "pure" case, these are devices that use a laser, for example, to project an image of the input controller, such as a slider or keypad, onto a surface. In doing so, they are performing a function analogous to an LCD displaying the image of a virtual device under a touch screen. However, in this case the laser is also used to scan the same surface that it is projecting onto, thereby enabling the device to "see" how your fingers, for example, are interacting with the projected virtual device.
In a slightly less pure "hybrid" form, the projection and scanning functions can be performed by two separate, but integrated technologies. For example, instead of a laser projector, a conventional video or data projector could be used, and an integrated video camera (supported by vision software) used for input.
Both the "pure" and "hybrid" classes of device have been used and have strengths and weaknesses. Since laser projection is far less advanced than conventional data projection, the hybrid solution sometimes has advantages on the display side. However, 2D and 3D scanning using lasers is far more developed than 2D and 3D vision using video based vision techniques. This is partially due to the degree to which the laser technology can extract 3D information. Going forward, one can expect laser projection technology to advance extremely quickly, especially in its ability to deliver extremely small, low power, bright, relatively high resolution projection capability. This will likely have a strong impact on how we interact with small portable devices, such as PDAs, mobile phones and even wristwatches. Not only does this technology provide a means to couple large (virtual) I/O transducers with small devices, it provides the potential for sharing and interacting with others, despite using devices as small as a wrist watch.
On the other hand, these technologies have strong potential on the other side of the scale, in large-scale interaction, where what is scanned are bodies in a room, rather than fingers on a surface, and the projection surface may be the floor or ceiling of a room, rather than a desktop.
Besides the obvious, there are a couple of interesting challenges with this type of system. First, it is generally not sufficient to simply know where the fingers are over the display. One has to be able to distinguish between pointing or hovering and activating, and this must be reliable and responsive. The system and the user must agree as to if and when activation takes place. Also, since the device is virtual, a means (acoustic or visual) is likely needed to provide some form of feedback at the device level. Since, especially in the mobile case, the projection surface, and hence the input control surface, is arbitrary, there would be no opportunity for any tactile feedback, vertical or lateral. Of course, if the projector were fixed, then there is a range of techniques that could be used to provide tactile feedback.
Electronic whiteboards that use projection technologies coupled with touch screens, such as those available from Smart Technologies and 3Com, for example, are related to this class of device. However, they differ in that the input transducer is integrated with the projection surface, rather than with the projector. This is a significant technological difference (but one which may be transparent to a user). The same could be said of touch screens, especially in the future as touch screens become thinner and more unobtrusive, such as if/when they are made with OLEDs, for example. That is, they could appear the same to the user as "pure" projection/vision systems. However, I treat touch screens and this latter class of projection boards separately.
What is unique, distinct, or new, from the usage/user perspective, about the type of projection/vision systems that I highlight in this section is that they are not fixed in position. The same unit may project/sense in different locations, on different surfaces, and in many cases be mobile. That is, there is no specific surface, other than the (perhaps) arbitrary surface on which one is projecting, on which the system operates. This is especially true of the miniature laser projector/scanner systems. But it is even true of installed systems, such as the IBM steerable projection/vision system. In this latter case, while the projector and vision systems are fixed in architectural space, they can be directed to work on different surfaces/areas in the room.
Projection/Vision systems constitute an area where products are beginning to emerge. Below is a listing of some of the companies who are playing in this field. As well, there is a body of work emerging from the research community around this type of interaction.
DIFFERENT VIRTUAL KEYBOARDS
Developer VKB™
Its full-size keyboard can be projected onto any surface and uses laser technology to translate finger movements into letters. Developed in cooperation with Siemens Procurement Logistics Services, the compact unit is powered by rechargeable batteries similar to those in cell phones. The keyboard is full size and the letters are in a standard format. As a Class 1 laser, the output power is below the level at which eye injury can occur.
Canesta™
The Canesta Keyboard is a laser-projected keyboard in which the same laser is also used to scan the projection field and extract 3D data. Hence, the user sees the projected keyboard, and the device "sees" the position of the fingers over the projected keys. They also have a chip set, Electronic Perception Technology, which they supply for third parties to develop products using the projection/scanning technology. Canesta appears to be the most advanced in this class of technology and the only one shipping product. They have a number of patents pending on their technology.
Senseboard Technologies
The Senseboard SB 04 technology is an extreme case of a hybrid approach. The sensing transducer is neither a laser scanner nor a camera. Rather, it is a bracelet-like transducer worn on the hands which captures hand and finger motion. In fact, as demonstrated, the technology does not incorporate a projection component at all; rather, it relies on the user's ability to touch type, and then infers the virtual row and key being typed by sensing relative hand and finger movement. The system obviously could be augmented to aid non-touch typists, for example by including a graphic representation of the virtual keyboard under the hands/fingers. In this case, the keyboard would not be restricted to a conventional QWERTY layout, and the graphical representation could be projected or even printed on a piece of paper. I include it here as a relevant related input transducer that could be used with a projection system. The technology has patents pending and is currently in pre-production proof-of-concept form.
Sensors made of a combination of rubber and plastic are attached to the user's palms in such a way that they do not interfere with finger motions. Through the use of Bluetooth technology, the "typed" information is transferred wirelessly to the computer, where a word processing program analyzes and interprets the signals into readable text. The device is currently usable via existing ports on personal digital assistants (PDAs) from Palm and other manufacturers. Senseboard officials say it eventually will be compatible with most brands of pocket PCs, mobile phones and laptop computers. 
KITTY™
KITTY is a finger-mounted keyboard for data entry into PDAs, Pocket PCs and wearable computers which has been developed at the University of California, Irvine.
KITTY, an acronym for Keyboard-Independent Touch-Typing, is a finger-mounted keyboard that uses touch typing as a method of data entry. The device targets the portable computing market, and in particular wearable computing systems, which need a silent, invisible data entry system based on touch typing. The new device combines the idea of a finger-mounted coding device with the advantages of a system that uses touch typing.
InFocus™
InFocus is one of the leading companies providing video and data projectors. Their projectors are conventional, in that they do not use laser technology. This has the advantage of delivering high-quality colour images with a mature technology. However, it has the disadvantage of larger size, lower contrast, and higher power requirements compared to laser projection systems. In 2000, InFocus merged with Proxima, which had been one of its competitors. I include InFocus/Proxima in this survey not only because they make projectors: in their early days, Proxima developed one of the first commercially available projection/vision systems. It was called Cyclops, and they still hold a patent on the technology. Cyclops augmented the projector by adding a video camera that was registered to view the projection area. The video camera had a band-pass filter over the lens, which passed only the wavelength of a laser pointer. The system therefore enabled the user to interact with the projected image, using a provided laser pointer as the input device. The camera detected the presence of the laser pointer on the surface and calculated its coordinates relative to the currently projected image. Furthermore, the laser pointer had two intensity levels, which enabled the user not only to point but also to have the equivalent of a mouse button, with the vision system interpreting the two levels as distinguishing button-up and button-down events.
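A rough sketch of that Cyclops-style detection loop is given below, assuming the band-pass filter leaves the laser dot as by far the brightest pixel. The thresholds and the calibration homography are assumptions, not details from the Proxima product.

    # Sketch of Cyclops-style laser-pointer tracking: find the brightest pixel
    # in the filtered camera frame, map it into projected-image coordinates,
    # and use the two intensity levels as a "mouse button". Thresholds and the
    # calibration homography are assumptions.
    import numpy as np

    DETECT_THRESHOLD = 60    # assumed: anything dimmer is ignored as noise
    CLICK_THRESHOLD = 200    # assumed: the brighter pointer level means "button down"

    def locate_pointer(frame, camera_to_image):
        """frame: 2D array of pixel intensities from the band-pass-filtered camera.
        camera_to_image: 3x3 homography from camera pixels to projected-image
        coordinates, obtained from a one-time calibration."""
        y, x = np.unravel_index(np.argmax(frame), frame.shape)
        intensity = frame[y, x]
        if intensity < DETECT_THRESHOLD:
            return None                               # no laser dot visible
        p = camera_to_image @ np.array([x, y, 1.0])
        u, v = p[0] / p[2], p[1] / p[2]               # position within the projected image
        button_down = intensity >= CLICK_THRESHOLD    # two-level pointer acts as a button
        return (u, v), button_down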
ADVANTAGES 
1. It can be projected on any surface, or you can even type in plain air.
2. It can be useful in places like operating theaters where low noise is essential.
3. Typing does not require a lot of force, easing the strain on wrists and digits.
4. The Virtual Keyboard is not restricted to the QWERTY touch-typing paradigm; the software can be adjusted to fit other touch-typing paradigms as well.
5. No driver software is necessary; it can be used as a plug-and-play device.
6. Long battery life.
DRAWBACKS 
· Virtual keyboard is hard to get used to. Since it involves typing in thin air, it requires a little practice. Only people who are good at typing can use a virtual keyboard efficiently.
· It is costly, ranging from 150 to 200 dollars.
· The room in which the projected keyboard is used should not be very bright so that the keyboard is properly visible.
APPLICATIONS 
· High-tech and industrial sectors.
· Used with smart phones and PDAs for email, word processing and spreadsheet tasks.
· As computer/PDA input.
· Gaming control.
· TV remote control.
CONCLUSION 
Virtual Keyboard uses sensor technology and artificial intelligence to let users work on any surface as if it were a keyboard. Projection key boards or virtual key boards claim to provide the convenience of compactness with the advantages of a full-blown QWERTY keyboard. The company's Virtual Keyboard is designed for anyone who's become frustrated with trying to put information into a handheld but doesn't want to carry a notebook computer around.
Canesta appears to be the most advanced in this class of technology and the only one shipping product. Other products include KITTY, a finger-mounted keyboard for data entry into PDAs, Pocket PCs and wearable computers, and the Senseboard, which senses hand and finger motion with sensors worn on the hands.
Thus virtual keyboards will make typing easier, faster, and almost a pleasure.

Thermomechanical data storage

1. INTRODUCTION
In the 21st century, the nanometer will very likely play a role similar to the one played by the micrometer in the 20th century. The nanometer scale will presumably pervade the field of data storage. In magnetic storage today, there is no clear-cut way to achieve the nanometer scale in all three dimensions. The basis for storage in the 21st century might still be magnetism. Within a few years, however, magnetic storage technology will arrive at a stage of its exciting and successful evolution at which fundamental changes are likely to occur, when current storage technology hits the well-known superparamagnetic limit. Several ideas have been proposed on how to overcome this limit. One such proposal involves the use of patterned magnetic media. Other proposals call for totally different media and techniques, such as local probes or holographic methods. Similarly, consider optical lithography. Although still the predominant technology, it will soon reach its fundamental limits and be replaced by a technology yet unknown. In general, if an existing technology reaches its limits in the course of its evolution and new alternatives are emerging in parallel, two things usually happen: first, the existing and well-established technology will be explored further and everything possible done to push its limits to take maximum advantage of the considerable investments made. Then, when the possibilities for improvements have been exhausted, the technology may still survive for certain niche applications, but the emerging technology will take over, opening up new perspectives and new directions.
Today we are witnessing in many fields the transition from structures of the micrometer scale to those of the nanometer scale, a dimension at which nature has long been building the finest devices with a high degree of local functionality. Many of the technologies we use today are not suitable for the coming nanometer age; some will require minor or major modifications, and others will be partially or entirely replaced. It is certainly difficult to predict which techniques will fall into which category. For key areas in information-technology hardware it is not yet obvious which technology and materials will be used for nanoelectronics and data storage.
In any case, an emerging technology being considered as a serious candidate to replace an existing but limited technology must offer long-term perspectives. For instance, the silicon microelectronics and storage industries are huge and require correspondingly enormous investments, which makes them long-term oriented by nature. The consequence for storage is that any new technique with better areal storage density than today's magnetic recording should have long-term potential for further scaling, desirably down to the nanometer or even atomic scale.
The only available tool known today that is simple and yet provides these very long term perspectives is a nanometer sharp tip. Such tips are now being used in every atomic force microscope (AFM) and scanning tunneling microscope (STM) for imaging and structuring down to the atomic scale. The simple tip is a very reliable tool that concentrates on one functionality: the ultimate local confinement of interaction.
In the early 1990s, Mamin and Rugar at the IBM Almaden Research Center pioneered the possibility of using an AFM tip for readback and writing of topographic features for the purposes of data storage. In one scheme developed by them, reading and writing were demonstrated with a single AFM tip in contact with a rotating polycarbonate substrate. The writing was done thermomechanically via heating of the tip. In this way, storage densities of up to 30 Gb/in² were achieved, representing a significant advance over the densities of that day. Later refinements included increasing readback speeds up to a data rate of 10 Mb/s, and implementation of track servoing.
In making use of single tips in AFM or STM operation for storage, one has to deal with their fundamental limits for high data rates. The mechanical resonant frequencies of the AFM cantilevers limit the data rates of a single cantilever to a few Mb/s for AFM data storage, and the feedback speed and low tunneling currents limit STM-based storage approaches to even lower data rates. Currently a single AFM operates at best on the microsecond time scale, whereas conventional magnetic storage operates at best on the nanosecond time scale, making it clear that AFM data rates have to be improved by at least three orders of magnitude to be competitive with current and future magnetic recording. Later, it was found that by operating AFM tips in parallel, data storage with areal densities far beyond the expected superparamagnetic limit (~100 Gb/in²) and data rates comparable to those of today's magnetic recording can be achieved.
The "Millipede” concept which will be discussed here is a new approach for storing data at high speed and with an ultrahigh density. It is not a modification of an existing storage technology, although the use of magnetic materials as storage medium is not excluded. The ultimate locality is given by a tip, and high data rates are a result of massive parallel operation of such tips. Using this Millipede concept areal densities up to 0.5-1 Tb/in2 can be achieved by the parallel operation of very large 2D (32 x 32) AFM cantilever arrays with integrated tips and write/read storage functionality.
The fabrication and integration of such a large number of mechanical devices (cantilever beams) will lead to what we envision as the VLSI age of micro/ nanomechanics. It is our conviction that VLSI micro/nanomechanics will greatly complement future micro and nanoelectronics (integrated or hybrid) and may generate applications of VLSI-MEMS (VLSI-MicroElectroMechanical Systems) not conceived of today.
2. THERMOMECHANICAL AFM DATA STORAGE
In recent years, AFM thermomechanical recording in polymer storage media has undergone extensive modifications mainly with respect to the integration of sensors and heaters designed to enhance simplicity and to increase data rate and storage density. Using these heater cantilevers, high storage density and data rates have been achieved. Let us now describe the storage operations in detail.
2.1 DATA WRITING
Thermomechanical writing is a combination of applying a local force by the cantilever/tip to the polymer layer, and softening it by local heating. Initially, the heat transfer from the tip to the polymer through the small contact area is very poor and improves as the contact area increases. This means the tip must be heated to a relatively high temperature (about 400 °C) to initiate the softening. Once softening has commenced, the tip is pressed into the polymer, which increases the heat transfer to the polymer, increases the volume of softened polymer, and hence increases the bit size. Our rough estimates indicate that at the beginning of the writing process only about 0.2% of the heating power is used in the very small contact zone (10-40 nm²) to soften the polymer locally, whereas about 80% is lost through the cantilever legs to the chip body and about 20% is radiated from the heater platform through the air gap to the medium/substrate. After softening has started and the contact area has increased, the heating power available for generating the indentations increases by at least ten times to become 2% or more of the total heating power.
With this highly nonlinear heat-transfer mechanism it is very difficult to achieve small tip penetration, and hence small bit sizes, as well as to control and reproduce the thermomechanical writing process. This situation can be improved if the thermal conductivity of the substrate is increased and if the depth of tip penetration is limited. These characteristics can be obtained by using very thin polymer layers deposited on Si substrates, as shown in Figure 1. The hard Si substrate prevents the tip from penetrating farther than the film thickness, and it enables more rapid transport of heat away from the heated region, as Si is a much better conductor of heat than the polymer. By coating Si substrates with a 40-nm film of polymethylmethacrylate (PMMA), bit sizes ranging between 10 and 50 nm are achieved. However, this causes increased tip wear, probably caused by contact between the Si tip and the Si substrate during writing. Therefore a 70-nm layer of cross-linked photoresist (SU-8) was introduced between the Si substrate and the PMMA film to act as a softer penetration stop that avoids tip wear but remains thermally stable.
Using this layered storage medium, data bits 40 nm in diameter have been written, as shown in Fig. 2. These results were obtained using a 1-µm-thick, 70-µm-long, two-legged Si cantilever. The cantilever legs are made highly conducting by high-dose ion implantation, whereas the heater region remains low-doped. Electrical pulses of 2 µs duration were applied to the cantilever with a period of 50 µs. Figure 2a demonstrates that 40-nm bits can be written with 120-nm pitch or very close to each other without merging (Fig. 2b), implying a potential areal bit density of 400 Gb/in². By using a single cantilever, areal densities up to 1 Tb/in² have been achieved, as illustrated in Fig. 2c.
2.2 DATA READING
Imaging and reading are done using a new thermomechanical sensing concept. The heater cantilever originally used only for writing was given the additional function of a thermal readback sensor by exploiting its temperature-dependent resistance. The resistance (R) increases nonlinearly with heating power/temperature from room temperature to a peak value at 500-700 °C. The peak temperature is determined by the doping concentration of the heater platform, which ranges from 1×10¹⁷ to 2×10¹⁸ cm⁻³. Above the peak temperature, the resistance drops as the number of intrinsic carriers increases because of thermal excitation. For sensing, the resistor is operated at about 350 °C, a temperature that is not high enough to soften the polymer, as is the case for writing. The principle of thermal sensing is based on the fact that the thermal conductance between the heater platform and the storage substrate changes according to the distance between them. The medium between a cantilever and the storage substrate (in our case air) transports heat from one side to the other. When the distance between heater and sample is reduced as the tip moves into a bit indentation, the heat transport through air becomes more efficient, and the heater's temperature, and hence its resistance, will decrease. Thus, changes in temperature of the continuously heated resistor are monitored while the cantilever is scanned over data bits, providing a means of detecting the bits. Figure 3 illustrates this concept.

Under typical operating conditions, the sensitivity of thermomechanical sensing is even better than that of piezoresistive strain sensing, which is not surprising because thermal effects in semiconductors are stronger than strain effects. The good ΔR/R sensitivity of about 10⁻⁵ per nanometer is demonstrated by the images of the 40-nm bit indentations in Fig. 2, which have been obtained using the described thermal-sensing technique.
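As a quick back-of-the-envelope check on what that sensitivity means for the read signal, the sketch below multiplies the quoted 10⁻⁵ per nm figure by a few hypothetical indentation depths; the depths and the baseline heater resistance are assumptions.

    # Back-of-the-envelope readback signal estimate from the dR/R sensitivity
    # of about 1e-5 per nanometer quoted above. Depths and the baseline heater
    # resistance are assumptions for illustration.

    SENSITIVITY_PER_NM = 1e-5     # relative resistance change per nm of tip travel
    R_HEATER_OHMS = 5000          # assumed heater resistance at the ~350 C operating point

    for depth_nm in (10, 25, 50):                 # hypothetical indentation depths
        rel = SENSITIVITY_PER_NM * depth_nm
        print(f"depth {depth_nm:>2} nm -> dR/R = {rel:.1e} "
              f"(about {rel * R_HEATER_OHMS:.2f} ohm on a {R_HEATER_OHMS} ohm heater)")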
The data erasing operation will be discussed while explaining the polymer material used as the storage medium.
3. THE MILLIPEDE CONCEPT
The 2D AFM cantilever array storage technique is called "Millipede". Millipede uses thousands of nano-sharp tips to punch indentations representing individual bits into a thin plastic film.
Millipede comprises two postage-stamp-sized chips, a stationary chip and a movable chip, as shown in Figure 5. The stationary chip is an array of read/write probes. From above, each probe looks like a rounded "v", is attached by its arms, and has a point at the end like a phonograph needle. The chip also contains read and write circuits for the probe tips, position sensors, permanent magnets, and coils. The sensors, magnets, and coils are part of the electromagnetic actuator circuits that move the second chip in the x, y, z and tilt directions.
The movable chip is the storage medium. The electromagnetic actuators move the storage medium in the x and y directions over the stationary chip, which contains the cantilevers for write/read operations. The storage area looks like a 32x32 checkerboard in which each square contains a million bits. Each probe reads and writes only in its own square.
Millipede is based on a mechanical parallel x/y scanning of either the entire cantilever array chip or the storage medium. In addition, a feedback-controlled z-approaching and leveling scheme brings the entire cantilever array chip into contact with the storage medium. This tip-medium contact is maintained and controlled while x/y scanning is performed for write/read. It is important to note that the Millipede approach is not based on individual z-feedback for each cantilever; rather, it uses a feedback control for the entire chip, which greatly simplifies the system. However, this requires very good control and uniformity of tip height and cantilever bending. Chip approach/leveling makes use of additionally integrated approaching cantilever sensors in the corners of the array chip to control the approach of the chip to the storage medium. Signals from these sensors provide feedback signals to adjust the z-actuators until contact with the medium is established. The system operates similarly to an antivibration table. Feedback loops keep the chip leveled and in contact with the surface while x/y scanning is performed for write/read operations. This basic concept of entire-chip approach/leveling has been tested and demonstrated for the first time by parallel imaging with a 5x5 array chip. These parallel imaging results have shown that all 25 cantilever tips approached the substrate within less than 1 µm of z-actuation. This promising result convinced us that a tip-apex height control of less than 500 nm across the chip is feasible. This stringent requirement for tip-apex uniformity over the entire chip is determined by the uniform force required to minimize/eliminate tip and medium wear due to large force variations resulting from large tip-height nonuniformities.
During the storage operation, the chip is raster-scanned over an area called the storage field by a magnetic x/y scanner. The scanning distance is equivalent to the cantilever x/y pitch, which is currently 92 µm. Each cantilever/tip of the array writes and reads data only in its own storage field. This eliminates the need for lateral positioning adjustments of the tip to offset lateral position tolerances in tip fabrication. Consequently, a 32x32 array chip will generate 32x32 (1024) storage fields on an area of less than 3x3 mm². Assuming an areal density of 500 Gb/in², one storage field of 92x92 µm² has a capacity of 0.875 MB, and the entire 32x32 array with 1024 storage fields has a capacity of about 0.9 GB on 3x3 mm². The storage capacity of the system scales with the areal density, the cantilever pitch (storage-field size), and the number of cantilevers in the array. Although not yet investigated in detail, lateral tracking will also be performed for the entire chip with integrated tracking sensors at the chip periphery. This assumes and requires very good temperature control of the array chip and the medium substrate between write and read cycles. For this reason the array chip and medium substrate should be held to within about 1 °C of the operating temperature for bit sizes of 30 to 40 nm and array chip sizes of a few millimeters. This will be achieved by using the same material (silicon) for both the array chip and the medium substrate, in conjunction with four integrated heat sensors that control four heaters on the chip to maintain a constant array-chip temperature during operation. True parallel operation of large 2D arrays results in very large chip sizes because of the space required for the individual write/read wiring to each cantilever and the many I/O pads. The row/column time-multiplexing addressing scheme implemented successfully in every DRAM is a very elegant solution to this issue. In the case of Millipede, the time-multiplexed addressing scheme is used to address the array row by row with full parallel write/read operation within one row.
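The quoted capacity figures can be checked with a few lines of arithmetic; the sketch below only re-evaluates the numbers in the text, with differences due to rounding.

    # Sanity check of the capacity figures quoted above: 500 Gb/in^2 over a
    # 92 um x 92 um storage field, and 32 x 32 = 1024 such fields per chip.

    INCH_IN_UM = 25400.0
    bits_per_um2 = 500e9 / INCH_IN_UM**2        # 500 Gb/in^2 expressed per um^2
    bits_per_field = bits_per_um2 * 92 * 92     # one 92 um x 92 um storage field
    fields = 32 * 32

    print(f"per field: {bits_per_field / 8 / 1e6:.2f} MB")            # ~0.8-0.9 MB
    print(f"per chip : {bits_per_field * fields / 8 / 1e9:.2f} GB")   # ~0.84 GB, roughly the 0.9 GB quoted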
The current Millipede storage approach is based on a new thermomechanical write/read process in nanometer-thick polymer films, but thermomechanical writing in polycarbonate films and optical readback was first investigated and demonstrated by Mamin and Rugar. Although the storage density of 30 Gb/in² obtained originally was not overwhelming, the results encouraged us to use polymer films as well to achieve density improvements.
4. CANTILEVER DESIGN AND FABRICATION
The cantilever chip (see Fig. 6) consists of a chip body with large metal pads for electrical contact, thick and mechanically stiff cantilever legs, and the thin cantilever itself, which corresponds to a cantilever as found in a Millipede array. The main part that influences power consumption and data rate of the cantilever is the heater / tip area. For this study, the heater dimensions as well as those of the two thermal constrictions on both heater sides will be varied. The thick legs allow some clearance to the chip body to facilitate the approaching procedure as well as a well-defined cantilever anchor position. Their stiffness is 30 times that of the cantilever, thus they are considered a perfect anchor.
The entire cantilever structure consists of monocrystalline silicon, ensuring thermal and mechanical stability, which is crucial for a cantilever used for thermomechanical writing / reading.
The border between the two types of lithography lies between the thick legs and the thin cantilever, as shown in Fig. 6a: the thick lever parts, the metal pads and the chip delineation were made using optical lithography, whereas the tip, the thin cantilever and the heater were made using e-beam lithography. A scanning electron microscope (SEM) image of a finished chip is shown in Fig. 6b. For the alignment between heater and tip, a strategy of local alignment at each cantilever cell has been used. Note that the small size of the lever structure allows its fabrication in one e-beam writing field (200 µm), eliminating the stitching issue. Heaters with dimensions ranging from 200 nm to 3 µm with different constriction lengths have been designed. Because the structures made with e-beam lithography represent only a small fraction of the wafer area, negative-tone ma-N 2410 resist from Microresist Technology has been used. Achievable resist thicknesses range from 0.8 to 1.5 µm, which is suitable for pattern transfer with dry etching as well as for masking during dopant implantation, and still allows fine structuring. The e-beam lithography was carried out at 10 kV.
The starting substrate is a 4-inch silicon-on-insulator (SOI) wafer with a 1.5-µm-thick epitaxially grown n-doped silicon membrane and a 0.4-µm-thick buried oxide (BOX). The membrane is doped with phosphorus at a concentration of 5×10¹⁷ atoms/cm³, which is the doping required for the heater platform. The first step consists of thermally growing a 700-nm layer of oxide (Fig. 7a), which is later used as mask material when etching the thick legs and the tip. Then an optical lithography step is performed, and the pattern is transferred into half of the oxide thickness by CHF₃-O₂-based reactive ion etching (RIE) to delineate the thick lever part (Fig. 7b) and the alignment marks needed for the optical and e-beam lithography steps. Next, the first e-beam lithography is performed (Fig. 7c) to delineate the 2-µm-diameter tip mask, and the mask is transferred into the remaining half of the oxide thickness using the same RIE process as before. Note that this also thins the oxide film on the thick lever part, and that, at this stage, the masks for both the tip and the thick legs consist of a 350-nm-thick oxide film. Alignment is done by first using the global and then the local alignment marks at each cantilever writing field with structures made in step 7b. The silicon tip is etched by an isotropic SF₆-Ar RIE process (Fig. 7d). An oxidation-sharpening technique is used to finalize the tip shape (Fig. 7e). With this technique, tip apex radii well below 20 nm can be achieved. Then the lever part is patterned by e-beam lithography and transferred into the remaining silicon membrane using an anisotropic SF₆-C₄F₈-based RIE process (Fig. 7f). A 50-nm-thick capping oxide layer is thermally grown, and the implantation mask is patterned using e-beam lithography. The resist mask protects the heater zone during the 80-keV, 1×10¹⁶ ions/cm² phosphorus implantation (Fig. 7g). Dopant activation is performed with a 1150 °C, 20 s heating pulse, using a rapid thermal annealing system. Such a high-temperature, short-time pulse provides good dopant activation without significantly broadening the implantation zone by lateral dopant diffusion, which is crucial to prevent heater-length shortening. Once the thin capping oxide has been wet-etched, the metal pads are structured using a lift-off technique (Fig. 7h). Prior to etching the back side of the chip body, the front side is protected with resist. A deep reactive-ion-etching (DRIE) system is used to etch through the wafer thickness with the BOX as etch stop. Next the BOX is wet-etched (Fig. 7i). Finally the resist protecting the front side is removed using a solvent-based stripper (Fig. 7j). Fabricated cantilevers are 50 µm long and 100 nm thick, corresponding to a resonant frequency of 86 kHz and a spring constant of 10 mN/m. Fig. 8 shows close-ups of the heater/tip area of different designs as well as a high-magnification view of a tip. The heater/constriction width ranges from 180 nm to 3 µm, and the tip-heater alignment accuracy is about 50 nm.
5. ARRAY DESIGN, TECHNOLOGY, FABRICATION.
After the cantilevers have been fabricated, they have to be arranged in an array for parallel operation. This process is explained here: cantilevers are released from the crystalline Si substrate by surface micromachining, using either plasma or wet chemical etching to form a cavity underneath the cantilever. Compared to a bulk-micromachined through-wafer cantilever-release process, as done for our 5x5 array, the surface-micromachining technique allows an even higher array density and yields better mechanical chip stability and heat sinking. As the Millipede tracks the entire array without individual lateral cantilever positioning, thermal expansion of the array chip has to be small or well controlled. Because of thermal chip expansion, the lateral tip position must be controlled with better precision than the bit size, which requires array dimensions as small as possible and a well-controlled chip temperature. For a 3x3 mm² silicon array area and 10-nm tip-position accuracy, the chip temperature has to be controlled to about 1 °C. This is ensured by four temperature sensors in the corners of the array and heater elements on each side of the array. Thermal expansion considerations were a strong argument for the 2D array arrangement instead of 1D, which would have made the chip 32 times longer for the same number of cantilevers.
The 32x32 array section of the chip has the independent approach/heat sensors in the four corners and the heaters on each side of the array; zoomed scanning electron micrographs (SEMs) show an array section, a single cantilever, and a tip apex. The tip height is 1.7 µm and the apex radius is smaller than 20 nm, which is achieved by oxidation sharpening. The cantilevers are connected to the column and row address lines using integrated Schottky diodes in series with the cantilevers. The diode is operated in reverse bias (high resistance) if the cantilever is not addressed, thereby greatly reducing crosstalk between cantilevers.
6. ARRAY CHARACTERIZATION
The array's independent cantilevers, which are located in the four corners of the array and used for approaching and leveling of chip and storage medium, are used to initially characterize the interconnected array cantilevers. Additional cantilever test structures are distributed over the wafer; they are equivalent to, but independent of, the array cantilevers.
The cantilevers within the array are electrically isolated from one another by integrated Schottky diodes. As every parasitic path in the array to the addressed cantilever contains a reverse-biased diode, the crosstalk current is drastically reduced. Thus, the current response of an addressed cantilever in an array is nearly independent of the size of the array. Hence, the power applied to address a cantilever is not shunted by other cantilevers, and the reading sensitivity is not degraded, not even for very large arrays (32x32). The introduction of electrical isolation using integrated Schottky diodes turned out to be crucial for the successful operation of interconnected cantilever arrays with a simple time-multiplexed addressing scheme.
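The DRAM-like addressing that this isolation enables can be pictured with the tiny sketch below; select_row and read_or_write are placeholders for the real drive electronics, not part of the actual Millipede design.

    # Sketch of the row/column time-multiplexed addressing described above:
    # one row of the 32 x 32 array is selected at a time (only that row's
    # diodes conduct) and all 32 cantilevers in the row are operated in
    # parallel. The callbacks are placeholders, not real hardware drivers.

    ROWS = COLS = 32

    def scan_array(select_row, read_or_write):
        for row in range(ROWS):
            select_row(row)                # forward-bias this row; all others stay isolated
            yield row, [read_or_write(row, col) for col in range(COLS)]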
The tip-apex height uniformity within an array is very important, because it determines the force of each cantilever while in contact with the medium and hence influences write/read performance as well as medium and tip wear. Wear investigations suggest that a tip apex height uniformity across the chip of less than 500 nm is required with the exact number depending on the spring constant of the cantilever. In the case of the Millipede, the tip-apex height is determined by the tip height and the cantilever bending.
7. THE POLYMER MEDIUM
The polymer storage medium plays a crucial role in Millipede-like thermomechanical storage systems. The thin-film sandwich structure with PMMA as the active layer (see Fig. 1) is not the only possible choice, considering the almost unlimited range of polymer materials available. The ideal medium should be easily deformable for bit writing, yet written bits should be stable against tip wear and thermal degradation. Finally, one would also like to be able to repeatedly erase and rewrite bits. In order to address all important aspects scientifically, some understanding of the basic physical mechanism of thermomechanical bit writing and erasing is required.
In a gedanken experiment we visualize bit writing as the motion of a rigid body, the tip, in a viscous medium, the polymer melt. For the time being, the polymer, i.e. PMMA, is assumed to behave like a simple liquid after it has been heated above the glass-transition temperature in a small volume around the tip. As viscous drag forces must not exceed the loading force applied to the tip during indentation, we can estimate an upper bound for the viscosity η of the polymer melt using Stokes' equation, F = 6πηRv (1).
In actual Millipede bit writing, the tip loading force is on the order of F = 50 nN, and the radius of curvature at the apex of the tip is typically R = 20 nm. Assuming an indentation depth of, say, h = 50 nm and a heat pulse of τ = 10 µs duration, the mean velocity during indentation is on the order of v = h/τ = 5 mm s⁻¹ (note that thermal relaxation times are of the order of microseconds, and hence the heating time can be equated to the time it takes to form an indentation). With these parameters we obtain η < 25 Pa s, whereas typical values for the shear viscosity of PMMA are at least seven orders of magnitude larger, even at temperatures well above the glass-transition point.
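The bound can be checked with the numbers just given; the short calculation below only re-evaluates Eq. (1) with those parameters and introduces nothing new.

    # Arithmetic check of the viscosity bound from Stokes' equation
    # F = 6*pi*eta*R*v, using the parameters given in the text.
    import math

    F = 50e-9      # tip loading force, N
    R = 20e-9      # tip apex radius of curvature, m
    h = 50e-9      # assumed indentation depth, m
    tau = 10e-6    # heat-pulse duration, s

    v = h / tau                              # mean indentation velocity, ~5 mm/s
    eta_max = F / (6 * math.pi * R * v)      # upper bound on the melt viscosity
    print(f"v = {v * 1e3:.0f} mm/s, eta_max = {eta_max:.0f} Pa s")   # on the order of 25 Pa s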
This apparent contradiction can be resolved by considering that polymer properties are strongly dependent on the time scale of observation. At time scales of the order of 1 ms and below, entanglement motion is in effect frozen in and the PMMA molecules form a relatively static network. Deformation of the PMMA proceeds by means of uncorrelated deformations of short molecular segments rather than by a flow mechanism involving the coordinated motion of entire molecular chains. The price one has to pay is that elastic stress builds up in the molecular network as a result of the deformation (the polymer is in a so-called rubbery state). On the other hand, the corresponding relaxation times are orders of magnitude smaller, giving rise to an effective viscosity at Millipede time scales of the order of 10 Pa s, as required by our simple argument [see Eq. (1)]. Note that, unlike the normal viscosity, this high-frequency viscosity is basically independent of the detailed molecular structure of the PMMA, i.e. chain length, tacticity, polydispersity, etc. In fact, we can even expect that similar high-frequency viscous properties are found in a large class of other polymer materials, which makes thermomechanical writing a rather robust process in terms of material selection.
We have argued above that elastic stress builds up in the polymer film during indentation, creating a corresponding reaction force on the tip of the order of Fr ≈ 2GRh, where G denotes the elastic shear modulus of the polymer and h the indentation depth. An important property for Millipede operation is that the shear modulus drops by orders of magnitude in the glass-transition regime, i.e. for PMMA from ~1 GPa below Tg to ~0.5–1 MPa above Tg. (The bulk modulus, on the other hand, retains its low-temperature value of several GPa; hence, in this elastic regime, formation of an indentation above Tg constitutes a volume-preserving deformation.) For proper bit writing, the tip load must be balanced between the extremes of the elastic reaction force Fr for temperatures below and above Tg, i.e. for PMMA F « 2.5 μN to prevent indentation of the polymer in the cold state and F » 2.5 nN to overcome the elastic reaction force in the hot state. Unlike the deformation of a simple liquid, the indentation represents a metastable state of the entire deformed volume, which is under elastic tension. Recovery of the unstressed initial state is prevented by rapid quenching of the indentation below the glass temperature with the tip in place. As a result, the deformation is frozen in, because below Tg motion of molecular-chain segments is effectively inhibited.
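As a consistency check on this load window, the sketch below evaluates the reaction-force scaling Fr ≈ 2GRh used above with representative shear moduli for PMMA below and above Tg (the exact moduli are assumptions chosen within the ranges quoted in the text):

```python
# Elastic reaction force Fr ~ 2*G*R*h in the cold and hot states (illustrative moduli)
R = 20e-9      # tip radius, m
h = 50e-9      # indentation depth, m
G_cold = 1e9   # shear modulus of PMMA below Tg, ~1 GPa
G_hot = 1e6    # shear modulus of PMMA above Tg, ~1 MPa

F_cold = 2 * G_cold * R * h   # load must stay well below this to avoid writing when cold
F_hot = 2 * G_hot * R * h     # load must exceed this to indent the hot polymer

print(f"cold-state reaction force ~ {F_cold * 1e6:.1f} uN")   # of the order of the ~2.5 uN quoted
print(f"hot-state reaction force  ~ {F_hot * 1e9:.1f} nN")    # of the order of the ~2.5 nN quoted
```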
This mechanism also allows local erasing of bits: it suffices to locally heat the deformed volume above Tg, whereupon the indented volume reverts to its unstressed, flat state, driven by the internal elastic stress. In addition, erasing is promoted by surface-tension forces, which give rise to a restoring surface pressure on the order of 2γh/R² ≈ 25 MPa, where γ ≈ 0.02 N m⁻¹ denotes the polymer–air surface tension.
One question immediately arises from these speculations: if the polymer behavior can be determined from the macroscopic characteristics of the shear modulus as a function of time, temperature and pressure, can the time–temperature superposition principle also be applied in our case? The time–temperature superposition principle is a very successful concept of polymer physics. It basically says that the time scale and the temperature are interdependent variables that jointly determine polymer behavior such as the shear modulus, and that a simple transformation can be used to translate time-dependent into temperature-dependent data and vice versa. It is not clear, however, whether this principle can be applied in our case, i.e. under such extreme conditions (high pressures, short time scales and nanometer-sized volumes, which are clearly below the radius of gyration of individual polymer molecules).
One of the most striking conclusions of our model of the bit-writing process is that it should in principle work for most polymer materials. The general behavior of the mechanical properties as a function of temperature and frequency is similar for all polymers. The glass-transition temperature Tg would then be one of the main parameters determining the threshold writing temperature.
A verification of this was found experimentally by comparing various polymer films. The samples were prepared in the same way as the PMMA samples discussed earlier, by spin-casting thin films (10–30 nm) onto a silicon wafer with a photoresist buffer. Threshold measurements were then made by applying heat pulses of increasing current (or temperature) to the tip while the load and the heating time were held constant (load about 10 nN, heating time about 10 μs). Examples of such measurements are shown in Fig. 12, where the increasing size and depth of the bits can be seen for different heater temperatures. A threshold can be defined on the basis of such data and compared with the glass-transition temperature of these materials. The results show a clear correlation between the threshold heater temperature and the glass-transition temperature.
It is worth looking at the detailed shapes of the written bits. The polymer material around an indentation appears piled up, as can be seen in Fig. 14. This is not only material that was pushed aside during indentation as a result of volume conservation. Rather, the flash heating by the tip and the subsequent rapid cooling result in an increase of the specific volume of the polymer. The phenomenon that the specific volume of a polymer can be increased by rapidly cooling a sample through the glass transition is well known. Our system allows a cooling time of the order of microseconds, which is much faster than the fastest rates that can be achieved with standard polymer-analysis tools. However, a quantitative measurement of the change in specific volume cannot easily be made in our type of experiment. On the other hand, the pile-up effect serves as a convenient threshold thermometer: the outer perimeter of the donut-shaped pile-up surrounding an indentation corresponds to the Tg isotherm, and the temperature in the enclosed area has certainly reached values larger than Tg during the indentation process. Based on our viscoelastic model, one would thus conclude that previously written bits that overlap with the piled-up region of a subsequently written bit should be erased.
With our simple viscoelastic model of bit writing we are able to formulate a set of requirements that potential candidate materials for Millipede data storage have to fulfill. First, the material should ideally exhibit a well-defined glass-transition point with a large drop of the shear modulus at Tg. Second, a rather high value of Tg, on the order of 150 °C, is preferred to facilitate thermal read-back of the data without destroying the information. We have investigated a number of materials to explore the Tg parameter space. The fact that all polymer types tested are suitable for writing small bits gives us the freedom to choose the polymer type that best meets the technical requirements for a device, such as lifetime of bits, polymer endurance of the read and write process, power consumption, etc. These are fields of ongoing research.
The pile-up effect described above also explains how data bits can be erased.
8. DATA ERASING
The pile-up phenomenon turns out to be particularly beneficial for data-storage applications. The following example demonstrates the effect. If we look at the sequence of images in Fig. 15, taken on a standard PMMA sample, we find that the piled-up regions can overlap each other without disturbing the indentations. If the piled-up region of an individual bit-writing event, however, extends over the indented area of a previously written bit, the depth of the corresponding indentation decreases markedly.
This can be used for erasing written bits. However, if the pitch between two successive bits is decreased even further, this erasing process no longer works; instead, a broader indentation is formed. Hence, to exclude mutual interference, the minimum pitch between successive bits must be larger than the radius of the piled-up area around an indentation.
         Fig. 15: Indentations in a PMMA film written at several distances. The depth of the indentations is ~15 nm, about the thickness of the PMMA layer. The indentations on the left-hand side were written first; a second series of indentations was then made at decreasing distance to the first series, going from (a) to (e).
In the example shown in Fig. 15 the temperature was chosen so high that the ring around the indentations was very large, whereas the depth of the bits was limited by the stop layer underneath the PMMA material. Clearly, the temperature here was too high to form small bits, the minimum pitch being around 250 nm. However, by carefully optimizing all parameters it is possible to achieve areal densities of up to 1 Tb/in², as has been demonstrated.
The new erasing scheme based on this volume effect switches from writing to erasing merely by decreasing the pitch of the written indentations. This can be done in a very controlled fashion, as shown in Fig. 16, where individual lines or predefined sub-areas are erased. Hence, this new erasing scheme can be made to work in a way that is controlled on the scale of individual bits. Compared with earlier global erasing schemes, this simplifies erasing significantly.
9. ADVANTAGES
The advantages of this technology are:
1. Ultrahigh Density.
2. Terabit Capacity.
3. Small Form Factor.
4. High Data Rates.
5. Not affected by electric or magnetic fields.
10. CONCLUSION
The need for more storage capacity increases day by day. Six or seven years ago, the maximum hard-disk capacity available was about 2 GB, but today hard disks of 80 GB and 100 GB are very common. The external size of the hard disk has remained almost the same over those years; it is the storage density that has increased. At some point, the current method of magnetically storing data may reach its limit of maximum achievable density. Beyond this superparamagnetic limit, the capacity of magnetic storage cannot be increased, so there is a strong need for a new storage technique. The thermomechanical storage concept described above may be considered a good alternative. The Millipede concept, which operates thousands of cantilevers for write/read operations, can provide ultra-high storage capacity at very high data rates. The Millipede project could bring tremendous data capacity to mobile devices such as personal digital assistants, cellular phones, digital cameras and multifunctional watches. In addition, the use of this concept may be explored in a variety of other applications, such as large-area microscopic imaging, nanoscale lithography or atomic and molecular manipulation. Research is ongoing to find new storage media and to construct yet smaller cantilever tips, so that the storage capacity can be increased further. In the future we can expect a storage device the size of a button with a storage capacity of trillions of bits.

Sunday, 23 January 2011

Single Photon Emission Computed Tomography

1. INTRODUCTION 
          Emission computed tomography (ECT) is a technique whereby multiple cross-sectional images of tissue function can be produced, thus removing the effect of overlying and underlying activity. The technique of ECT is generally considered as two separate modalities. Single Photon Emission Computed Tomography involves the use of a single gamma ray emitted per nuclear disintegration. Positron Emission Tomography makes use of radioisotopes such as gallium-68, in which two gamma rays, each of 511 keV, are emitted simultaneously when a positron from a nuclear disintegration annihilates in tissue. 
          SPECT, the acronym for Single Photon Emission Computed Tomography, is a nuclear medicine technique that uses radiopharmaceuticals, a rotating camera and a computer to produce images which allow us to visualize functional information about a patient’s specific organ or body system. SPECT images are functional in nature rather than purely anatomical, as is the case for ultrasound, CT and MRI. SPECT, like PET, acquires information on the concentration of radionuclides administered to the patient’s body.         
          SPECT dates from the early 1960s, when the idea of emission transverse-section tomography was introduced by D. E. Kuhl and R. Q. Edwards, prior to PET, X-ray CT or MRI. The first commercial single-photon ECT (SPECT) imaging device was developed by Edwards and Kuhl, who produced tomographic images from emission data in 1963. Many research systems which became clinical standards were also developed in the 1980s. 

2. Single photon emission computed tomography (SPECT) 
What is SPECT?
          SPECT is short for single photon emission computed tomography. As its name suggests (single photon emission) gamma rays are the sources of the information rather than X-ray emission in the conventional CT scan. 
Why SPECT?
          Like X-ray CT and MRI, SPECT allows us to visualize information about a patient’s specific organ or body system; in the case of SPECT, that information is functional. 
How does SPECT provide functional information?
          Internal radiation is administered by means of a pharmaceutical which is labeled with a radioactive isotope. This radiolabeled pharmaceutical decays, resulting in the emission of gamma rays. These gamma rays give us a picture of what is happening inside the patient’s body. 
But how do these gamma rays allow us to see inside?
          By using the most essential tool in nuclear medicine: the gamma camera. The gamma camera can be used in planar imaging to acquire a 2-D image or in SPECT imaging to acquire a 3-D image. 
How are these Gamma rays collected?
          The gamma camera collects the gamma rays emitted from the patient, enabling us to reconstruct a picture of where the gamma rays originated. From this we can see how a patient’s organ or system is functioning.
3. THEORY AND INSTRUMENTATION 
          Single-photon emission computed tomography, or what the medical world refers to as SPECT, is a technology used in nuclear medicine in which the patient is injected with a radiopharmaceutical that emits gamma rays. We seek the position and concentration of the radionuclide distribution by rotating a photon-detector array around the body, acquiring data from multiple angles. The radiopharmaceutical may be delivered by IV catheter, inhaled aerosol, etc. The radioactivity is collected by an instrument called a gamma camera. Images are formed from the 3-D distribution of the radiopharmaceutical within the body. 
          Because the emission sources are inside the body, this task is far more difficult than for X-ray CT, where the source position and strength are known at all times. 
          In X-ray CT, what is measured is the attenuation of a known external transmission source; in SPECT no such external source exists. To compensate for the attenuation experienced by emission photons from the injected tracers in the body, contemporary SPECT machines use mathematical reconstruction algorithms to increase resolution. 
          The gamma camera is made up of two or three massive camera heads facing each other which rotate around a central axis, each head thus moving through 180 or 120 degrees respectively. Each camera head is lead-encased and weighs about 500 pounds. The camera has three basic layers: the collimator (which only allows gamma rays that are perpendicular to the plane of the camera to enter), the crystal and the detectors. Because only a single photon is emitted from the radionuclides used for SPECT, a special lens known as a collimator is used to acquire the image from multiple views around the body. The collimation of the rays facilitates the reconstruction, since we deal only with data that come in perpendicular to the camera face; at each projection angle, the data are back-projected along only one direction. 
          When the gamma camera rotates around the supine body, it stops at interval angles to collect data. Since it has two or three heads, it needs to rotate only 180 or 120 degrees to collect data around the entire body. The collected data are planar: each camera head collects a matrix of values corresponding to the number of gamma counts detected along each direction at that one angle.
           The images can be reprojected into a three-dimensional data set that can be viewed in a dynamic rotating format on computer monitors, facilitating the demonstration of pertinent findings to the referring physicians.
4. THE GAMMA CAMERA 
          Once a radiopharmaceutical has been administered, it is necessary to detect the gamma-ray emissions in order to obtain the functional information. The instrument used in nuclear medicine for the detection of gamma rays is known as the gamma camera.
The components making up the gamma camera are
1.  Camera Collimator
2.  Scintillation Detector
3.  Photomultiplier Tube
4.  Position Circuitry
5.  Data Analysis Computer 
4.1       Camera Collimator                          
          The first object that an emitted gamma photon encounters after exiting the body is the collimator. The collimator is a pattern of holes through gamma-ray-absorbing material, usually lead or tungsten, that allows the projection of the gamma ray onto the detector crystal. The collimator achieves this by allowing only those gamma rays traveling along certain directions to reach the detector; this ensures that the position on the detector accurately depicts the originating location of the gamma ray. 
4.2     Scintillation Detector 
          In order to detect the gamma photon we use scintillation detectors. A thallium-activated sodium iodide [NaI(Tl)] detector crystal is generally used in gamma cameras because of this crystal’s optimal detection efficiency for the gamma-ray energies of the radionuclide emissions common in nuclear medicine. A detector crystal may be circular or rectangular; it is typically 3/8” thick and has dimensions of 30–50 cm. A gamma-ray photon interacts with the detector by means of the photoelectric effect or Compton scattering with the iodide ions of the crystal. This interaction causes the release of electrons, which in turn interact with the crystal lattice to produce light, in a process known as scintillation. Thus, a scintillation crystal is a material that has the ability to convert the energy deposited by radiation into pulses of light. 
The basic scintillation system consists of:
1.     Scintillator
2.     Light Guide
3.     Photo Detector
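The energy-to-light conversion described above can be made concrete with a rough estimate. The sketch below assumes a light yield of about 38 scintillation photons per keV for NaI(Tl), a typical literature value that is not given in the text:

```python
# Rough scintillation-light estimate for one Tc-99m gamma photon (illustrative values)
gamma_energy_keV = 140   # energy of the Tc-99m photon used in SPECT
light_yield = 38         # typical NaI(Tl) light yield, photons per keV (literature value)

n_scintillation_photons = gamma_energy_keV * light_yield
print(f"one 140 keV gamma ray -> ~{n_scintillation_photons} scintillation photons")  # ~5300
```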

4.3           Photomultiplier Tube 
          Only a small amount of light is given off from the scintillation detector; therefore, photomultiplier tubes are attached to the back of the crystal. At the face of a photomultiplier tube (PMT) is a photocathode which, when stimulated by light photons, ejects electrons. The PMT is an instrument that detects and amplifies the electrons that are produced by the photocathode. For every 7 to 10 photons incident on the photocathode, only one electron is generated. This electron from the cathode is focused onto a dynode, which absorbs it and re-emits many more electrons. These new electrons are focused onto the next dynode, and the process is repeated over and over in an array of dynodes. At the base of the photomultiplier tube is an anode which attracts the final large cluster of electrons and converts them into an electrical pulse. 
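The overall gain of such a dynode chain is simply the per-dynode multiplication factor raised to the number of dynodes. The sketch below works this out; the dynode count and the multiplication factor per dynode are typical illustrative values, not figures from the text:

```python
# Rough PMT gain estimate (typical, illustrative values)
photons_per_photoelectron = 8   # the text quotes 7-10 light photons per emitted electron
n_dynodes = 10                  # typical number of dynodes in a PMT
delta = 5                       # typical electron multiplication per dynode

gain = delta ** n_dynodes       # total multiplication of the dynode chain
print(f"one photoelectron -> ~{gain:.1e} electrons collected at the anode")   # ~1e7
```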
          Each gamma camera has several photomultiplier tubes arranged in a geometrical array; a typical camera has 37 to 91 PMTs.
4.4           Position Circuitry            
          The position logic circuits immediately follow the photomultiplier-tube array; they receive the electrical impulses from the tubes through the summing matrix circuits (SMC). This allows the position circuits to determine where each scintillation event occurred in the detector crystal. 
4.5           Data Analysis Computer         
          Finally, in order to deal with the incoming projection data and process it into a readable image of the 3-D spatial distribution of activity within the patient, a processing computer is used. The computer may use various methods to reconstruct the image, such as filtered back projection or iterative reconstruction.    
5. SPECT IMAGE ACQUISITION AND PROCESSING 
          Single photon emission computed tomography has as its goal the determination of the regional concentration of a radionuclide within a specific organ as a function of time. The introduction of the radioisotope Tc-99m by Harper, which emits a single gamma-ray photon of energy 140 keV and has a half-life of about six hours, signaled a great step forward for SPECT, since this photon is easily detected by gamma cameras. However, a critical engineering problem involving the collimation of these gamma rays prior to entering the gamma camera had to be solved before SPECT could establish itself as a viable imaging modality. 
          Single photon emission computed tomography requires collimation of the gamma rays emitted by the radiopharmaceutical distribution within the body. Collimators for SPECT imaging are typically made of lead; they are about 4 to 5 cm thick and roughly 20 by 40 cm on a side. The collimators contain thousands of square, round or hexagonal parallel channels through which gamma rays are allowed to pass. Typical low-energy collimators for SPECT weigh about 50 lbs, but high-energy models can weigh over 200 lbs. Although quite heavy, these collimators are placed directly on top of the very delicate single NaI crystal contained within every gamma camera. A gamma camera equipped with such a collimator is called an Anger camera, after its inventor. Gamma rays traveling along a path that coincides with one of the collimator channels will pass through the collimator unabsorbed and interact with the NaI crystal, creating light. Behind the crystal, a grid of photomultiplier tubes collects the light for processing. It is from the analysis of these light signals that SPECT images are produced. Depending on the size of the Anger camera, whole organs such as the heart and liver can be imaged. Large Anger cameras are capable of imaging the entire body and are used, for example, for bone scans. 
          For the gamma rays emitted by radiopharmaceuticals typical of SPECT, there are two important interactions with matter. The first involves scattering of the gamma ray off electrons in the atoms and molecules (including DNA) within the body. This scattering process is called Compton scattering. Some Compton-scattered photons are deflected outside the Anger camera’s field of view and are lost to the detection process. The second interaction consists of a photon being absorbed by an atom in the body, with an associated jump in energy level (or release) of an electron in the same atom. This process is called the photoelectric effect and was explained for the interaction of photons with metals by Einstein, who received the Nobel Prize for this work. Both processes result in a loss or degradation of information about the distribution of the radiopharmaceutical within the body. The second process falls under the general medical-imaging concept of attenuation, which is an active research area. 
          Attenuation results in a reduction in the number of photons reaching the Anger camera. The amount of attenuation experienced by any one photon depends on its path through the body and on its energy. Photons which experience Compton scattering lose energy to the scatterer and are therefore more likely to be scattered additional times and eventually absorbed by the body, or scattered at a wide angle outside the camera’s field of view. In either case, the photon (and the information it carries about the distribution of the radiopharmaceutical in the body) is not detected and is thus considered lost to attenuation. At 140 keV, Compton scattering is the most probable interaction of a gamma-ray photon with water or body tissue. A much smaller percentage of photons is lost through the photoelectric interaction. It is possible for a Compton-scattered photon to be scattered into the Anger camera’s field of view. Such photons, however, do not carry directly useful information about the distribution of the radiopharmaceutical within the body, since they do not indicate from where within the body they originated. As a result, the detection of scattered photons in SPECT leads to a loss of image contrast and a technically inaccurate image. 
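To get a feel for how strong attenuation is at 140 keV, the sketch below applies the standard exponential attenuation law I = I0·exp(-μx); the linear attenuation coefficient of roughly 0.15 cm⁻¹ for water/soft tissue at this energy is a typical textbook figure, not a number from the text:

```python
import math

mu = 0.15   # approximate linear attenuation coefficient of water at 140 keV, 1/cm
for depth_cm in (1, 5, 10, 20):
    fraction = math.exp(-mu * depth_cm)   # fraction of photons that emerge unattenuated
    print(f"{depth_cm:2d} cm of tissue: {fraction * 100:5.1f}% of photons escape unscattered")
```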
          Acquiring and processing a SPECT image, when done correctly, involves compensating for and adjusting many physical and system parameters. These include: attenuation, scatter, uniformity and linearity of detector response, geometric spatial resolution and sensitivity of the collimator, intrinsic spatial resolution and sensitivity of the Anger camera, energy resolution of the electronics, system sensitivity, image truncation, mechanical shift of the camera or gantry, electronic shift, axis-of-rotation calibration, image noise, image slice thickness, reconstruction matrix size and filter, angular and linear sampling intervals, statistical variations in detected counts, changes in the Anger camera’s field of view with distance from the source, and system dead time. Calibrating and monitoring many of these parameters falls under the general heading of quality control and is usually performed by a certified nuclear medicine technician or a medical physicist. Among this list, collimation has the greatest effect on SPECT system spatial resolution and sensitivity, where sensitivity relates to how many photons per second are detected. System resolution and sensitivity are the most important physical measures of how well a SPECT system performs. Improvement in these parameters is a constant goal of the SPECT researcher, but improvement in both of them simultaneously is rarely achieved in practice.
5.1 COLLIMATION         
          Since the time a patient spends in a nuclear medicine department relates directly to patient comfort, there is pressure to perform all nuclear medicine scans within an acceptable time frame. For SPECT, this can result in relatively large statistical image noise because of the limited number of photons detected within the scan time. This fact does not hinder our current clinical ability to prognosticate the diseased state using SPECT, but it does raise interesting research questions. For example, a typical Anger camera equipped with a low-energy collimator detects roughly one in every ten thousand gamma-ray photons emitted by the source, even in the absence of attenuation. This number depends on the type of collimator used. The system spatial resolution also depends on the type of collimator and on the intrinsic (built-in) resolution of the Anger camera. A typical modern Anger camera has an intrinsic resolution of three to nine millimeters. Independent of the collimator, system resolution can never be better than the intrinsic resolution. The same idea also applies to sensitivity: system sensitivity is always worse than, and at best equal to, the intrinsic sensitivity. 
          A collimator with thousands of straight parallel lead channels is called a parallel-hole collimator; it has a geometric or collimator resolution that worsens (the resolution value increases) with distance from the gamma-ray source. Geometric resolution can be made better (or worse) by using smaller (or larger) channels. The geometric sensitivity, however, is inversely related to geometric resolution, which means that improving collimator resolution decreases collimator sensitivity, and vice versa. Of course, high resolution and high sensitivity are two paramount goals of SPECT, so the SPECT researcher must always consider this trade-off when working on new collimator designs. There have been several collimator designs in the past ten years which optimized the resolution/sensitivity trade-off for their particular application. 
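This trade-off can be made concrete with the standard parallel-hole collimator approximations, in which the geometric resolution is roughly d(l + z)/l for hole diameter d, hole length l and source distance z, while the geometric efficiency scales roughly as (d²/(l(d + t)))², with t the septal thickness. These are textbook expressions rather than formulas from the text, and the dimensions below are only illustrative:

```python
# Parallel-hole collimator resolution/sensitivity trade-off (textbook approximations)
def geometric_resolution_mm(d, l, z):
    """Approximate geometric resolution (FWHM) at distance z from the collimator face."""
    return d * (l + z) / l

def relative_efficiency(d, l, t):
    """Geometric efficiency up to a constant factor; scales roughly as d**4 for thin septa."""
    return (d * d / (l * (d + t))) ** 2

l, t, z = 25.0, 0.2, 100.0    # hole length, septal thickness, source distance (all in mm)
for d in (1.0, 1.5, 2.0):     # hole diameter in mm
    print(f"d = {d:.1f} mm: resolution ~ {geometric_resolution_mm(d, l, z):.1f} mm, "
          f"relative efficiency ~ {relative_efficiency(d, l, t):.4f}")
```

Running this shows resolution worsening and efficiency rising as the holes get larger, which is exactly the inverse relation described above.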
          Converging-hole collimators, for example fan-beam and cone-beam designs, have been built which improve the trade-off between resolution and sensitivity by increasing the area of the Anger camera that is exposed to the radionuclide source. This increases the number of counts, which improves sensitivity. More modern collimator designs, such as half-cone-beam and astigmatic collimators, have also been conceived. Sensitivity has also seen an overall improvement through the introduction of multi-camera SPECT systems. A typical triple-camera SPECT system equipped with ultra-high-resolution parallel-hole collimators can achieve a resolution (measured as the full width at half maximum, FWHM) of four to seven millimeters. Other types of collimators with only one or a few channels, called pin-hole collimators, have been designed to image small organs and extremities, such as the wrist and thyroid gland, as well as research animals such as rats. 
5.2 COMPUTERS IN RADIOLOGY AND NUCLEAR MEDICINE 
          Nuclear medicine relies on computers to acquire, store, process and transfer image information. The history of computers in radiology and nuclear medicine is, however, relatively short. In the 1960s and early 1970s, CT and digital subtraction angiography were introduced into clinical practice for the first time. Digital subtraction angiography used computers to digitally subtract from a standard angiogram the effects of surrounding soft tissue and bone, thus improving the image for diagnosis. Computed tomography relied on computers to digitally reconstruct sectional data using various reconstruction algorithms such as filtered back projection. The workhorse of the CT unit was the computer; without it CT was impossible. SPECT and MRI first began to appear in the late 1970s, and both of these new imaging modalities required a computer. In the case of MRI, the computer played a major role in controlling the gantry and related mechanical equipment. In the SPECT case, as in CT, image reconstruction had to be done by computer. Nuclear medicine’s reliance on computers also has its roots in high-energy particle physics and nuclear physics; both of these disciplines rely on statistical analysis of large numbers of photon (or other particle) counts, collected and processed by a computer. 
5.3 IMAGE ACQUISITION 
          Nuclear medicine images can be acquired in digital format using a SPECT scanner. The distribution of radionuclide in the patient’s body corresponds to an analog image, one that has a continuous distribution of density representing the continuous distribution of radionuclide amassed in a particular organ. The gamma-ray counts coming from the patient’s body are digitized and stored in the computer in an array or image matrix. Typical matrix sizes used in SPECT imaging are 256x256, 128x128, 128x64 or 64x64. The third dimension of the array corresponds to the number of transaxial, coronal or sagittal slices used to represent the organ being imaged. A typical SPECT scanner stores 16 bits per pixel. 
          Once a SPECT scan has been completed, the raw data image matrix is called projection data and is ready to be reconstructed. The reconstruction process puts the data in its final digital form ready for transmission to another computer system for display and physician analysis. 
6. RECONSTRUCTION         
          The most common algorithm used in the tomographic reconstruction of clinical data is the filtered back projection method; other methods also exist. Its main steps are:                
1.     Data Projection
2.     Fourier Transform of Data
3.     Data filtering
4.     Inverse transform of the Data
5.     Back projection 
6.1           Data  Projection 
          As the SPECT camera rotates around a patient, it creates a series of planar images called projections. At each stop, only photons moving perpendicular to the camera face pass through the collimator. As many of these photons originate from various depths in the patient, the result is an overlapping of all tracer-emitting organs along a specified path. 
          A SPECT study consists of many planar images acquired at various angles. The picture below displays a set of projections taken of a patient’s bone scan.
          After all projections are acquired, they are subdivided by taking all the projections for a single, thin slice of the patient at a time. All the projections for each slice are then ordered into an image called a ‘sinogram’, as shown below. It represents the projection of the tracer distribution of that single slice of the body onto the camera at every angle of the acquisition.
          The aim of the reconstruction process is to retrieve the radiotracer’s spatial distribution from the projection data.
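The sinogram idea can be illustrated in a few lines of Python. The sketch below (NumPy/SciPy, with a made-up test slice) simulates the acquisition for one slice by rotating the slice to each camera angle and summing along parallel rays, which is exactly how each row of a sinogram is formed:

```python
import numpy as np
from scipy.ndimage import rotate

# A made-up 2-D "slice" of tracer concentration: a hot rectangle in a cold background
slice_img = np.zeros((64, 64))
slice_img[24:40, 28:36] = 1.0

angles = np.arange(0, 180, 3)            # camera stops, in degrees
sinogram = np.zeros((len(angles), 64))   # one projection (row) per angle

for i, theta in enumerate(angles):
    rotated = rotate(slice_img, theta, reshape=False, order=1)
    sinogram[i] = rotated.sum(axis=0)    # collimation: sum along parallel rays

print(sinogram.shape)   # (number of angles, number of detector bins)
```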
6.2           Fourier Transform of Data 
          If the projection (sinogram) data were reconstructed at this point, artifacts would appear in the reconstructed images because of the nature of the subsequent back-projection operation. Additionally, owing to the random nature of radioactive decay, there is an inherent noise in the data that tends to make the reconstructed image rough. In order to account for both of these effects, it is necessary to filter the data. We could filter directly in projection space, which means convolving the data with some sort of smoothing kernel.
          Convolution, however, is computationally intensive. Convolution in the spatial domain is equivalent to a multiplication in the frequency domain. This means that any filtering done by a convolution operation in the normal spatial domain can be performed by a simple multiplication once the data have been transformed into the frequency domain. 
          Thus we transform the projection data into frequency space, where we can filter the data more efficiently. 
6.3           Data filtering 
          Once the data have been transformed to the frequency domain, they are filtered in order to smooth out the statistical noise. There are many different filters available, and they all have slightly different characteristics. For instance, some smooth very heavily so that there are no sharp edges, and hence degrade the final image resolution; other filters maintain a high resolution while smoothing only slightly. Some typical filters are the Hanning filter, the Butterworth filter, the low-pass cosine filter and the Wiener filter. Regardless of the filter used, the end result is a final image that is relatively free of noise and pleasing to the eye. The next figure depicts three objects reconstructed without a filter from noise-free data (left), without a filter from noisy data (middle), and with a Hanning filter (right). 
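Continuing the sketch above, the fragment below builds a ramp filter apodized with a Hann (Hanning) window and applies it to every projection by a simple multiplication in the frequency domain, followed by the inverse transform described in the next subsection. This is a minimal illustration; a clinical implementation would choose the filter and its cut-off frequency much more carefully:

```python
import numpy as np

def filter_projections(sinogram):
    """Apply a Hann-windowed ramp filter to each projection via the FFT."""
    n_bins = sinogram.shape[1]
    freqs = np.fft.fftfreq(n_bins)                            # spatial frequency of each FFT bin
    ramp = np.abs(freqs)                                      # ramp (|f|) filter used in FBP
    hann = 0.5 * (1 + np.cos(np.pi * freqs / freqs.max()))    # Hann window to suppress noise
    filt = ramp * hann

    spectrum = np.fft.fft(sinogram, axis=1)                   # forward 1-D FFT of each projection
    return np.real(np.fft.ifft(spectrum * filt, axis=1))      # multiply, then inverse FFT

# filtered_sino = filter_projections(sinogram)   # 'sinogram' from the earlier sketch
```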
6.4           Inverse transform of data

          As the newly smoothed data are now in the frequency domain, we must transform them back into the spatial domain in order to recover the x, y spatial-distribution information. This is done in the same manner as the original transformation, except that we use the one-dimensional inverse Fourier transform. The data at this point are similar to the original sinogram (left), except that they are smoothed, as seen below (right).
6.5  Back Projection
        
          The main reconstruction step involves a process known as ‘back projection’. As the original data were collected by allowing only photons emitted perpendicular to the camera face to enter the camera, back projection smears the camera-bin data from the filtered sinogram back along the same lines from which the photons were emitted. Regions where back-projection lines from different angles intersect represent areas which contain a higher concentration of radiopharmaceutical.      
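A minimal back-projection step completes the sketch (same illustrative arrays as before): each filtered projection is smeared uniformly across an empty image along the direction in which it was summed, rotated back to its acquisition angle, and the contributions from all angles are accumulated:

```python
import numpy as np
from scipy.ndimage import rotate

def back_project(filtered_sino, angles):
    """Smear each filtered projection back across the image at its angle and accumulate."""
    n_bins = filtered_sino.shape[1]
    recon = np.zeros((n_bins, n_bins))
    for projection, theta in zip(filtered_sino, angles):
        smear = np.tile(projection, (n_bins, 1))                 # constant along the ray direction
        recon += rotate(smear, -theta, reshape=False, order=1)   # undo the acquisition rotation
    return recon * np.pi / (2 * len(angles))                     # usual FBP normalization

# image = back_project(filter_projections(sinogram), angles)
```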
7. ADVANTAGES OF SPECT 
1.     Better detail resolution: superimposition of overlying structures is removed. 
2.     Higher lesion contrast: small, deep lesions that appear as only small differences in radiopharmaceutical distribution can be detected, so effective resolution is improved. 
3.     Localization of defects is more precise and more clearly seen by the inexperienced eye. 
4.     Extent and size of defects are better defined. 
5.     Images free of background.       
8. DISADVANTAGES OF SPECT  
1.  Since a lead collimator is used, the scan is very inefficient: only about one in every ten thousand emitted photons reaches the detector and contributes to image reconstruction. 
2.  A blurring effect is caused by gamma photons penetrating the collimator walls and other opaque objects. 
3. Spatial resolution is limited 
4. Exact attenuation compensation is not possible because of multiple scattering of the photons     





9. SPECT APPLICATIONS 
1.    Heart Imaging 
          SPECT has been applied to the heart for myocardial perfusion imaging. The following figure is a myocardial MIBI scan taken under stress conditions. Regions of the heart that are not being perfused display as cooler regions. 
2.    Brain Imaging 
          This figure is a transverse SPECT image of the brain. The hot spots present in the right posterior region are seen clearly using SPECT. SPECT examines cerebral function by documenting regional blood flow and metabolism. The SPECT and PET imaging modalities are especially valuable in brain imaging as they make it possible to visualize and quantify the density of different types of receptors and transporters. The accurate assessment of the density of receptors or transporters in the brain structure is quite challenging because of the small size of these structures. 
3.     SPECT imaging is especially useful for differentiating between infarct and ischemia. An infarct is an area of necrosis in a tissue or organ resulting from obstruction of the local circulation by a thrombus or embolus. Ischemia is a condition of localized anemia due to an obstructed circulation. Clinical studies indicate that SPECT is more accurate at detecting acute ischemia than a CT scan. 
4.    Tumor detection  
          SPECT can be used to detect tumors in cancer patients at an early stage. Using this slicing method, we can remove interference from the surrounding area and detect dysfunction of organs fairly easily. The radioactive chemicals distribute through the body, and their distribution can be traced and compared with that of a normal healthy body. Since this method is so precise, doctors can detect abnormalities in the early stages of disease development, when the disease is more curable. SPECT has also proven to be an alternative to PET in distinguishing recurrent brain tumor from radiation necrosis. 
5. Bone Scans 
          Bone scans are typically performed in order to assess bone growth and to look for bone tumors. The tumors are the dark areas seen in the picture below. The development of SPECT has enhanced the contrast resolution of bone scans by screening out overlying and underlying tissue. This results in increased detection and localization of small abnormalities, especially in the spine, pelvis and knees. A bone scan typically costs about one third to one half as much as a CT or MRI. 
6.  SPECT is superior to other imaging modalities in detecting subtle instances of spondylolysis and in assessing the degree of injury activity. SPECT is also used in diagnosing Alzheimer’s disease, for performing lung perfusion studies, for abdominal and pelvic scanning, and in diagnosing epilepsy. Radionuclide scans with improved imaging techniques such as SPECT have become safe, well-established and highly effective diagnostic tools in sports medicine.       
10. POSITRON EMISSION TOMOGRAPHY (PET) 
          The distribution of activity in slices of organs can be obtained in a more accurate way using PET. In the simplest PET camera, two modified, sophisticated Anger cameras are placed on opposite sides of the patient. This increases the collection angle and reduces the collection times, which are limitations of SPECT. In PET, radiopharmaceuticals are labeled with positron-emitting isotopes. A positron combines rather quickly with an electron, and as a result two gamma quanta are emitted in almost opposite directions. In PET scanners, rings of gamma-ray detectors surrounding the patient are used. Each detector interacts electronically with the other detectors in the field of view. When two photons arrive within a short time window, it is clear that a pair of quanta was generated and that they were created somewhere along the line between the two detectors. Conventional PET tomography makes use of the standard filtered back projection techniques used in computed tomography and SPECT. Three-dimensional PET scanning has increased sensitivity but also increased noise; however, since the higher sensitivity permits lower radiation doses, its use is justified. 
          PET is used to study the dynamic properties of biochemical processes. A large part of any biological system consists of hydrogen, carbon, nitrogen and oxygen. With the help of a cyclotron it is possible to produce short-lived isotopes of carbon, nitrogen and oxygen that emit positrons. Examples of these isotopes are O-15, N-13 and C-11, with half-lives of about 2, 10 and 20 minutes respectively. PET uses electronic collimation instead of lead collimation. Attenuation correction can be done more accurately in PET. The resolution of PET is much better and more uniform than that of SPECT. 
11. COMPARISON OF PET AND SPECT
        SPECT imaging is inferior to PET in terms of attainable resolution and sensitivity. The radionuclides used for SPECT imaging emit a single photon rather than the positron emission used in PET. Because only a single photon is emitted from the radionuclides used for SPECT, a special lens known as a collimator is used to acquire the image data from multiple views around the body. The use of a collimator results in a tremendous decrease in detection efficiency compared with PET. In Positron Emission Tomography, collimation is achieved naturally by the fact that a detected pair of photons (gamma rays) can be traced back to its origin, since the two photons travel along the same line after being produced. In PET, there might be as many as 500 detectors that can ‘see’ a PET isotope at any one time, whereas in SPECT there may be only one to three collimated camera heads. New collimators have been designed that are planar in one direction and concave in the other, which improves the spatial resolution and reduces the non-isotropic blur in SPECT, so that the resolution and sensitivity can be brought closer to those of PET.
          Although SPECT imaging resolution does not match that of PET, the availability of new SPECT radiopharmaceuticals, particularly for the brain and head, and the practical and economical aspects of SPECT instrumentation make this mode of emission tomography attractive for clinical studies of the brain. The cost of SPECT imaging is very low compared with that of PET.


SPECT
   Pros: 1. Affordable price   2. Large clinical practice
   Cons: 1. Limited spatial resolution   2. Blurring effect with higher-energy tracers
PET
   Pros: 1. Good spatial resolution
   Cons: 1. Costly   2. Tracers required have short half-lives, hence cyclotrons and particle generators are needed nearby
      
12. CONCLUSION            
          It is reasonable to expect a continued, though perhaps slower, rate of increase in the clinical applications of SPECT. It is safe to conclude that SPECT has reached the stage where it will be a valuable and indeed indispensable asset to the medical world. 
          SPECT, being a nuclear medicine imaging modality, has all the advantages and disadvantages of nuclear medicine: just as nuclear medicine can be highly beneficial or harmful depending on the application, so can SPECT. In spite of this, today nearly all cardiac patients receive a planar ECT or SPECT scan as part of their work-up to detect and stage coronary artery disease. Brain and liver SPECT scans are also leading applications. SPECT is used routinely to help diagnose and stage cancer, stroke, liver disease, lung disease and a host of other physiological (functional) abnormalities. 