Wednesday, 31 December 2008
Other extensions are provided to allow IP packets that are sent by a correspondent node with an out-of-date stored binding, or in transit, to be forwarded
directly to the new COA of the MN. The authentication mechanisms used in route optimization are the same as those used in the basic version of Mobile IP. This authentication generally relies on a mobility security association established in advance between the
sender and receiver of such messages. The route optimization protocol operates in four steps:
1. A binding warning control message may be sent to the HA, indicating
that the correspondent node is unaware of the new COA of the mobile node.
2. A binding request message is sent by a correspondent node to the HA
when it determines that its binding should be refreshed.
3. An authenticated binding update message is sent by the HA to those
correspondent nodes that require it, containing the current COA
of the mobile node.
4. When smooth handoffs occur, the mobile node transmits a binding
update and has to be sure that the update has been received. It can
therefore request a binding acknowledgment from the correspondent node.
The procedure of handoff in Mobile IPv4
When a mobile node attempts a handoff from one foreign domain to another, it sends a deregistration message to the previous foreign agent (e.g., FA1). Alternatively, the mobile node can simply make the handoff and let its connection with FA1 time out. After the mobile node enters a new foreign network, it waits for an agent advertisement from a foreign agent. As soon as the mobile node receives the advertisement, it sends a registration request to the home agent using the address of the new foreign agent (FA2) as its care-of address. The HA processes the request and sends back a registration reply.
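The binding-cache behavior behind route optimization can be sketched in a few lines. This is a toy illustration only, not code from any Mobile IP implementation; the names, addresses, and dictionary-based "cache" are all invented for the example.

```python
# A minimal sketch of the route-optimization bindings described above.
# A correspondent node (CN) caches the mobile node's care-of address
# (COA); an authenticated binding update from the HA refreshes the
# cache, after which packets can be tunneled directly instead of
# triangle-routed via the home agent.

binding_cache = {}  # mobile node home address -> current care-of address

def binding_update(home_addr, coa):
    """Step 3: the HA tells the CN the mobile node's current COA."""
    binding_cache[home_addr] = coa

def route_packet(dest_home_addr):
    """The CN tunnels directly to the COA if a binding exists,
    otherwise it falls back to routing via the home agent."""
    coa = binding_cache.get(dest_home_addr)
    return ("tunnel to " + coa) if coa else "route via HA"

binding_update("mn.home.example", "fa2.visited.example")
print(route_packet("mn.home.example"))  # tunnel to fa2.visited.example
print(route_packet("other.example"))    # route via HA
```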
Wednesday, 24 December 2008
All are possible with a new process for strengthening glass and ceramics developed by an Alfred University researcher.
Alfred University has signed a royalty agreement with Santanoni Glass and Ceramics, Inc., of Alfred Station, NY, for proprietary technology related to the strengthening of glass.
The process allows Santanoni to produce “unbreakable” glassware such as wine glasses, canning jars, bottles, tumblers, goblets and mugs at a cost that allows the products to be competitive with normal, un-strengthened glassware.
Dr. William LaCourse, a professor of Glass Science at the New York State College of Ceramics at Alfred University, and president of the company, located in the Ceramics Corridor Innovation Center in Alfred, has researched processes for strengthening glasses for more than 30 years.
“No glass is unbreakable, but our process produces the highest strength glassware available today, and at a price that makes it affordable,” said LaCourse. “It has the potential to save restaurants, catering services and families up to 80 percent, and perhaps more, on their glassware costs. We have dropped glass bottles from 10 feet high onto a concrete floor, and the glass simply bounces.”
Under the agreement, Santanoni will have access to the technology developed by LaCourse and his graduate students. The glassware will be processed in Alfred Station, NY at the Sugar Hill Industrial Park, and will be marketed nationally.
“We are working with a couple of distributors for some specialty products, but will do the majority of consumer marketing through gift shops and the Internet. We are also contacting various food service companies where we believe the products can save them thousands of dollars per year due to reduced breakage and lower inventory costs.”
Alfred University President Charles Edmondson heralded the agreement with Santanoni Glass, calling it “significant for Alfred University and the Southern Tier. It is an indication of how our high-tech materials research can generate job creation and economic growth.”
Over the years the research was partially funded by Alfred’s Center for Advanced Ceramic Technology (CACT), as well as Santanoni. “The help of our CACT was critical in getting the company started. We could not have done it without its constant support. I owe a lot to the CACT and especially to Alfred University for providing the laboratories, equipment and financial support,” said LaCourse. “It is time to pay back.”
Santanoni’s Ultra-HS glass products are now available in limited quantities as the company prepares to ramp up production levels.
Mention optical communication and most people think of fiber optics. But light travels through air for a lot less money. So it is hardly a surprise that clever entrepreneurs and technologists are borrowing many of the devices and techniques developed for fiber-optic systems and applying them to what some call fiber-free optical communication. Although it only recently, and rather suddenly, sprang into public awareness, free-space optics is not a new idea. It has roots that go back over 30 years--to the era before fiber-optic cable became the preferred transport medium for high-speed communication. In those days, the notion that FSO systems could provide high-speed connectivity over short distances seemed futuristic, to say the least. But research done at that time has made possible today's free-space optical systems, which can carry full-duplex (simultaneous bidirectional) data at gigabit-per-second rates over metropolitan distances of a few city blocks to a few kilometers.
FSO first appeared in the 1960s, for military applications. In the late 1980s it emerged as a commercial option, but technological restrictions kept it from succeeding: short transmission reach, low capacity, severe alignment problems, and vulnerability to weather interference were the major drawbacks at that time. Wireless optical communication has evolved since then, however. Today, FSO systems deliver rates of 2.5 Gb/s with carrier-class availability, and metropolitan, access, and LAN networks are reaping the benefits.
The use of free-space optics is particularly attractive when we consider that the majority of customers do not have access to fiber, and that fiber installation is expensive and time-consuming. Moreover, right-of-way costs and the difficulty of obtaining government licenses for new fiber installation are further problems that have turned FSO into the option of choice for short-reach applications.
FSO uses lasers, or light pulses, to send packetized data in the terahertz (THz) spectrum range. Air, not fiber, is the transport medium. This means that urban businesses needing fast data and Internet access have a significantly lower-cost option.
FSO technology is implemented using laser devices. These laser terminals can be mounted on rooftops or the corners of buildings, or even placed inside offices behind windows. FSO devices look like security video cameras.
Low-power infrared beams, which do not harm the eyes, are the means by which free-space optics technology transmits data through the air between transceivers, or link heads, mounted on rooftops or behind windows. It works over distances of several hundred meters to a few kilometers, depending upon atmospheric conditions.
Commercially available free-space optics equipment provides data rates much higher than digital subscriber lines or coaxial cables can ever hope to offer. And systems even faster than the present range of 10 Mb/s to 1.25 Gb/s have been announced, though not yet delivered.
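The distance limits mentioned above come largely from beam spreading: a diverging laser beam grows with range, and the receiver captures only the fraction of light that falls on its aperture. The sketch below works through that geometry with assumed example values (2 mrad divergence, 1 km range, 10 cm aperture); none of these figures come from the article.

```python
# Rough geometric link-budget sketch for a free-space optical hop.
# In clear air, the dominant loss is geometric: the beam spreads to a
# spot much larger than the receiver aperture, and only the overlapping
# fraction of the power is collected.

import math

divergence_rad = 2e-3   # assumed full-angle beam divergence, 2 mrad
range_m = 1000.0        # assumed link distance, 1 km
aperture_m = 0.1        # assumed receiver aperture diameter, 10 cm

beam_diameter = divergence_rad * range_m          # ~2 m spot at 1 km
captured_fraction = (aperture_m / beam_diameter) ** 2
geometric_loss_db = 10 * math.log10(captured_fraction)

print(round(beam_diameter, 1), round(geometric_loss_db, 1))  # 2.0 -26.0
```

Fog and rain add further attenuation on top of this geometric loss, which is why practical links are limited to a few city blocks up to a few kilometers.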
There are two main limitations to using conventional x-rays to examine internal structures of the body. First, the superimposition of three-dimensional information onto a single plane makes diagnosis confusing and often difficult. Second, the photographic film usually used for making radiographs has a limited dynamic range, so only objects whose x-ray absorption differs greatly from that of their surroundings produce enough contrast on the film to be distinguished by the eye. Thus, although the details of bony structures can be seen, it is difficult to discern the shape and composition of soft-tissue organs accurately.
CT uses special x-ray equipment to obtain image data from different angles around the body and then shows a cross section of body tissues and organs; that is, it can show several types of tissue (lung, bone, soft tissue, and blood vessels) with great clarity. CT of the body is a patient-friendly exam that involves little radiation exposure.
In CT scanning, the image is reconstructed from a large number of absorption profiles taken at regular angular intervals around a slice, each profile being made up of a parallel set of absorption values through the object. That is, CT also passes x-rays through the patient's body, but the detection method is usually electronic, and the data are converted from analog signals to digital impulses in an A/D converter. This digital representation of the x-ray intensity is fed into a computer, which then reconstructs an image.
The original method of tomography used an x-ray detector that translates linearly on a track across the x-ray beam; when the end of the scan is reached, the x-ray tube and the detector are rotated to a new angle and the linear motion is repeated. The latest generation of CT machines uses a 'fan-beam' geometry with an array of detectors that simultaneously detect x-rays on a number of different paths through the patient.
A CT scanner is a large, square machine with a hole in the center, something like a doughnut. The patient lies still on a table that can move up and down and slide into and out of the center of the hole. Within the machine, an x-ray tube on a rotating gantry moves around the patient's body to produce the images.
In CT the film is replaced by an array of detectors that measures the x-ray profile. Inside the scanner, a rotating gantry has an x-ray tube mounted on one side and an arc-shaped detector mounted on the opposite side. An x-ray beam is emitted in a fan shape as the rotating frame spins the x-ray tube and detector around the patient. Each time the x-ray tube and detector make a 360-degree rotation and the x-rays pass through the patient's body, the image of a thin section is acquired. During each rotation the detector records about 1,000 profiles of the expanded x-ray beam, which a dedicated computer then reconstructs into a two-dimensional image of the section.
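The reconstruction step can be illustrated with the simplest possible case: unfiltered back-projection on a tiny grid. The 4x4 "phantom" and the two projection angles below are fabricated for the example; a real scanner uses hundreds of angles and a filtered algorithm, but the principle of smearing each absorption profile back across the image is the same.

```python
# Minimal illustration of CT back-projection on a tiny 4x4 grid.
# We take parallel-ray absorption profiles at 0 and 90 degrees
# (row sums and column sums) and smear each profile back across the
# grid. The true dense pixel accumulates the most back-projected
# intensity, so it stands out in the reconstruction.

N = 4
phantom = [[0.0] * N for _ in range(N)]
phantom[1][2] = 1.0  # the dense object

# Absorption profiles (line integrals) at two angles
profile_0 = [sum(row) for row in phantom]                              # rays along rows
profile_90 = [sum(phantom[r][c] for r in range(N)) for c in range(N)]  # rays along columns

# Back-project: each pixel accumulates the profiles of the rays through it
recon = [[profile_0[r] + profile_90[c] for c in range(N)] for r in range(N)]

# The brightest reconstructed pixel matches the phantom's dense pixel
best = max((recon[r][c], r, c) for r in range(N) for c in range(N))
print(best)  # (2.0, 1, 2)
```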
As a rule of thumb, it's not nice to make light of the misfortunes of others, especially when physical harm comes into play. There are certainly numerous jokes that involve exploding PCs, especially where gaming, overclocking, and porn are concerned. Many of them have no tact for obvious reasons, and there's no doubt that readers may recall one or two by the end of this article. But truth be told, news surrounding a burned-up man sitting in front of smoking debris that once served as a PC only conjures up comical images seen in cartoons.
So what happened to Vijayakumar? According to local police, the PC exploded and burned its user alive. “We are yet to ascertain the cause of the blast," a police officer told the Times of India. "The computer was completely damaged and the deceased was charred." The officer also went on to say that the case has baffled the investigating officers, sounding rather unbelievable and like nothing they had ever seen before.
"But the scene of the accident seems to suggest that the youth was killed in an accident as his body was in the sitting position in front of the burnt computer,” the official added.
For now, the police have not offered any other information. Certainly many factors could have caused the PC to short circuit: faulty wiring, spilled liquids, maybe even a jolt of lightning crashing through the power outlet. Perhaps the power supply arced and burned him up on the spot, or the PC had faulty power cables. It would be understandable had Vijayakumar's demise been the direct result of an exploding battery in a laptop. But an exploding PC? That remains questionable. Still, because Vijayakumar was found burned in a sitting position, it's easy to assume that whatever happened was close to instantaneous.
And where was Vignesh during the entire incident? Something about it all sounds fishy. With Prasad the only other individual in the house, the grim setting sounds like a plot yanked straight out of a thriller movie. Hopefully, more information will surface soon because, quite frankly, if there are faulty parts out there on the market, then we as consumers need to know. Period.
Finnish scientists have created a vibrating touch-screen phone, for the visually challenged, that can simulate Braille characters. A Nokia 770 mobile Internet tablet was the main research tool used, and since it already has haptic feedback built into the screen, it was relatively easy to develop and test the technique. Instead of recreating the 2 x 3 matrix of raised spots that represents a Braille character, the new system just vibrates the screen using the transducers. As a reading finger touches the screen, its position is logged relative to the conventional text character beneath: the Braille is then emulated as a Morse code-like chain of intense and weak vibrations of the screen. A strong vibration corresponds to a Braille dot, and a weak one represents a Braille space--it's incredibly simple. Volunteers involved in the research have been able to transition between conventional Braille and the new technique without too much difficulty, reading single characters in around 1.25 seconds.
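The encoding is simple enough to sketch directly. The dot patterns below are standard Braille for a few letters; the "strong"/"weak" pulse naming and the fixed dot-reading order are our own illustration of the scheme described above, not the researchers' actual code.

```python
# Sketch of the vibration encoding described above: the six dot
# positions of a Braille cell are read in a fixed order (dots 1..6)
# and emitted as a chain of strong (dot present) or weak (dot absent)
# vibration pulses, much like Morse code.

BRAILLE_DOTS = {
    "a": {1},        # standard Braille: letter a = dot 1
    "b": {1, 2},     # letter b = dots 1 and 2
    "c": {1, 4},     # letter c = dots 1 and 4
}

def vibration_pattern(char):
    """Return the pulse sequence for one character, dots 1..6 in order."""
    dots = BRAILLE_DOTS[char]
    return ["strong" if d in dots else "weak" for d in range(1, 7)]

print(vibration_pattern("b"))
# ['strong', 'strong', 'weak', 'weak', 'weak', 'weak']
```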
Undetectable from the ground, unmanned aerial vehicles (UAVs) are widely used by the military to scan terrain for possible threats and intelligence. Now, fuel cell powered UAVs are taking flight in an Office of Naval Research (ONR)-sponsored program to help tactical decision-makers gather critical information more efficiently… and more quietly.
Piloted remotely or autonomously, UAVs have long provided extra "eyes in the sky" especially for missions that are too dangerous for manned aircraft. This latest technology is showcased by Ion Tiger, a UAV research program at the Naval Research Laboratory (NRL) that merges two separate efforts — UAV technology and fuel cell systems.
In particular, the Ion Tiger UAV tests a hydrogen-powered fuel cell design, which can travel farther and carry heavier payloads than earlier battery-powered designs. Ion Tiger employs stealthy characteristics due to its small size, reduced noise, low heat signature and zero emissions.
"Pursuing energy efficiency and energy independence are core to ONR's Power and Energy Focus Area," said Rear Admiral Nevin Carr, Chief of Naval Research. "ONR's investments in alternative energy sources, like fuel cell research, have application to the Navy and Marine Corps mission in future UAVs and vehicles. These investments also contribute directly to solving some of the same technology challenges faced at the national level."
Fuel cells create an electrical current when they convert hydrogen and oxygen into water and are pollution-free. A fuel cell propulsion system can also deliver potentially twice the efficiency of an internal combustion engine — while running more quietly and with greater endurance.
"In this size range, we are hopefully able to conduct very productive surveillance missions at low cost with a relatively small vehicle, and a high-quality electric payload," says NRL Principal Investigator Dr. Karen Swider-Lyons.
This spring, Ion Tiger's flight trial is expected to exceed the duration of previous flights seven-fold.
"This will really be a 'first of its kind' demonstration for a fuel cell system in a UAV application for a 24-hour endurance flight, with a 5 pound payload," says ONR Program Manager Dr. Michele Anderson. "That's something nobody can do right now."
In 2005, NRL backed initial research in fuel cell technologies for UAVs. Today, says Swider-Lyons, it's paying off with a few lessons learned from the automotive industry.
"With UAVs, we are dealing with relatively small fuel cells of 500 watts," she explains. "It is hard to get custom, high-quality fuel cell membranes built just for this program. So we are riding along with this push for technology from the automotive industry."
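The quoted figures (a roughly 500-watt fuel cell and a 24-hour endurance target) can be sanity-checked with back-of-the-envelope arithmetic. The 50% conversion efficiency and hydrogen's lower heating value of about 120 MJ/kg are assumed values, not numbers from the article.

```python
# Back-of-the-envelope hydrogen budget for the endurance flight
# described above. Assumed: 50% fuel-cell efficiency and a hydrogen
# lower heating value of ~120 MJ/kg; the power and duration come from
# the figures quoted in the article.

power_w = 500           # fuel cell output quoted for this UAV class
flight_hours = 24       # target endurance for the demonstration
efficiency = 0.50       # assumed fuel-cell conversion efficiency
h2_lhv_mj_per_kg = 120  # assumed lower heating value of hydrogen

energy_mj = power_w * flight_hours * 3600 / 1e6   # electrical energy delivered
h2_mass_kg = energy_mj / (h2_lhv_mj_per_kg * efficiency)

print(round(energy_mj, 1), round(h2_mass_kg, 2))  # 43.2 0.72
```

Under these assumptions, well under a kilogram of hydrogen covers the 24-hour mission, which is why weight, as Swider-Lyons notes, is the dominant design constraint rather than fuel.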
"What's different with fuel cell cars is that developers are focused on volume…so they want everything very compact," adds Swider-Lyons. "Our first issue is weight, our second issue is weight and our third issue is weight!"
Besides delivering energy savings and increased power potential, fuel cell technology spans the operational spectrum from ground vehicles to UAVs, to man-portable power generation for Marine expeditionary missions to meeting power needs afloat. In fact, it's technology that Marines at Camp Pendleton are using today to power their General Motors fuel cell vehicles.
Across the board, the Navy and Marine Corps are seeking more efficient sources of energy. ONR has been researching and testing power and energy technology for decades. Often the improvements to power generation and fuel efficiency for ships, aircraft, vehicles and installations yield a direct benefit to the public.
"ONR has been a visionary in terms of providing support for this program," says Swider-Lyons.
Other Ion Tiger partners include Protonex Technology Corporation and the University of Hawaii. NRL's work on UAVs also leverages funding from the Office of the Secretary of Defense. Info: Naval Research Laboratory.
Photolithography uses light to deposit or remove material and create patterns on a surface. There is usually a direct relationship between the wavelength of light used and the feature size created. Therefore, nanofabrication has depended on short wavelength ultraviolet light to generate ever smaller features.
"The RAPID lithography technique we have developed enables us to create patterns twenty times smaller than the wavelength of light employed," explains Dr. Fourkas, "which means that it streamlines the nanofabrication process. We expect RAPID to find many applications in areas such as electronics, optics, and biomedical devices."
"If you have gotten a filling at the dentist in recent years," says Fourkas, "you have seen that a viscous liquid is squirted into the cavity and a blue light is then used to harden it. A similar process of hardening using light is the first element of RAPID. Now imagine that your dentist could use a second light source to sculpt the filling by preventing it from hardening in certain places. We have developed a way of using a second light source to perform this sculpting, and it allows us to create features that are 2500 times smaller than the width of a human hair."
Both of the laser light sources used by Fourkas and his team were of the same color, the only difference being that the laser used to harden the material produced short bursts of light while the laser used to prevent hardening was on constantly. The second laser beam also passed through a special optic that allowed for sculpting of the hardened features in the desired shape.
"The fact that one laser is on constantly in RAPID makes this technique particularly easy to implement," says Fourkas, "because there is no need to control the timing between two different pulsed lasers."
Fourkas and his team are currently working on improvements to RAPID lithography that they believe will make it possible to create features that are half of the size of the ones they have demonstrated to date.
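The two size claims in the article are mutually consistent, which a quick check shows. The 100-micrometer hair width and the 800 nm near-infrared wavelength below are typical assumed values, not figures stated in the article.

```python
# Consistency check of the feature sizes quoted above. Assumed values:
# a human hair ~100 micrometers wide, and a near-infrared laser
# wavelength of 800 nm (common for two-photon lithography).

hair_um = 100.0                       # assumed hair width, micrometers
feature_nm = hair_um * 1000 / 2500    # "2500 times smaller than a hair"

wavelength_nm = 800.0                 # assumed laser wavelength
lambda_over_20 = wavelength_nm / 20   # the lambda/20 resolution claimed

print(feature_nm, lambda_over_20)  # 40.0 40.0
```

Both routes land on roughly 40 nm features, matching the paper's lambda/20 claim.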
Achieving lambda/20 Resolution by One-Color Initiation and Deactivation of Polymerization was written by Linjie Li, Rafael R. Gattass, Erez Gershgoren, Hana Hwang and John T. Fourkas.
PHOTO CAPTION: Schematic depictions of RAPID lithography,the technique developed by John Fourkas and colleagues which enables the creation of features 2500 times smaller than the width of a human hair.
This University of Maryland News Release is available at: http://www.newsdesk.umd.edu/scitech/release.cfm?ArticleID=1862
Opportunities and Challenges in Wireless Sensor Networks
Due to advances in wireless communications and electronics over the last few years, the development of networks of low-cost, low-power, multifunctional sensors has received increasing attention. These sensors
are small in size and able to sense, process data, and communicate with each other, typically over an RF (radio frequency) channel. A sensor network is designed to detect events or phenomena, collect and
process data, and transmit sensed information to interested users. Basic features of sensor networks are:
• Self-organizing capabilities
• Short-range broadcast communication and multihop routing
• Dense deployment and cooperative effort of sensor nodes
• Frequently changing topology due to fading and node failures
• Limitations in energy, transmit power, memory, and computing power
These characteristics, particularly the last three, make sensor networks different from other wireless ad hoc or mesh networks.
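The short-range broadcast and multihop routing features listed above can be illustrated with a toy forwarding sketch. The node names and the static next-hop topology are invented for the example; real sensor networks build and repair such routes dynamically as nodes fade and fail.

```python
# Toy sketch of the multihop behavior listed above: each sensor node
# forwards readings toward a sink over short-range links, one hop at a
# time, because no single node has the transmit power to reach the
# sink directly.

TOPOLOGY = {            # node -> next hop toward the sink
    "A": "B",
    "B": "C",
    "C": "sink",
}

def route(node):
    """Return the multihop path a reading takes from a node to the sink."""
    path = [node]
    while path[-1] != "sink":
        path.append(TOPOLOGY[path[-1]])
    return path

print(route("A"))  # ['A', 'B', 'C', 'sink']
```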
Clearly, the idea of mesh networking is not new; it has been suggested for some time for wireless Internet access or voice communication. Similarly, small computers and sensors are not innovative
per se. However, combining small sensors, low-power computers, and radios makes for a new technological platform that has numerous important uses and applications, as will be discussed in the next section.
Growing Research and Commercial Interest
Research and commercial interest in the area of wireless sensor networks are currently growing exponentially, which is manifested in many ways:
• The number of Web pages (Google: 26,000 hits for sensor networks; 8000 for wireless sensor networks in August 2003)
• The increasing number of
• Dedicated annual workshops, such as IPSN (information processing in sensor networks); SenSys; EWSN (European workshop on wireless sensor networks); SNPA (sensor network protocols and applications); and WSNA (wireless sensor networks and applications)
• Conference sessions on sensor networks in the communications and mobile computing communities (ISIT, ICC, Globecom, INFOCOM, VTC, MobiCom, MobiHoc)
• Research projects funded by NSF (apart from ongoing programs, a new specific effort now focuses on sensors and sensor networks) and DARPA through its SensIT (Sensor Information Technology), NEST (networked embedded software technology), MSET (multisensor exploitation), UGS (unattended ground sensors), NETEX (networking in extreme environments), ISP (integrated sensing and processing), and communicator programs
Special issues and sections in renowned journals are common, e.g., in signal processing, communications, and networking magazines. Commercial interest is reflected in investments by established companies as well as start-ups that offer general and specific hardware and software.
Compared to the use of a few expensive (but highly accurate) sensors, the strategy of deploying a large number of inexpensive sensors has significant advantages, at smaller or comparable total system cost: much higher spatial resolution; higher robustness against failures through distributed operation; uniform coverage; small obtrusiveness; ease of deployment; reduced energy consumption; and, consequently, increased system lifetime. The main point is to position sensors close to the source of a potential problem phenomenon, where the acquired data are likely to have the greatest benefit or impact.
Pure sensing in a fine-grained manner may revolutionize the way in which complex physical systems are understood. The addition of actuators, however, opens a completely new dimension by permitting management and manipulation of the environment at a scale that offers enormous opportunities for almost every scientific discipline. Indeed, Business 2.0 (http://www.business2.com/) lists sensor robots as one of “six technologies that will change the world,” and Technology Review at MIT and Global Future identify WSNs as one of the “10 emerging technologies that will change the world” (http://www.globalfuture.com/mit-trends2003.htm). The combination of sensor network technology with MEMS and nanotechnology will greatly reduce the size of the nodes and enhance the capabilities of the network. The remainder of this chapter lists and briefly describes a number of applications for wireless sensor networks, grouped into different categories. However, because the number of areas of application is growing rapidly, every attempt at compiling an exhaustive list is bound to fail.
The first definition of nanotechnology to achieve some degree of international acceptance was developed after consultation with experts in over 20 countries in 1987–1988 (Siegel et al., 1999; Roco et al., 2000). However, despite its importance, there is no globally recognized definition. Any nanotechnology definition would include three elements:
1. The size range of the material structures under consideration — the intermediate length scale between a single atom or molecule, and about 100 molecular diameters or about 100 nm. Here
we have the transition from individual to collective behavior of atoms. This length scale condition alone is not sufficient because all natural and manmade systems have a structure at the nanoscale.
2. The ability to measure and restructure matter at the nanoscale; without it we do not have new understanding and a new technology; such ability has been reached only partially so far, but
significant progress was achieved in the last five years.
3. Exploiting properties and functions specific to nanoscale as compared to the macro- or microscales; this is a key motivation for researching nanoscale.
According to the National Science Foundation and NNI, nanotechnology is the ability to understand, control, and manipulate matter at the level of individual atoms and molecules, as well as at the “supramolecular” level involving clusters of molecules (in the range of about 0.1 to 100 nm), in order to create materials, devices, and systems with fundamentally new properties and functions because of their small structure. The definition implies using the same principles and tools to establish a unifying platform for science and engineering at the nanoscale, and employing the atomic and molecular interactions to develop efficient manufacturing methods.
There are at least three reasons for the current interest in nanotechnology. First, the research is helping us fill a major gap in our fundamental knowledge of matter. At the small end of the scale — single atoms and molecules — we already know quite a bit from using tools developed by conventional physics and chemistry. And at the large end, likewise, conventional chemistry, biology, and engineering have taught us about the bulk behavior of materials and systems. Until now, however, we have known much less about the intermediate nanoscale, which is the natural threshold where all living and manmade systems work. The basic properties and functions of material structures and systems are defined here and, even more importantly, can be changed as a function of the organization of matter via "weak" molecular interactions (such as hydrogen bonds, electrostatic dipole, van der Waals forces, various surface forces, electro-fluidic forces, etc.). The intellectual drive toward smaller dimensions was accelerated by the discovery of size-dependent novel properties and phenomena. Only since 1981 have we been able to measure the size of a cluster of atoms on a surface (IBM, Zurich), and begun to provide better models for chemistry and biology self-organization and self-assembly. Ten years later, in 1991, we were able to move atoms on surfaces (IBM, Almaden). And after ten more years, in 2002, we assembled molecules by physically positioning the component atoms. Yet, we cannot visualize or model with proper spatial and temporal accuracy a chosen domain of engineering or biological relevance at the nanoscale. We are still at the beginning of this road.
A second reason for the interest in nanotechnology is that nanoscale phenomena hold the promise for fundamentally new applications. Possible examples include chemical manufacturing using designed molecular
assemblies, processing of information using photons or electron spin, detection of chemicals or bioagents using only a few molecules, detection and treatment of chronic illnesses by subcellular interventions, regenerating tissue and nerves, enhancing learning and other cognitive processes by understanding the “society” of neurons, and cleaning contaminated soils with designed nanoparticles. Using input from industry and academic experts in the U.S., Asia Pacific countries, and Europe between 1997 and 1999, we have projected that $1 trillion in products incorporating nanotechnology and about 2 million jobs worldwide will be affected by nanotechnology by 2015 (Roco and Bainbridge, 2001). Extrapolating from information technology, where for every worker, another 2.5 jobs are created in related areas, nanotechnology has the potential to create 7 million jobs overall by 2015 in the global market. Indeed, the first generation of nanostructured metals, polymers, and ceramics have already entered the commercial marketplace.
The online magazine's original task was to publish an article about future user interfaces. However, after extensive research into multi-touch applications such as Apple's iPhone and Microsoft Surface, the staff at Maximum PC uncovered a whole community of DIY engineers "perfecting the art" of creating homemade multi-touch surfaces. Home-built multi-touch surfaces should come as no surprise: there are websites dedicated to hands-on construction of unique technologies such as a Commodore 64 laptop, a speech-controlled trash can, and even a lemon-charged battery. Needless to say, if the industry can build it, then the online community will find a way to build it even better... and cheaper.
With that said, Maximum PC decided to create a multi-touch surface computer using methods found online at the Natural User Interface Group. Ultimately, the online magazine didn't go out and spend $12,000, but rather just $350. Out of various processes used to construct the homemade multi-touch surface, the staff decided to use the FTIR (Frustrated Total Internal Reflection) screen setup. This consists of a sheet of transparent acrylic, a chain of infrared LEDs, and a camera with an IR sensor. According to the site, the LEDs are arranged around the outside of the acrylic sheet so that they shine directly into the side. The IR light thus shoots into the acrylic, reflecting off the top and bottom of the material, remaining contained within.
When a finger presses against the sheet, the reflected light hits the spot and bounces downward into the cabinet mounted underneath. A modified webcam mounted in the cabinet--altered to detect only infrared light--views the finger touch as white spots, and then sends the image to software running on a connected PC. The software maps the movements and applies the coordinates to whatever application is running. The PC then transmits the on-screen image via a projector back onto the surface using a mirror and a piece of heat-absorbing glass. Granted, this brief overview sounds rather simple, but the process of creating the multi-touch surface PC takes a bit of work, from polishing the sides of the acrylic sheet to altering the webcam.
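The "white spots to coordinates" step can be sketched in a few lines. The grayscale frame below is fabricated for illustration, and real trackers such as Touchlib do thresholding plus blob tracking across frames; this toy version just thresholds one frame and averages the bright pixels into a single touch point.

```python
# Toy version of the touch-tracking step described above: the IR camera
# delivers a grayscale grid, bright "white spots" mark finger touches,
# and the software reduces each spot to an (x, y) coordinate that gets
# handed to the running application.

FRAME = [
    [0,   0,   0, 0,  0, 0],
    [0, 200, 220, 0,  0, 0],
    [0, 210, 230, 0,  0, 0],
    [0,   0,   0, 0, 90, 0],   # 90 is below threshold: not a touch
    [0,   0,   0, 0,  0, 0],
]
THRESHOLD = 128

def touch_pixels(frame, threshold):
    """Collect (row, col) of every pixel bright enough to be a touch."""
    return [(r, c) for r, row in enumerate(frame)
            for c, v in enumerate(row) if v >= threshold]

def centroid(pixels):
    """Average the bright pixels into one touch coordinate."""
    rows = sum(r for r, _ in pixels) / len(pixels)
    cols = sum(c for _, c in pixels) / len(pixels)
    return (rows, cols)

pixels = touch_pixels(FRAME, THRESHOLD)
print(centroid(pixels))  # (1.5, 1.5)
```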
But wait... Maximum PC didn't just use any webcam; the site implemented the $35 PlayStation 3 Eye, using a rectangular razor blade to gain access to the poor camera's IR filter. As with the rest of the article, the site shows the step-by-step process of removing the unwanted filter. "The infrared sensor is the innermost piece of glass on the lens assembly," the site reads. "When it catches the light, it looks ruby red – a dead giveaway that this is the piece filtering out infrared light. In order to remove it we simply used a razor blade to gouge out the plastic in a circle around the filter, allowing us to easily pop it out." Why remove the filter? So that the PlayStation 3 Eye can pick up infrared light.
As for the connected computer, the staff didn't use anything meaty, only a PC containing a Core 2 Duo and 2 GB of memory. With that said, DIY builders won't need anything outrageously fast; any rig that hit the market within the last few years will more than likely do. Additionally, the camera and PC don't necessarily need to be within the cabinet; the cables for the PS3 Eye and projector can run out of the cabinet and hook up to a laptop if needed.
Ultimately, the actual multi-touch screen measured 24 inches by 30 inches, with a 3/8-inch-thick acrylic sheet. The IR LEDs lining each side were spaced 1 inch apart; however, the staff wired the LEDs together the hard way, soldering the leads rather than using a wire-wrap gun (which would have made the task quicker and more environmentally safe... meaning no lead). The cabinet itself was constructed from 3/8-inch MDF, with a stained hardwood frame on top, standing waist high. To get the entire contraption to work, the team installed Touchlib on the PC, an open source library that takes the visual data received by the camera and parses it into touch events. Someone even wrote a driver that enables the PS3 Eye to work on the PC.
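Touchlib's job, parsing camera blobs into touch events, can be approximated with a simple nearest-neighbor tracker that decides whether each blob is a new touch, a moved touch, or a lifted finger. This is a rough sketch under our own assumptions, not Touchlib's actual algorithm:

```python
def track_touches(prev, curr, max_dist=30.0):
    """Map blob centroids between frames to touch events.

    prev: dict {touch_id: (x, y)} from the last frame.
    curr: list of (x, y) centroids from the current frame.
    Returns (events, new_state), where events are tuples like
    ('down', id, pt), ('move', id, pt), or ('up', id).
    """
    events, new_state, unmatched = [], {}, list(curr)
    for tid, (px, py) in prev.items():
        # Greedily match each known touch to the nearest new centroid.
        best, best_d = None, max_dist
        for pt in unmatched:
            d = ((pt[0] - px) ** 2 + (pt[1] - py) ** 2) ** 0.5
            if d < best_d:
                best, best_d = pt, d
        if best is not None:
            unmatched.remove(best)
            new_state[tid] = best
            events.append(('move', tid, best))
        else:
            events.append(('up', tid))
    next_id = max(prev.keys(), default=-1) + 1
    for pt in unmatched:
        # Any centroid left over is a brand-new touch.
        new_state[next_id] = pt
        events.append(('down', next_id, pt))
        next_id += 1
    return events, new_state
```

Feeding the tracker one frame of centroids at a time yields the down/move/up stream an application consumes.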
"We completed this project over the course of about two weeks' work," the article reads. "All said and done, everything worked out pretty well. We ended up with a fully functional, highly responsive multi-touch surface."
For a meager $350, the DIY multi-touch project sounds like great fun, and may end up as something we do here at Tom's just for kicks. After all, many of us don't have a whopping $12,000 stored in the underwear drawer (well, maybe Tuan). Still, this example definitely proves that anything is possible on a small budget. All it takes is a little patience, a little research, and a dedicated community to help along the way.
IBM and its semiconductor technology alliance partners are announcing the availability of 28-nanometer (nm) chip technology, a little more than a generation beyond the 45nm technologies currently used by Intel and Advanced Micro Devices in their latest chips.
The first products using chips based on this technology are expected in the second half of 2010, an IBM spokesman said. Devices will include smartphones and consumer electronics products.
The largest, single countervailing force to the IBM-led group is Intel. The Santa Clara, Calif.-based chip giant's chief executive, Paul Otellini, said Tuesday in a first-quarter earnings conference call that Intel is "pulling in" the release of "Westmere" chips based on 32nm technology and will ship silicon later this year.
Generally, the smaller the geometry, the faster and more power efficient the chip is.
The IBM alliance--which also includes the AMD manufacturing spin-off Globalfoundries, Chartered Semiconductor, and Infineon Technologies--is jointly developing the 28nm chipmaking process, based on the partners' low-power complementary metal oxide semiconductor (CMOS) process technology with "high-k metal gate" transistors (which minimize current leakage).
The technology "can provide a 40 percent performance improvement and a more than 20 percent reduction in power, in a chip that is half the size, compared with 45nm technology," IBM said in a statement. "These improvements enable microchip designs with outstanding performance, smaller feature sizes and low standby power, contributing to faster processing speed and longer battery life in next-generation mobile Internet devices and other systems."
IBM said customers can begin their designs now using 32nm technology and then transition to 28nm for density and power advantages without the need for a major redesign.
One prominent customer is U.K.-based ARM, whose basic chip design has been used in billions of devices all over the world. ARM is collaborating with the IBM alliance to develop a design platform for 32nm and 28nm technology and is tuning its Cortex processor family and future processors to exploit the technology's capabilities, IBM said.
Which of these fuels will play a major role in our future? The answer is not clear, as factors such as land availability, future technical innovation, environmental policy regulating greenhouse gas emissions, governmental subsidies for fossil fuel extraction/processing, implementation of net metering, and public support for alternative fuels will all affect the outcome. A critical point is that as research and development improve the efficiency of biofuel production processes, economic feasibility will improve as well.
Biofuel production is best evaluated in the context of a biorefinery. In a biorefinery, agricultural feedstock and by-products are processed through a series of biological, chemical, and physical
processes to recover biofuels, biomaterials, nutraceuticals, polymers, and specialty chemical compounds.2,3 This concept can be compared to a petroleum refinery in which oil is processed to produce fuels, plastics, and petrochemicals. The recoverable products in a biorefinery range from basic food ingredients to complex pharmaceutical compounds and from simple building materials to complex industrial composites and polymers. Biofuels, such as ethanol, hydrogen, or biodiesel, and biochemicals, such as xylitol, glycerol, citric acid, lactic acid, isopropanol, or vitamins, can be produced for use in the energy, food, and nutraceutical/pharmaceutical industries. Fibers, adhesives, biodegradable plastics such as polylactic acid, degradable surfactants, detergents, and enzymes can be recovered for industrial use. Many biofuel compounds may only be economically feasible to produce when valuable coproducts are also recovered and when energy-efficient processing is employed. One advantage of microbial conversion processes over chemical processes is that microbes are able to select their substrate among a complex mixture of compounds, minimizing the need for isolation and purification of substrate prior
to processing. This can translate to more complete use of substrate and lower chemical requirements for processing.
Early proponents of the biorefinery concept emphasized the zero-emissions goal inherent in the plan—waste streams, water, and heat from one process are utilized as feed streams or energy to another, to fully recover all possible products and reduce waste with maximized efficiency.2,3 Ethanol and biodiesel production can be linked effectively in this way. In ethanol fermentation, 0.96 kg of CO2 is produced per kilogram of ethanol formed. The CO2 can be fed to algal bioreactors to produce oils used for biodiesel production. Approximately 1.3 kg CO2 is consumed per kilogram of algae grown, or 0.5 kg algal oil produced by oleaginous strains. Another example is the potential application of microbial fuel cells to generate electricity by utilizing waste organic compounds in spent fermentation media from biofuel production processes.
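The figures above combine into a simple mass balance. The constants come straight from the text; the function name and the one-tonne example are illustrative:

```python
CO2_PER_KG_ETHANOL = 0.96  # kg CO2 released per kg ethanol fermented
CO2_PER_KG_ALGAE = 1.3     # kg CO2 fixed per kg algae grown
OIL_PER_KG_ALGAE = 0.5     # kg oil per kg algae, oleaginous strains

def algal_oil_from_ethanol(kg_ethanol):
    """Potential algal oil (kg) if an ethanol plant's CO2 feeds algal bioreactors."""
    kg_co2 = kg_ethanol * CO2_PER_KG_ETHANOL
    kg_algae = kg_co2 / CO2_PER_KG_ALGAE
    return kg_algae * OIL_PER_KG_ALGAE
```

So each kilogram of ethanol produced could, in principle, support about 0.37 kg of algal oil downstream.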
Also encompassed in a sustainable biorefinery is the use of "green" processing technologies to replace traditional chemical processing. For example, supercritical CO2 can be used to extract oils and nutraceutical compounds from biomass instead of toxic organic solvents such as hexane.4 Ethanol can be used in biodiesel production from biological oils in place of the toxic petroleum-based methanol traditionally used. Widespread application of biorefineries would allow for replacement of petroleum-derived products with sustainable, carbon-neutral, low-polluting alternatives. In addition to the environmental benefits of biorefining, there are economic benefits as new industries grow in response to need.2,3 A thorough economic analysis, including ecosystem and environmental impact and harvest, transport, processing, and storage costs, must be considered. The R&D Act of 2000 and the Energy Policy Act of 2005 recommend increasing biofuel production from 0.5 to 20 percent and biobased chemicals and materials from 5 to 25 percent,5 a goal that may best be reached through a biorefinery model.
A comparison of biofuel energy contents reveals that hydrogen gas has the highest energy density of common fuels expressed on a mass basis. For liquid fuels, biodiesel, gasoline, and diesel have energy densities in the 40 to 46 kJ/g range. Biodiesel has about 13 percent lower energy density than petroleum diesel fuel, but combusts more completely and has greater lubricity.7 The infrastructure for transportation, storage, and distribution of hydrogen is lacking, a gap that gives biodiesel a significant advantage in adoption.
Another measure of energy content is energy yield (YE), the energy produced per unit of fossil fuel energy consumed. YE for biodiesel from soybean oil is 3.2 compared to 1.5 for ethanol from
corn and 0.84 and 0.81 for petroleum diesel and gasoline, respectively.8 Even greater YE values are achievable for biodiesel created from algal sources or for ethanol from cellulosic sources.9 The high net energy gain for biofuels is attributed to the solar energy captured compared to an overall net energy loss for fossil fuels.
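The YE figures above can be inverted to compare fossil-energy inputs per unit of delivered fuel energy. A small sketch using the quoted values (the labels and function name are ours):

```python
# Energy yield (YE): fuel energy delivered per unit of fossil energy consumed.
ENERGY_YIELD = {
    "biodiesel (soy)": 3.2,
    "ethanol (corn)": 1.5,
    "petroleum diesel": 0.84,
    "gasoline": 0.81,
}

def fossil_input_mj(fuel, delivered_mj):
    """Fossil energy (MJ) consumed to deliver a given amount of fuel energy."""
    return delivered_mj / ENERGY_YIELD[fuel]
```

Delivering 100 MJ of soy biodiesel consumes about 31 MJ of fossil energy, while delivering 100 MJ of petroleum diesel consumes about 119 MJ, a net energy loss, which is exactly the point the text makes.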
So the question remains: What could anyone possibly find on your computer or home network that would be of value to him or her? The answer might surprise you. For example, they might want to:
1. Steal your Microsoft Money and Quicken files, where you store personal financial records.
2. Get their hands on your personal savings and checking account numbers.
3. Search for your personal PINs.
4. Steal electronic copies of your taxes that have been prepared using desktop tax reporting applications.
5. Steal your credit card numbers or any other financial information that is of value.
6. Steal important business information on your computer that might be of value to a competitor.
7. Launch distributed denial-of-service attacks against other Internet computers and Web sites.
All these types of information can easily be captured and sent to the hacker using a worm program, as depicted in Figure 1.4. A worm can be initially implanted on your computer by hiding inside an e-mail attachment which, when double-clicked, silently installs the worm on your hard drive. The worm then goes to work searching your hard disk for valuable information that it can relay back to its creator.
Money and personal secrets might not be the only things of value your computer
can provide to hackers. Some people simply delight in causing trouble or playing
practical jokes. It is not fun to find out that somebody has hacked into your computer and deleted important files or filled up your hard drive with useless garbage, but to some crackers this is a form of amusement.
A cracker can also take control of your computer without your knowledge and use it
and thousands of other computers to launch attacks on commercial Web sites and
other corporate communications systems. Crackers achieve this task by breaking
into individual computer systems and planting Trojan horses that, once installed,
communicate back to the cracker’s computer and perform whatever instructions they
are told to do. To prevent this sort of silent hostile takeover, you need to install a personal
firewall and configure it to block all unapproved outgoing traffic from your
computer. As you will see in Chapter 3, you can configure your firewall with a list of
approved Internet applications such as Internet Explorer and Outlook Express. Your
personal firewall will then deny access to the Internet to any application that is not
on this list, including any Trojan horse applications.
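The approved-list behavior described above amounts to a default-deny rule for outbound connections. A minimal sketch; the process names are illustrative, and a real personal firewall also verifies the executable itself rather than trusting its name:

```python
# Applications the user has approved for Internet access.
APPROVED_APPS = {"iexplore.exe", "msimn.exe"}  # e.g., IE and Outlook Express

def allow_outbound(app_name):
    """Permit outbound traffic only for explicitly approved applications.

    Anything not on the list, including a Trojan horse trying to phone
    home to its creator, is denied by default.
    """
    return app_name.lower() in APPROVED_APPS
```

The key design choice is the default: unknown applications are blocked unless the user explicitly approves them, not the other way around.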
The term Trojan horse comes from the trick that the Greek attackers used to penetrate the defenses of the city of Troy. It describes a program that sneaks onto your computer by hiding within a seemingly legitimate piece of software. The horse later begins to run amok. Back Orifice made the Trojan horse software attack famous. Back Orifice is a Trojan horse program whose name mimics the Microsoft BackOffice suite of network applications. Once planted, the Back Orifice program provides the hacker with complete control over the infected computer.
Sniffing is the use of a network interface to receive data not intended for the machine in which the interface resides. A variety of types of machines need to have this capability. A token-ring bridge, for example, typically has two network interfaces that normally receive all packets traveling on the media on one interface and retransmit some, but not all, of these packets on the other interface. Another example of a device that incorporates sniffing is one typically marketed as a “network analyzer.” A network analyzer helps network administrators diagnose a variety of obscure problems that may not be visible on any one particular host. These problems can involve unusual interactions between more than just one or two machines and sometimes involve a variety of protocols interacting in strange ways. Devices that incorporate sniffing are useful and necessary. However, their very existence implies that a malicious person could use such a device or modify an existing machine to snoop on network traffic. Sniffing programs could be used to gather passwords, read inter-machine e-mail, and examine client-server database records in transit. Besides these high-level data, low-level information might be used to mount an active attack on data in another computer system.
Sniffing: How It Is Done
In a shared media network, such as Ethernet, all network interfaces on a network segment have access to all of the data that travels on the media. Each network interface has a hardware-layer address that should differ from all hardware-layer addresses of all other network interfaces on the network. Each network also has at least one broadcast address that corresponds not to an individual network interface, but to the set of all network interfaces. Normally, a network interface will only respond to a data frame carrying either its own hardware-layer address in the frame’s destination field or the “broadcast address” in the destination field. It responds to these frames by generating a hardware interrupt to the CPU. This interrupt gets the attention of the operating system, and passes the data in the frame to the operating system for further processing.
The term “broadcast address” is somewhat misleading. When the sender wants to
get the attention of the operating systems of all hosts on the network, he or she uses
the “broadcast address.” Most network interfaces are capable of being put into a
“promiscuous mode.” In promiscuous mode, network interfaces generate a hardware
interrupt to the CPU for every frame they encounter, not just the ones with
their own address or the “broadcast address.” The term “shared media” indicates to
the reader that such networks broadcast all frames—the frames travel on all the
physical media that make up the network.
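The filtering rule described above can be sketched as a single decision function. This simplifies reality (real interfaces also match multicast group addresses), and the function itself is illustrative:

```python
BROADCAST = b"\xff" * 6  # the Ethernet broadcast address

def accept_frame(frame, my_mac, promiscuous=False):
    """Decide whether a network interface passes a frame up to the OS.

    The destination hardware address is the first 6 bytes of an Ethernet
    frame. Normally only frames addressed to this interface or to the
    broadcast address trigger an interrupt; in promiscuous mode (as a
    sniffer uses) every frame is accepted.
    """
    if promiscuous:
        return True
    dest = frame[:6]
    return dest == my_mac or dest == BROADCAST
```

A sniffer is, in effect, just software that flips the `promiscuous` flag and then inspects everything that arrives.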
At times, you may hear network administrators talk about their networking trouble spots when they observe failures in a localized area. They will say a particular area of the Ethernet is busier than other areas of the Ethernet where there are no problems. All of the packets travel through all parts of the Ethernet segment. Interconnection devices that do not pass all the frames from one side of the device to the other form the boundaries of a segment. Bridges, switches, and routers divide segments from each other, but low-level devices that operate on one bit at a time, such as repeaters and hubs, do not divide segments from each other. If only low-level devices separate two parts of the network, both are part of a single segment. All frames traveling in one part of the segment also travel in the other part. The broadcast nature of shared media networks affects network performance and reliability so greatly that networking professionals use a network analyzer, or sniffer, to troubleshoot problems. A sniffer puts a network interface in promiscuous mode so that the sniffer can monitor each data packet on the network segment. In the hands of an experienced system administrator, a sniffer is an invaluable aid in determining why a network is behaving (or misbehaving) the way it is. With an analyzer, you can determine how much of the traffic is due to which network protocols, which hosts are the source of most of the traffic, and which hosts are the destination of most of the traffic. You can also examine data traveling between a particular pair of hosts and categorize it by protocol and store it for later analysis offline. With a sufficiently powerful CPU, you can also do the analysis in real time. Most commercial network sniffers are rather expensive, costing thousands of dollars. When you examine these closely, you notice that they are nothing more than a portable computer with an Ethernet card and some special software. 
The only item that differentiates a sniffer from an ordinary computer is software. It is also easy to download shareware and freeware sniffing software from the Internet or various bulletin board systems.
The ease of access to sniffing software is great for network administrators because this type of software helps them become better network troubleshooters. However, the availability of this software also means that malicious computer users with access to a network can capture all the data flowing through the network. The sniffer can capture all the data for a short period of time or selected portions of the data for a fairly long period of time. Eventually, the malicious user will run out of space to store the data—the network I use often has 1000 packets per second flowing on it. Just capturing the first 64 bytes of data from each packet fills up my system’s local disk space within the hour.
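The arithmetic behind that disk-space claim is easy to check with the numbers quoted (1000 packets per second, 64 bytes captured from each):

```python
packets_per_second = 1000
bytes_per_packet = 64      # only the first 64 bytes of each packet
seconds_per_hour = 3600

bytes_per_hour = packets_per_second * bytes_per_packet * seconds_per_hour
print(bytes_per_hour)  # 230400000, i.e. roughly 230 MB per hour
```

At that rate, a mid-1990s local disk does indeed fill up within the hour.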
Esniff.c is a simple 300-line C language program that works on SunOS 4.x. When
run by the root user on a Sun workstation, Esniff captures the first 300 bytes of each
TCP/IP connection on the local network. It is quite effective at capturing all usernames and passwords entered by users for telnet, rlogin, and FTP. TCPDump 3.0.2 is a common, more sophisticated, and more portable Unix sniffing program written by Van Jacobson, a famous developer of high-quality TCP/IP software. It uses the libpcap library for portably interfacing with promiscuous mode network interfaces. The most recent version is available via anonymous FTP to ftp.ee.lbl.gov.
NetMan contains a more sophisticated, portable Unix sniffer in several programs in
its network management suite. The latest version of NetMan is available via
anonymous FTP to ftp.cs.curtin.edu.au in the directory /pub/netman.
EthDump is a sniffer that runs under DOS and can be obtained via anonymous FTP
from ftp.eu.germany.net in the directory /pub/networking/inet/ethernet/.
On some Unix systems, TCPDump comes bundled with the vendor OS. When run by an ordinary, unprivileged user, it does not put the network interface into promiscuous mode; even so, a user with this command available can see all data being sent to the Unix host, not just data sent to processes owned by that user. System administrators concerned about sniffing should remove user execution privileges from this program.
Sniffing: How It Threatens Security
Sniffing data from the network leads to loss of privacy of several kinds of information that should be private for a computer network to be secure. These kinds of information include the following:
* Passwords
* Financial account numbers
* Private data
* Low-level protocol information
The following subsections provide examples of each kind.
Sniffing Passwords
Perhaps the most common loss of computer privacy is the loss of passwords. Typical users type a password at least once a day. Data is often thought of as secure because access to it requires a password. Users usually are very careful about guarding their password by not sharing it with anyone and not writing it down anywhere.
Passwords are used not only to authenticate users for access to the files they keep in their private accounts; other passwords are often employed within multilevel secure database systems. When the user types any of these passwords, the system does not echo them to the computer screen, to ensure that no one will see them. After users have jealously guarded these passwords, and the computer system has reinforced the notion that they are private, a setup that sends each character of a password across the network in the clear makes them extremely easy for any Ethernet sniffer to capture. End users do not realize just how easily these passwords can be found by someone using a simple and common piece of software.
Sniffing Financial Account Numbers
Most users are uneasy about sending financial account numbers, such as credit card numbers and checking account numbers, over the Internet. This apprehension may be partly because of the carelessness most retailers display when tearing up or returning carbons of credit card receipts. The privacy of each user’s credit card numbers is important. Although the Internet is by no means bulletproof, the most likely location for the loss of privacy to occur is at the endpoints of the transmission. Presumably, businesses making electronic transactions are as fastidious about security as those that make paper transactions, so the highest risk probably comes from the same local network in which the users are typing passwords. However, much larger potential losses exist for businesses that conduct electronic funds transfer or electronic document interchange over a computer network. These transactions involve the transmission of account numbers that a sniffer could pick up; the thief could then transfer funds into his or her own account or order goods paid for by a corporate account. Most credit card fraud of this kind involves only a few thousand dollars per incident.
Sniffing Private Data
Loss of privacy is also common in e-mail transactions. Many e-mail messages have been
publicized without the permission of the sender or receiver. Remember the Iran-Contra affair, in which President Reagan’s secretary of defense, Caspar Weinberger, was indicted. A crucial piece of evidence was backup tapes of PROFS e-mail on a National Security Council computer. The e-mail was not intercepted in transit, but in a typical networked system, it could have been. It is not at all uncommon for e-mail to contain confidential business information or personal information. Even routine memos can be embarrassing when they fall into the wrong hands.
Sniffing Low-Level Protocol Information
The information that network protocols send between computers includes hardware addresses of local network interfaces, the IP addresses of remote network interfaces, IP routing information, and sequence numbers assigned to bytes on a TCP connection. Knowledge of any of this information can be misused by someone interested in attacking the security of machines on the network. See the second part of this chapter for more information on how these data can pose risks for the security of a network. A sniffer can obtain any of these data. After an attacker has this kind of information, he or she is in a position to turn a passive attack into an active attack with even greater potential for damage.
Ref: Gaining Access and Securing the Gateway
The construction industry is one of the biggest industries in the United Kingdom, although most workers are employed by small companies employing fewer than 25 people. The construction industry carries out all types of building work from basic housing to offices, hotels, schools and airports. In all of these construction projects the Electrotechnical Industry plays a major role in designing and installing the electrical systems to meet the needs of those who will use the completed buildings.
The construction process is potentially hazardous and many construction sites these days insist on basic safety standards being met before you are allowed on
site. All workers must wear hard hats and safety boots or safety trainers and use low voltage or battery tools. When the building project is finished, all safety systems
will be in place and the building will be safe for those who will use it. However, during the construction period, temporary safety systems are in place. People
work from scaffold towers, ladders and stepladders. Permanent stairways and safety handrails must be put in by the construction workers themselves.
When the electrical team arrives on site to, let us say, ‘first fix’ a new domestic dwelling house, the downstairs floorboards and the ceiling plasterboards will
probably not be in place, and the person putting in the power cables for the downstairs sockets will need to step over the floor joists, or walk and kneel on
planks temporarily laid over the floor joists. The electrical team spend a lot of time on their hands and knees in confined spaces, on ladders, scaffold
towers and on temporary safety systems during the ‘first fix’ of the process and, as a consequence, slips, trips and falls do occur.
To make all working environments safer, laws and safety regulations have been introduced. To make your working environment safe for yourself and those
around you, you must obey all the safety regulations that are relevant to your work.
The many laws and regulations controlling the working environment have one common purpose: to make the working environment safe for everyone.
Let us now look at some of these laws and regulations as they apply to the Electrotechnical Industry.
Here are the details of the specific parts you will need:

Part  Total Qty.  Description                         Substitutions
R1    1           33K 1/4W Resistor
R2    1           5K Pot
R3    1           1.5K 1/4W Resistor
C1    1           1uF 16V Electrolytic Capacitor
Q1    1           2N3565 NPN Transistor
M1    1           0-1 mA Analog Meter
MISC  1           Case, Wire, Electrodes (See Notes)
1. The electrodes can be alligator clips (although they can be painful), electrode pads (like the type they use in the hospital), or just wires and tape.
2. To use the circuit, attach the electrodes to the back of the subject's hand, about 1 inch apart. Then adjust the meter for a reading of 0. Ask the questions. You know the subject is lying when the meter reading changes.
and destruction of resources and information, encryption/decryption protects
information from being usable by the attacker. Encryption/decryption is a security
mechanism where cipher algorithms are applied together with a secret key
to encrypt data so that they are unreadable if they are intercepted. Data are then
decrypted at or near their destination. This is shown in Figure 3.8 .
As such, encryption/decryption enhances other forms of security by protecting
information in case other mechanisms fail to keep unauthorized users from
that information. There are two common types of encryption/decryption: public
key and private key. Software implementations of both are commonly available.
Examples include data encryption standard (DES) private key encryption, triple
DES private key encryption, and Rivest, Shamir, and Adleman (RSA) public key
encryption.
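To illustrate the private-key (symmetric) idea, where both ends share one secret and data is unreadable in transit, here is a deliberately toy XOR stream cipher. It is not DES, triple DES, or any vetted algorithm, and all names are ours; real systems should use an established cipher library:

```python
import hashlib
from itertools import count

def _keystream(key, n):
    """Stretch a shared secret into n pseudorandom bytes (toy construction)."""
    out = b""
    for counter in count():
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        if len(out) >= n:
            return out[:n]

def encrypt(key, data):
    """XOR data with a key-derived stream; applying it twice decrypts."""
    ks = _keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

decrypt = encrypt  # XOR is its own inverse
```

An interceptor without the shared key sees only scrambled bytes; the receiver, holding the same key, recovers the plaintext exactly, which is the behavior the mechanism above is meant to guarantee.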
Public key infrastructure (PKI) is an example of a security infrastructure that
uses both public and private keys. Public key infrastructure is a security infrastructure
that combines security mechanisms, policies, and directives into a system that
is targeted for use across unsecured public networks (e.g., the Internet), where
information is encrypted through the use of a public and a private cryptographic
key pair that is obtained and shared through a trusted authority. PKI is targeted
toward legal, commercial, official, and confidential transactions, and includes cryptographic
keys and a certificate management system. Components of this system are:
■ Managing the generation and distribution of public/private keys
■ Publishing public keys with UIDs as certificates in open directories
■ Ensuring that specific public keys are truly linked to specific private keys
■ Authenticating the holder of a public/private key pair
PKI uses one or more trusted systems known as Certification Authorities (CA),
which serve as trusted third parties for PKI. The PKI infrastructure is hierarchical,
with issuing authorities, registration authorities, authentication authorities, and
local registration authorities.
Another example is the Secure Sockets Layer (SSL). SSL is
a security mechanism that uses RSA-based authentication to recognize a party’s
digital identity and uses RC4 to encrypt and decrypt the accompanying transaction
or communication. SSL has grown to become one of the leading security protocols
on the Internet.
One trade-off with encryption/decryption is a reduction in network performance.
Depending on the type of encryption/decryption and where it is implemented in
the network, network performance (in terms of capacity and delay) can be degraded
from 15% to 85% or more. Encryption/decryption usually also requires administration
and maintenance, and some encryption/decryption equipment can be expensive.
While this mechanism is compatible with other security mechanisms, trade-offs
such as these should be considered when evaluating encryption/decryption.
ground for less-than-scrupulous individuals who have both the tools and the know-how to penetrate your computer and steal your personal and financial information or who simply enjoy playing practical jokes or deliberately harming other people’s computer systems. The introduction of widespread high-speed Internet access makes your computer an easier and more attractive target for these people. The mission of this book is to introduce you to personal firewalls and to help you protect your data and your privacy when you are surfing around the
World Wide Web.
1. Learn about the hacker community and the dangers of surfing unprotected on the Internet
2. Examine the dangers of high-speed cable and DSL access
3. Discover how easy it is to protect yourself by installing your own personal firewall
4. Review the differences between software and hardware firewalls and decide which solution is best for you
5. Find out which features you should look for when you go firewall shopping
cause trouble. In fact, you might be surprised to know that there is an active hacker
community flourishing on the Internet. This community has a heritage that goes
back to the 1960s and can trace its roots back to the first hackers who used to hack
into the phone company to steal long-distance service. These people eventually gave
themselves the title of phone phreaks. As you will see, colorful names abound in the hacker community.
Perhaps the best way to learn about and understand the hacker community is to
examine its various self-named members. These classifications include:
A hacker is an individual who possesses a technical mastery of computing skills and
who thrives on finding and solving technical challenges. This person usually has a
very strong UNIX and networking background. A hacker’s networking background
includes years of experience on the Internet and the ability to break into and infiltrate
other networks. Hackers can program using an assortment of programming
languages. In fact, this person can probably learn a new language in a matter of
days. The title of hacker is not something that you can claim. Instead, your peers
must give it to you. These people thrive on the admiration of their peers. In order to
earn this level of respect, an individual must share his or her knowledge. It is this
sharing of knowledge that forms the basis of the hacker community.
UNIX is one of the oldest and most powerful operating systems in the world. It’s also
one of the most advanced. UNIX provides most of the computing infrastructure that
runs the Internet today and a comprehensive understanding of UNIX’s inner workings is
a prerequisite for a true hacker.
One basic premise of this community is that no one should ever have to solve the
same problem twice. Time is too precious to waste reinventing the wheel. Therefore,
hackers share their knowledge and discoveries and as a result their status within the
hacker community grows as does the community itself.
Hackers believe that information is meant to be free and that it is their duty to make
sure that it is. Hackers are not out to do any harm. Their mission, they think, is to
seek a form of personal enlightenment, to constantly learn and explore and to
share. Of course, this is a terribly self-gratifying view but that is how hackers see
each other. They see their conduct as honorable and noble.
But the bottom line is that hackers use their computing skills to break into computers
and networks. Even though they might not do harm, it is still an unethical and
illegal act. Hacking into someone else’s computer is very much the same thing as
breaking into their home. Whether it makes them more enlightened or not is insufficient
justification for the crimes that they commit.
Another group in the hacker community is the group that gives hackers a bad
name. The individuals in this group are known as crackers. Crackers are people who
break into computers and networks with the intent of creating mischief. Crackers
tend to get a great deal of media attention and are always called hackers by the TV
news and press. This, of course, causes hackers much frustration. Hackers have little
respect for crackers and want very much to distinguish themselves from them. To a
hacker, a cracker is a lower form of life deserving no attention. Of course, crackers
always call themselves hackers.
Usually, a cracker doesn’t have anywhere near the skill set of a true hacker,
although they do possess a certain level of expertise. Mostly they substitute brute
force attacks and a handful of tricks in place of the ingenuity and mastery wielded by true hackers.
Whacker is another title that you might have heard. A whacker is essentially a person
who shares the philosophy of the hacker, but not his or her skill set. Whackers
are less sophisticated in their techniques and ability to penetrate systems. Unlike a
hacker, a whacker is someone who has never achieved the goal of making the perfect
hack. Although less technically sophisticated, whackers still possess a formidable
skill set and although they might not produce new discoveries, they are able to follow
in the footsteps of hackers and can often reproduce their feats in an effort to
learn from them.
A samurai is a hacker who decides to hire out his or her finely honed skills in order
to perform legal activities for corporations and other organizations. Samurai are
often paid by companies to try to break into their networks. The samurai is modeled
after the ancient Japanese Samurai and lives by a rigid code of honor that prohibits
the misuse of his or her craft for illegal means.
Larvas are beginner hackers. They are new to the craft and lack the years of experience
required to be a real hacker. They idolize true hackers and in time hope to
reach true hacker status.
So what do hackers, crackers, whackers, samurai, or larvas want with you or your
computer? After all, there are plenty of corporate and government computers and
networks in the world that must offer far more attractive targets. Well, although
hackers, whackers, and samurai might not be targeting them, home computers can
often be viewed as low-hanging fruit for crackers who want easy access to financial
information, and as a fertile training ground for larvas to play and experiment.
But the biggest threat of all might come from a group of people not associated with
the hacker community. This group consists of teenagers and disgruntled adults with
too much time on their hands. These people usually have little if any real hacking
skills. And were it not for the information sharing code of the hacker community,
these people would never pose a threat to anybody. However, even with very little
know-how, these people can still download and execute scripts and programs developed
by real hackers. In the wrong hands, these programs seek out and detect vulnerable
computers and networks and wreak all kinds of destruction.
Other Hacker Terms
In addition to the more common titles previously presented, there are a few other
hacker terms that you should be aware of. For example, a wannabee is an individual
who is in the beginning larva stage of his or her hacking career. Wannabees are
seen as very eager pupils and can be dangerous because of their inexperience even
when their intentions are good. A dark-side hacker is an individual who for one reason
or another has lost their faith in the hacker philosophy and now uses their skills
maliciously. A demigod is a hacker with decades of experience and a worldwide reputation.
Just remember that somebody is always watching you; on the Internet, nothing is private anymore, and it’s not always the bad guys that you need to be worried about. In early 2000, the FBI installed a device called Carnivore at every major ISP that allowed it to trap and view every IP packet that crossed over the wire. Carnivore has since been given the less intimidating name DCS1000. The FBI installed this surveillance hardware and software, they say, so that it can collect court-ordered information regarding specifically targeted individuals. It’s kind of scary, but it is true. Just be careful with whatever you put into your e-mail, because you never know who will read it.
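To make the idea of "viewing every IP packet" concrete, here is a minimal sketch of what a packet-level wiretap actually sees: the fixed 20-byte IPv4 header at the front of every packet, carrying the sender and receiver addresses in the clear. This is an illustrative parser only (the sample packet is hand-built, not captured), not a description of how Carnivore itself worked.

```python
import struct

def parse_ipv4_header(packet: bytes) -> dict:
    """Decode the fixed 20-byte IPv4 header at the start of a raw packet."""
    version_ihl, tos, total_len, ident, flags_frag, ttl, proto, checksum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", packet[:20])
    return {
        "version": version_ihl >> 4,
        "header_len": (version_ihl & 0x0F) * 4,   # IHL field is in 32-bit words
        "total_len": total_len,
        "ttl": ttl,
        "protocol": proto,                        # 6 = TCP, 17 = UDP
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }

# A hand-built sample header: IPv4, TTL 64, TCP, 10.0.0.1 -> 192.168.1.5
sample = struct.pack("!BBHHHBBH4s4s",
                     0x45, 0, 40, 0, 0, 64, 6, 0,
                     bytes([10, 0, 0, 1]), bytes([192, 168, 1, 5]))
info = parse_ipv4_header(sample)
print(info["src"], "->", info["dst"], "proto", info["protocol"])
# prints: 10.0.0.1 -> 192.168.1.5 proto 6
```

The point: anyone sitting on the wire can read who is talking to whom without any special skill, which is exactly why unencrypted e-mail should be treated as a postcard, not a sealed letter.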
Selasa, 16 Desember 2008
of why that function is needed for that particular network. While one may argue
that security is always necessary, we still need to ensure that the security mechanisms
we incorporate into the architecture are optimal for achieving the security
goals for that network. Therefore, toward developing a security architecture, we
should answer the following questions:
1. What are we trying to solve, add, or differentiate by adding security mechanisms
to this network?
2. Are security mechanisms sufficient for this network?
While it is likely that some degree of security is necessary for any network, we
should have information from the threat analysis to help us decide how much
security is needed. As with the performance architecture, we want to avoid implementing
(security) mechanisms just because they are interesting or new.
When security mechanisms are indicated, it is best to start simple and work
toward a more complex security architecture when warranted. Simplicity may be
achieved in the security architecture by implementing security mechanisms only in
selected areas of the network (e.g., at the access or distribution [server] networks),
or by using only one or a few mechanisms, or by selecting only those mechanisms
that are easy to implement, operate, and maintain.
In developing the security architecture, you should determine what problems
your customer is trying to solve. This may be clearly stated in the problem definition,
developed as part of the threat analysis, or you may need to probe further to
answer this question. Some common areas that are addressed by the security architecture include:
■ Which resources need to be protected
■ What problems (threats) are we protecting against
■ The likelihood of each problem (threat)
This information becomes part of your security and privacy plan for the network.
This plan should be reviewed and updated periodically to reflect the
current state of security threats to the network. Some organizations review
their security plans yearly, others more frequently, depending on their requirements.
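The checklist above (resources, threats, likelihoods) can be sketched as a small data structure. This is a hypothetical example only — the resources, threats, and likelihood figures are illustrative, not taken from any real plan — but it shows how recording likelihood per threat lets you prioritize which mechanisms to implement first.

```python
from dataclasses import dataclass

@dataclass
class ThreatEntry:
    resource: str      # which resource needs to be protected
    threat: str        # what problem (threat) we are protecting against
    likelihood: float  # estimated probability over the review period (0..1)

# Illustrative entries in a security and privacy plan.
plan = [
    ThreatEntry("customer database", "unauthorized access", 0.30),
    ThreatEntry("mail servers", "virus/worm infection", 0.60),
    ThreatEntry("WAN links", "denial of service", 0.10),
]

# Review the plan with the highest-likelihood threats first.
for entry in sorted(plan, key=lambda e: e.likelihood, reverse=True):
    print(f"{entry.likelihood:.0%}  {entry.threat}  ({entry.resource})")
```

Keeping the plan in a form like this also makes the periodic review concrete: updating the plan is editing the entries and re-ranking, rather than rewriting prose.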
Note that there may be groups within a network that have different security
needs. As a result, the security architecture may have different levels of security.
This equates to the security perimeters or zones introduced in the previous chapter.
How security zones are established is discussed later in this chapter.
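The idea of different security levels for different groups can be sketched as a mapping from zones to the mechanisms each requires. The zone names, levels, and mechanism lists below are hypothetical placeholders, chosen only to illustrate the shape of such a policy.

```python
# Hypothetical security zones: each group in the network gets a security
# level and the set of mechanisms that level calls for.
security_zones = {
    "DMZ":      {"level": "high",   "mechanisms": ["firewall", "IDS", "encryption"]},
    "servers":  {"level": "medium", "mechanisms": ["firewall", "access control"]},
    "desktops": {"level": "basic",  "mechanisms": ["access control"]},
}

def required_mechanisms(zone: str) -> list:
    """Look up which security mechanisms a zone's level calls for."""
    return security_zones[zone]["mechanisms"]

print(required_mechanisms("DMZ"))
# prints: ['firewall', 'IDS', 'encryption']
```

This keeps the architecture simple in low-risk zones while concentrating heavier mechanisms where the threat analysis indicates they are needed.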
Once you have determined which problems will be solved by each security
mechanism, you should then determine if these security mechanisms are sufficient
for that network. Will they completely solve the customer’s problems, or are they
only a partial solution? If they are a partial solution, are there other mechanisms that
are available, or will be available within your project time frame? You may plan to
implement basic security mechanisms early in the project, and upgrade or add to
those mechanisms at various stages in the project.