Wednesday, 29 February 2012
BMW's New Tri-Turbo Diesel Engine Technology
Why is it that whenever someone talks about performance, the first thing that comes to our minds is BMW? I guess they are just that good, and they are getting better as their technology rapidly advances. The leading performance car manufacturer is adding another piece to its already monumental collection of engines. BMW recently announced that its new tri-turbo diesel technology will be fitted into BMW “M models” that will be exhibited at the Geneva Auto Show.
The tri-turbo diesel engine is a vast leap for diesel technology. When people hear the word diesel, they tend to think of huge trucks, buses, and tractors, but that is no longer the case. BMW envisioned an engine that would deliver a great deal of power while retaining good fuel economy. The tri-turbo engine is a 3.0-liter inline six with not one, not two, but three turbochargers, which together produce 381 horsepower and 740 Nm (546 ft-lb) of torque. This new engine sets a record with 93.6 kW (127.3 hp) per liter of displacement.
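For readers who like to check the math, here is a quick back-of-the-envelope sketch in plain Python. The exact 2,993 cc displacement is my assumption for this engine family; every other value comes from the figures above:

```python
# Back-of-the-envelope check of the figures quoted above.
power_hp = 381          # metric horsepower, as quoted
displacement_l = 2.993  # nominal "3.0-liter" engine; 2,993 cc assumed
torque_nm = 740         # newton-meters, as quoted

hp_per_liter = power_hp / displacement_l  # specific output
kw_per_liter = hp_per_liter * 0.7355      # 1 metric hp is about 0.7355 kW
torque_ftlb = torque_nm * 0.7376          # 1 Nm is about 0.7376 ft-lb

print(f"{hp_per_liter:.1f} hp/L, {kw_per_liter:.1f} kW/L, {torque_ftlb:.0f} ft-lb")
# -> 127.3 hp/L, 93.6 kW/L, 546 ft-lb, matching the quoted figures.
```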
This record did not come easily: BMW had to add a third turbocharger and engineer a new aluminum crankcase. To provide optimum response and efficiency, the turbochargers are staged; two high-pressure units with variable vane geometry work alongside one larger unit that operates at lower pressure. The engine will power the following vehicles: the M550d xDrive Sedan, M550d xDrive Touring, X5 M50d, and X6 M50d.
One major concern when engineering this engine was whether it would be fuel efficient and eco-friendly. The answer is yes: it is very clean and efficient. Besides meeting strict EU5/EU6 emissions regulations, it delivers projected combined economy figures ranging from the U.S. equivalent of 30.5 mpg for the X6 M50d to 37.3 mpg for the M550d xDrive Sedan.
This new diesel technology shows just how advanced and how close we are to finding more efficient, more powerful, and more abundant sources to power our vehicles.
Here is a video showing just how this new Tri-Turbo diesel engine functions.
How Can Kudankulam Be Made Safer?
Unit 1 of the Kudankulam Nuclear Power Project (KKNPP) is in an advanced stage of commissioning, and construction of Unit 2 is progressing well. In the meanwhile, sections of the public have expressed apprehensions about the safety of these reactors. Lack of understanding, misconceptions and misinformation contribute to this; apparently, the Fukushima accident and other issues also influence them.
Twenty-five VVER 1,000 MW reactors are in operation now in five countries. Nine more are under construction. The version offered to India is more recent and has more advanced safety features.
Satisfactory
The Atomic Energy Regulatory Board (AERB) satisfied itself that the plant is of proven design. Indian specialists visited Russia and had significant exchanges of information with nuclear power plant designers. Indian engineers completed the licensing training process at either the Balakovo nuclear power plant (NPP) or the Kalinin NPP.
The AERB, the Bhabha Atomic Research Centre (BARC) and specialists from reputed academic institutions such as the Indian Institute of Technology, Mumbai, the Boilers Board and the Central Electricity Authority have spent over 7,000 man-days carrying out the safety review and inspection of the Kudankulam reactors.
These system-wise reviews were comprehensive. AERB used relevant documents from the International Atomic Energy Agency (IAEA) and IAEA's peer reviews of VVER for safety assessment of these reactors.
These reactors belong to the Generation 3 + category (with more safety features than Generation 3) with a simpler and standardised design.
The Kudankulam site is located in the lowest seismic hazard zone in the country. The water level experienced at the site during the December 26, 2004 tsunami, triggered by a magnitude-9.2 earthquake, was 2.2 metres above mean sea level. The safety-related buildings, including the safety diesel generators, are located at a higher elevation (9.3 metres); they belong to the highest seismic category and are closed with double-sealed, water-leak-tight doors.
The reactors have redundant, diverse and thus reliable provisions needed to control nuclear reactions, to cool the fuel and to contain radioactive releases. They have in–built safety features to handle Station Black Out.
Besides fast acting control rods, the reactors also have a “quick boron injection system”, serving as a back-up to inject concentrated boric acid into the reactor coolant circuit in an emergency. Boron is an excellent neutron absorber.
Retains radioactivity
The enriched uranium fuel is contained in zirconium-niobium tubes, which retain the radioactivity generated during the operation of the reactor. The fuel tubes sit inside the 22-cm-thick Reactor Pressure Vessel (RPV), which weighs 350 tonnes. The RPV in turn is kept inside a one-metre-thick concrete vault.
The reactor has double containment, inner 1.2 metre-thick concrete wall lined on the inside with a 6 mm layer of steel and an outer 60 cm thick concrete wall. The annulus between the walls is kept at negative pressure so that if any radioactivity is released it cannot go out. Air carrying such activity will have to pass through filters before getting released through the stack. Multiple barriers and systems ensure that radioactivity is not released into the environment.
KKNPP-1&2 have many new safety systems compared with earlier models. A four-train safety system, instead of a single train, gives enhanced reliability. The reactors have many passive safety systems, which depend on never-failing forces such as gravity, conduction and convection.
Decay heat removal
Its Passive Heat Removal System (PHRS) is capable of removing the decay heat of the reactor core to the outside atmosphere during a Station Black Out (SBO) condition lasting up to 24 hours. It can maintain the hot shutdown condition of the reactor, thus delaying the need for boron injection.
It works without any external or diesel power or manual intervention.
The reactors are equipped with passive hydrogen recombiners to avoid the formation of explosive mixtures. They also have a reliable Emergency Core Cooling System (ECCS).
Core catcher
Located outside the reactor vessel, a core catcher in the form of a vessel weighing 101 tonnes and filled with a specially developed compound (oxides of Fe, Al and Gd) is provided to retain solid and liquid fragments of the damaged core, parts of the reactor pressure vessel and reactor internals under severe accident conditions.
The presence of gadolinium (Gd), a strong neutron absorber, ensures that the molten mass does not go critical. The vessel prevents the molten material from spreading beyond the limits of containment. The filler compound has been developed to release minimal gas during dispersal and retention of the core melt.
The Fukushima plant spread gloom; the Onagawa plant close to it, in contrast, shut down safely, and its gym served for three months as a shelter for those made homeless (Reuters, Oct 21). The plant showed that it is possible for nuclear facilities to withstand even the greatest shocks and to retain public trust.
The Kudankulam reactors are more modern and safer. Exercising due diligence, the AERB issued clearances to the project at various stages. The public may rest assured that Indian scientists and engineers will operate the reactors safely, and the AERB shall continue to enforce measures to maintain the safe operation of these advanced nuclear power reactors.
Tune-Ups in Vehicles
A car is essentially a machine, and as such, it requires a certain amount of preventative maintenance in order to continue to perform. A tune up is a regularly scheduled opportunity, usually once a year, to do all of the preventative maintenance that needs to be done. Ensuring that your car gets a tune up regularly will help maintain the performance of your car and extend its life.
A tune up generally includes replacement of several parts on your car. These parts may seem superficial, but failing to replace them regularly can decrease your car's performance and may even lead to other problems. For example, an air filter should be replaced at least once a year; failing to replace it when it's dirty will cause your engine to get less and less of the air it needs to run properly. If the problem is left unattended, the air-fuel mixture will run richer and richer, meaning that there will be too much fuel and not enough air in the mixture, and eventually other parts will begin to fail.
As you can see, a regular tune up is important to your car's performance. A tune up should involve replacing the air filter, replacing or cleaning the spark plugs, and replacing the distributor cap and rotor. A tune up can also include replacement of the spark plug wires, fuel filter, PCV valve, and oxygen sensor.
Maintenance that is not included in the basic tune up may also be required, so a yearly tune up provides a good opportunity to check the car's systems, such as the brakes and clutch; all fluid and oil levels; and the operation of any other systems that are not used or checked regularly. If the tune up is performed in spring or early summer, the air conditioning system should be checked as well, as it likely will not have been used for many months.
A note on newer cars: most new cars use platinum spark plugs, which do not require frequent replacement. Platinum spark plugs are often claimed to last 60,000 to 100,000 miles (roughly 96,500 to 161,000 km), or even more. These spark plugs will not need to be replaced with every tune up. Some newer cars also use an electronic ignition instead of a distributor, and therefore do not need a new distributor cap and rotor. For most cars, it is a good idea to check the owner's manual or shop manual to see what maintenance is recommended during a tune up.
Sunday, 26 February 2012
Dual Core Processor
A dual core processor for a computer is a central processing unit (CPU) that has two separate cores on the same die, each with its own cache. It essentially is two microprocessors in one. This type of CPU is widely available from many manufacturers. Other types of multi-core processors also have been developed, including quad-core processors with four cores each, hexa-core processors with six, octa-core processors with eight and many-core processors with an even larger number of cores.
In a single-core or traditional processor, the CPU is fed strings of instructions that it must order, execute, then selectively store in its cache for quick retrieval. When data outside the cache is required, it is retrieved through the system bus from random access memory (RAM) or from storage devices. Accessing these slows down performance to the maximum speed that the bus, RAM or storage device will allow, which is far slower than the speed of the CPU.
This situation is compounded when the computer user is multi-tasking. In this case, the processor must switch back and forth between two or more sets of data streams and programs. CPU resources are depleted, and performance suffers.
In a dual core processor, each core handles incoming data strings simultaneously to improve efficiency. Just as two heads are better than one, so are two hands. When one core is executing, the other can be accessing the system bus or executing its own code.
To utilize a dual core processor, the operating system must be able to recognize multi-threading, and the software must have simultaneous multi-threading technology (SMT) written into its code. SMT enables parallel multi-threading, wherein the cores are served multi-threaded instructions in parallel. Without SMT, the software will recognize only one core. SMT also is used with multi-processor systems that are common to servers.
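To make that concrete, here is a minimal sketch in Python of how a CPU-bound job has to be split into independent pieces before a second core can help. It uses the standard multiprocessing module, and count_primes is just a stand-in workload of my choosing; this illustrates parallel execution at the operating-system level rather than SMT itself, which works at the hardware level:

```python
# Minimal illustration of splitting a CPU-bound job across two cores.
from multiprocessing import Pool

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division (deliberately CPU-heavy)."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    halves = [(0, 50_000), (50_000, 100_000)]  # one chunk per core
    with Pool(processes=2) as pool:            # two worker processes
        results = pool.map(count_primes, halves)
    print(sum(results), "primes found")
```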
A dual core processor is different from a multi-processor system. In the latter, there are two separate CPUs with their own resources. In the former, resources are shared, and the cores reside on the same chip. A multi-processor system is faster than a system with a dual core processor, and a dual core system is faster than a single-core system, when everything else is equal.
An attractive feature of dual core processors is that they do not require new motherboards but can be used in existing boards that have the correct sockets. For the average user, the difference in performance will be most noticeable during multi-tasking, until more software is SMT aware. Servers that are running multiple dual core processors will see an appreciable increase in performance.
Difference Between 4-Cylinder and 6-Cylinder Engines
In a four-stroke engine, a series of movements causes fuel to be converted into forward motion. All else being equal, the difference between a 4-cylinder and 6-cylinder engine is that the latter produces more power. This is due to the two extra cylinders that create additional piston thrust.
In a basic engine design, pistons travel down cylinder sleeves or chambers, allowing intake valves to open. Intake valves let fuel and air enter the cylinders, while rising pistons compress these gases. Spark plugs ignite the compressed gas, causing explosions that drive the pistons back down. The next rise of the pistons coincides with exhaust valves opening to clear the chambers. The timing of the pistons is staggered so that one pair rises while another falls. Pistons are connected by connecting rods to a crankshaft; the crankshaft's rotation is transmitted through the drivetrain to the wheels, thereby converting fuel into motion.
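For reference, the sequence just described can be written out as a tiny, purely illustrative sketch:

```python
# Toy model of the four-stroke cycle described above (illustrative only).
STROKES = [
    ("intake",      "piston moves down; intake valve open; air-fuel mixture drawn in"),
    ("compression", "piston moves up; both valves closed; mixture compressed"),
    ("power",       "spark ignites the mixture; expansion drives the piston down"),
    ("exhaust",     "piston moves up; exhaust valve open; burned gases pushed out"),
]

def crank(cycles=1):
    for _ in range(cycles):
        for name, action in STROKES:
            print(f"{name:11s} -> {action}")

crank()
```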
In a 4-cylinder engine, there are four pistons rising and falling in four chambers. A 6-cylinder engine features six pistons and produces a theoretical 50% more power than the same 4-cylinder engine. While a 4-cylinder engine might hesitate when you press on the gas, a 6-cylinder will tend to be more responsive, with greater get-up-and-go. The 4-cylinder engine is standard in smaller cars, as the relatively light weight of the vehicle makes it an economical choice with plenty of power for average motoring needs. Many models include a 6-cylinder engine upgrade option.
The 6-cylinder engine is standard on passenger cars, vans, small trucks and small to midsize sports utility vehicles (SUVs). Some of these models may also offer alternate engine designs as options. Standard trucks and larger SUVs commonly feature an 8-cylinder engine. These heavier vehicles are used for towing and carrying substantial weight.
Though more cylinders equal more power when comparing the same engine models, there are exceptions when comparing different engines. Improved engine designs over the years have resulted in substantial gains. This has made 4-cylinder engines more powerful than they were a decade ago, and 8-cylinder engines more fuel-efficient than they once were. In short, a 6-cylinder engine from 1993 that’s still running strong might nevertheless have less power than a recently designed 4-cylinder engine. In addition, a new 8-cylinder engine might get better gas mileage than the older 6-cylinder engine.
If deciding between a 4 and 6-cylinder engine on a new vehicle, there are a few considerations. The smaller engine will be less expensive and should get slightly better gas mileage. The disadvantage is a lack of power that might factor in more for commuters and travelers. For hilly or mountainous areas, the 6-cylinder engine would likely be a better choice. If interested in towing substantial weight, such as a powerboat or house trailer, consider an 8-cylinder motor.
Note that not all 4-cylinder engines are created equal. Differing technologies can make one engine feel gutless and another peppy. Differences also exist in larger engines of differing designs. The only way to tell if a particular engine will suit your needs is to give it a fair test drive.
Friday, 24 February 2012
Earth Photography: It’s Harder Than It Looks
From my orbital perspective, I am sitting still and Earth is moving. I sit above the grandest of all globes spinning below my feet, and watch the world speed by at an amazing eight kilometers per second (about 298 miles per minute, or 17,900 miles per hour).
This makes Earth photography complicated.
Even with a shutter speed of 1/1000th of a second, eight meters (26 feet) of motion occurs during the exposure. Our 400-millimeter telephoto lens has a resolution of less than three meters on the ground. Simply pointing at a target and squeezing the shutter always yields a less-than-perfect image, and precise manual tracking must be done to capture truly sharp pictures. It usually takes a new space station crewmember a month of on-orbit practice to use the full capability of this telephoto lens.
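The arithmetic behind that claim is worth spelling out; a short sketch, using only the values quoted above, makes the mismatch between motion blur and lens resolution explicit:

```python
# Motion blur during a 1/1000 s exposure at orbital speed (values from the text).
orbital_speed_m_s = 8_000   # ~8 km/s ground-track speed
shutter_s = 1 / 1000        # exposure time
lens_resolution_m = 3       # ~3 m ground resolution of the 400 mm lens

blur_m = orbital_speed_m_s * shutter_s
print(f"motion during exposure: {blur_m:.0f} m")            # -> 8 m
print(f"blur vs. resolution: {blur_m / lens_resolution_m:.1f}x")
# The scene smears ~8 m while the lens resolves ~3 m, which is why
# precise manual tracking is needed for sharp frames.
```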
Another surprisingly difficult aspect of Earth photography is capturing a specific target. If I want to take a picture of Silverton, Oregon, my hometown, I have about 10 to 15 seconds of prime nadir (the point directly below us) viewing time to take the picture. If the image is taken off the nadir, a distorted, squashed projection is obtained. If I float up to the window and see my target, it’s too late to take a picture. If the camera has the wrong lens, the memory card is full, the battery depleted, or the camera is on some non-standard setting enabled by its myriad buttons and knobs, the opportunity will be over by the time the situation is corrected. And some targets like my hometown, sitting in the middle of farmland, are low-contrast and difficult to find. If more than a few seconds are needed to spot the target, again the moment is lost. All of us have missed the chance to take that “good one.” Fortunately, when in orbit, what goes around comes around, and in a few days there will be another chance.
It takes 90 minutes to circle the Earth, with about 60 minutes in daylight and 30 minutes in darkness. The globe is equally divided into day and night by the shadow line, but being 400 kilometers up, we travel a significant distance over the nighttime earth while the station remains in full sunlight. During those times, as viewed from Earth, we are brightly lit against a dark sky. This is a special period that makes it possible for people on the ground to observe space station pass overhead as a large, bright, moving point of light. This condition lasts for only about seven minutes; after that we are still overhead, but are unlit and so cannot be readily observed.
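The 90-minute figure can be checked from first principles with Kepler's third law, T = 2π√(a³/μ), taking the station's altitude as roughly 400 km:

```python
# Sanity-checking the ~90-minute orbit with Kepler's third law.
import math

mu = 3.986e14             # Earth's gravitational parameter, m^3/s^2
earth_radius_m = 6.371e6  # mean Earth radius
altitude_m = 400e3        # station altitude quoted above

a = earth_radius_m + altitude_m  # semi-major axis of a circular orbit
period_s = 2 * math.pi * math.sqrt(a ** 3 / mu)
print(f"orbital period: {period_s / 60:.1f} minutes")
# -> about 92 minutes, consistent with the "90 minutes" quoted above.
```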
Ironically, when earthlings can see us, we cannot see them. The glare from the full sun effectively turns our windows into mirrors that return our own ghostly reflection. This often plays out when friends want to flash the space station from the ground as it travels overhead. They shine green lasers, xenon strobes, and halogen spotlights at us as we sprint across the sky. These well-wishers don't know that we cannot see a thing during this time. The best time to try this is during a dark pass, when orbital calculations show that we are passing overhead. This becomes complicated when highly collimated light from a laser is used, since the beam diameter at our orbital distance is about one kilometer, and this spot has to track us while we are in the dark. And of course we have to be looking. As often happens, technical details complicate what seems like a simple observation. So far, all attempts at flashing the space station have failed.
Dual Interpretations: Milky Way's Outer Fringe of Stars Sparks Disagreement
It's well known that the Milky Way is a spiral galaxy, a swirl of stars in an extended, many-armed disk. But the structure of the galaxy is far from two-dimensional. Above and below those familiar spiral arms is a lesser-known feature, a spherical swarm of stars that makes up a halo around the disk.
For decades the presence of the halo has prodded astronomers to ask big questions about its nature: How is it structured? How do stars in the halo compare with disk stars such as our sun, or to stars elsewhere in the halo? And just how did the halo get there? In recent years a group of astronomers has suggested an answer to some of those big questions by drawing on a large telescopic survey of the sky.
The halo, they have concluded, is composed of at least two distinct populations of stars, with different chemical makeups and different orbits. One group of stars, dubbed the inner halo, generally orbits closer to the galactic center, and its members tend to contain more heavy elements such as iron than do stars farther out. (Halo stars as a whole are depleted in these heavy elements, relative to stars in the galactic disk.) Stars of the outer halo occupy somewhat wider orbits around the galactic center, contain lower levels of heavy elements, and—unlike the inner halo—tend to follow retrograde orbits, circling the Milky Way in a direction counter to the rotation of the galactic disk.
"We don't think it's just one halo," says Timothy Beers, an astronomer at the National Optical Astronomy Observatory and Michigan State University, who was lead author on a recent study in The Astrophysical Journal. Beers, Daniela Carollo of Macquarie University in Australia and their colleagues based their analysis on data from the Sloan Digital Sky Survey, a long-running telescopic campaign based at Apache Point Observatory in New Mexico. "We advocate the position that we are looking at a minimum of a dual halo," he says.
As the Milky Way built up by accretion of smaller galaxies, the inner and outer halo would represent two different epochs of galactic assembly. "We actually think that the formation scenario was something you could describe as a multiphase assembly," Beers says. The inner halo would represent the remnants of relatively massive dwarf galaxies, which coalesced early on. Lighter-weight galaxies would have attached themselves later on in a very gradual agglomeration to form the outer halo.
The inner and outer halo are not cleanly divided, but the differences in how the two populations move could aid astronomers in finding extremely primitive stars, which contain primarily hydrogen and helium. Those were the raw materials for the first generation of stars, early in the history of the universe; subsequent generations contained heavier elements that were fused in stellar cores and supernovae and then released into interstellar space. "Knowing that you have this dichotomy helps direct us to finding these interesting low-metallicity stars," Beers says. Outer-halo stars could be identified for detailed study by their distinctive motions on the sky. "Those are the ones that tell the story of how the universe built its elements," Beers says.
But not everyone agrees that the facts support the dual-halo interpretation. "I have a very relaxed opinion about single halos, dual halos, multiple halos," says astrophysicist Ralph Schönrich, a NASA Hubble Fellow at The Ohio State University. "I don't mind any idea of a dual halo. It's just that I don't see any evidence for it."
Thursday, 23 February 2012
The New 2014 Audi R8
With nearly six years under its belt, the Audi R8 is finally getting a face-lift. The German luxury vehicle manufacturer recently stated that in 2014 it will release a new, thoroughly reworked R8 with more power, better looks, and an overall better feel. The R8, one of Audi's best supercars, will be fitted with a faster, more powerful 4.2-liter V8 engine capable of supplying nearly 450 hp, and that's only the base model. The 5.2-liter V10 version will be able to reach up to 550 hp.
Thanks to the replacement of the aluminum-based ASF structure, the Gallardo-inspired vehicle will also look a little slimmer, as it sheds some 60 kilos. The new R8 will feature carbon fiber for the rear firewall and a lot of aluminum. A redesigned Audi R8 with mainly exterior changes and a seven-speed dual-clutch transmission will be introduced sometime this year to help quench the thirst of Audi fanatics, but do not expect many sales.
Audi also focused heavily on fuel consumption. With rising gas prices and an economy stuck in quicksand, something had to be done to improve the fuel economy of the supercar, though I'm sure that anyone with enough money to purchase the vehicle will have enough to pay for gas. Fuel-saving technologies such as engine stop-start, higher-pressure direct injection, cylinder deactivation, and a sailing function that idles the engine during deceleration should make up the bulk of those measures and should really please customers.
Audi, as always, will continue to offer the R8 GT model to power lovers, but expect roughly two more years before that version reaches production. Many people wondered whether Audi would move to smaller, turbocharged engines, but it will not: a car with instant throttle response helps distinguish the R8 from its rivals.
Audi is one of the best luxury/sports car manufacturers, and with the introduction of the new R8 it has just put more space between itself and its competitors.
The Fireballs of February
Feb. 22, 2012: In the middle of the night on February 13th, something disturbed the animal population of rural Portal, Georgia. Cows started mooing anxiously and local dogs howled at the sky. The cause of the commotion was a rock from space.
"At 1:43 AM Eastern, I witnessed an amazing fireball," reports Portal resident Henry Strickland. "It was very large and lit up half the sky as it fragmented. The event set dogs barking and upset cattle, which began to make excited sounds. I regret I didn't have a camera; it lasted nearly 6 seconds."
Strickland witnessed one of the unusual "Fireballs of February."
"This month, some big space rocks have been hitting Earth's atmosphere," says Bill Cooke of NASA's Meteoroid Environment Office. "There have been five or six notable fireballs that might have dropped meteorites around the United States."
It’s not the number of fireballs that has researchers puzzled. So far, fireball counts in February 2012 are about normal. Instead, it's the appearance and trajectory of the fireballs that sets them apart.
"These fireballs are particularly slow and penetrating," explains meteor expert Peter Brown, a physics professor at the University of Western Ontario. "They hit the top of the atmosphere moving slower than 15 km/s, decelerate rapidly, and make it to within 50 km of Earth’s surface."
The action began on the evening of February 1st when a fireball over central Texas wowed thousands of onlookers in the Dallas-Fort Worth area.
"It was brighter and long-lasting than anything I've seen before," reports eye-witness Daryn Morran. "The fireball took about 8 seconds to cross the sky. I could see the fireball start to slow down; then it exploded like a firecracker artillery shell into several pieces, flickered a few more times and then slowly burned out." Another observer in Coppell, Texas, reported a loud double boom as "the object broke into two major chunks with many smaller pieces."
The fireball was bright enough to be seen on NASA cameras located in New Mexico more than 500 miles away. "It was about as bright as the full Moon," says Cooke. Based on the NASA imagery and other observations, Cooke estimates that the object was 1 to 2 meters in diameter.
So far in February, NASA's All-Sky Fireball Network has photographed about a half a dozen bright meteors that belong to this oddball category. They range in size from basketballs to buses, and all share the same slow entry speed and deep atmospheric penetration. Cooke has analyzed their orbits and come to a surprising conclusion: "They all hail from the asteroid belt—but not from a single location in the asteroid belt," he says. "There is no common source for these fireballs, which is puzzling."
This isn't the first time sky watchers have noticed odd fireballs in February. In fact, the "Fireballs of February" are a bit of a legend in meteor circles.
Brown explains: "Back in the 1960s and 70s, amateur astronomers noticed an increase in the number of bright, sound-producing deep-penetrating fireballs during the month of February. The numbers seemed significant, especially when you consider that there are few people outside at night in winter. Follow-up studies in the late 1980s suggested no big increase in the rate of February fireballs. Nevertheless, we've always wondered if something was going on."
Indeed, a 1990 study by astronomer Ian Halliday suggests that the 'February Fireballs' are real. He analyzed photographic records of about a thousand fireballs from the 1970s and 80s and found evidence for a fireball stream intersecting Earth's orbit in February. He also found signs of fireball streams in late summer and fall. The results are controversial, however. Even Halliday recognized some big statistical uncertainties in his results.
NASA's growing All-Sky Fireball Network could end up solving the mystery. Cooke and colleagues are adding cameras all the time, spreading the network's coverage across North America for a dense, uninterrupted sampling of the night sky.
"The beauty of our smart multi-camera system," notes Cooke, "is that it measures orbits almost instantly. We know right away when a fireball flurry is underway—and we can tell where the meteoroids came from." This kind of instant data is almost unprecedented in meteor science, and promises new insights into the origin of February’s fireballs.
Meanwhile, the month isn't over yet. "If the cows and dogs start raising a ruckus tonight," advises Cooke, "go out and take a look."
Raising the Dead: New Species of Life Resurrected from Ancient Andean Tomb
QUITO, ECUADOR—Long before the Spanish conquered the Incas in 1533, and centuries before the Incas inhabited this area, the present-day site of Quito International Airport was a marshy lake surrounded by Indian settlements—the Quitus on one shore and the Ipias on the other. Between A.D. 200 and 800 these cultures prospered here, fishing the lake, growing corn, beans and potatoes in the fertile soil, and fermenting an alcoholic drink—chicha—made of a watery corn broth.
In 1980, while clearing land for new construction in a warren of graffiti-covered cinderblock shanties bristling with barbwire and defended by concrete walls tipped with broken glass, workers scraped open a tomb that had been hidden for over a millennium beneath the ramshackle neighborhood. Then, nine more deep-welled tombs were uncovered in the volcanic rock, each containing about 20 bodies. The walls of the shafts were lined with Quitus remains, each one crouched in the fetal position, clothed in the finest textile, adorned with gold jewelry, and surrounded by pottery containing offerings of food and chicha for the afterlife.
Yeast biologist Javier Carvajal Barriga, of the Pontificia Universidad Católica del Ecuador in Quito, collected scrapings from inside large, torpedo-shaped clay fermentation vessels taken from one of the tombs in an attempt to recover microbes that had fermented the ancient chicha and, if possible, revive them.
Under the sterile conditions of his laboratory, he scratched away the surface layers from inside the fermentation vessels hoping to collect yeast trapped deep in the pottery's pores. Using a special method that he devised to humidify the desiccated cells, repair their damaged membranes, and jump-start their arrested metabolisms, he coaxed a community of yeasts, which had lain dormant in the entombed vessels since A.D. 680, back to life. Carvajal says he resurrected "a consortium of yeasts" from the containers, but none of the yeasts were Saccharomyces cerevisiae—the type used in modern fermentation. They were primarily strains of the genus Candida, closely related to the well-known yeast that causes skin and vaginal infections. But careful genetic analysis showed that two strains of yeast were a new species of Candida, which he named C. theae, meaning "tea."
These findings confirm 16th-century reports of how indigenous people in the Ecuadorian Andes fermented their chicha. According to Spanish chroniclers, Inca Indians initiated fermentation using animal bones, human saliva and even human feces.
"The most closely related species to C. theae are C. orthopsilosis, C. metapsilosis and C. parapsilosis, all of which are found in human saliva and feces," Carvajal says. Indeed he found human-associated C. parapsilosis, along with C. tropicalis, among the community of yeast in the ancient fermentation vessels. C. parapsilosis is the second-most commonly isolated pathogenic species of Candida infecting people. "Also [there are] the Crytococcus saitoi and C. laurentii that are related to respiratory diseases. They [the Quitus] were chewing and spitting the corn [into the fermentation vessels], so we can assume this population probably had some respiratory problems caused by pathogenic yeasts."
Sunday, 19 February 2012
2013 Chevrolet Camaro ZL1 Convertible – The Most-Powerful Convertible Ever
Chevrolet has announced the 2013 Chevrolet Camaro ZL1 Convertible, the brand's most-powerful convertible ever. It will debut at the 2011 Los Angeles Auto Show and goes on sale in late 2012. This car will provide more performance and technology than many exotic cars and ultra-luxury convertibles.
Al Oppenheiser, Camaro chief engineer, said, "The 2013 Chevrolet Camaro ZL1 Convertible will be one of the most powerful and most capable convertibles available at any price. This is a car that is guaranteed to put a smile on your face every time you drop the top – or hit the gas."
2013 Chevrolet Camaro ZL1 Convertible Exterior
The 2013 Chevrolet Camaro ZL1 Convertible will feature the same design language as the ZL1 coupe. It will get a front splitter, a raised hood with a carbon fiber insert, and new vertical fog lamps with air intakes designed for brake cooling. The hood is equipped with front-mounted air extractors, with the center section made of satin-black carbon fiber. At the rear, there will be a larger diffuser and spoiler, while the car will sit on a new set of 20-inch forged aluminum wheels.
2013 Chevrolet Camaro ZL1 Convertible Interior
Inside, the 2013 Chevrolet Camaro ZL1 Convertible will get a redesigned steering wheel, alloy pedals, black leather sports seats with microfiber suede inserts, a head-up display with unique performance readouts, and the "4-pack" auxiliary gauge system featuring a boost readout.
2013 Chevrolet Camaro ZL1 Convertible Engine
The 2013 Chevrolet Camaro ZL1 Convertible will be powered by the same LSA 6.2-liter supercharged V8 engine as the coupe, producing a total of 580 hp and 556 lb-ft of torque. The powerful engine will be combined with a Tremec TR6060 six-speed manual transmission that uses a 240-mm dual-mass flywheel matched with a 240-mm twin-disc clutch system to provide excellent shift smoothness.
2013 Chevrolet Camaro ZL1 Convertible Prices
The 2013 Chevrolet Camaro ZL1 Convertible will go on sale in late 2012 and the prices will be announced at a later date.
PHOTON
A photon is one of the basic structures of the universe. As an elementary particle, it is the carrier of the electromagnetic force and is considered in the discipline of particle physics to be the basic unit of light. At both the microscopic and macroscopic level, the effects of the electromagnetic force carried by photons can be readily observed in the interactions of the physical world. Photons, being elementary particles, display the attributes of quantum mechanics, meaning they behave as both waves and particles. This can be seen in the fact that light is refracted by lenses like a wave, yet also arrives and can be counted as discrete particles.
Photons were first identified by Max Planck in 1900 as "packets" of energy he referred to as quanta. This was followed by research conducted by Albert Einstein, who in 1905 proposed that electromagnetic waves themselves are made up of such packets. Gilbert Lewis, a chemist, finally coined the actual name "photon" in 1926, identifying the particles, in his conception, as a basic element of the universe that could be neither created nor destroyed. In physics a photon is denoted by the symbol γ (gamma), while in chemistry it is often written as hν.
Certain properties exist with photons that make them unique as compared to other elementary particles. First, a photon contains no mass itself. It also has no electrical charge and will not decay within empty space since it has no smaller subparticles. When a charge, either positive or negative, is accelerated near the speed of light, synchrotron radiation is created, causing photons to be released. Additionally, photons can be emitted when the energy of molecules, atoms, or nuclei alter to a lower level. According to quantum physics, when electron-positron annihilation occurs, meaning a particle and antiparticle are eliminated, photon light is also emitted.
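The energy a single photon carries follows Planck's relation E = hν, or equivalently E = hc/λ. A small worked example; the 532 nm wavelength of green laser light is an arbitrary choice:

```python
# Photon energy from Planck's relation E = h*nu = h*c / wavelength.
h = 6.626e-34  # Planck constant, J*s
c = 2.998e8    # speed of light, m/s

wavelength_m = 532e-9             # green light, 532 nm (arbitrary example)
energy_j = h * c / wavelength_m
energy_ev = energy_j / 1.602e-19  # joules to electron-volts

print(f"{energy_j:.3e} J per photon (about {energy_ev:.2f} eV)")
# -> ~3.73e-19 J, about 2.33 eV
```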
Because photons exhibit properties of both waves and particles, they have a number of applications within industry and technology. The photoelectric effect, the process by which matter emits electrons when photons strike it, can be measured by a photomultiplier tube; this understanding also helped with the invention of charge-coupled devices, the chips used within digital cameras to form a digital image. Geiger counters identify radiation, including high-energy photons, through their ability to detect ionized gas molecules. Molecular biology also uses the concept to study the interactions of proteins by injecting fluorescent molecules into tissues and cells, which react with photon energy to reveal changes.
Saturday, 18 February 2012
How Did the First Plant Come to Be?
Earth is the planet of the plants—and it all can be traced back to one green cell. The world's lush profusion of photosynthesizers—from towering redwoods to ubiquitous diatoms—owes its existence to a tiny alga eons ago that swallowed a cyanobacterium and turned it into an internal solar power plant.
By studying the genetics of a glaucophyte—one of a group of just 13 unique microscopic freshwater blue-green algae, sometimes called "living fossils"—an international consortium of scientists led by molecular bioscientist Dana Price of Rutgers University has elucidated the evolutionary history of plants. The glaucophyte Cyanophora paradoxa still retains a less domesticated version of this original cyanobacterium than most other plants.
According to the analysis of C. paradoxa's genome of roughly 70 million base pairs, this capture must have occurred only once because most modern plants share the genes that make the merger of photosynthesizer and larger host cell possible. That union required cooperation not just from the original host and the formerly free-ranging photosynthesizer but also, apparently, from a bacterial parasite. Chlamydia-like cells, such as Legionella (which includes the species that causes Legionnaire's disease), provided the genes that enable the ferrying of food from domesticated cyanobacteria, now known as plastids, or chloroplasts, to the host cell.
"These three entities forged the nascent organelle, and the process was aided by multiple horizontal gene transfers as well from other bacteria," explains biologist Debashish Bhattacharya of Rutgers University, whose lab led the work published in Science on February 17. "Gene recruitment [was] likely ongoing" before the new way of life prospered and the hardened cell walls of most plants came into being.
In fact, such a confluence of events is so rare that evolutionary biologists have found only one other example: the photosynthetic amoeba Paulinella domesticated cyanobacteria roughly 60 million years ago. "The amoeba plastid is still a 'work in progress' in evolutionary terms," Bhattacharya notes. "We are now analyzing the genome sequence from Paulinella to gain some answers" as to how these events occur.
The work provides the strongest support yet for the hypothesis of the late biologist Lynn Margulis, who in the 1960s first proposed, to widespread criticism, the theory that all modern plant cells derive from such a symbiotic union, notes biologist Frederick Spiegel of the University of Arkansas in Fayetteville, who was not involved in the work. That thinking suggests that all plants are actually chimeras—hybrid creatures cobbled together from the genetic bits of this ancestral union, including the enabling parasitic bacteria.
The remaining question is why this complex union took place roughly 1.6 billion years ago. One suggestion is that local conditions may have made it more beneficial for predators of cyanobacteria to stop eating and start absorbing, due to a scarcity of prey and an abundance of sunlight. "When the food runs out but sunlight is abundant, then photosynthesis works better" to support an organism, Bhattacharya notes. And from that forced union a supergroup of extremely successful organisms—the plants—sprang.
App to Convert Units like Length, Energy, Entropy, Electric Charge on Android
Engineering students need a scientific calculator for all sorts of calculations involving angles, areas, densities, dynamics, electric charge, electric current, force and a lot more. Students have also traditionally been allowed to carry such electronic calculators into their examination halls.
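Under the hood, a unit converter of this kind usually boils down to a table of scale factors into one base unit per quantity. A minimal sketch of that pattern; this is not the app's actual code, and the categories and factors shown are illustrative assumptions:

```python
# Generic unit conversion via one base unit per quantity (illustrative sketch).
FACTORS = {
    "length": {"m": 1.0, "km": 1000.0, "mi": 1609.344, "ft": 0.3048},
    "energy": {"J": 1.0, "kJ": 1000.0, "cal": 4.184, "kWh": 3.6e6},
}

def convert(value, quantity, src, dst):
    """Convert by scaling into the base unit, then out to the target unit."""
    table = FACTORS[quantity]
    return value * table[src] / table[dst]

print(convert(5, "length", "mi", "km"))  # -> 8.04672
print(convert(1, "energy", "kWh", "J"))  # -> 3600000.0
```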
The latest version of this application supports Android 4.0 Ice Cream Sandwich, with proper optimization and with options to work with fractions too. With over 70,000 Android Market downloads, this is arguably the best Android app for your scientific calculation needs. The change log (version update history) also makes clear that the developers at Quartic Software keep coming back with updates to maintain their #1 slot among scientific calculators.
As soon as you install this application on your mobile phone and launch it, you will first be asked to upgrade to the paid version offered on the Android Market, with the change log shown at first glance. I decided to stay with the free version, but users who do a lot of scientific calculation can purchase the paid one for just $3.
The options in yellow can be selected by tapping Shift and then the corresponding keys.
The Settings can be accessed by tapping the menu button. Tap on Settings and you will find many more options, among them the decimal point, digit grouping format, screen settings, full-screen mode, RPN settings and history settings.
The application also stores a history of every calculation you perform.

Pros
Best Scientific calculator
Saves the history of calculations done
Cons
A very advanced calculator; you need to be a bit of a geek to use it all
RealCalc Scientific Calculator
RealCalc Scientific Calculator can be downloaded for free from Android Apps Labs. To download it directly to your mobile phone, visit the RealCalc Scientific Calculator listing page on Android Apps Labs and then tap the Install button to proceed with the installation automatically.
Digital Dictionaries Help Preserve Vanishing Languages
There are some 7,000 languages spoken in the world, and half of them could be gone by 2100. To rescue these languages, two linguists decided to use a combination of digital recording technology and the Internet.
K. David Harrison and Gregory Anderson are compiling what they call "talking dictionaries." Some of the languages they have recorded had never been documented before. In 2010, for example, they made the first recordings of Koro, a language spoken by only a few hundred people in northeastern India.
The dictionaries so far contain more than 32,000 word entries in eight endangered languages, with over 24,000 audio recordings of native speakers pronouncing words and sentences.
Some of the work is available online. In one case, a community in Papua New Guinea that speaks a language called Matukar Panau, with only 600 speakers, asked that the language be put on the Internet even though it was only in the last two years that their village received electricity. Their talking dictionary can be seen here.
Another example is Chamacoco, spoken in Paraguay by only 1,200 people. In the United States, the team is working with the Confederated Tribes of Siletz Indians in Oregon on reviving their language. Even languages that have written forms can be endangered: one of the groups Harrison and Anderson work with speaks Ho, which is spoken by a million people in the Indian state of Jharkhand. The people who speak it have been under pressure to assimilate with the larger tongues that surround it.
Preserving language can become an important part of preserving the cultures of smaller and marginalized groups. Living languages also tell scientists a lot about the past evolution of language itself.
Harrison, associate professor of linguistics at Swarthmore College, and Anderson, president of the Living Tongues Institute for Endangered Languages, are presenting their work today at the annual meeting of the American Association for the Advancement of Science (AAAS) in Vancouver, British Columbia.
NEW SYSTEM ALLOWS ROBOTS TO CONTINUOUSLY MAP THEIR ENVIRONMENT
Robots could one day navigate through constantly changing surroundings with virtually no input from humans, thanks to a system that allows them to build and continuously update a three-dimensional map of their environment using a low-cost camera such as Microsoft’s Kinect.
The researchers used a PR2 robot, developed by Willow Garage, with a Microsoft Kinect sensor to test their system.
The system, being developed by researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), could also allow blind people to make their way unaided through crowded buildings such as hospitals and shopping malls.
To explore unknown environments, robots need to be able to map them as they move around — estimating the distance between themselves and nearby walls, for example — and to plan a route around any obstacles, says Maurice Fallon, a research scientist at CSAIL who is developing these systems alongside John J. Leonard, professor of mechanical and ocean engineering, and graduate student Hordur Johannsson.
But while a large amount of research has been devoted to developing one-off maps that robots can use to navigate around an area, these systems cannot adjust to changes in the surroundings over time, Fallon says: “If you see objects that were not there previously, it is difficult for a robot to incorporate that into its map.”
The new approach, based on a technique called Simultaneous Localization and Mapping (SLAM), will allow robots to constantly update a map as they learn new information over time, he says. The team has previously tested the approach on robots equipped with expensive laser-scanners, but in a paper to be presented this May at the International Conference on Robotics and Automation in St. Paul, Minn., they have now shown how a robot can locate itself in such a map with just a low-cost Kinect-like camera.
As the robot travels through an unexplored area, the Kinect sensor’s visible-light video camera and infrared depth sensor scan the surroundings, building up a 3-D model of the walls of the room and the objects within it. Then, when the robot passes through the same area again, the system compares the features of the new image it has created — including details such as the edges of walls, for example — with all the previous images it has taken until it finds a match.
At the same time, the system constantly estimates the robot’s motion, using on-board sensors that measure the distance its wheels have rotated. By combining the visual information with this motion data, it can determine where within the building the robot is positioned. Combining the two sources of information allows the system to eliminate errors that might creep in if it relied on the robot’s on-board sensors alone, Fallon says.
Once the system is certain of its location, any new features that have appeared since the previous picture was taken can be incorporated into the map by combining the old and new images of the scene, Fallon says.
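A heavily simplified, one-dimensional toy version of that fusion step is sketched below: dead-reckoned wheel odometry drifts, and a periodic camera-based match against the map pulls the estimate back. This is only an illustration of the idea, with arbitrary noise figures, not the CSAIL system's actual algorithm:

```python
# Toy 1-D illustration of odometry + map-match fusion (not the actual system).
import random

true_pos = 0.0  # where the robot really is, meters
est_pos = 0.0   # where the robot thinks it is

for step in range(100):
    move = 0.1                               # commanded motion per step
    true_pos += move
    est_pos += move + random.gauss(0, 0.01)  # wheel odometry slowly drifts

    if step % 10 == 9:                       # periodic visual match to the map
        observed = true_pos + random.gauss(0, 0.02)  # camera-based position fix
        est_pos = 0.5 * est_pos + 0.5 * observed     # simple blend of the two

print(f"true {true_pos:.2f} m, estimated {est_pos:.2f} m")
```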
The team tested the system on a robotic wheelchair, a PR2 robot developed by Willow Garage in Menlo Park, Calif., and in a portable sensor suite worn by a human volunteer. In each case, the system could locate itself within a 3-D map of its surroundings while traveling at up to 1.5 meters per second.
Ultimately, the algorithm could allow robots to travel around office or hospital buildings, planning their own routes with little or no input from humans, Fallon says.
It could also be used as a wearable visual aid for blind people, allowing them to move around even large and crowded buildings independently, says Seth Teller, head of the Robotics, Vision and Sensor Networks group at CSAIL and principal investigator of the human-portable mapping project. “There are also a lot of military applications, like mapping a bunker or cave network to enable a quick exit or re-entry when needed,” he says. “Or a HazMat team could enter a biological or chemical weapons site and quickly map it on foot, while marking any hazardous spots or objects for handling by a remediation team coming later. These teams wear so much equipment that time is of the essence, making efficient mapping and navigation critical.”
While a great deal of research is focused on developing algorithms to allow robots to create maps of places they have visited, the work of Fallon and his colleagues takes these efforts to a new level, says Radu Rusu, a research scientist at Willow Garage who was not involved in this project. That is because the team is using the Microsoft Kinect sensor to map the entire 3-D space, not just viewing everything in two dimensions.
“This opens up exciting new possibilities in robot research and engineering, as the old-school ‘flatland’ assumption that the scientific community has been using for many years is fundamentally flawed,” he says. “Robots that fly or navigate in environments with stairs, ramps and all sorts of other indoor architectural elements are getting one step closer to actually doing something useful. And it all starts with being able to navigate.”
Friday, 10 February 2012
Fuel Pump
Because the fuel tank is located on the opposite end of the car from the engine, a fuel pump is required to draw the gas toward the engine. There are two kinds of fuel pumps: the mechanical fuel pump, which was used in carbureted cars, and the electric fuel pump, which is used in cars with electronic fuel injection.
A carburetor is a fuel delivery mechanism that makes use of the simple principle of vacuum in order to deliver fuel to the engine. The same vacuum that draws the air-fuel mixture into the engine also draws fuel along the lines toward the engine. However, additional help is necessary, so carbureted engines have a mechanical fuel pump. A mechanical fuel pump runs off of the engine's rotation; as a result, the fuel pump in a carbureted car is located alongside the engine.
Electronic fuel injection is a fuel delivery system that squirts a fine mist of fuel into the combustion chambers of the engine. A computer controls the system, closely monitoring factors such as the position of the throttle, the air-fuel ratio, and the contents of the exhaust. Because the system does not use a pre-existing force, such as vacuum, to draw the fuel along the lines, the fuel pump must be located at the source -- that is, inside or next to the fuel tank itself. The fuel pump is electronic, meaning that it is powered and controlled electronically. Sometimes, the operation of the fuel pump can be identified by a soft, steady humming sound coming from the rear of the car.
Fuel pump failure is not uncommon, particularly in cars with electronic fuel injection. Usually, when a fuel pump fails, a car will simply sputter and die, and will not restart. Essentially, a car with fuel pump failure will act like it is out of gas, even when there is gas in the tank. Fuel pump failure can be verified by checking the fuel delivery end of the system -- if no fuel is being delivered to the engine, the fuel pump has most likely failed.
Replacing an electronic fuel pump can be tricky business. In some cars, the fuel pump is located in an area that is easy to access from underneath the car. Other cars have an access panel in the interior of the car that can be removed to reach the fuel pump. Still other cars require the fuel tank to be siphoned and removed, or dropped, before the fuel pump can be accessed. The latter type of car usually makes for the most laborious job of replacing a fuel pump.
Tuesday, 7 February 2012
HSDPA, short for High-Speed Downlink Packet Access, is a new protocol for mobile telephone data transmission. It is known as a 3.5G (G stands for generation) technology. Essentially, the standard will provide download speeds on a mobile phone equivalent to an ADSL (Asymmetric Digital Subscriber Line) line in a home, removing any limitations placed on the use of your phone by a slow connection. It is an evolution and improvement of W-CDMA, or Wideband Code Division Multiple Access, a 3G protocol. HSDPA improves the data transfer rate by a factor of at least five over W-CDMA. HSDPA can achieve theoretical data transmission speeds of 8-10 Mbps (megabits per second). Though any data can be transmitted, applications with high data demands such as video and streaming music are the focus of HSDPA.
HSDPA improves on W-CDMA by using different techniques for modulation and coding. It creates a new channel within W-CDMA called the HS-DSCH, or high-speed downlink shared channel. That channel performs differently than other channels and allows for faster downlink speeds. It is important to note that the channel is used only for the downlink; data is sent from the source to the phone, and it isn't possible to send data from the phone to a source using HSDPA. The channel is shared between all users, which allows the radio signals to be used most effectively for the fastest downloads.
The widespread availability of HSDPA may take a while to be realized, or it may never be achieved. Most countries did not have a widespread 3G network in place as of the end of 2005. Many mobile telecommunications providers are working quickly to deploy 3G networks, which can be upgraded to 3.5G when the market demand exists. Other providers tested HSDPA through 2005 and are rolling out the service in mid to late 2006. Early deployments of the service will run at speeds much lower than the theoretically possible rates: 1.8 Mbps at first, with upgrades to 3.6 Mbps as devices become available that can handle the increased speed.
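To put those rates in perspective, here is a quick sketch converting the quoted link speeds into ideal download times for a typical file; note the difference between megabits and megabytes:

```python
# Ideal download time for a 5 MB file at the link rates quoted above.
file_mb = 5                  # megabytes
file_megabits = file_mb * 8  # 1 byte = 8 bits

for rate_mbps in (1.8, 3.6, 10):  # early service, upgrade, theoretical peak
    print(f"{rate_mbps:>4} Mbps -> {file_megabits / rate_mbps:.1f} s")
# -> 22.2 s, 11.1 s, 4.0 s, assuming a perfect link with no protocol overhead.
```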
The long-term acceptance and success of HSDPA is unclear, because it is not the only alternative for high speed data transmission. Standards like CDMA2000 1xEV-DO and WiMax are other potential high speed standards. Since HSDPA is an extension of W-CDMA, it is unlikely to succeed in locations where W-CDMA has not been deployed. Therefore, the eventual success of HSDPA as a 3.5G standard will first depend upon the success of W-CDMA as a 3G standard.
Monday, 6 February 2012
World's Most Expensive Cars:
1. Bugatti Veyron Super Sport: $2,400,000. This is by far the most expensive street-legal car available on the market today (the base Veyron costs $1,700,000). It is the fastest-accelerating car, reaching 0-60 in 2.5 seconds. It is also the fastest street-legal car: when tested again on July 10, 2010, the Super Sport version reached a top speed of 267 mph. When competing against the Bugatti Veyron, you had better be prepared!
Saturday, 4 February 2012
A SIM card or Subscriber Identity Module is a portable memory chip used in some models of cellular telephones. The SIM card makes it easy to switch to a new phone by simply sliding the SIM out of the old phone and into the new one. The SIM holds personal identity information, cell phone number, phone book, text messages and other data. It can be thought of as a mini hard disk that automatically activates the phone into which it is inserted.
A SIM card can come in very handy. For example, let's say your phone runs out of battery power at a friend's house. Assuming you both have SIM-based phones, you can remove the SIM card from your phone and slide it into your friend's phone to make your call. Your carrier processes the call as if it were made from your phone, so it won't count against your friend's minutes.
If you upgrade your phone there's no hassle involved. The SIM card is all you need. Just slide it into the new phone and you're good to go. You can even keep multiple phones for different purposes. An inexpensive phone in the glove compartment, for example, for emergency use, one phone for work and another for home. Just slide your SIM card into whatever phone you wish to use.
High-end cell phones can be very attractive and somewhat pricey, so if you invest in an expensive phone you might want to keep it for a while. Using a SIM card, it is even possible to switch carriers and continue to use the same phone: the new carrier will simply issue you its own SIM card. The phone must be unlocked, however, and operate on the new carrier's frequency or band.
A SIM card provides an even bigger advantage for international travelers -- simply take your phone with you and buy a local SIM card with minutes. For example, a traveler from the U.S. staying in the U.K. can purchase a SIM card across the pond. Now the phone can be used to call throughout England without paying international roaming charges from the carrier back home.
SIM cards are used with carriers that operate on the Global System for Mobile Communications (GSM) network. The competing network is Code Division Multiple Access (CDMA), a technology created by U.S. company Qualcomm. As of fall 2005, CDMA cell phones and CDMA carriers do not support SIM cards in most parts of the world, though this is changing. A CDMA SIM card called the R-UIM (Removable User Identity Module) was made available in China in 2002, and will eventually be available worldwide. Expectations for the future include a cell phone market that supports both SIM (GSM) and R-UIM (CDMA) cards by default.
Android apps are applications that may be downloaded and used on cell phones that feature the Android operating system, owned by Google. These applications may be free or cost a small fee, but can make a cell phone more useful and fun. There are many different types of Android apps for different purposes.
Some Android apps are fairly basic, and may be able to pull in information such as news and weather from various sources. They may be able to display weather alerts or breaking news, as well as allow one to access the Internet and read complete news stories. In addition, many applications on Android phones can sync with other Google programs, allowing a user to easily access email, calendars, or chat programs, among many others. Entertainment apps may allow users to quickly access television shows, movie clips, or quick entertainment facts.
Stock and finance trackers are common Android apps as well. There are numerous games available for the Android phones, making it more fun to pass the time when waiting in line. Other apps are just meant to be relaxing, and may simply show calming images such as ocean waves on the home screen. Because the Android operating system is open-source, anyone can create applications for it, so there are apps for virtually any idea imaginable.
Fitness trackers are excellent Android apps as well. Some may allow the user to enter the amount of time spent exercising or the amount of calories consumed into the device, which may then be shown on a chart or graph. Others may display detailed instructions as to how to perform certain exercises, and allow users to create customized exercise plans that may be referenced as needed. GPS apps may help someone find their way when traveling, or even just find their way back to a parked car.
Music apps may allow users to listen to downloaded music or Internet radio, or to watch music videos. Other apps may be able to show a notification if a chosen favorite artist is playing a show nearby. These are just a few of the hundreds of Android apps available for download and use on Android phones. These apps may be found by searching online or on the phone itself; they may also be easily removed from the phone if they do not perform as expected, or if one simply wants to free up some space.
Osmosis
Osmosis is a process in which a fluid passes through a semipermeable membrane, moving from an area in which a solute such as salt is present in low concentrations to an area in which the solute is present in high concentrations. The end result of osmosis, barring external factors, is an equal concentration of solute on either side of the barrier, creating a state which is known as “isotonic.” The fluid most commonly used in demonstrations of osmosis is water, and osmosis with a wide variety of fluid solutions is key for every living organism on Earth, from humans to plants.
There are some key terms related to osmosis which may be helpful to know when thinking about how osmosis works. The fluid which passes through the membrane is known as a solvent, while the dissolved substance in the fluid is a solute. Together, the solvent and dissolved solute make up a solution. When a solution has low levels of a solute, it is considered hypotonic, while solutions with high solute levels are known as hypertonic.
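These terms amount to a three-way comparison of solute concentrations, which a few lines of Python can encode; the units are arbitrary, and the function exists only to pin down the vocabulary.

```python
# Compare solute concentrations on the two sides of a membrane.

def tonicity(inside, outside):
    """Describe the outside solution relative to the inside one."""
    if outside < inside:
        return "hypotonic"    # lower solute concentration outside
    if outside > inside:
        return "hypertonic"   # higher solute concentration outside
    return "isotonic"         # equal concentrations: no net flow

print(tonicity(0.9, 0.1))  # hypotonic  -> water tends to flow in
print(tonicity(0.1, 0.9))  # hypertonic -> water tends to flow out
print(tonicity(0.5, 0.5))  # isotonic
```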
In a classic example of osmosis, plants use osmosis to absorb water and nutrients from the soil. The solution in the roots of the plants is hypertonic, drawing in water from the surrounding hypotonic soil. Roots act as selectively permeable membranes, admitting not only water but also some useful solutes, such as minerals the plant needs for survival. Osmosis also plays a critical role in plant and animal cells, with fluids flowing in and out across the cell membrane to bring in nutrients and carry out waste.
Fluid passes both in and out of the semipermeable membrane in osmosis, but usually there is a net flow in one direction or another, depending on which side of the membrane has a higher concentration of solutes. It is possible to alter the process of osmosis by applying pressure to the hypertonic solution. The pressure just sufficient to stop the solvent in the hypotonic solution from passing through the membrane is known as the osmotic pressure, and maintaining it will prevent the attainment of an isotonic state.
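For dilute solutions, the size of that osmotic pressure can be estimated with the standard van 't Hoff relation, pi = i x M x R x T, where i is the dissociation factor, M the molar concentration, R the gas constant, and T the absolute temperature. This formula is a textbook approximation rather than something derived above; the example below uses typical physiological saline values.

```python
# Van 't Hoff estimate of osmotic pressure for a dilute solution.

R = 0.08206  # gas constant in L*atm/(mol*K)

def osmotic_pressure_atm(i, molarity, temp_k):
    """pi = i * M * R * T, valid as an approximation for dilute solutions."""
    return i * molarity * R * temp_k

# 0.15 M NaCl (i = 2, since each unit splits into Na+ and Cl-) at body
# temperature, about 310 K:
print(f"{osmotic_pressure_atm(2, 0.15, 310):.1f} atm")  # ~7.6 atm
```

A result of roughly 7.6 atmospheres for ordinary saline shows why osmotic effects are strong enough to burst or shrivel cells.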
The principles which underlie osmosis are key to understanding a wide variety of concepts. For example, the sometimes fatal medical condition known as water intoxication occurs when people drink a large amount of water very rapidly, diluting the fluid which flows freely through their bodies. This diluted solution pushes through cell membranes, thanks to osmosis, and it can cause cells to burst as they expand to accommodate the water. Conversely, when people become dehydrated, cells shrivel and die as the free-flowing fluid in the body becomes highly concentrated with solutes, causing water to flow out of the cells in an attempt to reach an isotonic state.
Robots
On the most basic level, human beings are made up of five major components:
A body structure
A muscle system to move the body structure
A sensory system that receives information about the body and the surrounding environment
A power source to activate the muscles and sensors
A brain system that processes sensory information and tells the muscles what to do
Of course, we also have some intangible attributes, such as intelligence and morality, but on the sheer physical level, the list above just about covers it.
A robot is made up of the very same components. A typical robot has a movable physical structure, a motor of some sort, a sensor system, a power supply and a computer "brain" that controls all of these elements. Essentially, robots are man-made versions of animal life -- they are machines that replicate human and animal behavior.
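One way to see how those five components cooperate is as a sense-think-act loop in code. The sketch below is a generic illustration, not any particular robot's software; every name in it is a hypothetical stand-in.

```python
# Skeletal sense-think-act loop mapping onto the five components above.

class Robot:
    def __init__(self, sensors, motors):
        self.sensors = sensors    # sensory system: name -> reading function
        self.motors = motors      # muscle system that moves the body structure
        self.battery = 100        # power source for muscles and sensors

    def think(self, readings):
        """The 'brain': turn sensor readings into a motor command."""
        return "turn_left" if readings.get("obstacle_ahead") else "forward"

    def step(self):
        if self.battery <= 0:
            return "powered down"
        readings = {name: read() for name, read in self.sensors.items()}
        command = self.think(readings)
        self.motors(command)      # the body structure actually moves
        self.battery -= 1
        return command

robot = Robot(sensors={"obstacle_ahead": lambda: False},
              motors=lambda cmd: None)
print(robot.step())  # "forward"
```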
In this article, we'll explore the basic concept of robotics and find out how robots do what they do.
Joseph Engelberger, a pioneer in industrial robotics, once remarked, "I can't define a robot, but I know one when I see one." If you consider all the different machines people call robots, you can see that it's nearly impossible to come up with a comprehensive definition. Everybody has a different idea of what constitutes a robot.
You've probably heard of several of these famous robots:
R2-D2 and C-3PO: The intelligent, speaking robots with loads of personality in the "Star Wars" movies
Sony's AIBO: A robotic dog that learns through human interaction
Honda's ASIMO: A robot that can walk on two legs like a person
Industrial robots: Automated machines that work on assembly lines
Data: The almost human android from "Star Trek"
BattleBots: The remote-controlled fighters on Comedy Central
Bomb-defusing robots
NASA's Mars rovers
HAL: The ship's computer in Stanley Kubrick's "2001: A Space Odyssey"
Robomower: The lawn-mowing robot from Friendly Robotics
The Robot in the television series "Lost in Space"
Mindstorms: LEGO's popular robotics kit
All of these things are considered robots, at least by some people. The broadest definition around defines a robot as anything that a lot of people recognize as a robot. Most roboticists (people who build robots) use a more precise definition: they specify that robots have a reprogrammable brain (a computer) that moves a body.
By this definition, robots are distinct from other movable machines, such as cars, because of their computer element. Many new cars do have an onboard computer, but it's only there to make small adjustments. You control most elements in the car directly by way of various mechanical devices. Robots are distinct from ordinary computers in their physical nature -- normal computers don't have a physical body attached to them.