Herkimer Diamonds
Doubly terminated quartz crystals named for Herkimer County, New York
The host rock for Herkimer Diamonds is the Cambrian-age Little Falls Dolostone, which was deposited about 500 million years ago; the Herkimer Diamonds formed in cavities within the dolostone. These cavities are frequently lined with drusy quartz crystals and are often coated with a tarry hydrocarbon (see image below). Although Herkimer County, New York, is the location for which these crystals are named, similar doubly terminated quartz crystals have been found in a few other locations, including Arizona, Afghanistan, Norway, Ukraine and China. They have the same appearance but cannot rightfully be called "Herkimers". The doubly terminated quartz crystals shown in the lower right photo are from a deposit in Afghanistan.
Who Discovered Herkimer Diamonds?
The Herkimer Diamonds of New York are not a recent discovery. The Mohawk Indians and early settlers knew about the crystals, which they found in stream sediments and plowed fields. These people were amazed by the crystals and immediately held them in high esteem.

Herkimer Diamond Mines
Some of the best places to find Herkimer Diamonds today are located along New York State Route 28 in Middleville, New York. (When visiting this area it is important to remember that all land in New York either belongs to the government or is private property. Collecting minerals from government lands is illegal in New York, and collecting on private property always requires permission in advance.) There are two commercial mines on New York State Route 28 at Middleville: the Ace of Diamonds Mine and the Herkimer Diamond Mine. Both allow collectors to enter and prospect for a nominal fee, and both rent equipment such as hammers, wedges and other small tools. They also have small exhibit areas where you can view and/or purchase specimens.

Mining for Herkimer Diamonds
The key to finding Herkimer Diamonds is knowing that they occur in cavities (vugs) within the Little Falls Dolostone (see photo above). These cavities can be smaller than a pea or several feet across. At both of the mines listed above the Little Falls Dolostone is exposed at the surface and a significant amount of broken rock is scattered across the quarry floor.

"Find and Break" Prospecting
Dolomite is a very tough rock, so expect to work hard. Safety glasses are required, and wise collectors wear gloves to protect their hands. We always wear jeans or heavy long pants and a long-sleeve shirt for "find and break" prospecting. Small pieces of dolomite will sometimes fly when a rock breaks, and they can easily cut or bruise a person wearing short pants.
The "find and break" prospecting method described above is employed by many people who visit these mines and can lead to a few good finds. The keys to success are selecting good rocks to break and not being discouraged if you break fifty rocks without finding a crystal. (See the image below to learn what "vuggy rock" looks like. Click the image for a closer view.) Vuggy rock containing a nice Herkimer Diamond. Rock is about six inches across.

"Scavenger" Prospecting
Some visitors to the mines have been successful by simply searching the rock rubble for exposed crystals or searching the quarry floor for loose crystals. We have found several really nice crystals this way, and lots of tiny ones. We have also seen children find many nice crystals this way.

"Cavity" Prospecting
For finding large quantities of crystals, the most successful mining method is to break into large cavities in the quarry walls and floors using sledge hammers and wedges (power equipment is not permitted at the mines listed in this article). This method requires tools, patience, time and a knowledge of how to break an extremely durable dolostone.
The cavity shown above was opened by Anne and Bill. It contained over one hundred quartz crystals in a variety of sizes, ranging from a few millimeters to several centimeters in length. A very nice prize for a day's work! Two large clusters from the cavity are shown below.
Herkimer Diamond Specimens & Jewelry
Why hunt for Herkimer Diamonds? It's great fun: every time you break open a rock, you will look with anticipation to see if you have liberated an unseen quartz crystal. Nice Herkimer Diamonds are highly prized mineral specimens and are sought by mineral collectors worldwide. Large numbers of Herkimer crystals are also used in jewelry because their natural facets are both beautiful and interesting. Some people also seek Herkimer Diamonds because they are thought to have "holistic qualities". If you like minerals and have an opportunity to visit the Herkimer County area of New York, consider spending a day looking for Herkimer Diamonds. Be sure to wear clothes that are suitable for working outdoors. Safety glasses are required, and you will be sorry if you don't wear gloves. If you need a sledge hammer or other tools, you can rent them at the mine for a very small fee. If you want to obtain some nice Herkimer Diamonds but are unable to visit Herkimer to mine them yourself, please visit Bill's site at HerkimerDiamonds.ca.
Tuesday, 16 July 2013
What Causes a Tsunami?
A tsunami is a large ocean wave caused by sudden motion on the ocean floor. This sudden motion could be an earthquake, a powerful volcanic eruption, or an underwater landslide. The impact of a large meteorite could also cause a tsunami. Tsunamis travel across the open ocean at great speed and build into large, deadly waves in the shallow water of a shoreline.
Subduction Zones are Potential Tsunami Locations
Most tsunamis are caused by earthquakes generated in a subduction zone, an area where an oceanic plate is being forced down into the mantle by plate tectonic forces. The friction between the subducting plate and the overriding plate is enormous. This friction prevents a slow and steady rate of subduction and instead the two plates become "stuck".
Accumulated Seismic Energy
As the subducting plate continues to descend into the mantle, the motion slowly distorts the overriding plate. The result is an accumulation of energy very similar to the energy stored in a compressed spring. Energy can accumulate in the overriding plate over a long period of time, decades or even centuries.
Earthquake Causes Tsunami
Energy accumulates in the overriding plate until it exceeds the frictional forces between the two stuck plates. When this happens, the overriding plate snaps back into an unrestrained position. This sudden motion is the cause of the tsunami - because it gives an enormous shove to the overlying water. At the same time, inland areas of the overriding plate are suddenly lowered.
Tsunami Races Away From the Epicenter
The moving wave begins travelling out from where the earthquake has occurred. Some of the water travels out and across the ocean basin, and, at the same time, water rushes landward to flood the recently lowered shoreline.
Tsunamis Travel Rapidly Across Ocean Basins
Tsunamis travel swiftly across the open ocean. The map below shows how a tsunami produced by an earthquake along the coast of Chile in 1960 traveled across the Pacific Ocean, reaching Hawaii in about 15 hours and Japan in less than 24 hours.
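That 15-hour figure is consistent with basic wave physics. In the open ocean a tsunami behaves as a shallow-water wave, whose speed depends only on water depth: v = sqrt(g * d). A quick sketch (the mean Pacific depth and the Chile-to-Hawaii distance used below are rough assumptions for illustration, not figures from this article):

```python
import math

G = 9.81            # gravitational acceleration, m/s^2
DEPTH_M = 4000      # rough mean depth of the Pacific Ocean (assumption)
DIST_KM = 10_600    # approximate Chile-to-Hawaii distance (assumption)

# Shallow-water wave speed: v = sqrt(g * d)
speed_ms = math.sqrt(G * DEPTH_M)   # metres per second
speed_kmh = speed_ms * 3.6          # kilometres per hour
hours = DIST_KM / speed_kmh         # travel time at that speed

print(f"speed ~ {speed_kmh:.0f} km/h, travel time ~ {hours:.1f} h")
```

With these rough inputs the wave moves at roughly 700 km/h, jet-airliner speed, and crosses from Chile to Hawaii in about 15 hours, matching the map.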
Tsunami "Wave Train"
Many people have the mistaken belief that tsunamis are single waves. They are not. Instead, tsunamis are "wave trains" consisting of multiple waves. The chart below is a tide gauge record from Onagawa, Japan, beginning at the time of the 1960 Chile earthquake. Time is plotted along the horizontal axis and water level on the vertical axis. Note the normal rise and fall of the ocean surface, caused by tides, during the early part of the record. The record then shows a few waves slightly larger than normal, followed by several much larger ones. In many tsunami events the shoreline is pounded by repeated large waves.
Monday, 15 July 2013
This Trike Motorcycle Concept Is Like A Big Wheel For Adults
TrailTrike Concept by Charles Bombardier
There's something about being perched above three massive plastic wheels that imbues ordinary crybaby toddlers with a terrifyingly kick-ass, gelled-hair attitude. Unfortunately, there's something about adult trikes that is really, really uncool. Now designer Charles Bombardier, one of the creators of the three-wheeled Spyder roadster and grandson of the inventor of the snowmobile, has developed a trike motorcycle concept that looks just as fabulous to ride as your childhood Big Wheel.
Unlike the Big Wheel, the TrailTrike concept has two wheels in front and one in back. Bombardier designed the motorcycle concept to ride on asphalt as well as dirt roads and trails. The seat is another invention of Bombardier's: to help maintain balance on bumpy backroads, the motorized "carving seat" tilts at various angles and speeds in response to how a rider leans during turns and acceleration.
Bombardier tells Popular Science that he also imagines the TrailTrike with a so-called intelligent stability system, in which a rider can input a certain type of terrain (dirt, snow, or asphalt), and an algorithm will adjust engine power supply, braking on each wheel, and traction control as needed.
Powering the trike would be a 165-hp, two-stroke, direct-injection engine with a continuously variable transmission. Two output shafts would provide power to each wheel. To concentrate most of the motorcycle's mass around its center of gravity, and thus make it easier to handle, Bombardier mounted the front disc brakes on the chassis of the vehicle instead of on the wheel hubs.
What did giant extinct vampire bats eat?
Prior to the spread of people and domestic livestock, vampire bats (here we’re mostly talking about the Common vampire Desmodus rotundus) most likely fed on capybaras, tapirs, peccaries, deer and birds, though we know that they also sometimes feed on fruit bats and reptiles. Populations that live on islands off the Peruvian and Chilean coasts feed on seabirds and sealions. Now that the Americas are full of millions of cattle, horses, donkeys, pigs and chickens, however, vampires have largely switched to these domestic prey, and it’s said that the majority of modern vampires now feed almost entirely on the blood of livestock, particularly cattle, horses and donkeys. [Image of vampire skeleton below by Mokele.]

There are three extant vampires. We know from fossils that two of them (the Common vampire and the Hairy-legged vampire Diphylla ecaudata) were extant in the Pleistocene, and members of the same lineage as the third species (the White-winged vampire Diaemus youngi) must have been present too, since phylogenetic studies show that Diaemus is as old as Desmodus (Honeycutt et al. 1981, Wetterer et al. 2000, Jones et al. 2002).
But it gets better: there are numerous additional fossil vampires. They include Desmodus archaeodaptes from the Upper Pliocene of Florida (this is the oldest reported vampire species), De. stocki from the USA and Mexico, the Cuban endemic form De. puntajudensis, De. draculae from Venezuela, Belize and Brazil, and an unnamed related form from Buenos Aires, Argentina. De. stocki – sometimes known as Stock’s vampire – was 15-20% bigger than the extant Common vampire. Indeed, a specimen now included within this species was originally named De. magnus. De. draculae – sometimes referred to as a ‘giant vampire’ – was about 25% bigger than a modern Common vampire, suggesting a wingspan of perhaps 50 cm and a mass of about 60 g. That puts it on a par with a large horseshoe bat or a small fruit bat: keep in mind that the majority of ‘microbats’ weigh between 10 and 20 g!
What sort of animals were these fossil vampires feeding from? Of the living vampires, both the Hairy-legged vampire and the White-winged vampire mostly prey on birds. However, the Common vampire mostly preys on mammals, and because the fossil species are all members of the genus Desmodus, it’s reasonable to assume that they also mostly fed on mammals. However, they surely exploited other prey when available. Here’s a wholly speculative reconstruction of a Pleistocene Desmodus feeding from the leg of a sleeping teratorn (aka teratornithid). Teratorns are giant, condor-like birds; the last time I used a version of this image I was reminded that they likely defecated down their legs, as living New World vultures do today. Nevertheless, I’m sure the bat is safe in this particular instance…
A few vampire bat fossils are preserved in association with large mammals. A fossil Common vampire from a Brazilian cave, radiometrically dated to about 12,000 years ago, was discovered adhering to the underside of a coprolite produced by the sloth Nothrotherium (Czaplewski & Cartelle 1998), and De. stocki fossils from Florida are preserved in the same caves as ground sloths. A skull belonging to De. draculae was preserved in association with a skull of the extinct horse Equus neogeus. None of these associations demonstrates the dietary preferences of the extinct vampire species, but they are at the very least highly suggestive. The idea that some of these bats fed on giant sloths is entirely plausible, and one published life restoration – a drawing by Randy Babb, in Brown (1994) – depicts De. stocki feeding on a nothrotheriid sloth.
Intriguingly, the morphology of some of these vampires suggests that they differed in ecology and behaviour from the living vampire species. Both De. archaeodaptes and the Cuban species De. puntajudensis seem to have had far more freedom of movement in their jaw joints than the Common vampire, a feature suggesting that they somehow differed in how they procured and/or bit their prey (Morgan 1991, Suarez 2005). The robust hindlimb bones of De. puntajudensis and De. stocki also suggest that their style of terrestrial locomotion differed from that of the Common vampire, though exactly how it differed remains unknown. The large size of De. stocki, De. draculae and the Argentinean giant form of course indicates that they fed on larger prey than living vampires and, as noted, these fossil bats are sometimes found associated with ground sloths.
Bats have been covered on Tet Zoo quite a bit: there’s lots in the archives on vampires and vespertilionids in particular. However, there is still tons and tons to get through!
50-year-old assumptions about strength muscled aside
The basics of how a muscle generates power remain the same: Filaments of myosin tugging on filaments of actin shorten, or contract, the muscle -- but the power doesn't just come from what's happening straight up and down the length of the muscle, as has been assumed for 50 years.
Instead, University of Washington-led research shows that as muscles bulge, the filaments are drawn apart from each other, the myosin tugs at sharper angles over greater distances, and it's that action that deserves credit for half the change in muscle force scientists have been measuring.
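The geometry behind that claim can be illustrated with a toy calculation (a simplified sketch of the angle effect, not the study's actual model): treat a cross-bridge as a rigid link of fixed length spanning the radial gap between filaments, so that as lattice spacing grows, the link pulls at a steeper angle and its force splits differently between the axial and radial directions.

```python
import math

L = 10.0  # assumed cross-bridge length, arbitrary units

def force_components(d, f=1.0):
    """Split a cross-bridge force f into (axial, radial) components
    when the thin filament sits a radial distance d from the thick one."""
    axial_reach = math.sqrt(L**2 - d**2)  # along-axis projection of the link
    return f * axial_reach / L, f * d / L

# As lattice spacing d grows, less force acts along the muscle axis
# and more acts radially.
for d in (2.0, 5.0, 8.0):
    ax, rad = force_components(d)
    print(f"spacing {d}: axial {ax:.2f}, radial {rad:.2f}")
```

The numbers, while purely illustrative, show the trend the researchers describe: a bulging muscle that spreads its filaments apart redirects a growing share of each cross-bridge's pull into the radial direction.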
Researchers made this discovery when using computer modeling to test the geometry and physics of the 50-year-old understanding of how muscles work. The computer results of the force trends were validated through X-ray diffraction experiments on moth flight muscle, which is very similar to human cardiac muscle. The X-ray work was led by co-author Thomas Irving, an Illinois Institute of Technology professor and director of the Biophysics Collaborative Access Team (Bio-CAT) beamline at the Advanced Photon Source, which is housed at the U.S. Department of Energy's Argonne National Laboratory.
A previous lack of readily available computational power and of access to X-ray diffraction facilities are two reasons these findings have not been documented until now, speculated lead author C. David Williams, who earned his doctorate at the UW while conducting the research and is now a postdoctoral researcher at Harvard University. Currently, X-ray light sources have a waiting list of about three researchers for every active experiment. The APS is undergoing an upgrade that will greatly increase access and research power and expedite data collection.
The new understanding of muscle dynamics derived from this study has implications for the research and use of all muscles, including organs.
"In the heart especially, because the muscle surrounds the chambers that fill with blood, being able to account for forces that are generated in several directions during muscle contraction allows for much more accurate and realistic study of how pressure is generated to eject blood from the heart," said co-author Michael Regnier, a UW bioengineering professor. "The radial and long axis forces that are generated may be differentially compromised in cardiac diseases and these new, detailed models allow this to be studied at a molecular level for the first time. They also take us to a new level in testing therapeutic treatments targeted to contractile proteins for both cardiac and skeletal muscle diseases. "
This study gives scientists and doctors a new basis for interpreting experiments and understanding the mechanisms that regulate muscle contraction. Researchers have known for some time that the muscle filament lattice spacing changes over the length-tension curve, but its importance in generating the steep length dependence of force has not been previously demonstrated.
"The predominant thinking of the last 50 years is that 100 percent of the muscle force comes from changes as muscles shorten and myosin and actin filaments overlap. But when we isolated the effects of filament overlap we only got about half the change in force that physiologists know muscles are capable of producing," Williams said.
The rest of the force, he said, should be credited to the lattice work of filaments as it expands outward in bulging muscle -- whether in a body builder's buff biceps or the calves of a sinewy marathon runner.
"One of the major discoveries that David Williams brought to light is that force is generated in multiple directions, not just along the long axis of muscle as everyone thinks, but also in the radial direction," said Thomas Daniel, UW professor of biology and co-author on the paper.
"This aspect of muscle force generation has flown under the radar for decades and is now becoming a critical feature of our understanding of normal and pathological aspects of muscle," Daniel added.
Since the 1950s scientists have had a formula -- the so-called length-tension curve -- that accurately describes the force a muscle exerts at all points from fully outstretched, when every weight lifter knows there is little strength, to the middle points that display the greatest force, to the completely shortened muscle when, again, strength is minimized.
Williams developed computer models to consider the geometry and physics at work on the filaments at all those points.
"The ability to model in three dimensions and separate the effects of changes in lattice spacing from changes in muscle length wouldn't even have been possible without the advent of cloud computing in the last 10 years, because it takes ridiculous amounts of computational resources," Williams said.
Sunday, 14 July 2013
Computer sales plummet 10% in just three months as buyers switch to tablets
Experts predict sales of tablets including the iPad, Samsung Galaxy, Google Nexus 7 and Kindle Fire will overtake computers by 2015
Getty Images
Tablets are giving computer giants a headache as buyers are ditching PCs in favour of the hand-held devices.
Demand tumbled by almost 11% in the last three months, hitting sales of big names such as Dell, Acer and HP.
The worst decline was in Europe, Asia and the Middle East where sales plummeted by 16.8% as the popularity of tablets soared.
Experts predict sales of tablets including the iPad, Samsung Galaxy, Google Nexus 7 and Kindle Fire will overtake computers by 2015.
Global demand for tablets is expected to rocket by 45% from this year and hit 332.4 million in 2015, compared with an estimated 322.7 million for PCs.
Figures from analysts Gartner today revealed Acer took the biggest hit globally in the last three months with sales diving by 35%, Asus was down by 20% and Dell by almost 4%.
Mikako Kitagawa, principal analyst at Gartner, said: “In emerging markets, inexpensive tablets have become the first computing device for many people, who at best are deferring the purchase of a PC.”
Lenovo topped the PC leaderboard with 12.75 million sales worldwide between April and June, followed by HP with 12.4 million; Dell was third with almost 9 million.
Jay Chou, a senior analyst for IDC's Worldwide PC Tracker, said the industry needed to pimp up PCs to take on the tablets, or face another slump.
“A lot still needs to be done in launching attractive products and addressing competition from devices like tablets,” he said.
Analysts at IDC said that as tablet price wars bring costs down, internet users will switch from buying new PCs to ever smaller handheld devices, which could hit sales of 410 million worldwide by 2017.
One said: “Many users are realising that everyday computing, such as accessing the Web, connecting to social media, sending e-mails, as well as using a variety of apps, doesn’t require a lot of computing power or local storage.
“Instead, they are putting a premium on access from a variety of smaller devices.”
Decline of PCs
Company       April to June 2012   April to June 2013
Lenovo        12,755,068           12,677,265
HP            13,028,822           12,402,887
Dell          9,349,171            8,984,634
Acer Group    9,743,663            6,305,000
Asus          5,772,043            4,590,07
Others        34,675,824           31,041,13
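The percentage declines quoted in the article can be sanity-checked directly against the table (a quick sketch; Asus and "Others" are left out because their 2013 figures are truncated in the source):

```python
# Vendor: (April-June 2012, April-June 2013) worldwide unit sales,
# taken from the table above.
sales = {
    "Lenovo":     (12_755_068, 12_677_265),
    "HP":         (13_028_822, 12_402_887),
    "Dell":       (9_349_171,  8_984_634),
    "Acer Group": (9_743_663,  6_305_000),
}

def decline_pct(y2012, y2013):
    """Year-on-year decline as a percentage of 2012 unit sales."""
    return (y2012 - y2013) / y2012 * 100

for vendor, (a, b) in sales.items():
    print(f"{vendor}: {decline_pct(a, b):.1f}% decline")
```

The results line up with the article's figures: Acer down about 35%, Dell down almost 4%, while Lenovo's sales were nearly flat.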
Nokia's new Lumia packs a crazy 41-megapixel camera
The new Lumia 1020 has the potential be a photographer's smartphone dream.
NEW YORK (CNNMoney)
After releasing two intriguing quasi-updates to last year's flagship Lumia 920 phone, Nokia finally has its true Windows Phone successor: the Lumia 1020, which packs a 41-megapixel PureView camera.
Despite the extra camera power, the phone looks and feels thinner than the too-bulky Lumia 920. The sensor and camera lens protrude from the back in noticeable fashion, but not so much that the phone becomes unpocketable.
The Lumia 1020 has a 4.5-inch screen and a 1280 x 768 resolution, 2 gigabytes of RAM, and a dual-core Qualcomm Snapdragon S4 chipset. Aside from doubling the RAM, it's basically the same as Nokia's previous Lumia phones.
These non-camera specs aren't any major improvement over the status quo. That's Nokia's gambit: There's not much to upgrade anymore besides the camera, so that's where Nokia is throwing down.
The 41-megapixel sensor isn't there to provide some insane bump in image quality, and you're not meant to handle 41-megapixel images. Instead, it's meant to replace the zoom function found in most point-and-shoot cameras.
With smartphones, trying to capture an object off in the distance usually means settling for a speck-sized representation of that object in the frame, or using digital zoom, which adds blurriness and graininess. Nokia's 41-megapixel PureView technology uses those extra pixels to capture details you can't even make out with your own eyes; when you zoom, you can later crop the photo and get what you want with little or no drop-off in image quality.
If you don't want to zoom, the PureView camera will use all that pixel power to "oversample" (meaning it will capture the same pixel area multiple times and combine the best parts of each one) and generate a 5-megapixel image with added clarity and detail. It's a noticeable boost in image quality, and applies to video as well.
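Oversampling of this kind can be illustrated with simple pixel binning (a minimal sketch of the general technique, not Nokia's actual PureView algorithm): each block of neighbouring sensor pixels is averaged into one output pixel, which shrinks the image and suppresses per-pixel noise.

```python
import numpy as np

def oversample(image: np.ndarray, factor: int) -> np.ndarray:
    """Downsample a grayscale image by averaging factor x factor blocks."""
    h, w = image.shape
    h2, w2 = h - h % factor, w - w % factor   # trim to a multiple of factor
    blocks = image[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.mean(axis=(1, 3))           # average within each block

# A 6x6 "sensor" binned 3x3 becomes a 2x2 image.
small = oversample(np.arange(36, dtype=float).reshape(6, 6), 3)
print(small.shape)  # (2, 2)
```

Averaging roughly eight sensor pixels per output pixel is the same ratio the Lumia uses to turn its 41-megapixel capture into a cleaner 5-megapixel image.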
To support this blinged-out camera, there will be apps from both Nokia and third-party developers. Nokia's excellent Pro Camera app allows full manual control over your images, with an intuitive interface that gives quick access to settings including exposure, ISO, shutter speed and white balance. Apps from Vyclone, Path, Snapcam, Panagraph, Hipstamatic, and, yes, CNN, will be newly available or updated to take full advantage of the camera.
On stage at the new phone's New York unveiling, Nokia CEO Stephen Elop made a vague reference to Hipstamatic allowing uploading to rival photo app Instagram (owned by Facebook (FB)) -- a wildly popular service that has no official app for Windows Phone.
Offstage, Ignacio Riesgo, Nokia's head of app relations for the Americas, confirmed that Nokia worked with Instagram to get this feature on the Lumia 1020, but he couldn't offer any other details on when an official Instagram app might appear for Windows Phone.
Using the Lumia 1020's camera confirms that the zoom functionality has strong potential. In an area with full natural lighting -- or with the aid of the excellent xenon flash -- you can use the digital zoom to crop in tight on a subject five to 10 feet away with little noticeable image degradation.
But the real kicker comes in the post-processing. If you choose to crop an image after the fact, Nokia uses a feature that it calls re-framing. Instead of letting you choose a section to zoom in on and deleting the rest of the photo, it creates a locked-in zoom setting for the photo and displays it that way every time you view it -- but it won't delete the parts of the photo you can't see. If you decide you want to revisit the full photo later, you can simply tap a button and re-frame the shot.
Long story short: This has the potential be a photographer's smartphone dream.
But whether or not this is the Nokia (NOK) phone to buy still (still!) remains to be seen. Windows Phone 8.1 has yet to be released, and it will support a beefier processor than the dual-core Snapdragon Nokia is using here. While you won't notice the extra power in general use, a quad-core processor could come in handy for quicker processing of these PureView images. Nokia CEO Stephen Elop confirmed that Nokia will have another major phone launch later this year.
For those who can't wait, the Lumia 1020 will arrive at AT&T (T, Fortune 500) stores on July 26 for $300 with a two-year contract.
Vision of the future: 10 hi-tech inventions we'll hopefully be using in 2030
We’ve been promised flying cars, teleporters and jet packs for years but none of them – as yet – have made it to the high street
People have been trying to predict the future since Nostradamus was a lad.
We’ve been promised flying cars, teleporters and jet packs for years but none of them – as yet – have made it to the high street.
However, futurologist Ian Pearson has a list of 10 hi-tech innovations that he claims will be surefire hits by 2030.
A smart yoghurt, anyone?
1. Dream linking
Using pillows with conducting fibres in the fabric, it will be possible to monitor electrical activity from the brain. This will not only show when someone is dreaming; recent developments indicate that we'll also be able to tell what they are dreaming about.
It is also possible (with prior agreement presumably, and when both people are in a dream state at the same time) for two people to share dreams.
One could try to steer a friend’s dream in the same direction, so that they could effectively share a dream, and may even be able to interact in it.
We will be able to directly access more information outside the brain, making us much smarter, with thought access to most of human knowledge.
The link will also allow us to share ideas directly with other people, effectively sharing their consciousness, memories, experiences.
This will create a whole new level of intimacy, and let you explore other people’s creativity directly.
This could certainly be one of the most fun bits of the future as long as we take suitable precautions.
But they will have three tiny lasers and a micromirror to beam pictures directly onto the retina, creating images in as high resolution as your eye can see.
This could make all other forms of display superfluous.
There would be no need to wear a wristwatch or carry a mobile phone, tablet or TV, but you could still have them visually.
The contact lens can deliver a full 3D, totally immersive perfect resolution experience.
They will even let you watch movies or read your messages without opening your eyes.
When your body dies, you’ll only lose the bits still based in the brain. Most of your mind will carry on.
You’ll go to your funeral, buy an android body and carry on.
Death won’t be a career problem.
If you don’t want to use an android, maybe you’ll link into your friends’ bodies and share them, just as students hang out on friends’ sofas.
Life really begins after death.
This will increase until computers have millions of processors.
These might be suspended in gel to keep them cool and allow them to be wired together via light beams.
In separate developments, bacteria are being genetically modified to let them make electronic components.
Putting these together, smart yoghurt could be the basis of future computing.
With potentially vastly superhuman intelligence, one day your best friend could be a yogurt.
With them you could turn your whole forearm into a computer display. Anyone with ordinary tattoos will wish they’d waited a while.
You will also be able to get electronic makeup.
You would just wipe it all over your face and then touch it to, and it will instantly become whatever you want.
You will be able to change your appearance several times a day depending on your mood.
That technology area is developing very fast now and soon we will all be wearing a lightweight visor as we walk around.
As well as all the stuff your phone does, it will allow you to place anything you want straight right in front of you.
The streets can be full of cartoon characters, aliens or zombies.
You can change how people look too, replacing them with your favourite models if you wish.
They are too expensive to make today, but not in the future.
Imagine free-running and leaping between buildings like a superhero, and having built-in reactive armour to make you bulletproof too, with extra super-senses also built in.
A lot of that stuff is feasible, so exoskeletons might become very popular leisure and sports wear, as well as the obvious military and emergency service uses.
These can easily link wirelessly to robots.
Robotics technology will use polymer gel muscles too, and a nice silicone covering could make them very human-like, so they can mix easily with humans as servants, colleagues, guards or companions, pretty much what they do in the movie I, Robot, but with a much nicer appearance and probably much smarter.
Then you could relive the experience days or years later.
From a favourite ski run to the feel of everyday objects, you can replay the full sensory experience.
Computer games will become totally immersive too.
We’ve been promised flying cars, teleporters and jet packs for years but none of them – as yet – have made it to the high street.
However, futurologist Ian Pearson has a list of 10 hi-tech innovations that he claims will be surefire hits by 2030.
A smart yoghurt, anyone?
1. Dream linking
Using pillows with conducting fibres in the fabric, it will be possible to monitor electrical activity from the brain. This will not only show when someone is dreaming; recent developments indicate that we’ll also be able to tell what they are dreaming about.
It is also possible (with prior agreement presumably, and when both people are in a dream state at the same time) for two people to share dreams.
One could try to steer a friend’s dream in the same direction, so that they could effectively share a dream, and may even be able to interact in it.
2. Shared consciousness
Many people believe we will one day have full links between our brains and external computers. We will be able to directly access information outside the brain, making us much smarter, with thought access to most of human knowledge.
The link will also allow us to share ideas directly with other people, effectively sharing consciousness, memories and experiences.
This will create a whole new level of intimacy, and let you explore other people’s creativity directly.
This could certainly be one of the most fun bits of the future as long as we take suitable precautions.
3. Active contact lenses
These nifty gadgets will sit in your eyes like normal contact lenses. But they will have three tiny lasers and a micromirror to beam pictures directly onto the retina, creating images in as high a resolution as your eye can see.
This could make all other forms of display superfluous.
There will be no need to wear a wristwatch or carry a mobile phone, tablet or TV, but you could still have them all as virtual objects.
The contact lens can deliver a fully 3D, totally immersive, perfect-resolution experience.
They will even let you watch movies or read your messages without opening your eyes.
4. Immortality and body sharing
While computers get smarter, the brain-IT link will also get better, so you’ll use external IT more, until most of your mind is outside your brain. When your body dies, you’ll only lose the bits still based in the brain; most of your mind will carry on.
You’ll go to your funeral, buy an android body and carry on.
Death won’t be a career problem.
If you don’t want to use an android, maybe you’ll link into your friends’ bodies and share them, just as students hang out on friends’ sofas.
Life really begins after death.
5. Smart yoghurt
A ‘quad core’ PC has four processors all sharing the same chip, instead of the single one there used to be. This will increase until computers have millions of processors.
These might be suspended in gel to keep them cool and allow them to be wired together via light beams.
In separate developments, bacteria are being genetically modified to let them make electronic components.
Putting these together, smart yoghurt could be the basis of future computing.
With potentially vastly superhuman intelligence, one day your best friend could be a yoghurt.
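How much those millions of processors would actually help depends on how much of a task can run in parallel. As a rough, generic illustration (this is Amdahl’s law with made-up numbers, not anything from Pearson’s predictions):

```python
# Amdahl's law: with p processors and a fraction s of the work that is
# inherently serial, the best possible speedup is 1 / (s + (1 - s) / p).
def speedup(processors: int, serial_fraction: float) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# Today's quad core versus a hypothetical million-processor machine,
# assuming (arbitrarily) that 5% of a task cannot be parallelised:
quad = speedup(4, 0.05)             # about 3.5x
million = speedup(1_000_000, 0.05)  # only about 20x: the serial 5% dominates
```

The point of the sketch is that raw processor counts only pay off for highly parallel workloads, which is exactly the kind of massively distributed computation a “smart yoghurt” would imply.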
6. Video tattoos
It will soon be possible to print electronic displays on thin plastic membranes, just like the temporary tattoos you put on your skin. With them you could turn your whole forearm into a computer display. Anyone with ordinary tattoos will wish they’d waited a while.
You will also be able to get electronic makeup.
You would just wipe it all over your face, touch it to activate it, and it will instantly change into whatever appearance you want.
You will be able to change your appearance several times a day depending on your mood.
7. Augmented reality
You’ve seen films where the hero sees the world with computer-generated graphics or data superimposed on their field of view. That technology is developing very fast now, and soon we will all be wearing lightweight visors as we walk around.
As well as doing everything your phone does, a visor will let you place anything you want right in front of you.
The streets can be full of cartoon characters, aliens or zombies.
You can change how people look too, replacing them with your favourite models if you wish.
8. Exoskeletons
Polymer gel muscles will be five times stronger than natural ones, so you could buy clothing that gives you superhuman strength. Such suits are too expensive to make today, but they won’t be in the future.
Imagine free-running and leaping between buildings like a superhero, and having built-in reactive armour to make you bulletproof too, with extra super-senses also built in.
A lot of that stuff is feasible, so exoskeletons might become very popular leisure and sports wear, as well as the obvious military and emergency service uses.
9. Androids
In the near future, artificial intelligence is likely to give us computers that you can talk to just like humans. These can easily link wirelessly to robots.
Robotics technology will use polymer gel muscles too, and a nice silicone covering could make them very human-like, so they can mix easily with humans as servants, colleagues, guards or companions, pretty much what they do in the movie I, Robot, but with a much nicer appearance and probably much smarter.
10. Active skin
Tiny electronic capsules the size of skin cells, blown into the skin, would enable us to record the nerve signals associated with any sensation. Then you could relive the experience days or years later.
From a favourite ski run to the feel of everyday objects, you can replay the full sensory experience.
Computer games will become totally immersive too.
How Technology Is Destroying Jobs
Given his calm and reasoned academic demeanor, it is easy to miss just how provocative Erik Brynjolfsson’s contention really is. Brynjolfsson, a professor at the MIT Sloan School of Management, and his collaborator and coauthor Andrew McAfee have been arguing for the last year and a half that impressive advances in computer technology—from improved industrial robotics to automated translation services—are largely behind the sluggish employment growth of the last 10 to 15 years. Even more ominous for workers, the MIT academics foresee dismal prospects for many types of jobs as these powerful new technologies are increasingly adopted not only in manufacturing, clerical, and retail work but in professions such as law, financial services, education, and medicine.
That robots, automation, and software can replace people might seem obvious to anyone who’s worked in automotive manufacturing or as a travel agent. But Brynjolfsson and McAfee’s claim is more troubling and controversial. They believe that rapid technological change has been destroying jobs faster than it is creating them, contributing to the stagnation of median income and the growth of inequality in the United States. And, they suspect, something similar is happening in other technologically advanced countries.

Perhaps the most damning piece of evidence, according to Brynjolfsson, is a chart that only an economist could love. In economics, productivity—the amount of economic value created for a given unit of input, such as an hour of labor—is a crucial indicator of growth and wealth creation. It is a measure of progress. On the chart Brynjolfsson likes to show, separate lines represent productivity and total employment in the United States. For years after World War II, the two lines closely tracked each other, with increases in jobs corresponding to increases in productivity. The pattern is clear: as businesses generated more value from their workers, the country as a whole became richer, which fueled more economic activity and created even more jobs. Then, beginning in 2000, the lines diverge; productivity continues to rise robustly, but employment suddenly wilts. By 2011, a significant gap appears between the two lines, showing economic growth with no parallel increase in job creation. Brynjolfsson and McAfee call it the “great decoupling.” And Brynjolfsson says he is confident that technology is behind both the healthy growth in productivity and the weak growth in jobs.
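The decoupling in Brynjolfsson’s chart is easy to picture numerically. The sketch below uses invented index values (not real productivity or employment statistics) purely to show what a widening gap between the two series looks like:

```python
# Hypothetical index data (NOT real figures) illustrating the "great
# decoupling": productivity keeps rising while employment flattens.
years        = [1990, 1995, 2000, 2005, 2011]
productivity = [100, 110, 122, 138, 155]   # illustrative index, 1990 = 100
employment   = [100, 108, 120, 121, 119]   # illustrative index, 1990 = 100

# The "decoupling gap" is simply the divergence between the two series.
gaps = [p - e for p, e in zip(productivity, employment)]
# Before 2000 the series track each other (small gap); afterwards the
# gap widens even though productivity never stops growing.
```

Any real version of this chart would use Bureau of Labor Statistics series, but the shape of the argument is just this subtraction.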
It’s a startling assertion because it threatens the faith that many economists place in technological progress. Brynjolfsson and McAfee still believe that technology boosts productivity and makes societies wealthier, but they think that it can also have a dark side: technological progress is eliminating the need for many types of jobs and leaving the typical worker worse off than before. Brynjolfsson can point to a second chart indicating that median income is failing to rise even as the gross domestic product soars. “It’s the great paradox of our era,” he says. “Productivity is at record levels, innovation has never been faster, and yet at the same time, we have a falling median income and we have fewer jobs. People are falling behind because technology is advancing so fast and our skills and organizations aren’t keeping up.”
Brynjolfsson and McAfee are not Luddites. Indeed, they are sometimes accused of being too optimistic about the extent and speed of recent digital advances. Brynjolfsson says they began writing Race Against the Machine, the 2011 book in which they laid out much of their argument, because they wanted to explain the economic benefits of these new technologies (Brynjolfsson spent much of the 1990s sniffing out evidence that information technology was boosting rates of productivity). But it became clear to them that the same technologies making many jobs safer, easier, and more productive were also reducing the demand for many types of human workers.
Anecdotal evidence that digital technologies threaten jobs is, of course, everywhere. Robots and advanced automation have been common in many types of manufacturing for decades. In the United States and China, the world’s manufacturing powerhouses, fewer people work in manufacturing today than in 1997, thanks at least in part to automation. Modern automotive plants, many of which were transformed by industrial robotics in the 1980s, routinely use machines that autonomously weld and paint body parts—tasks that were once handled by humans. Most recently, industrial robots like Rethink Robotics’ Baxter (see “The Blue-Collar Robot,” May/June 2013), more flexible and far cheaper than their predecessors, have been introduced to perform simple jobs for small manufacturers in a variety of sectors. The website of a Silicon Valley startup called Industrial Perception features a video of the robot it has designed for use in warehouses picking up and throwing boxes like a bored elephant. And such sensations as Google’s driverless car suggest what automation might be able to accomplish someday soon.
A less dramatic change, but one with a potentially far larger impact on employment, is taking place in clerical work and professional services. Technologies like the Web, artificial intelligence, big data, and improved analytics—all made possible by the ever increasing availability of cheap computing power and storage capacity—are automating many routine tasks. Countless traditional white-collar jobs, such as many in the post office and in customer service, have disappeared. W. Brian Arthur, a visiting researcher at the Xerox Palo Alto Research Center’s intelligence systems lab and a former economics professor at Stanford University, calls it the “autonomous economy.” It’s far more subtle than the idea of robots and automation doing human jobs, he says: it involves “digital processes talking to other digital processes and creating new processes,” enabling us to do many things with fewer people and making yet other human jobs obsolete.
It is this onslaught of digital processes, says Arthur, that primarily explains how productivity has grown without a significant increase in human labor. And, he says, “digital versions of human intelligence” are increasingly replacing even those jobs once thought to require people. “It will change every profession in ways we have barely seen yet,” he warns.
McAfee, associate director of the MIT Center for Digital Business at the Sloan School of Management, speaks rapidly and with a certain awe as he describes advances such as Google’s driverless car. Still, despite his obvious enthusiasm for the technologies, he doesn’t see the recently vanished jobs coming back. The pressure on employment and the resulting inequality will only get worse, he suggests, as digital technologies—fueled with “enough computing power, data, and geeks”—continue their exponential advances over the next several decades. “I would like to be wrong,” he says, “but when all these science-fiction technologies are deployed, what will we need all the people for?”
New Economy?
But are these new technologies really responsible for a decade of lackluster job growth? Many labor economists say the data are, at best, far from conclusive. Several other plausible explanations, including events related to global trade and the financial crises of the early and late 2000s, could account for the relative slowness of job creation since the turn of the century. “No one really knows,” says Richard Freeman, a labor economist at Harvard University. That’s because it’s very difficult to “extricate” the effects of technology from other macroeconomic effects, he says. But he’s skeptical that technology would change a wide range of business sectors fast enough to explain recent job numbers.
David Autor, an economist at MIT who has extensively studied the connections between jobs and technology, also doubts that technology could account for such an abrupt change in total employment. “There was a great sag in employment beginning in 2000. Something did change,” he says. “But no one knows the cause.” Moreover, he doubts that productivity has, in fact, risen robustly in the United States in the past decade (economists can disagree about that statistic because there are different ways of measuring and weighing economic inputs and outputs). If he’s right, it raises the possibility that poor job growth could be simply a result of a sluggish economy. The sudden slowdown in job creation “is a big puzzle,” he says, “but there’s not a lot of evidence it’s linked to computers.”
To be sure, Autor says, computer technologies are changing the types of jobs available, and those changes “are not always for the good.” At least since the 1980s, he says, computers have increasingly taken over such tasks as bookkeeping, clerical work, and repetitive production jobs in manufacturing—all of which typically provided middle-class pay. At the same time, higher-paying jobs requiring creativity and problem-solving skills, often aided by computers, have proliferated. So have low-skill jobs: demand has increased for restaurant workers, janitors, home health aides, and others doing service work that is nearly impossible to automate. The result, says Autor, has been a “polarization” of the workforce and a “hollowing out” of the middle class—something that has been happening in numerous industrialized countries for the last several decades. But “that is very different from saying technology is affecting the total number of jobs,” he adds. “Jobs can change a lot without there being huge changes in employment rates.”
What’s more, even if today’s digital technologies are holding down job creation, history suggests that it is most likely a temporary, albeit painful, shock; as workers adjust their skills and entrepreneurs create opportunities based on the new technologies, the number of jobs will rebound. That, at least, has always been the pattern. The question, then, is whether today’s computing technologies will be different, creating long-term involuntary unemployment.
At least since the Industrial Revolution began in the 1700s, improvements in technology have changed the nature of work and destroyed some types of jobs in the process. In 1900, 41 percent of Americans worked in agriculture; by 2000, it was only 2 percent. Likewise, the proportion of Americans employed in manufacturing has dropped from 30 percent in the post–World War II years to around 10 percent today—partly because of increasing automation, especially during the 1980s.
While such changes can be painful for workers whose skills no longer match the needs of employers, Lawrence Katz, a Harvard economist, says that no historical pattern shows these shifts leading to a net decrease in jobs over an extended period. Katz has done extensive research on how technological advances have affected jobs over the last few centuries—describing, for example, how highly skilled artisans in the mid-19th century were displaced by lower-skilled workers in factories. While it can take decades for workers to acquire the expertise needed for new types of employment, he says, “we never have run out of jobs. There is no long-term trend of eliminating work for people. Over the long term, employment rates are fairly stable. People have always been able to create new jobs. People come up with new things to do.”
Still, Katz doesn’t dismiss the notion that there is something different about today’s digital technologies—something that could affect an even broader range of work. The question, he says, is whether economic history will serve as a useful guide. Will the job disruptions caused by technology be temporary as the workforce adapts, or will we see a science-fiction scenario in which automated processes and robots with superhuman skills take over a broad swath of human tasks? Though Katz expects the historical pattern to hold, it is “genuinely a question,” he says. “If technology disrupts enough, who knows what will happen?”
Dr. Watson
To get some insight into Katz’s question, it is worth looking at how today’s most advanced technologies are being deployed in industry. Though these technologies have undoubtedly taken over some human jobs, finding evidence of workers being displaced by machines on a large scale is not all that easy. One reason it is difficult to pinpoint the net impact on jobs is that automation is often used to make human workers more efficient, not necessarily to replace them. Rising productivity means businesses can do the same work with fewer employees, but it can also enable the businesses to expand production with their existing workers, and even to enter new markets.
Take the bright-orange Kiva robot, a boon to fledgling e-commerce companies. Created and sold by Kiva Systems, a startup that was founded in 2002 and bought by Amazon for $775 million in 2012, the robots are designed to scurry across large warehouses, fetching racks of ordered goods and delivering the products to humans who package the orders. In Kiva’s large demonstration warehouse and assembly facility at its headquarters outside Boston, fleets of robots move about with seemingly endless energy: some newly assembled machines perform tests to prove they’re ready to be shipped to customers around the world, while others wait to demonstrate to a visitor how they can almost instantly respond to an electronic order and bring the desired product to a worker’s station.
A warehouse equipped with Kiva robots can handle up to four times as many orders as a similar unautomated warehouse, where workers might spend as much as 70 percent of their time walking about to retrieve goods. (Coincidentally or not, Amazon bought Kiva soon after a press report revealed that workers at one of the retailer’s giant warehouses often walked more than 10 miles a day.)
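The “up to four times” figure is roughly consistent with simple arithmetic on the walking share. A back-of-envelope sketch (my numbers, not Kiva’s):

```python
# Back-of-envelope (illustrative figures, not Kiva's): if a picker spends
# 70% of the shift walking and 30% actually picking and packing, then
# eliminating the walking multiplies useful throughput by 1 / 0.30.
walking_fraction = 0.70
useful_fraction = 1.0 - walking_fraction

throughput_multiplier = 1.0 / useful_fraction  # roughly 3.3x from walking alone
```

Squeezing out the last bit toward 4x would have to come from elsewhere (denser storage, better batching), but removing travel time alone accounts for most of the claimed gain.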
Despite the labor-saving potential of the robots, Mick Mountz, Kiva’s founder and CEO, says he doubts the machines have put many people out of work or will do so in the future. For one thing, he says, most of Kiva’s customers are e-commerce retailers, some of them growing so rapidly they can’t hire people fast enough. By making distribution operations cheaper and more efficient, the robotic technology has helped many of these retailers survive and even expand. Before founding Kiva, Mountz worked at Webvan, an online grocery delivery company that was one of the 1990s dot-com era’s most infamous flameouts. He likes to show the numbers demonstrating that Webvan was doomed from the start; a $100 order cost the company $120 to ship. Mountz’s point is clear: something as mundane as the cost of materials handling can consign a new business to an early death. Automation can solve that problem.
Though advances like these suggest how some aspects of work could be subject to automation, they also illustrate that humans still excel at certain tasks—for example, packaging various items together. Many of the traditional problems in robotics—such as how to teach a machine to recognize an object as, say, a chair—remain largely intractable and are especially difficult to solve when the robots are free to move about a relatively unstructured environment like a factory or office.
Techniques using vast amounts of computational power have gone a long way toward helping robots understand their surroundings, but John Leonard, a professor of engineering at MIT and a member of its Computer Science and Artificial Intelligence Laboratory (CSAIL), says many familiar difficulties remain. “Part of me sees accelerating progress; the other part of me sees the same old problems,” he says. “I see how hard it is to do anything with robots. The big challenge is uncertainty.” In other words, people are still far better at dealing with changes in their environment and reacting to unexpected events.
For that reason, Leonard says, it is easier to see how robots could work with humans than on their own in many applications. “People and robots working together can happen much more quickly than robots simply replacing humans,” he says. “That’s not going to happen in my lifetime at a massive scale. The semiautonomous taxi will still have a driver.”
One of the friendlier, more flexible robots meant to work with humans is Rethink’s Baxter. The creation of Rodney Brooks, the company’s founder, Baxter needs minimal training to perform simple tasks like picking up objects and moving them to a box. It’s meant for use in relatively small manufacturing facilities where conventional industrial robots would cost too much and pose too much danger to workers. The idea, says Brooks, is to have the robots take care of dull, repetitive jobs that no one wants to do.
It’s hard not to instantly like Baxter, in part because it seems so eager to please. The “eyebrows” on its display rise quizzically when it’s puzzled; its arms submissively and gently retreat when bumped. Asked about the claim that such advanced industrial robots could eliminate jobs, Brooks answers simply that he doesn’t see it that way. Robots, he says, can be to factory workers as electric drills are to construction workers: “It makes them more productive and efficient, but it doesn’t take jobs.”
The machines created at Kiva and Rethink have been cleverly designed and built to work with people, taking over the tasks that the humans often don’t want to do or aren’t especially good at. They are specifically designed to enhance these workers’ productivity. And it’s hard to see how even these increasingly sophisticated robots will replace humans in most manufacturing and industrial jobs anytime soon. But clerical and some professional jobs could be more vulnerable. That’s because the marriage of artificial intelligence and big data is beginning to give machines a more humanlike ability to reason and to solve many new types of problems.
In the tony northern suburbs of New York City, IBM Research is pushing super-smart computing into the realms of such professions as medicine, finance, and customer service. IBM’s efforts have resulted in Watson, a computer system best known for beating human champions on the game show Jeopardy! in 2011. That version of Watson now sits in a corner of a large data center at the research facility in Yorktown Heights, marked with a glowing plaque commemorating its glory days. Meanwhile, researchers there are already testing new generations of Watson in medicine, where the technology could help physicians diagnose diseases like cancer, evaluate patients, and prescribe treatments.
IBM likes to call it cognitive computing. Essentially, Watson uses artificial-intelligence techniques, advanced natural-language processing and analytics, and massive amounts of data drawn from sources specific to a given application (in the case of health care, that means medical journals, textbooks, and information collected from the physicians or hospitals using the system). Thanks to these innovative techniques and huge amounts of computing power, it can quickly come up with “advice”—for example, the most recent and relevant information to guide a doctor’s diagnosis and treatment decisions.
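IBM’s actual pipeline is far more sophisticated and not spelled out here, but the core idea of ranking passages from a domain corpus by relevance to a question can be pictured with a toy bag-of-words model (purely illustrative, not Watson’s method):

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def most_relevant(question: str, passages: list) -> str:
    """Return the corpus passage most similar to the question."""
    q = Counter(question.lower().split())
    return max(passages, key=lambda p: cosine(q, Counter(p.lower().split())))

# Hypothetical mini-corpus for a customer-service setting:
corpus = [
    "reset your password from the account settings page",
    "our branch opening hours are nine to five",
]
answer = most_relevant("how do I reset my password", corpus)
```

A system like Watson layers deep natural-language parsing, evidence scoring, and machine learning on top of retrieval, but the retrieval step itself is the part that scales with “massive amounts of data.”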
Despite the system’s remarkable ability to make sense of all that data, it’s still early days for Dr. Watson. While it has rudimentary abilities to “learn” from specific patterns and evaluate different possibilities, it is far from having the type of judgment and intuition a physician often needs. But IBM has also announced it will begin selling Watson’s services to customer-support call centers, which rarely require human judgment that’s quite so sophisticated. IBM says companies will rent an updated version of Watson for use as a “customer service agent” that responds to questions from consumers; it has already signed on several banks. Automation is nothing new in call centers, of course, but Watson’s improved capacity for natural-language processing and its ability to tap into a large amount of data suggest that this system could speak plainly with callers, offering them specific advice on even technical and complex questions. It’s easy to see it replacing many human holdouts in its new field.
Digital Losers
The contention that automation and digital technologies are partly responsible for today’s lack of jobs has obviously touched a raw nerve for many worried about their own employment. But this is only one consequence of what Brynjolfsson and McAfee see as a broader trend. The rapid acceleration of technological progress, they say, has greatly widened the gap between economic winners and losers—the income inequalities that many economists have worried about for decades. Digital technologies tend to favor “superstars,” they point out. For example, someone who creates a computer program to automate tax preparation might earn millions or billions of dollars while eliminating the need for countless accountants.
New technologies are “encroaching into human skills in a way that is completely unprecedented,” McAfee says, and many middle-class jobs are right in the bull’s-eye; even relatively high-skill work in education, medicine, and law is affected. “The middle seems to be going away,” he adds. “The top and bottom are clearly getting farther apart.” While technology might be only one factor, says McAfee, it has been an “underappreciated” one, and it is likely to become increasingly significant.
Not everyone agrees with Brynjolfsson and McAfee’s conclusions—particularly the contention that the impact of recent technological change could be different from anything seen before. But it’s hard to ignore their warning that technology is widening the income gap between the tech-savvy and everyone else. And even if the economy is only going through a transition similar to those it’s endured before, it is an extremely painful one for many workers, and that will have to be addressed somehow. Harvard’s Katz has shown that the United States prospered in the early 1900s in part because secondary education became accessible to many people at a time when employment in agriculture was drying up. The result, at least through the 1980s, was an increase in educated workers who found jobs in the industrial sectors, boosting incomes and reducing inequality. Katz’s lesson: painful long-term consequences for the labor force do not follow inevitably from technological changes.
Brynjolfsson himself says he’s not ready to conclude that economic progress and employment have diverged for good. “I don’t know whether we can recover, but I hope we can,” he says. But that, he suggests, will depend on recognizing the problem and taking steps such as investing more in the training and education of workers.
“We were lucky and steadily rising productivity raised all boats for much of the 20th century,” he says. “Many people, especially economists, jumped to the conclusion that was just the way the world worked. I used to say that if we took care of productivity, everything else would take care of itself; it was the single most important economic statistic. But that’s no longer true.” He adds, “It’s one of the dirty secrets of economics: technology progress does grow the economy and create wealth, but there is no economic law that says everyone will benefit.” In other words, in the race against the machine, some are likely to win while many others lose.
High-Tech Cheetah Tracking Reveals the Cat’s Hunting Secret
Research into wild animal locomotion could inform the design of future robots.
Agile cat: A new study shows that cheetahs in the Okavango Delta of Botswana, such as the one shown here, are more agile than previously thought. This will help further refine robotic biomimicry of the animal, as conducted by an MIT engineering lab.
Biologically inspired robots could prove useful for all sorts of tasks (see “Just What Soldiers Need: A Bigger Robotic Dog”). But the design of such robots has been limited by our understanding of animal locomotion. Now, thanks to tracking technology, this is changing, and more nimble-footed machines could soon follow.

A recent study published in the journal Nature highlights this shift. A group of researchers tracked several cheetahs living in the Okavango river delta of Botswana with solar-powered collars that collected GPS data along with information from accelerometers and gyroscopes. This combination of data was averaged and analyzed in a way that overcame many possible shortcomings, including GPS inaccuracy during fast movement, limited battery life, and the errors associated with each individual measurement.
Cheetahs have long been known to catch their prey, often small antelope such as impalas or gazelles, by cutting corners during the chase and tripping them up with a paw swipe. (This is in stark contrast to Africa's other big cats, such as leopards and lions, which bring down their prey by jumping or latching onto it and dragging it down.)
Yet the extent to which a cheetah's agility and acceleration contribute to its hunting prowess had been underestimated. Although cheetahs can run at around 60 miles per hour, the researchers found that many successful hunts topped out at only about 30 mph; acceleration and the ability to change direction quickly played a larger role than raw speed.
Meanwhile, at MIT, a robotic biomimicry group has been working on replicating cheetah locomotion by building a cheetah-like robot, which has been tested jumping, walking, and running (at a top speed of only 13 mph) with an efficiency and stamina that arguably already exceed those of its animal counterpart. To behave like a real cheetah, it is less important for MIT's robot to run at 60 mph than to change direction at 30.
Meet Atlas, the Robot Designed to Save the Day
New humanoid robots will compete in a contest designed to test the ability of machines to take on extremely dangerous and high-stakes human jobs.
Man-machine: Atlas was developed for the military agency DARPA as a prototype emergency response robot.
The latest innovation from the U.S. Defense Department’s research agency, DARPA, is a humanoid robot called Atlas that looks as if it could’ve walked straight off the set of the latest Hollywood sci-fi blockbuster.
In fact, Atlas is designed to eventually take on some of the most dangerous and high-stakes jobs imaginable, such as tending to a nuclear reactor during a meltdown, shutting off a deep-water oil spill, or helping to put out a raging wildfire. And if Atlas proves itself at such daredevil tasks, then one of its descendants might one day be allowed to do something just as important: help take care of the elderly and infirm.
Atlas was unveiled on Thursday at Boston Dynamics, a company based in Waltham, Massachusetts, that has already developed an impressive menagerie of robotic beasts, some with funding from the Department of Defense, including a headless robot pack mule called LS3, a gecko-like, wall-climbing bot called RiSE, and a four-legged machine called Cheetah capable of bounding along at 29 miles per hour.
Like these other machines, Atlas has incredible capabilities for a legged machine. The six-foot-tall, 330-pound robot has 28 degrees of freedom enabled by powerful hydraulically driven joints that allow it to not only carry heavy objects but adjust with remarkable speed to loss of balance. The robot’s head includes a laser-ranging instrument called a lidar that provides it with a detailed 3-D map of its surroundings. And it has two pairs of slightly different robotic hands. The robot currently requires a tether that feeds it cooling water and high-voltage power, but the goal is to develop an untethered version in 2014.
At Thursday’s event, Atlas performed robotic calisthenics designed to demonstrate its flexibility—somewhat noisily due to the shuddering movement of its hydraulic muscles. Videos showed earlier prototypes walking over uneven ground and inching along narrow ledges.
Several Atlas robots, and a handful of other robots, are involved in the DARPA Robotics Challenge—a contest designed to spur the creation of a robot capable of being remotely operated in treacherous, complex emergency situations. Teams from academia and industry are competing in two groups: one involved in designing and building robots for such missions; another engaged in developing the control software for rescue robots. The seven teams competing in the latter track will each be loaned an Atlas by DARPA to perfect their code.
The teams enrolled in the challenge will spend the next few months training their robots to compete in a grueling physical contest designed to gauge their ability to perform tasks that would challenge many humans. This December, at an event held at the Homestead Miami Speedway, the robots will try to navigate a robot obstacle course involving such challenges as climbing into and driving a vehicle, clambering over rubble, and attaching and operating a hose.
Despite the fact that Atlas bears a more-than-passing resemblance to an early Terminator prototype, DARPA insists that the robot is not designed for “adversarial” military tasks, and is intended only for humanitarian missions. The agency notes that its Robotics Challenge was inspired by the Fukushima nuclear accident in 2011, when human workers struggled to control a nuclear plant severely damaged by an earthquake and tsunami. DARPA did, in fact, send a handful of wheeled robots to the Fukushima plant, but these were unable to cope with obstacles such as rubble on the ground, or to perform the complex tasks needed. “We were tearing our hair out trying to help, and the truth is there was very little we could do,” DARPA program manager Gill Pratt said at Thursday’s unveiling.
Walking tall: A version of Atlas without its arms walks on a treadmill at Boston Dynamics.
Long a staple of science fiction, humanoid robots have been kicking around robotics research labs for decades. But they have typically been too slow, weak, or clumsy to do much. Recent improvements in sensors and hardware have brought the prospect of a humanoid ready for real-world deployment closer. “A number of technologies have gotten just good enough, or almost good enough, to make this thing work,” Pratt said, pointing to the hydraulic controls, the lidar navigation system built into the robot’s head, and its interchangeable hands.
“It’s an extraordinary machine,” said Seth Teller, a professor at MIT who, along with colleague Russ Tedrake, leads one of the groups selected to receive an Atlas. “They’ve done a fantastic job on these machines; it’s been a real pleasure to see and touch and use the real hardware.”
The teams given Atlas robots will have to develop control software that will allow human controllers to operate the robots despite significant time delays—a constraint designed to mimic the challenge of operating through the walls of a crumbling nuclear plant or from a far-flung distance. The strategy adopted by Teller's team involves having the human operator break each high-level mission into a series of smaller tasks, and guide the robot through a performance of each task. "Existing teleoperation systems impose too much cognitive load on the operator. One major aspect of the DARPA challenge is finding a way of commanding these robots that reduces that burden," Teller said.
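The supervisory pattern Teller describes can be sketched in a few lines. Everything here—the class and subtask names, the mission steps—is hypothetical illustration, not the team's actual software: the point is only that the operator queues small, self-contained subtasks, and the robot executes each one autonomously, so round-trip latency is paid once per subtask rather than once per joint command.

```python
from dataclasses import dataclass, field
from queue import Queue

@dataclass
class Subtask:
    name: str
    params: dict = field(default_factory=dict)

class DelayTolerantController:
    """Hypothetical sketch of delay-tolerant supervisory control:
    the operator plans at the subtask level; the robot side runs
    each subtask without waiting on the operator link."""
    def __init__(self):
        self.pending = Queue()
        self.log = []

    def operator_plan(self, mission):
        # Operator decomposes a high-level mission into subtasks.
        for step in mission:
            self.pending.put(step)

    def robot_run(self):
        # On the robot, subtasks execute back-to-back, tolerating
        # a laggy or intermittent link back to the operator.
        while not self.pending.empty():
            task = self.pending.get()
            self.log.append(f"executing {task.name}")

# Example mission: approach a valve, grasp it, and turn it.
mission = [Subtask("walk_to", {"target": "valve"}),
           Subtask("grasp", {"object": "valve_wheel"}),
           Subtask("turn", {"angle_deg": 90})]
ctrl = DelayTolerantController()
ctrl.operator_plan(mission)
ctrl.robot_run()
```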
Asked what kinds of innovations Atlas could inspire beyond emergency work, he said humanoid robots could perhaps one day find a job in health care. “I know this robot looks big, and I know it weighs 300 pounds, but the number-one use for machines of this type is going to be in home care and health care,” he said.