Month: October, 2017

Astronomers May Have Found The First Exomoon

When the first exoplanet—or planet orbiting another star—was discovered in 1992, it was a very big deal.

Today, we’ve discovered thousands of exoplanets and it takes a particularly noteworthy one to grab our attention.

We’ve spotted big exoplanets, small exoplanets, and everything in between.

Now scientists are moving on to the next big thing: exomoons.

Researchers examining archival data from the Kepler Space Telescope have spotted what they believe is the first moon ever found beyond our solar system, and they’re planning to use the Hubble Space Telescope to confirm it.

As you might have guessed, the exomoon is an enormous one. The planet in question is Jupiter-sized, and the moon, if it indeed exists, is around the size of Neptune.

The Kepler telescope observed the planet and its moon passing in front of their star, which caused the star’s brightness to dip slightly.
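
To get a feel for how small these dips are, here is a back-of-the-envelope sketch in Python. The fractional dip is roughly the square of the ratio of the transiting body’s radius to the star’s radius; the radii below are round textbook values for a Sun-like star, Jupiter, and Neptune, not the study’s actual measurements.

```python
# Back-of-the-envelope transit depths: the fractional dip in starlight
# is roughly (radius of transiting body / radius of star) squared.
# Radii are round textbook values, not the study's measurements.

SUN_RADIUS_KM = 696_000      # assuming a roughly Sun-like host star
JUPITER_RADIUS_KM = 69_911   # stand-in for the Jupiter-sized planet
NEPTUNE_RADIUS_KM = 24_622   # stand-in for the Neptune-sized moon

def transit_depth(body_radius_km, star_radius_km=SUN_RADIUS_KM):
    """Fraction of starlight blocked as the body crosses the star."""
    return (body_radius_km / star_radius_km) ** 2

print(f"Planet's dip: {transit_depth(JUPITER_RADIUS_KM):.3%}")  # ~1.0%
print(f"Moon's dip:   {transit_depth(NEPTUNE_RADIUS_KM):.3%}")  # ~0.13%
```

The moon’s dip would be only about a tenth of the planet’s, which helps explain why Kepler’s data can hint at a moon without confirming it.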

This exoplanet-exomoon pair is a strange one, and looks nothing like anything in our own solar system. The researchers believe that the larger, Jupiter-sized planet captured the smaller one and turned it from planet into moon.

Unfortunately, the observations from Kepler aren’t clear enough for the scientists to say definitively that the moon exists. That’s why they need to use Hubble to take a second look.

If Hubble confirms the moon’s existence, it will be the first exomoon ever found. With the many highly sensitive telescopes scheduled to be completed in the next few years, more exomoon discoveries are almost certain.

We’ll probably find a few really big moons over the next few years, and as our telescopes get better we might start finding moons that look like our own.

Pretty soon, exomoons will be old news too, so enjoy this discovery while it’s still fresh.

Please like, share and tweet this article.

Pass it on: New Scientist

The Evolution Of Forests And Trees In Devonian Period

Vascular plants emerged more than 400 million years ago, during the Silurian geologic period, and started Earth’s forest-building process.

Although not yet a “true” tree, this new member of the terrestrial plant kingdom developed tree-like parts, grew into the largest plant species of its day, and is considered the first proto-tree, the evolutionary link to true trees.

Vascular plants developed the ability to grow large and tall, their massive weight supported by an internal vascular plumbing system.

The First Trees

The earth’s first real tree continued to develop during the Devonian period, and scientists think that tree was probably the now-extinct Archaeopteris.

This tree species, followed later by other tree types, became the definitive species of forests in the late Devonian period.

As mentioned, they were the first plants to overcome the biomechanical problems of supporting additional weight while delivering water and nutrients to fronds (leaves) and roots.

By the Carboniferous period, around 360 million years ago, trees were prolific and a major part of the plant community, mostly located in coal-producing swamps.

Trees were developing the parts that we immediately recognize today. Of all the trees that existed during the Devonian and Carboniferous, only the tree fern can still be found, now living in Australasian tropical rainforests.

If you happen to see a fern with a trunk leading to a crown, you have seen a tree fern.

During that same geologic period, now-extinct tree forms of clubmoss and giant horsetail were also growing.

Our Present Evolutionary Forest

Few dinosaurs ever made a meal of hardwood leaves, because the dinosaurs were rapidly disappearing before and during the beginning of the new “age of hardwoods” about 95 million years ago.

Magnolias, laurels, maples, sycamores and oaks were the first species to proliferate and dominate the world.

Hardwoods became the predominant tree species from mid-latitudes through the tropics while conifers were often isolated to the high-latitudes or the lower latitudes bordering the tropics.

Not a lot of change has happened to trees in terms of their evolutionary record since the palms made their first appearance 70 million years ago.

Several fascinating tree species simply defy the extinction process and show no indication that they will change in another dozen million years.

Ginkgo is one such survivor, but there are others: dawn redwood, Wollemi pine, and monkey puzzle tree.

Please like, share and tweet this article.

Pass it on: Popular Science

Too Much Big Data May Not Be Enough

In the quest to mine and analyze meaningful, reliable, and useful data from a burgeoning array of electronic and online sources, healthcare organizations can let the big picture overshadow the many underlying, valuable components that contribute to improving patient care.

The clinical data and diagnostic images in radiology information systems (RIS) and picture archiving and communication systems (PACS) are two examples.

For clinical imaging and radiology executives, these visual clues and cues are necessary for effective, efficient decision support.

Certainly a growing number of manufacturers and information technology companies recognize this – even if many healthcare providers have not yet reached the point where they can tackle the necessary underlying infrastructure beyond the planning and strategic stages.

As a result, they’re offering providers a light at the end of the tunnel.

“The latest generation of reporting capabilities can help improve the utilization of imaging data for diagnostic decision making,” says Cristine Kao, Global Marketing Director for Healthcare Information Solutions, Carestream.

An NIH study concluded that oncologists and radiologists prefer quantitative reports that include measurements as well as hyperlinks to annotated images with tumor measurements, for example.

A report by Emory and ACR shows eight out of 10 physicians will send more referrals to facilities that can offer interactive multimedia reporting – citing the ability to better collaborate with radiologists.

Connecting all of the technology and tools remains important, too, for a visually rich information view, according to Todd Winey, Senior Advisor, Strategic Markets, InterSystems.

For the clinical and diagnostic data to play a more valuable role in patient care improvement, these trends need to be accelerated, Winey insists, which isn’t without challenges.

“VNAs [vendor neutral archives] remain only marginally deployed,” he laments. “Many of the advances in radiology information systems and PACS have been focused on productivity improvements for radiologists and are not yet fully supporting advanced interoperability.”

Kao agrees with the foundational importance of a VNA but adds that it shouldn’t stop there.

Depending on an organization’s capabilities, imaging data must be accessible to more than just one clinical segment to be included as part of the decision support process, according to Winey.

Kao says she fully anticipates future reporting functions may include “more intuitive searching capabilities that will link pertinent patient information for a specific condition or disease, even if previous reports did not include the specific word involved in the search command.”

“The goal for enhancing the entire diagnostic process is to provide clinically relevant information when and where it’s needed.”

“New advanced reporting techniques provide information that can lead to improved decision support and diagnostic outcomes.”
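
As a toy illustration of the concept-aware searching Kao anticipates, here is a sketch in Python; it is our invention, not any vendor’s actual product, and the synonym table is made up for the example. The idea is simply to expand a query with related terms before matching report text.

```python
# Toy concept search: expand the query with related clinical terms so a
# report can match even when it never used the exact search word.
# The synonym table is invented for illustration.

SYNONYMS = {
    "heart attack": {"myocardial infarction"},
}

reports = [
    "Findings consistent with prior myocardial infarction.",
    "No acute abnormality identified.",
]

def concept_search(query, docs):
    terms = {query.lower()} | SYNONYMS.get(query.lower(), set())
    return [doc for doc in docs if any(t in doc.lower() for t in terms)]

print(concept_search("heart attack", reports))
# -> ['Findings consistent with prior myocardial infarction.']
```

A production system would rely on clinical ontologies and statistical language models rather than a hand-written synonym table, but the principle of linking a search to pertinent records it doesn’t literally match is the same.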

Please like, share and tweet this article.

Pass it on: New Scientist

Each Time You Recall An Event, Your Brain Distorts It

Remember the telephone game where people take turns whispering a message into the ear of the next person in line?

By the time the last person speaks it out loud, the message has radically changed. It’s been altered with each retelling.

Turns out your memory is a lot like the telephone game, according to a new Northwestern Medicine study.

Every time you remember an event from the past, your brain networks change in ways that can alter the later recall of the event.

Thus, the next time you remember it, you might recall not the original event but what you remembered the previous time. The Northwestern study is the first to show this.

“A memory is not simply an image produced by time traveling back to the original event; it can be an image that is somewhat distorted because of the prior times you remembered it,” said Donna Bridge, a postdoctoral fellow at Northwestern University Feinberg School of Medicine and lead author of the paper on the study, recently published in the Journal of Neuroscience.

“Your memory of an event can grow less precise, even to the point of being totally false, with each retrieval.”

“Maybe a witness remembers something fairly accurately the first time because his memories aren’t that distorted,” she said. “After that it keeps going downhill.”

The published study reports on Bridge’s work with 12 participants, but she has run several variations of the study with a total of 70 people.

“Every single person has shown this effect,” she said. “It’s really huge.”

The reason for the distortion, Bridge said, is the fact that human memories are always adapting.

“Memories aren’t static,” she noted.

If you remember something in the context of a new environment and time, or if you are even in a different mood, your memories might integrate the new information.

For the study, people were asked to recall the location of objects on a grid in three sessions over three consecutive days.

On the first day during a two-hour session, participants learned a series of 180 unique object-location associations on a computer screen.

The next day in session two, participants were given a recall test in which they viewed a subset of those objects individually in a central location on the grid and were asked to move them to their original location.

Then the following day in session three, participants returned for a final recall test.

The results showed improved recall accuracy on the final test for objects that were tested on day two compared to those not tested on day two.

However, people never recalled exactly the right location.

Most importantly, in session three they tended to place the object closer to the incorrect location they recalled during day two than to the correct location from day one.

“Our findings show that incorrect recollection of the object’s location on day two influenced how people remembered the object’s location on day three,” Bridge explained.

Retrieving the memory didn’t simply reinforce the original association. Rather, it altered memory storage to reinforce the location that was recalled at session two.
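
As a loose illustration of that idea, here is a toy simulation in Python. It is our sketch, not the study’s actual analysis: each recall is modeled as a noisy copy of the previous recall rather than of the original event.

```python
import random

random.seed(0)

def recall(previous_estimate, noise=1.0):
    # Toy model: each recall is a noisy copy of the LAST recall,
    # not of the original event (units are arbitrary grid distances).
    return previous_estimate + random.gauss(0, noise)

trials = 10_000
closer_to_day2 = 0
for _ in range(trials):
    original = 0.0            # "day one": the true location
    day2 = recall(original)   # first retrieval drifts a little
    day3 = recall(day2)       # second retrieval drifts from day two
    if abs(day3 - day2) < abs(day3 - original):
        closer_to_day2 += 1

print(f"Day-3 recall landed closer to the day-2 recall than to the "
      f"truth in {closer_to_day2 / trials:.0%} of trials")
```

In this toy model the day-three estimate lands nearer the day-two estimate than the truth more often than not, the same qualitative pattern the study reported.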

The results revealed a particular electrical signal when people were recalling an object location during session two.

This signal was greater when the next day the object was placed close to that location recalled during session two.

When the electrical signal was weaker, recall of the object location was likely to be less distorted.

The research was supported by National Science Foundation grant BCS1025697 and National Institute of Neurological Disorders and Stroke of the National Institutes of Health grant T32 NS047987.

Please like, share and tweet this article.

Pass it on: Popular Science

Adobe And Stanford Just Taught AI To Edit Better Videos Than You

Just one minute of video typically takes several hours of editing — but Stanford and Adobe researchers have developed an artificial intelligence (AI) program that partially automates the editing process, while still giving the user creative control over the final result.

The program starts by organizing all of the footage, which is often from multiple takes and camera angles. Those clips are matched to the script, so it’s easy to find several video options for each line of dialogue.

The program then works to recognize exactly what is inside those clips. Using facial recognition alongside emotion recognition and other computational imaging techniques, the program determines what is in each frame.

For example, the program flags whether the shot is a wide-angle or a close-up and which characters the shot includes.

With everything organized, the video editor then instructs the program in just how the videos should be edited using different styles and techniques the researchers call idioms.

For example, a common style is to show the face of the character during their lines. If the editor wants that to happen, he or she just drags that idiom over.

The idioms can also be negative. For example, the idiom “avoid jump cuts” can be added to actually avoid them, or applied negatively to intentionally add jump cuts whenever possible.

The editor can drag over multiple idioms to instruct the program on an editing style.

In a video demonstrating the technology, the researchers created a cinematic edit by using idioms that tell the software to keep the speaker visible while talking, to start with a wide-angle shot, to mix with close-ups and to avoid jump cuts.

To edit the video in a completely different, fast-paced style, the researchers instead dragged over idioms for including jump cuts, using fast performance, and keeping the zoom consistent.

Editing styles can be saved to recall later, and with the idioms in place, a stylized video edit is generated with a click. Alternative clips are arranged next to the computer’s edit so editors can quickly adjust if something’s not quite right.
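
To make the idiom idea concrete, here is a minimal, hypothetical sketch in Python. The clip metadata and scoring rules are invented for illustration; the actual Stanford and Adobe system, which matches takes to the script and analyzes frames automatically, is far more sophisticated.

```python
# Hypothetical sketch of idiom-based clip selection for one line of
# dialogue. All field names and scoring rules are invented.

clips = [
    {"take": 1, "shot": "wide",     "speaker_visible": True},
    {"take": 2, "shot": "close-up", "speaker_visible": True},
    {"take": 3, "shot": "close-up", "speaker_visible": False},
]

# Each "idiom" is a scoring function the editor can drag in or out.
def speaker_visible(clip):
    return 1.0 if clip["speaker_visible"] else 0.0

def prefer_close_ups(clip):
    return 1.0 if clip["shot"] == "close-up" else 0.0

active_idioms = [speaker_visible, prefer_close_ups]

def score(clip):
    # The editor's chosen idioms jointly decide which take wins.
    return sum(idiom(clip) for idiom in active_idioms)

best = max(clips, key=score)
print(f"Selected take {best['take']} ({best['shot']})")  # take 2
```

Each idiom the editor drags over adds another scoring function, and the highest-scoring take wins for that line of dialogue.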

The program speeds up video editing using artificial intelligence, but also allows actual humans to set the creative parameters in order to achieve a certain style.

The researchers did acknowledge a few shortcomings of the program. The system is designed for dialogue-based videos, and further work would need to be done for the program to work with other types of shots, such as action shots.

The program also couldn’t prevent continuity errors, where the actor’s hands or a prop is in a different location in the next clip.

The study, conducted by Stanford University and Adobe Research, is included in the July issue of the journal ACM Transactions on Graphics.

Please like, share and tweet this article.

Pass it on: Popular Science

Take A Look At Our Cosmic Neighborhood

Because a heliosphere or astrosphere provides protective shielding against dangerous Galactic Cosmic Rays, these structures are important for the planets that orbit the respective stars.

Only over the last 15 years have we been able to detect the first astrospheres and planets around other stars (exoplanets).

[Graphic: the most immediate environment around the Sun, our cosmic neighborhood.]

The locations of known astrospheres and exoplanets are indicated, while we anticipate that many more are present and just awaiting discovery.

The nearest star system, Alpha Centauri, has an astrosphere, and we know of at least two cases where we have detected both an astrosphere and exoplanets.

These systems are truly analogous to our system in which the heliosphere shields a diverse planetary system.

Please like, share and tweet this article.

Pass it on: Popular Science

How Do Dogs Smell Fear?

The portion of the canine brain that is devoted to sorting out smells is up to 40 times the size of the same region in the human brain.

As for fear, being frightened can make humans sweat, an odor that a dog can easily identify.

Then there are adrenaline and associated hormones, which pump through our bodies when we are even a little nervous.

Just because we don’t know the “adrenaline scent” ourselves doesn’t mean that a dog won’t recognize it.

However, let me nitpick and say that being able to smell our sweat doesn’t mean a dog can literally smell the emotion of fear itself.

Most likely, the canine’s outstanding ability to read our body language plays a far bigger role in determining our level of fear.

The dog feels our fear and senses we are scared just by watching us.

Dogs are very smart at figuring out our emotions. Anger, feeling threatened or being nervous cannot be hidden from a dog.

In fact, he may pinpoint your fears before you even realize them. People who are afraid of a dog often stare directly at it, probably in hopes of watching its every move.

But the dog may take the stare as a warning that he is going to be confronted, therefore becoming aggressive.

How do dogs smell fear? And can dogs even smell fear at all, in the strictest sense of the phrase?

The answer to these questions may never be known with certainty. But even though you’re an adult now, it’s still wise to follow your parents’ teachings and not show your anxiety when approaching an unknown dog.

Whether your uneasiness can be smelled or sensed, always try to keep your cool.

Please like, share and tweet this article.

Pass it on: New Scientist

More Than 90 Percent Of All Organisms That Have Ever Lived On Earth Are Extinct

As new species evolve to fit ever changing ecological niches, older species fade away. But the rate of extinction is far from constant.

At least a handful of times in the last 500 million years, 50 to more than 90 percent of all species on Earth have disappeared in a geological blink of the eye.

Though these mass extinctions are deadly events, they open up the planet for new life-forms to emerge.

Dinosaurs appeared after one of the biggest mass extinction events on Earth, the Permian-Triassic extinction about 250 million years ago.

The most studied mass extinction, between the Cretaceous and Paleogene periods about 65 million years ago, killed off the dinosaurs and made room for mammals to rapidly diversify and evolve.

Scientists have narrowed down several of the most likely causes of mass extinction. Flood basalt events (volcano eruptions), asteroid collisions, and sea level falls are the most likely causes of mass extinctions, though several other known events may also contribute.

These include global warming, global cooling, methane eruptions, and anoxic events, when the earth’s oceans lose their oxygen.

Both volcano eruptions and asteroid collisions would eject tons of debris into the atmosphere, darkening the skies for at least months on end.

Starved of sunlight, plants and plant-eating creatures would quickly die.

Space rocks and volcanoes could also unleash toxic and heat-trapping gases that—once the dust settled—enable runaway global warming.

An extraterrestrial impact is most closely linked to the Cretaceous-Paleogene extinction event, one of the five largest in the history of the world, and the most recent.

A huge crater off Mexico’s Yucatán Peninsula is dated to about 65 million years ago, coinciding with the extinction.

Global warming fueled by volcanic eruptions at the Deccan Traps in India may also have aggravated the event. Dinosaurs, as well as about half of all species on the planet, went extinct.

Massive floods of lava erupting from the central Atlantic magmatic province about 200 million years ago may explain the Triassic-Jurassic extinction.

About 20 percent of all marine families went extinct, as well as most mammal-like creatures, many large amphibians, and all non-dinosaur archosaurs.

An asteroid impact is another possible cause of the extinction, though a telltale crater has yet to be found.

The Permian-Triassic extinction event was the deadliest: More than 90 percent of all species perished. Many scientists believe an asteroid or comet triggered the massive die-off, but, again, no crater has been found.

Another strong contender is flood volcanism from the Siberian Traps, a large igneous province in Russia. Impact-triggered volcanism is yet another possibility.

Starting about 360 million years ago, a drawn-out event eliminated about 70 percent of all marine species from Earth over a span of perhaps 20 million years.

Pulses, each lasting 100,000 to 300,000 years, are noted within the larger late Devonian extinction.

Insects, plants, and the first proto-amphibians were on land by then, though the extinctions dealt landlubbers a severe setback.

Today, many scientists think the evidence indicates a sixth mass extinction is under way. The blame for this one, perhaps the fastest in Earth’s history, falls firmly on the shoulders of humans.

By the year 2100, human activities such as pollution, land clearing, and overfishing may drive more than half of the world’s marine and land species to extinction.

Please like, share and tweet this article.

Pass it on: New Scientist

Here’s What Happens Inside You When A Mosquito Bites

The video above shows a brown needle that looks like it’s trying to bury itself among some ice-cubes. It is, in fact, the snout of a mosquito, searching for blood vessels in the flesh of a mouse.

This footage was captured by Valerie Choumet and colleagues from the Pasteur Institute in Paris, who watched through a microscope as malarial mosquitoes bit a flap of skin on an anaesthetised mouse.

The resulting videos provide an unprecedented look at exactly what happens when a mosquito bites a host and drinks its blood.

For a start, look how flexible the mouthparts are! The tip can almost bend at right angles, and probes between the mouse’s cells in a truly sinister way.

This allows the mosquito to search a large area without having to withdraw its mouthparts and start over.

From afar, a mosquito’s snout might look like a single tube, but it’s actually a complicated set of tools, encased in a sheath called the labium.

You can’t see the labium at all in the videos; it buckles when the insect bites, allowing the six mouthparts within to slide into the mouse’s skin.

Four of these—a pair of mandibles and a pair of maxillae—are thin filaments that help to pierce the skin. You can see them flaring out to the side in the video.

The maxillae end in toothed blades, which grip flesh as they plunge into the host. The mosquito can then push against these to drive the other mouthparts deeper.

The large central needle in the video is actually two parallel tubes—the hypopharynx, which sends saliva down, and the labrum, which pumps blood back up.

When a mosquito finds a host, these mouthparts probe around for a blood vessel. They often take several attempts, and a couple of minutes, to find one.

And unexpectedly, around half of the ones that Choumet tested failed to do so. While they could all bite, it seemed that many suck at sucking.

The video below shows what happens when a mosquito finally finds and pierces a blood vessel.

On average, they drink for around four minutes, and at higher magnifications Choumet could actually see red blood cells rushing up their mouthparts.

They suck so hard that the blood vessels start to collapse. Some of them rupture, spilling blood into the surrounding spaces.

When that happens, the mosquito sometimes goes in for seconds, drinking directly from the blood pool that it had created.

When the mosquitoes were infected with the Plasmodium parasites that cause malaria, they spent more time probing around for blood vessels.

It’s not clear why—the parasites could be controlling the insect’s nervous system or changing the activity of genes in its mouthparts.

Either way, the infected mosquitoes give up much less readily in their search for blood, which presumably increases the odds that the parasites will enter a new host.

Many hours after a bite, Choumet’s team found Plasmodium in the rodents’ skin, huddled in areas that were also rife with the mosquito’s saliva.

The team also tested “immunised” mice, which were loaded with antibodies that recognise a mosquito’s saliva.

“Some people, especially in Africa and Asia, are bitten several times every day, so we wanted to know if mosquitoes behaved differently when they bit animals that were immunised against their saliva,” says Choumet.

Beyond the stunning videos, these discoveries are unlikely to lead to new ways of preventing or treating malaria by themselves.

However, they do tell us a lot more about the event that kicks off every single malaria case—a mosquito bite. It’s a resource that other researchers will undoubtedly use.

“I have submitted a grant application to investigate aspects of the interactions between mosquitoes, hosts and parasites,” says James Logan of the London School of Hygiene & Tropical Medicine.

“The techniques and discoveries from this paper are very exciting to me, and will be of value to future activities of my own research group.”

Please like, share and tweet this article.

Pass it on: Popular Science

GPUs: The Unsung Heroes That Are Accelerating The Development Of AI

Today, a self-driving car powered by AI can meander through a country road at night and find its way to its destination. An AI-powered robot can learn motor skills through trial and error.

And artificial neural networks can analyse images to identify the early signs of disease more accurately than any human. Truly, we are living in an extraordinary time.

Back in 1995, the convergence of low-cost microprocessors (CPUs), a standard operating system (Windows 95), and a new portal to a world of information (Yahoo!) sparked the PC-Internet era.

It brought the power of computing to about a billion people and realized Microsoft’s vision to put ‘a computer on every desk and in every home.’

A decade later, the iPhone put an ‘Internet communications’ device in our pockets. Coupled with the launch of Amazon’s AWS, the Mobile-Cloud era was born.

A world of apps entered our daily lives and some 3 billion people now enjoy the freedom that mobile computing can afford.

The AI computing era that we are living in today is driven by a new computing model, GPU-accelerated deep learning.

The graphics processing unit or GPU was originally invented to drive 3D graphics in video games.

But, because of its ability to handle large amounts of data at the same time, known as parallel processing, this tiny piece of silicon punches well above its weight.

Several years ago, researchers discovered that the GPU’s parallel processing power makes it perfectly suited to crunching the huge amounts of data required to train the artificial neural networks on which deep learning is based.
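
To see that suitability for yourself on a CUDA-capable machine, here is a quick timing sketch using the PyTorch library; it is our example, not something from the researchers. The same large matrix multiplication moves from CPU to GPU with a one-line change.

```python
import time
import torch  # assumes the PyTorch library is installed

# Multiply two large matrices on the CPU, then (if available) on the GPU.
# The GPU's advantage comes from running thousands of threads in parallel.
n = 4096
a = torch.rand(n, n)
b = torch.rand(n, n)

start = time.perf_counter()
_ = a @ b
print(f"CPU: {time.perf_counter() - start:.3f} s")

if torch.cuda.is_available():
    a_gpu, b_gpu = a.to("cuda"), b.to("cuda")
    torch.cuda.synchronize()   # let the transfers finish before timing
    start = time.perf_counter()
    _ = a_gpu @ b_gpu
    torch.cuda.synchronize()   # GPU kernels launch asynchronously
    print(f"GPU: {time.perf_counter() - start:.3f} s")
```

On typical hardware the GPU finishes this multiplication many times faster than the CPU, and that gap is what deep-learning training exploits.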

The ‘big bang’ of AI was ignited and GPU-accelerated deep learning was born.

Since then, the world has woken up to the power of GPU-accelerated deep learning. Baidu, Google, Facebook and Microsoft were the first companies to adopt it for pattern recognition.

Now, AI researchers across a wide variety of fields have turned to GPU-accelerated deep learning to advance their work.

GPU-accelerated deep learning is being applied to solve challenges in every industry around the world. Self-driving cars will transform the $10 trillion transportation industry.

In healthcare, doctors will use AI to detect disease at the earliest possible moment, to understand the human genome and tackle cancer, or to learn from the massive volume of medical data to recommend the best treatments.

And AI will usher in the 4th industrial revolution — intelligent robotics will drive a new wave of productivity improvements and enable mass consumer customization.

AI will touch everyone.

Please like, share and tweet this article.

Pass it on: Popular Science