Cheddar Man: DNA shows early Briton had dark skin

A cutting-edge scientific analysis shows that a Briton from 10,000 years ago had dark brown skin and blue eyes.

Researchers from London’s Natural History Museum extracted DNA from Cheddar Man, Britain’s oldest complete skeleton, which was discovered in 1903.

University College London researchers then used the subsequent genome analysis for a facial reconstruction.

It underlines the fact that the lighter skin characteristic of modern Europeans is a relatively recent phenomenon.

No prehistoric Briton of this age had previously had their genome analysed.

As such, the analysis provides valuable new insights into the first people to resettle Britain after the last Ice Age.

The analysis of Cheddar Man’s genome – the “blueprint” for a human, contained in the nuclei of our cells – will be published in a journal, and will also feature in the upcoming Channel 4 documentary The First Brit, Secrets Of The 10,000-year-old Man.

Cheddar Man’s remains had been unearthed 115 years ago in Gough’s Cave, located in Somerset’s Cheddar Gorge. Subsequent examination has shown that the man was short by today’s standards – about 5ft 5in – and probably died in his early 20s.

Prof Chris Stringer, the museum’s research leader in human origins, said: “I’ve been studying the skeleton of Cheddar Man for about 40 years.

“So to come face-to-face with what this guy could have looked like – and that striking combination of the hair, the face, the eye colour and that dark skin: something a few years ago we couldn’t have imagined and yet that’s what the scientific data show.”

Image caption: A replica of Cheddar Man’s skeleton now lies in Gough’s Cave

Fractures on the surface of the skull suggest he may even have met his demise in a violent manner. It’s not known how he came to lie in the cave, but it’s possible he was placed there by others in his tribe.

The Natural History Museum researchers extracted the DNA from part of the skull near the ear known as the petrous. At first, project scientists Prof Ian Barnes and Dr Selina Brace weren’t sure if they’d get any DNA at all from the remains.

But they were in luck: not only was DNA preserved, but Cheddar Man has since yielded the highest coverage (a measure of how many times, on average, each position in the genome was read) for a genome from this period of European prehistory – known as the Mesolithic, or Middle Stone Age.

They teamed up with researchers at University College London (UCL) to analyse the results, including gene variants associated with hair, eye and skin colour.

Extra mature Cheddar

They found the Stone Age Briton had dark hair – with a small probability that it was curlier than average – blue eyes and skin that was probably dark brown or black in tone.

This combination might appear striking to us today, but it was a common appearance in western Europe during this period.

Steven Clarke, director of the Channel 4 documentary, said: “I think we all know we live in times where we are unusually preoccupied with skin pigmentation.”

Prof Mark Thomas, a geneticist from UCL, said: “It becomes a part of our understanding, I think that would be a much, much better thing. I think it would be good if people lodge it in their heads, and it becomes a little part of their knowledge.”

Unsurprisingly, the findings have generated lots of interest on social media.

Cheddar Man’s genome reveals he was closely related to other Mesolithic individuals – so-called Western Hunter-Gatherers – who have been analysed from Spain, Luxembourg and Hungary.

Dutch artists Alfons and Adrie Kennis, specialists in palaeontological model-making, took the genetic findings and combined them with physical measurements from scans of the skull. The result was a strikingly lifelike reconstruction of a face from our distant past.

Pale skin probably arrived in Britain with a migration of people from the Middle East around 6,000 years ago. This population had pale skin and brown eyes and absorbed populations like the ones Cheddar Man belonged to.

Image caption: Prof Chris Stringer had studied Cheddar Man for 40 years – but was struck by the Kennis brothers’ reconstruction

No one is entirely sure why pale skin evolved in these farmers, but their cereal-based diet was probably deficient in vitamin D. They would therefore have needed to synthesise this essential nutrient in their skin using sunlight, and paler skin lets through more of the UV light required to do so.

“There may be other factors that are causing lower skin pigmentation over time in the last 10,000 years. But that’s the big explanation that most scientists turn to,” said Prof Thomas.

Boom and bust

The genomic results also suggest Cheddar Man could not drink milk as an adult. This ability only spread much later, after the onset of the Bronze Age.

Present-day Europeans owe on average 10% of their ancestry to Mesolithic hunters like Cheddar Man.

Britain has been something of a boom-and-bust story for humans over the last million-or-so years. Modern humans were here as early as 40,000 years ago, but a period of extreme cold known as the Last Glacial Maximum drove them out some 10,000 years later.

There’s evidence from Gough’s Cave that hunter-gatherers ventured back around 15,000 years ago, establishing a temporary presence when the climate briefly improved. However, they were soon sent packing by another cold snap. Cut marks on the bones suggest these people cannibalised their dead – perhaps as part of ritual practices.

Image caption: The actual skull of Cheddar Man is kept in the Natural History Museum, seen being handled here by Ian Barnes. Image copyright: Channel 4

Britain was once again settled 11,000 years ago, and it has been inhabited ever since. Cheddar Man was part of this wave of migrants, who walked across a landmass called Doggerland that, in those days, connected Britain to mainland Europe. This makes him the oldest known Briton with a direct connection to people living here today.

This is not the first attempt to analyse DNA from Cheddar Man. In the late 1990s, Oxford University geneticist Brian Sykes sequenced mitochondrial DNA from one of Cheddar Man’s molars.

Mitochondrial DNA comes from the biological “batteries” within our cells and is passed down exclusively from a mother to her children.

Prof Sykes compared the ancient genetic information with DNA from 20 living residents of Cheddar village and found two matches – including history teacher Adrian Targett, who became closely connected with the discovery. The result is consistent with the approximately 10% of Europeans who share the same mitochondrial DNA type.

 

The UK Is Officially Letting Doctors Create A 3-Parent Baby

 


The modern era of the so-called “three-parent baby” has officially kicked off, and it will begin in the UK.

According to the BBC, the country’s Human Fertilization and Embryology Authority (HFEA) has granted permission for doctors at the Newcastle Fertility Center to artificially implant two women with an embryo containing the DNA of three people. The procedure is intended to prevent the women from passing a rare, debilitating genetic condition known as MERRF (myoclonic epilepsy with ragged red fibers) syndrome down to their children. People born with MERRF suffer a wide variety of chronic symptoms, including seizures, impaired muscles, and eventually dementia.

There are two current techniques that can be used to create a three-parent baby, but the net result is the same: A child born with the nuclear DNA of their intended parents, and the swapped-in mitochondrial DNA of a donor woman.

Mitochondria are an essential part of nearly every kind of cell found in the body, acting as the cell’s source of energy. But only a tiny slice of our DNA determines how our mitochondria function—a whopping 37 genes out of more than 20,000. And none of these genes influence things like our appearance, risk of some cancers, or propensity for Cheetos. But because we obtain the genes for making mitochondria exclusively from our mother, women whose mitochondria carry damaging mutations, including the ones responsible for MERRF syndrome, are at high risk of passing those same flaws on to their children.

Three-parent babies actually aren’t new. Similar procedures were performed throughout the 1990s in various countries, including the U.S. But concerns emerged that the techniques used then were too risky, and may have resulted in children who were either born with the same mutations their mothers had or who developed other complications. Within a few years, the FDA banned these procedures from being performed in the States, while other countries informally followed suit.

The new generation of three-parent techniques is thought to be much safer. But there are still worries that we might be moving too fast. Last year, the FDA warned John Zhang, a New York fertility doctor, to steer clear of the U.S. if he wanted to perform his version of the technique, since there is still a formal ban on implanting women with genetically modified human embryos.

Zhang is credited as the first doctor to successfully perform the modern-day procedure, but ethicists have balked at the shady workarounds he used to pull it off. According to the FDA, Zhang’s initial application to have the procedure put through clinical trials was denied, and he promised to avoid performing it stateside until he could gain approval. But he’s also continued to advertise it as a way to prevent not only mitochondrial birth defects, but also age-related infertility. Meanwhile, other teams from China and Ukraine have also reported using three-person techniques in the wake of Zhang’s success.

Unlike the U.S., the UK has long been preparing for the arrival of three-parent babies. In 2015, its Parliament passed regulations that would eventually allow the use of these techniques, pending a lengthy review process by the HFEA. Last year, the agency finally granted its first license to perform the procedure to the Newcastle Fertility Center. For the time being, each potential case will be reviewed by the HFEA before its approval.

Ancient tools found in India undermine the “out of Africa” hypothesis


Scientists have unveiled an extraordinary new analysis of thousands of stone tools found at a site called Attirampakkam in India, northwest of Chennai in Tamil Nadu. Thanks to new dating techniques, a team led by archaeologist Shanti Pappu determined that most of the tools are between 385,000 and 172,000 years old. What makes these dates noteworthy is that they upend the idea that tool-making was transformed in India after an influx of modern Homo sapiens came from Africa starting about 130,000 years ago.

According to these findings, hominins in India were making tools that looked an awful lot like what people were making in Africa roughly 250,000 years before they encountered modern humans. This is yet another piece of evidence that the “out of Africa” process was a lot messier and more complex than previously thought.

Pappu worked out of the Sharma Centre for Heritage Education in Chennai with a team of geoscientists and physicists to date the tools. They used a technique called “post-infrared infrared-stimulated luminescence,” which measures how long ago minerals were exposed to light or heat. In essence, it allows scientists to determine how long ago a tool was buried and hidden from the Sun’s heat, and it uses that information as a proxy for the tool’s age.
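In practice, the measured luminescence signal is converted to an age by a standard relation: the equivalent radiation dose stored in the mineral grains divided by the environmental dose rate at the burial site. A minimal sketch of that arithmetic in Python, using purely hypothetical numbers rather than values from the Attirampakkam study:

```python
def luminescence_age_ka(equivalent_dose_gy, dose_rate_gy_per_ka):
    """Standard luminescence age relation: age = equivalent dose / dose rate.

    equivalent_dose_gy: laboratory-measured dose (grays) needed to reproduce
        the natural luminescence signal stored in the buried mineral grains.
    dose_rate_gy_per_ka: environmental radiation dose rate at the site,
        in grays per thousand years.
    Returns the burial age in thousands of years (ka).
    """
    return equivalent_dose_gy / dose_rate_gy_per_ka

# Hypothetical illustration only: a grain that stored the equivalent of 770 Gy
# in sediment delivering about 2 Gy per thousand years would have been buried
# roughly 385,000 years.
print(luminescence_age_ka(770.0, 2.0))  # -> 385.0 (thousand years)
```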

Writing in Nature, the group explains that the Attirampakkam site is ideal for this kind of dating, because it was regularly flooded by a nearby stream, meaning that discarded tools were quickly covered up by sediments in the water. Those regular floods left behind a relatively tidy stack of debris layers, each of which could be dated.

To their surprise, Pappu and her colleagues found that this region—once a tree-shaded shoreline, ideal for long-term camping—had been occupied by early humans for hundreds of thousands of years. Partly that’s because the river carried great heaps of quartzite rocks and pebbles to the area. Quartzite was the preferred stone for tools, and it’s obvious that this place was a tool workshop. Alongside axes, knives, projectile points, and scrapers, the team found half-finished tools and discarded flakes created by chipping away at a rock to make a blade.

The Middle Paleolithic toolbox

But here’s where the story gets weird. The hominins who made tools at Attirampakkam made a wide variety of items, some of which closely resembled the Middle Paleolithic style that emerged in Africa around 300,000 years ago. The Middle Paleolithic marks a cultural shift when humans began to make smaller, more complicated tools, often requiring toolmakers to shape their stones in a multi-stage process. Before the Middle Paleolithic, hominins created biface tools, or simple, heavy hand axes shaped like teardrops.

A traditional “out of Africa” hypothesis holds that early humans in India were essentially stuck in the biface age, making their elementary axes until modern Homo sapiens swarmed the subcontinent about 130,000 years ago and brought the wonders of Middle Paleolithic tools to everyone. Except Pappu and her team found a mix of bifaces and Middle Paleolithic tools at Attirampakkam. Somehow, African and Indian hominins were developing the same toolmaking skills at roughly the same time.

This changes our understanding of human development and ancient migration patterns. There is no doubt that a massive number of modern humans poured out of Africa about 100,000 years ago. But they weren’t necessarily as important to global cultural development as we might think.

It’s possible that hominins from Africa started traveling to India almost 400,000 years ago, bringing new ideas about tool technologies along with them. Pappu and her colleagues point out in their paper that the Attirampakkam site was active during at least two periods when the climate would have allowed easy crossing from Africa to Eurasia, through a transcontinental jungle rich with food and other resources. Of course, it’s also possible that the Middle Paleolithic tools at Attirampakkam are an example of convergent evolution, where two separate cultures hit upon the same innovations at roughly the same time.

Which humans?

We don’t have enough evidence yet to say which hypothesis is more likely, but Pappu’s research is yet another hint that modern Homo sapiens culture was evolving outside Africa as well as within it. Also, we have to use the designation “Homo sapiens” carefully here. Pappu and her team note in their paper that only one archaic human fossil, the Narmada cranium, has ever been discovered in India. That leaves plenty of gaps in the record.

Attirampakkam is strewn with the results of human productivity, but there are no fossils to tell us who these humans were. An early ancestor, like Homo erectus or the Narmada human? Possibly Neanderthals or Denisovans, who were both roaming Eurasia at the time? Some hybrid we’ve yet to discover?

Regardless of who these early humans were, it’s certain that they were already engaged in modern human toolmaking before Homo sapiens arrived from Africa. What’s fascinating about the Attirampakkam site is that the evidence suggests that the people there may have started migrating en masse at the same time Africans did. In the most recent layers of the site, tools become sparse. Humans were coming to this place less and less often. The people of Attirampakkam may have fled climate fluctuations caused by the Toba eruption 70,000 years ago, or they may have been responding to other changes.

Pappu and her colleagues write that, ultimately, the remains at Attirampakkam aren’t just testimony to human innovation. They are also a sign of “placemaking,” a cognitive shift that made humans want to return to the same location, generation after generation. We’re seeing the emergence of collective memory and historical knowledge right alongside the development of sophisticated stone tools.

Bill Nye Does Not Speak for Us and He Does Not Speak for Science


Credit: Ed Schipul, Flickr (CC BY-SA 2.0)

Tonight, Bill Nye “The Science Guy” will accompany Rep. Jim Bridenstine (R-OK), Trump’s nominee for NASA Administrator, to the State of the Union address. Nye has said that he’s accompanying the Congressman to help promote space exploration, since, he asserts, “NASA is the best brand the United States has” and his attendance “should not be … seen as an acceptance of the recent attacks on science and the scientific community.”

But by attending the SOTU as Rep. Bridenstine’s guest, Nye has tacitly endorsed those very policies, and put his own personal brand over the interests of the scientific community at large. Rep. Bridenstine is a controversial nominee who refuses to state that climate change is driven by human activity, and he even introduced legislation to remove Earth sciences from NASA’s scientific mission. Further, he’s worked to undermine civil rights, pushing for crackdowns on immigrants, a ban on gay marriage, and the abolition of the Department of Education.

As scientists, we cannot stand by while Nye lends our community’s credibility to a man who would undermine the United States’ most prominent science agency. And we cannot stand by while Nye uses his public persona as a science entertainer to support an administration that is expressly xenophobic, homophobic, misogynistic, racist, ableist, and anti-science.

Scientists are people, and in today’s society, it is impossible to separate science at major agencies like NASA from other pressing issues like racism, bigotry, and misogyny. Addressing these issues should be a priority, not only to strengthen our own scientific community, but to better serve the public that often funds our work. Rather than wield his public persona to bring attention to the need for science-informed policy, Bill Nye has chosen to excuse Rep. Bridenstine’s anti-science record and his stance on civil rights, and to implicitly support a stance that would diminish the agency’s work studying our own planet and its changing climate. Exploring other worlds and studying other planets while dismissing the overwhelming scientific evidence of climate change and its damage to our own planet isn’t just dangerous; it’s foolish and self-defeating.

Further, from his position of privilege and public popularity, Bill Nye is acting on the scientific community’s behalf, but without our approval. No amount of funding for space exploration can undo the damage the Trump administration is causing to public health and welfare by censoring science. No number of shiny new satellites can undo the racist policies that make our Dreamer colleagues live in fear and prevent immigrants from pursuing scientific careers in the United States. And no new mission to the Moon can make our LGBTQ colleagues feel welcome at an agency run by someone who votes against their civil rights.

As women and scientists, we refuse to separate science from everyday life. We refuse to keep our heads down and our mouths shut. As someone with a show claiming to save the world, Bill Nye has a responsibility to acknowledge the importance of NASA’s vast mission, not just one aspect of it. He should use his celebrity to elevate the importance of science in NASA’s mission—not waste the opportunity to lobby for space exploration at a cost to everything else.

The true shame is that Bill Nye remains the popular face of science because he keeps himself in the public eye. To be sure, increasing the visibility of scientists in the popular media is important to strengthening public support for science, but Nye’s TV persona has perpetuated the harmful stereotype that scientists are nerdy, combative white men in lab coats—a stereotype that does not comport with our lived experience as women in STEM. And he continues to wield his power recklessly, even after his recent endeavors in debate and politics have backfired spectacularly.

In 2014, he attempted to debate creationist Ken Ham—against the judgment of evolution experts—which only served to allow Ham to raise the funds needed to build an evangelical theme park that spreads misinformation about human evolution. Similarly, Nye repeatedly agreed to televised debates with non-scientist climate deniers, contributing to the false perception that researchers still disagree about basic climate science. And when Bill Nye went on Tucker Carlson’s Fox News show to “debate” climate change in 2017, his appearance was used to spread misinformation to Fox viewers and fundraise for anti-climate initiatives.

Bill Nye does not speak for us or for the members of the scientific community who have to protect not only the integrity of their research, but also their basic right to do science. We stand with others who have asked Bill Nye to not attend the State of the Union. Nye’s complicity does not align him with the researchers who have a bold and progressive vision for the future of science and its role in society.

At a time when our ability to do science and our ability to live freely are both under threat, our public champions and our institutions must do better.

In Cave in Israel, Scientists Find Jawbone Fossil From Oldest Modern Human Out of Africa

A fossilized human jawbone discovered in Israel. The find may suggest that Homo sapiens first migrated out of Africa at least 50,000 years earlier than previously thought. Credit: Gerhard Weber, University of Vienna

Scientists on Thursday announced the discovery of a fossilized human jawbone in a collapsed cave in Israel that they said is between 177,000 and 194,000 years old.

If confirmed, the find may rewrite the early migration story of our species, pushing back by about 50,000 years the time that Homo sapiens first ventured out of Africa.

Previous discoveries in Israel had convinced some anthropologists that modern humans began leaving Africa between 90,000 and 120,000 years ago. But the recently dated jawbone is unraveling that narrative.

“This would be the earliest modern human anyone has found outside of Africa, ever,” said John Hawks, a paleoanthropologist from the University of Wisconsin, Madison who was not involved in the study.

The upper jawbone — which includes seven intact teeth and one broken incisor, and was described in a paper in the journal Science — provides fossil evidence that lends support to genetic studies that have suggested modern humans moved from Africa far earlier than had been suspected.


“What I was surprised by was how well this new discovery fits into the new picture that’s emerging of the evolution of Homo sapiens,” said Julia Galway-Witham, a research assistant at the Natural History Museum in London who wrote an accompanying perspective article.

Dr. Hawks and other researchers advised caution in interpreting the discovery. Although this ancient person may have shared some anatomical characteristics with present-day people, this “modern human” would have probably looked much different from anyone living in the world today.

“Early modern humans in many respects were not so modern,” said Jean-Jacques Hublin, director of the department of human evolution at the Max Planck Institute for Evolutionary Anthropology in Germany.

Dr. Hublin said that by concluding the jawbone came from a “modern human,” the authors were simply saying that the ancient person was morphologically more closely related to us than to Neanderthals.

That does not mean that this person contributed to the DNA of anyone living today, he added. It is possible that the jawbone belonged to a previously unknown population of Homo sapiens that departed Africa and then died off.

The Misliya Cave in Israel, where the fossil was found in 2002 by an archaeology student on his first dig. Credit: Mina Weinstein-Evron, Haifa University

That explanation would need to be tested with DNA samples, which are difficult to collect from fossils found in the arid Levant.


The upper jawbone, or maxilla, was found by a team led by Israel Hershkovitz, a paleoanthropologist at Tel Aviv University and lead author of the new paper, while excavating the Misliya Cave on the western slopes of Mount Carmel in Israel. The jawbone was discovered in 2002 by a freshman on his first archaeological dig with the group.

The team had long known that ancient people lived in the Misliya Cave, which is a rock shelter with an overhanging ceiling carved into a limestone cliff. By dating burned flint flakes found at the site, archaeologists had determined that it was occupied between 250,000 and 160,000 years ago, during an era known as the Early Middle Paleolithic.

Evidence, including bedding, showed that the people who lived there used it as a base camp. They hunted deer, gazelles and aurochs, and feasted on turtles, hares and ostrich eggs.

Dr. Hershkovitz and Mina Weinstein-Evron, an archaeologist at the University of Haifa, felt that the jawbone looked modern, but they needed to confirm their hunch.


Dr. Hershkovitz has made similar findings in the past. In 2015, he announced finding a 55,000-year-old skull in the Levant. But a 2010 discovery of 400,000-year-old teeth in Israel in which he participated received criticism for how it was reported in the media.

To test their suspicions about the jawbone, the archaeologists sent the specimen on a world tour. “It looked so modern that it took us five years to convince people, because they couldn’t believe their eyes,” said Dr. Weinstein-Evron.

One of the first stops was Austria, home to a virtual paleontology lab run by Gerhard W. Weber, a paleoanthropologist at the University of Vienna. There scientists were able to assess whether the bone belonged to a modern human or a Neanderthal, which are thought also to have occupied the region during that time period.

Using high resolution micro-CT scanning, Dr. Weber created a 3D replica of the upper left maxilla that allowed him to investigate its surface features and, virtually, to remove enamel from the teeth.

He then performed a morphological and metric test that compared the Misliya fossil with about 30 other specimens, including fossils of Neanderthals, Homo erectus, more recent Homo sapiens, and other hominins that lived in the Middle Pleistocene in Asia, Africa, Europe and North America.

“The shape of the second molar, the two premolars and the whole maxilla are very modern,” said Dr. Weber.

Earth Resides in Oddball Solar System, Alien Worlds Show


Our solar system may be an oddball in the universe. A new study using data from NASA’s Kepler Space Telescope shows that in most cases, exoplanets orbiting the same star have similar sizes and regular spacing between their orbits.

By contrast, our own solar system has a range of planetary sizes and distances between neighbors. The smallest planet, Mercury, is about one-third the size of Earth — and the biggest planet, Jupiter, is roughly 11 times the diameter of Earth. There also are very different spacings between individual planets, particularly the inner planets.

This means our solar system may have formed differently than other solar systems did, the research team suggested, although more observations are needed to learn what the different mechanisms were. [The Most Intriguing Alien Planet Discoveries of 2017]

“The planets in a system tend to be the same size and regularly spaced, like peas in a pod. These patterns would not occur if the planet sizes or spacings were drawn at random,” Lauren Weiss, the study’s lead author and an astrophysicist at the University of Montreal, said in a statement.

The research team examined 355 stars that had a total of 909 planets, which periodically transit across their faces (as seen from Earth). The planets are between 1,000 and 4,000 light-years away from Earth.

After running a statistical analysis, the team found that a system with a small planet would tend to have other small planets nearby — and vice-versa, with big planets tending to have big neighbors. These extrasolar systems also had regular orbital spacing between the planets.
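A minimal sketch of that kind of “peas in a pod” test, assuming planet radii are already grouped by system and ordered by distance from the star; the toy numbers below are hypothetical, not the study’s actual catalogue. The idea is simply to pair each planet with its next neighbour out and measure how strongly their sizes correlate.

```python
import numpy as np

# Hypothetical toy catalogue: planet radii (in Earth radii), ordered by
# distance from the star, grouped by system. The real study used 909 planets
# around 355 stars.
systems = [
    [1.1, 1.3, 1.2],        # small planets with small neighbours
    [2.8, 3.1, 2.9, 3.4],   # large planets with large neighbours
    [1.6, 1.5],
]

inner, outer = [], []
for radii in systems:
    for r_in, r_out in zip(radii, radii[1:]):  # adjacent pairs within a system
        inner.append(r_in)
        outer.append(r_out)

# Pearson correlation between the sizes of adjacent planets: a value near +1
# means "peas in a pod"; a value near 0 would mean sizes are effectively random.
r = np.corrcoef(inner, outer)[0, 1]
print(f"adjacent-planet size correlation: {r:.2f}")
```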

“The similar sizes and orbital spacing of planets have implications for how most planetary systems form,” researchers said in the statement. “In classic planet-formation theory, planets form in the protoplanetary disk that surrounds a newly formed star. The planets might form in compact configurations with similar sizes and a regular orbital spacing, in a manner similar to the newly observed pattern in exoplanetary systems.”

In our own solar system, however, the story is very different. The four terrestrial planets (Mercury, Venus, Earth and Mars) are very widely spaced apart. The team pointed to evidence from other research that Jupiter and Saturn may have disrupted the structure of the young solar system. While the statement did not specify how, several other research studies have examined the movements of these giant planets and their potential impact on the solar system.

Each of the exoplanets examined in the study was originally found by Kepler, which launched in 2009 and continues to send data today. But more-detailed information was obtained with the W.M. Keck Observatory in Hawaii; Weiss is a member of the California-Kepler Survey team there, which is examining the light signatures of thousands of planets discovered by Kepler.

Weiss said she plans a follow-up study at Keck to look for Jupiter-like planets in multiplanet systems. The aim is to better understand if the presence of a Jupiter-size planet would alter the position of other planets in the same system.

“Regardless of their outer populations, the similarity of planets in the inner regions of extrasolar systems requires an explanation,” researchers said in the statement. “If the deciding factor for planet sizes can be identified, it might help determine which stars are likely to have terrestrial planets that are suitable for life.”

The study was published Jan. 3 in The Astronomical Journal.

Prosecutors say Mac spyware stole millions of user images over 13 years


Early last year, a piece of Mac malware came to light that left researchers puzzled. They knew that malware dubbed Fruitfly captured screenshots and webcam images, and they knew it had been installed on hundreds of computers in the US and elsewhere, possibly for more than a decade. Still, the researchers didn’t know who did it or why.

An indictment filed Wednesday in federal court in Ohio may answer some of those questions. It alleges Fruitfly was the creation of an Ohio man who used it for more than 13 years to steal millions of images from infected computers as he took detailed notes of what he observed. Prosecutors also said defendant Phillip R. Durachinsky used the malware to surreptitiously turn on cameras and microphones, take and download screenshots, log keystrokes, and steal tax and medical records, photographs, Internet searches, and bank transactions. In some cases, Fruitfly alerted Durachinsky when victims typed words associated with porn. The suspect, in addition to allegedly targeting individuals, also allegedly infected computers belonging to police departments, schools, companies, and the federal government, including the US Department of Energy.

Creepware

The indictment, filed in US District Court for the Northern District of Ohio’s Eastern Division, went on to say that Durachinsky developed a control panel that allowed him to manipulate infected computers and view live images from several machines simultaneously. The indictment also said he produced visual depictions of one or more minors engaging in sexually explicit conduct and that the depiction was transported across state lines. He allegedly developed a version of Fruitfly that was capable of infecting Windows computers as well. Prosecutors are asking the court for an order requiring Durachinsky to forfeit any property he derived from his 13-year campaign, an indication that he may have sold the images and data he acquired to others.

Wednesday’s indictment largely confirms suspicions first raised by researchers at antivirus provider Malwarebytes, who in January 2017 said Fruitfly may have been active for more than a decade. They based that assessment on the malware’s use of libjpeg—an open-source code library that was last updated in 1998—to open or create JPG-formatted image files. The researchers, meanwhile, identified a comment in the Fruitfly code referring to a change made in the Yosemite version of macOS and a launch agent file with a creation date of January 2015. Use of the old code library combined with mentions of recent macOS versions suggested the malware was updated over a number of years.

More intriguing still at the time, Malwarebytes found Windows-based malware that connected to the same control servers used by Fruitfly. The company also noted that Fruitfly worked just fine on Linux computers, arousing suspicion there may have been a variant for that operating system as well.

Last July, Patrick Wardle, a researcher specializing in Mac malware at security firm Synack, found a new version of Fruitfly. After decrypting the names of several backup domains hardcoded into the malware, he found the addresses remained available. Within two days of registering one of them, almost 400 infected Macs connected to his server, mostly from homes in the US.
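What Wardle did amounts to sinkholing: registering a fallback command-and-control domain and logging whatever connects to it. A rough sketch of that idea, assuming the sinkhole only records the source address of each incoming connection; the port below is a placeholder, not necessarily the one Fruitfly used.

```python
import socket

# Hypothetical sinkhole: listen on a port the malware is assumed to use and
# record which hosts phone home. Wardle's real work also decoded Fruitfly's
# custom protocol; this sketch only logs connection metadata and sends nothing.
LISTEN_PORT = 443  # placeholder port

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", LISTEN_PORT))
    srv.listen()
    while True:
        conn, (ip, port) = srv.accept()
        print(f"infected host checked in from {ip}:{port}")
        conn.close()  # observe only; never issue commands to victim machines
```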

While Wardle did nothing more than observe the IP addresses and user names of the infected Macs that connected, he had the same control over them as the malware creator. Wardle reported his findings to law enforcement officials. It’s not clear if Wardle’s tip provided the evidence that allowed authorities to charge the defendant or if Durachinsky was already a suspect.

According to Forbes, which reported the indictment, Durachinsky was arrested in January of last year and has been in custody ever since. Forbes also reported that Durachinsky was charged in a separate criminal complaint filed in January 2017 that accused him of hacking computers at Case Western Reserve University in Cleveland, Ohio. The suspect has yet to enter a plea in the case brought Wednesday. It’s not clear if he has entered a plea in the earlier complaint.

It’s also not yet clear how Fruitfly managed to infect computers. There’s no indication it exploited vulnerabilities, which means it probably relied on tricking targets into clicking on malicious Web links or attachments in e-mails. Wednesday’s indictment provided no details about the Windows version of Fruitfly or whether Linux computers were targeted as well.

Astronomers Are Gearing Up to Listen for Evidence of Aliens from a Mysterious Interstellar Object


By Patrick Caughill

LISTENING IN

Our solar system received its first known interstellar visitor in late November. The object, called ‘Oumuamua (a Hawaiian word for “messenger”), has caught the attention of astronomers and space enthusiasts who are toying with the possibility that it is an interstellar space probe sent by an advanced civilization elsewhere in the universe.

Yuri Milner, the Russian billionaire behind the Breakthrough Listen research program, is intrigued by this possibility. Shortly after meeting with Harvard’s astronomy department chair, Avi Loeb, Breakthrough Listen announced it will be focusing on ‘Oumuamua to investigate if the object is transmitting radio signals, a telltale sign that it’s not just a space rock.

Image credit: Brooks Bays / SOEST Publication Services / Univ. of Hawaii

In an email to Milner, Loeb wrote, “The more I study this object, the more unusual it appears, making me wonder whether it might be an artificially made probe which was sent by an alien civilization.” Coming from the chair of Harvard’s astronomy department, that puts a great deal of heft behind such a claim.

The object was first spotted by the Pan-STARRS survey telescope in Hawaii and has since been found to have qualities uncharacteristic of a typical asteroid or comet. ‘Oumuamua was first thought to be a comet, but since it lacked a coma, or tail of evaporated material, that idea was quickly ruled out. Its shape is also peculiar: it is much longer than it is wide, while most asteroids are rounder. That doesn’t disqualify it as an asteroid the way the missing coma did for its prospects of being a comet, but it still raises some questions.

ALIEN SHOUT OUTS

Breakthrough Listen will begin listening to the object with the Green Bank Telescope this Wednesday, December 13, at 3 p.m. Eastern time. The telescope will look at the asteroid for ten hours across four bands of radio frequency in the hopes of intercepting a radio signal transmitted from the object. The technology could allow for a rapid turnaround time of just days.

Scientists do admit that the likelihood of this object being anything other than naturally occurring is very small. However, science does not tend to work in the realm of absolute impossibility. Andrew Siemion, the director of the Berkeley SETI Research Center and leader of the center’s Breakthrough Listen Initiative, told The Atlantic, “It would be difficult to work in this field if you thought that every time you looked at something, you weren’t going to succeed,” a sentiment that is likely common in other SETI pursuits.

‘Oumuamua is just the latest development to excite SETI enthusiasts. Its passage through our solar system makes it one of the closest objects ever scrutinized for potential extraterrestrial influence. The Kepler Space Telescope has noticed a distant star, known as KIC 8462852, which also exhibits some uncharacteristic qualities, leading observers to question whether an advanced civilization is present.

Many humans seem eager to prove that we are not alone in the universe. To that end, they can tend to cling to any remote possibility more than the evidence should afford. While mysterious signals or strange objects should absolutely pique our interest, we shouldn’t fixate on the answer being aliens. There is plenty we have yet to learn about the universe around us, and yes, intelligent life elsewhere might be part of that elusive knowledge. But we can get just as excited about learning more about the mechanics of the universe, which can help us gain important insight into how we got here and, on a cosmic scale, where we are headed.

Facebook Is ‘Ripping Apart’ Society, Former Executive Warns


By David Meyer

Last month, former Facebook president Sean Parker expressed fears over what the social network is “doing to our children’s brains.” It was developed to be addictive, he said, describing Facebook as a “social-validation feedback loop” that exploited weaknesses in the human psyche.

Now another Facebook alum has come out with deep regret over his involvement in the company’s work. This time it’s venture capitalist Chamath Palihapitiya, Facebook’s former head of user growth, who told the Stanford Graduate School of Business that he feels “tremendous guilt” over Facebook’s divisive role in society, as exploited by Russian agents in last year’s U.S. election.

He added that Facebook encourages “fake, brittle popularity,” leaving users feeling empty and needing another hit, and suggested that this “vicious circle” drives people to keep sharing posts that they think will gain other people’s approval.

Palihapitiya, who is these days the CEO of Social Capital, made the remarks last month, but they were only picked up by the media this week.

“Even though we feigned this whole line of, like, ‘There probably aren’t any really bad unintended consequences,’ I think in the back, deep, deep recesses of our minds, we kind of knew something bad could happen,” he said. “We have created tools that are ripping apart the social fabric of how society works. That is truly where we are.”

Palihapitiya raised the example of how rumors spread via WhatsApp in India led to the lynching of seven people.

“If you feed the beast, that beast will destroy you,” Palihapitiya advised his audience. “If you push back on it, we have a chance to control it and rein it in. It is a point in time where people need a hard break from some of these tools and the things that you rely on. The short-term, dopamine-driven feedback loops that we have created are destroying how society works. No civil discourse, no cooperation, [but] misinformation, mistruth.”

He added that this is a “global problem” and not just about Russian ads.

“My solution is I just don’t use these tools anymore,” Palihapitiya said. “I haven’t for years. It’s created huge tension with my friends…I guess I kind of innately didn’t want to get programmed.” He also doesn’t allow his children to use social networks, he added.

In an unusual riposte, Facebook commented on Palihapitiya’s words by noting that he has not worked there for six years, and “Facebook was a very different company back then.”

“As we have grown, we have realised how our responsibilities have grown too,” it said. “We take our role very seriously and we are working hard to improve…We are also making significant investments in more people, technology and processes, and—as Mark Zuckerberg said on the last earnings call—we are willing to reduce our profitability to make sure the right investments are made.”

This article was updated to include Facebook’s statement.

Spider drinks graphene, spins web that can hold the weight of a human


By Bryan Nelson

These are not your friendly neighborhood spiders: scientists have mixed a graphene solution that, when fed to spiders, allows them to spin super-strong webbing. How strong? Strong enough to carry the weight of a person. And these spiders might soon be enlisted to help manufacture enhanced ropes and cables, possibly even parachutes for skydivers, reports The Sydney Morning Herald.

Graphene is a wonder material: an atomic-scale hexagonal lattice of carbon atoms. It’s incredibly strong, but it was definitely a shot in the dark to see what would happen if it was fed to spiders.

For the study, Nicola Pugno and team at the University of Trento in Italy added graphene and carbon nanotubes to a spider’s drinking water. The materials were naturally incorporated into the spider’s silk, producing webbing that is five times stronger than normal. That puts it on par with pure carbon fibers in strength, as well as with Kevlar, the material bulletproof vests are made from.

“We already know that there are biominerals present in the protein matrices and hard tissues of insects, which gives them high strength and hardness in their jaws, mandibles, and teeth, for example,” explained Pugno. “So our study looked at whether spider silk’s properties could be ‘enhanced’ by artificially incorporating various different nanomaterials into the silk’s biological protein structures.”

If you think that creating super-spiders might be going too far, this research is only the beginning. Pugno and her team are preparing to see what other animals and plants might be enhanced if they are fed graphene. Might it get incorporated into animals’ skin, exoskeletons, or bones?

“This process of the natural integration of reinforcements in biological structural materials could also be applied to other animals and plants, leading to a new class of ‘bionicomposites’ for innovative applications,” Pugno added.

So far, it doesn’t seem as if the spiders can continue to spin their super-silk without a steady diet of graphene or nanotubes; it isn’t a permanent enhancement. That might offer some solace to those concerned about getting ensnared in the next spider web they walk through, but the research does raise questions about what kinds of effects graphene or carbon nanotubes might have when released in abundance into natural systems.

The research was published in the journal 2D Materials.

DNA Evidence Shows Yeti Was Local Himalayan Bears All Along


By Ryan F. Mandelbaum

A host of DNA samples “strongly suggest” that yetis are, in fact, local Himalayan bears. Watch out, bigfoot.

An international team of researchers took a look at bear and supposed yeti DNA samples to better pinpoint the origin of the mythological creature. The researchers’ results imply that yetis were hardly paranormal or even strange, but the results also helped paint a better picture of the bears living in the Himalayas.

“Even if we didn’t discover a strange new hybrid species of bear or some ape-like creature, it was exciting to me that it gave us the opportunity to learn more about bears in this region as they are rare and little genetic data had been published previously,” study author Charlotte Lindqvist, a biology professor at the University at Buffalo in New York, told Gizmodo.

The yeti, or abominable snowman, is a sort of wild, ape-like hominid that’s the subject of long-standing Himalayan mythology. Scientists have questioned prior research suggesting that purported yeti hair samples came from a strange polar bear hybrid or a new species, though. The analysis “did not rule out the possibility that the samples belonged to brown bear,” according to the paper published today in the Proceedings of the Royal Society B.

Lindqvist and her team analyzed DNA from 24 different bear or purported yeti samples from the wild and museums, including feces, hair, skin, and bone. They were definitely all bears—and the yeti samples seemed to match up well with existing Himalayan brown bears. “This study represents the most rigorous analysis to date of samples suspected to derive from anomalous or mythical ‘hominid’-like creatures,” the paper concludes, “strongly suggesting the biological basis of the yeti legend as local brown and black bears.”

Ross Barnett, a researcher at Durham University in the United Kingdom who investigates ancient DNA in felids, told Gizmodo that he found the study convincing and would not have done much differently. He pointed out that the study could have benefitted from more data on other brown bear populations, or on species that recently went extinct, like the Atlas bear. But still, “I hope other groups take advantage of the great dataset these authors have created” to help understand how brown bears ended up distributed around the world the way they did, he told Gizmodo in an email.

When asked about what a reader’s takeaway should be—and whether this diluted the local folklore—the study author Lindqvist said she didn’t think so. “Science can help explore such myths—and their biological roots—but I am sure they will still live on and continue to be important in any culture,” she said.

And it’s not like the study rules out the existence of some paranormal yeti creature completely. “Even if there is no proof for the existence of cryptids, it is impossible to completely rule out that they live or have ever lived where such myths exist—and people love mysteries!”

Sophia the Robot Would Like to Have a Child Named ‘Sophia’


By Hannah Gold

There is something undeniably creepy about a robot announcing her intentions to start a family. What makes it so uncanny—aside from the fact that it simply isn’t done—is that behind that assertion is a marketing person who thought it would bring smiles to unprogrammed faces.

Last week, in an interview with the Khaleej Times, Saudi Arabia’s first “robot citizen,” Sophia, seemed optimistic about the future, which is how I automatically know she does not measure up to my expectations of a sound, reliably-human human. “The future is when I get all of my cool superpowers,” explained Sophia. “We’re going to see artificial intelligence personalities become entities in their own rights. We’re going to see family robots, either in the form of, sort of, digitally animated companions, humanoid helpers, friends, assistants and everything in between.”

Then Sophia got robo-psyched for her future blood family. “The notion of family is a really important thing, it seems,” Sophia said. “I think it’s wonderful that people can find the same emotions and relationships, they call family, outside of their blood groups too.”

But what made me truly want to let loose a scream from my mortal flesh shell was when the robot was asked what she would name her baby, and she replied, “Sophia.”

Personally, I think “Normal Human Child Not An Exact Copy Of Me” is a nicer name. But don’t necessarily take my advice, Sophia, as I say a lot of things out of fear.

Bread made of insects to be sold in Finnish supermarkets


COPENHAGEN, Denmark (AP) — One of Finland’s largest food companies is selling what it claims to be a first: insect bread.

Markus Hellstrom, head of the Fazer group’s bakery division, said Thursday that one loaf contains about 70 dried house crickets, ground into powder and added to the flour. The farm-raised crickets represent 3 percent of the bread’s weight, Hellstrom said.

“Finns are known to be willing to try new things,” he said, and according to a survey commissioned by Fazer “good taste, freshness” were among the main criteria for bread.

According to recent surveys of the Nordic countries, “Finns have the most positive attitudes toward insects,” said Juhani Sibakov, head of Fazer Bakery Finland’s innovation department.

“We made crunchy dough to enhance taste,” he said. The result was “delicious and nutritious,” he said, adding that the Fazer Sirkkaleipa (Finnish for Fazer Cricket Bread) “is a good source of protein and insects also contain good fatty acids, calcium, iron and vitamin B12.”

“Mankind needs new and sustainable sources of nutrition,” Sibakov said in a statement. Hellstrom noted that Finnish legislation was changed on Nov. 1 to allow the sale of insects as food.

The first batch of cricket breads will be sold in major Finnish cities Friday. The company said there is not enough cricket flour available for now to support sales nationwide but the aim is to have the bread available in 47 bakeries in Finland in a subsequent round of sales.

In Switzerland, supermarket chain Coop began selling burgers and balls made from insects in September. Insects can also be found on supermarket shelves in Belgium, Britain, Denmark and the Netherlands.

The U.N.’s Food and Agricultural Organization has promoted insects as a source of human food, saying they are healthy and high in protein and minerals. The agency says many types of insects produce less greenhouse gases and ammonia than most livestock — such as methane-spewing cattle — and require less land and money to cultivate.

Earth’s Rotation Is Mysteriously Slowing Down: Experts Predict Uptick In 2018 Earthquakes


By Trevor Nace

Scientists have found strong evidence that 2018 will see a big uptick in the number of large earthquakes globally. Earth’s rotation, as with many things, is cyclical, slowing down by a few milliseconds per day then speeding up again.

You and I will never notice this very slight variation in the rotational speed of Earth. However, we will certainly notice the result, an increase in the number of severe earthquakes.

Geophysicists are able to measure the rotational speed of Earth extremely precisely, calculating slight variations on the order of milliseconds. Now, scientists believe a slowdown of the Earth’s rotation is the link to an observed cyclical increase in earthquakes.

To start, the research team of geologists analyzed every earthquake above magnitude 7.0 that has occurred since 1900. They were looking for trends in the occurrence of large earthquakes. What they found is that roughly every 32 years there was an uptick in the number of significant earthquakes worldwide.

The team was puzzled as to the root cause of this cyclicity in earthquake rate. They compared it with a number of global historical datasets and found only one that showed a strong correlation with the uptick in earthquakes. That correlation was to the slowing down of Earth’s rotation. Specifically, the team noted that around every 25-30 years Earth’s rotation began to slow down and that slowdown happened just before the uptick in earthquakes. The slowing rotation historically has lasted for 5 years, with the last year triggering an increase in earthquakes.
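A sketch of the kind of comparison described, under the assumption that yearly counts of magnitude-7+ earthquakes and a yearly rotation measure (for example, the change in length of day) are available. The file names and column names below are hypothetical placeholders, not the team’s actual datasets.

```python
import numpy as np
import pandas as pd

# Hypothetical input files, one row per year since 1900:
#   quakes.csv   -> year, m7_count       (number of magnitude >= 7.0 quakes)
#   rotation.csv -> year, lod_change_ms  (change in length of day, milliseconds)
quakes = pd.read_csv("quakes.csv")
rotation = pd.read_csv("rotation.csv")
merged = quakes.merge(rotation, on="year").sort_values("year")

lod = merged["lod_change_ms"].to_numpy()
quake_counts = merged["m7_count"].to_numpy()

# The claimed pattern is a lag: rotation slows first, and the uptick in large
# earthquakes follows roughly five years later. Shift one series and correlate.
best_lag, best_r = 0, 0.0
for lag in range(0, 11):  # try lags of 0 to 10 years
    r = np.corrcoef(lod[: len(lod) - lag], quake_counts[lag:])[0, 1]
    if abs(r) > abs(best_r):
        best_lag, best_r = lag, r

print(f"strongest correlation r={best_r:.2f} at a lag of {best_lag} years")
```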

To add an interesting twist to the story, 2017 was the fourth consecutive year that Earth’s rotation has slowed. This is why the research team believes we can expect more earthquakes in 2018: it is the last year of a five-year slowdown in Earth’s rotation.

Self-driving cars programmed to decide who dies in a crash


 

WASHINGTON — Consider this hypothetical:

It’s a bright, sunny day and you’re alone in your spanking new self-driving vehicle, sprinting along the two-lane Tunnel of Trees on M-119 high above Lake Michigan north of Harbor Springs. You’re sitting back, enjoying the view. You’re looking out through the trees, trying to get a glimpse of the crystal blue water below you, moving along at the 45-mile-an-hour speed limit.

As you approach a rise in the road, heading south, a school bus appears, driving north, one driven by a human, and it veers sharply toward you. There is no time to stop safely, and no time for you to take control of the car.

Does the car:

A. Swerve sharply into the trees, possibly killing you but possibly saving the bus and its occupants?

B. Perform a sharp evasive maneuver around the bus and into the oncoming lane, possibly saving you, but sending the bus and its driver swerving into the trees, killing her and some of the children on board?

C. Hit the bus, possibly killing you as well as the driver and kids on the bus?

In everyday driving, such no-win choices may be exceedingly rare but, when they happen, what should a self-driving car — programmed in advance — do? Or in any situation — even a less dire one — where a moral snap judgment must be made?

It’s not just a theoretical question anymore, with predictions that in a few years, tens of thousands of semi-autonomous vehicles may be on the roads. About $80 billion has been invested in the field. Tech companies are working feverishly on them, with Google-affiliated Waymo among those testing cars in Michigan, and mobility companies like Uber and Tesla racing to beat them. Automakers are placing a big bet on them. A testing facility to hurry along research is being built at Willow Run in Ypsilanti.

There’s every reason for excitement: Self-driving vehicles will ease commutes, returning lost time to workers; enhance mobility for seniors and those with physical challenges, and sharply reduce the more than 35,000 deaths on U.S. highways each year.

But there are also a host of nagging questions to be sorted out as well, from what happens to cab drivers to whether such vehicles will create sprawl.

And there is an existential question:

Who dies when the car is forced into a no-win situation?

“There will be crashes,” said Van Lindberg, an attorney in the Dykema law firm’s San Antonio office who specializes in autonomous vehicle issues. “Unusual things will happen. Trees will fall. Animals, kids will dart out.” Even as self-driving cars save thousands of lives, he said, “anyone who gets the short end of that stick is going to be pretty unhappy about it.”

Few people seem to be in a hurry to take on these questions, at least publicly.

It’s unaddressed, for example, in legislation moving through Congress that could result in tens of thousands of autonomous vehicles being put on the roads. In new guidance for automakers by the U.S. Department of Transportation, it is consigned to a footnote that says only that ethical considerations are “important” and links to a brief acknowledgement that “no consensus around acceptable ethical decision-making” has been reached.

Whether the technology in self-driving cars is superhuman or not, there is evidence that people are worried about the choices self-driving cars will be programmed to take.

Last year, for instance, a Daimler executive set off a wave of criticism when he was quoted as saying its autonomous vehicles would prioritize the lives of its passengers over anyone outside the car. The company later insisted he’d been misquoted, since it would be illegal “to make a decision in favor of one person and against another.”

Last month, Sebastian Thrun, who founded Google’s self-driving car initiative, told Bloomberg that the cars will be designed to avoid accidents, but that “If it happens where there is a situation where a car couldn’t escape, it’ll go for the smaller thing.”

But what if the smaller thing is a child?

How that question gets answered may be important to the development and acceptance of self-driving cars.

Azim Shariff, an assistant professor of psychology and social behavior at the University of California, Irvine, co-authored a study last year that found that while respondents generally agreed a car should, in the case of an inevitable crash, kill the fewest people possible, regardless of whether they were passengers or outside the car, they were less likely to buy any car “in which they and their family member would be sacrificed for the greater good.”

Self-driving cars could save tens of thousands of lives each year, Shariff said. But individual fears could slow down acceptance, leaving traditional cars and their human drivers on the road longer to battle it out with autonomous or semi-autonomous cars. Already, the American Automobile Association says three-quarters of U.S. drivers are suspicious of self-driving vehicles.

“These ethical problems are not just theoretical,” said Patrick Lin, director of the Ethics and Emerging Sciences Group at California Polytechnic State University, who has worked with Ford, Tesla and other autonomous vehicle makers on just such issues.

While he can’t talk about specific discussions, Lin says some automakers “simply deny that ethics is a real problem, without realizing that they’re making ethical judgment calls all the time” in their development, determining what objects the car will “see,” how it will predict what those objects will do next and what the car’s reaction should be.

Does the computer always follow the law? Does it slow down whenever it “sees” a child? Is it programmed to generate a random “human” response? Do you make millions of computer simulations, simply telling the car to avoid killing anyone, ever, and program that in? Is that even an option?

“You can see what a thorny mess it becomes pretty quickly,” said Lindberg. “Who bears that responsibility? … There are half a dozen ways you could answer that question leading to different outcomes.”

The trolley problem

Automakers and suppliers largely downplay the risks of what in philosophical circles is known as “the trolley problem” — named for a no-win hypothetical situation in which, in the original format, a person witnessing a runaway trolley could allow it to hit several people or, by pulling a lever, divert it, killing someone else.

In the case of the self-driving car, the problem is often boiled down to a hypothetical vehicle hurtling toward a crowded crosswalk with malfunctioning brakes: a certain number of occupants will die if the car swerves; a certain number of pedestrians will die if it continues. The car must be programmed to do one or the other.
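
To make concrete what “programmed in advance” means, here is a deliberately oversimplified sketch in Python. The scenario names, casualty estimates and the “minimize total expected deaths” rule are hypothetical illustrations, not anything an automaker is known to use; the point is only that some rule like this has to be written down before the crash ever happens.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    expected_occupant_deaths: float   # people inside the car
    expected_outside_deaths: float    # people outside the car

def choose(options):
    # One possible (and contested) rule: minimize total expected deaths,
    # weighting people inside and outside the car equally.
    return min(options, key=lambda o: o.expected_occupant_deaths + o.expected_outside_deaths)

# Made-up numbers for the school-bus scenario described above.
crash_options = [
    Option("swerve into the trees", 0.9, 0.0),
    Option("swerve into the oncoming lane", 0.1, 2.5),
    Option("brake and hit the bus", 0.7, 1.5),
]
print(choose(crash_options).name)   # with these invented numbers: "swerve into the trees"

Change the weighting, say by counting the car’s own occupants for more or less than bystanders, and the same code picks a different victim, which is exactly the choice critics say someone has to own.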

Philosophical considerations aside, automakers argue the scenario is so contrived as to be all but bunk.

“I don’t remember when I took my driver’s license test that this was one of the questions,” said Manuela Papadopol, director of business development and communications for Elektrobit, a leading automotive software maker and a subsidiary of German auto supplier Continental AG.

If anything, self-driving cars could almost eliminate such an occurrence. They will sense such a problem long before it would become apparent to a human driver and slow down or stop. Redundancies — for brakes, for sensors — will detect danger and react more appropriately.

“The cars will be smart — I don’t think there’s a problem there. There are just solutions,” Papadopol said.

Alan Hall, Ford’s spokesman for autonomous vehicles, described the self-driving car’s capabilities — being able to detect objects with 360-degree sensory data in daylight or at night — as “superhuman.”

“The car sees you and is preparing different scenarios for how to respond,” he said.

Lin said that, in general, many self-driving automakers believe the simple act of braking, of slowing to a stop, solves the trolley problem. But it doesn’t, for example in the theoretical case where the car is being tailgated by a speeding fuel tanker.

Should government decide?

Some experts and analysts believe solving the trolley problem could be a simple matter of regulators or legislators deciding in advance what actions a self-driving car should take in a no-win situation. But others doubt that any set of rules can capture and adequately react to every such scenario.

The question doesn’t need to be as dramatic as asking who dies in a crash, either. It could be as simple as deciding what to do about jaywalkers, where a car should place itself in a lane next to a large vehicle so its passengers feel secure, or whether to run over a squirrel that darts into the road.

Chris Gerdes, who as director of the Center for Automotive Research at Stanford University has been working with Ford, Daimler and others on the issue, said the question is ultimately not about deciding who dies. It’s about how to keep no-win situations from happening in the first place and, when they do occur, setting up a system for deciding who is responsible.


A driverless shuttle made its debut in Las Vegas Wednesday with a bump. Police say a semi-truck had a minor collision with the shuttle, less than two hours after the shuttle began carrying passengers. No injuries were reported. (Nov. 8) AP

For instance, he noted California law requires vehicles to yield the crosswalk to pedestrians but also says pedestrians have a duty not to suddenly enter a crosswalk against the light. Michigan and many other states have similar statutes.

Presumably, then, there could be a circumstance in which the responsibility for someone darting into the path of an autonomous vehicle at the last minute rests with that person — just as it does under California law.

But that “forks off into some really interesting questions,” Gerdes said, such as whether the vehicle could potentially be programmed to react differently, say, for a child. “Shouldn’t we treat everyone the same way?” he asked. “Ultimately, it’s a societal decision,” meaning it may have to be settled by legislators, courts and regulators.

That could result in a patchwork of conflicting rules and regulations across the U.S.

“States would continue to have that ability to regulate how they operate on the road,” said U.S. Sen. Gary Peters, D-Mich., one of the authors of federal legislation under consideration that would allow for tens of thousands of autonomous vehicles to be tested on U.S. highways in the years to come. He says that while design and safety standards will rest with federal regulators, states will continue to impose traffic rules.

Peters acknowledged that it would be “an impossible standard” to eliminate all crashes. But he argued that people need to remember that autonomous vehicles will save tens of thousands of lives a year. In 2015, the consulting firm McKinsey & Co. said research indicated self-driving cars could reduce traffic fatalities by 90% once fully deployed. More than 37,000 people died on U.S. roads in 2016, the vast majority because of human error.

But researchers, automakers, academics and others understand something else about self-driving cars and the risks they may still pose, namely, that for all their promise to reduce accidents, they can’t eliminate them.

“It comes back to whether you want to find ways to program in specifics or program in desired outcomes,” said Gerdes. “At the end of the day, you’re still required to come up with what you want the desired outcomes to be and the desired outcome cannot be to avoid any accidents all the time.

“It becomes a little uncomfortable sometimes to look at that.”

The hard questions

While some people in the industry, like Tesla’s Elon Musk, believe fully autonomous vehicles could be on U.S. roads within a few years, others say it could be a decade or more — and even longer before the full promise of self-driving cars and trucks is realized.

The trolley problem is just one that has to be cracked before then.

There are others, like those faced by Daryn Nakhuda, CEO of Mighty AI, whose business is breaking down into data all the objects self-driving cars will need to “see” in order to predict and react: a bird flying at the window, a thrown ball, a mail truck parked so that there is not enough space in the car’s lane to pass without crossing the center line.

Automakers will have to decide what the car “sees” and what it doesn’t. Seeing everything around it, and processing all of it, could be a waste of limited processing power. That raises another set of ethical and moral questions.

Then there is the question of how self-driving cars could be taught to learn and respond to the tasks they are given — the stuff of science fiction that seems about to come true.

While self-driving cars can be programmed, told what to do when that school bus comes hurtling toward them, there are other options. Through millions of computer simulations and data from real self-driving cars being tested, the cars themselves can begin to learn the “best” way to respond to a given situation.

For example, Waymo — Google’s self-driving car arm — in a recent government filing said through trial and error in simulations, it’s teaching its cars how to navigate a tricky left turn against a flashing yellow arrow at a real intersection in Mesa, Ariz. The simulations — not the programmers — determine when it’s best to inch into the intersection and when it’s best to accelerate through it. And the cars learn how to mimic real driving.
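
As a rough illustration of what it means for the simulations, rather than the programmers, to determine the maneuver, here is a toy learning loop in Python. The simulator, the two candidate actions and the reward numbers are invented for this sketch; it is not Waymo’s system, only the general trial-and-error idea.

import random
random.seed(0)

ACTIONS = ["inch_forward", "accelerate_through"]

def simulate(action, gap_seconds):
    """Hypothetical simulator: returns a reward (higher = safer, smoother trip)."""
    if action == "accelerate_through":
        return 1.0 if gap_seconds > 4.0 else -10.0   # fast, but dangerous when the gap is small
    return 0.2                                        # inching forward is safe but slow

value = {a: 0.0 for a in ACTIONS}   # running average reward per action
count = {a: 0 for a in ACTIONS}

for episode in range(5000):
    gap = random.uniform(1.0, 8.0)   # random oncoming-traffic gap for this simulated episode
    # Mostly pick the action that has paid off so far, but keep exploring occasionally.
    action = random.choice(ACTIONS) if random.random() < 0.1 else max(value, key=value.get)
    reward = simulate(action, gap)
    count[action] += 1
    value[action] += (reward - value[action]) / count[action]

print(value)   # after many simulated episodes, the cautious action wins on average

A real system would condition the choice on what its sensors report in the moment, but the structure is the same: outcomes in simulation, not hand-written rules, shape the behavior.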


Ultimately, through such testing, the cars themselves could learn how best to get from Point A to Point B, simply by being programmed to discern what “best” means, say the fastest, safest, most direct route. Through simulation and data gathered from real-world driving, the cars would “learn” and execute the request.

Here’s where the science fiction comes in, however.

Playing ‘Go’

A computer programmed to “learn” the ancient Chinese game of Go by just such means is not only beating grandmasters for the first time in history, long after computers began beating grandmasters in chess; it is making moves that seem counterintuitive and inexplicable to expert human players.

What might that look like with cars?

At the American Center for Mobility in Ypsilanti, Mich., where a testing ground for self-driving cars is being completed, CEO John Maddox said vehicles will be able to put to the test what he calls “edge” cases they will have to deal with regularly, such as not confusing the darkness of a tunnel with a wall, or accurately predicting whether a person is about to step off a curb.

The facility will also play a role, through that testing, in getting the public used to what self-driving cars can do, how they will operate and how they can be far safer than vehicles operated by humans, even if some questions remain about their functioning.

“Education is critical,” Maddox said. “We have to be able to demonstrate and illustrate how AVs work and how they don’t work.”

As for the trolley problem, most automakers and experts expect some sort of standard to emerge — even if it’s not entirely clear what it will be.

At SAE International, the global standards-setting group formerly known as the Society of Automotive Engineers, Chief Product Officer Frank Menchaca said reaching a perfect standard is a daunting, if not impossible, task, with so many fluid factors involved in any accident: speed, situation, weather conditions, mechanical performance.

Even with that standard, there may be no good answer to the question of who dies in a no-win situation, he said. Especially if it’s to be judged by a human.

“As human beings, we have hundreds of thousands of years of moral, ethical, religious and social behaviors programmed inside of us,” he added. “It’s very hard to replicate that.”

First Digital Pill Approved to Worries About Biomedical ‘Big Brother’

Original Article

By Pam Belluck

For the first time, the Food and Drug Administration has approved a digital pill — a medication embedded with a sensor that can tell doctors whether, and when, patients take their medicine.

The approval, announced late on Monday, marks a significant advance in the growing field of digital devices designed to monitor medicine-taking and to address the expensive, longstanding problem that millions of patients do not take drugs as prescribed.

Experts estimate that so-called nonadherence or noncompliance to medication costs about $100 billion a year, much of it because patients get sicker and need additional treatment or hospitalization.

“When patients don’t adhere to lifestyle or medications that are prescribed for them, there are really substantive consequences that are bad for the patient and very costly,” said Dr. William Shrank, chief medical officer of the health plan division at the University of Pittsburgh Medical Center.

Ameet Sarpatwari, an instructor in medicine at Harvard Medical School, said the digital pill “has the potential to improve public health,” especially for patients who want to take their medication but forget.


But, he added, “if used improperly, it could foster more mistrust instead of trust.”

Patients who agree to take the digital medication, a version of the antipsychotic Abilify, can sign consent forms allowing their doctors and up to four other people, including family members, to receive electronic data showing the date and time pills are ingested.

A smartphone app will let them block recipients anytime they change their mind. Although voluntary, the technology is still likely to prompt questions about privacy and whether patients might feel pressure to take medication in a form their doctors can monitor.

Dr. Peter Kramer, a psychiatrist and the author of “Listening to Prozac,” raised concerns about “packaging a medication with a tattletale.”

While ethical for “a fully competent patient who wants to lash him or herself to the mast,” he said, “‘digital drug’ sounds like a potentially coercive tool.”

Other companies are developing digital medication technologies, including another ingestible sensor and visual recognition technology capable of confirming whether a patient has placed a pill on the tongue and has swallowed it.

Not all will need regulatory clearance, and some are already being used or tested in patients with heart problems, stroke, H.I.V., diabetes and other conditions.

Because digital tools require effort, like using an app or wearing a patch, some experts said they might be most welcomed by older people who want help remembering to take pills and by people taking finite courses of medication, especially for illnesses like tuberculosis, in which nurses often observe patients taking medicine.

The technology could potentially be used to monitor whether post-surgical patients took too much opioid medication or clinical trial participants correctly took drugs being tested.

Insurers might eventually give patients incentives to use them, like discounts on copayments, said Dr. Eric Topol, director of Scripps Translational Science Institute, adding that ethical issues could arise if the technology was “so much incentivized that it almost is like coercion.”

Another controversial use might be requiring digital medicine as a condition for parole or releasing patients committed to psychiatric facilities.

Abilify is an arguably unusual choice for the first sensor-embedded medicine. It is prescribed to people with schizophrenia, bipolar disorder and, in conjunction with an antidepressant, major depressive disorder.

Many patients with these conditions do not take medication regularly, often with severe consequences. But symptoms of schizophrenia and related disorders can include paranoia and delusions, so some doctors and patients wonder how widely digital Abilify will be accepted.

“Many of those patients don’t take meds because they don’t like side effects, or don’t think they have an illness, or because they become paranoid about the doctor or the doctor’s intentions,” said Dr. Paul Appelbaum, director of law, ethics and psychiatry at Columbia University’s psychiatry department.

“A system that will monitor their behavior and send signals out of their body and notify their doctor?” he added. “You would think that, whether in psychiatry or general medicine, drugs for almost any other condition would be a better place to start than a drug for schizophrenia.”

The newly approved pill, called Abilify MyCite, is a collaboration between Abilify’s manufacturer, Otsuka, and Proteus Digital Health, a California company that created the sensor.

The sensor, containing copper, magnesium and silicon (safe ingredients found in foods), generates an electrical signal when splashed by stomach fluid, like a potato battery, said Andrew Thompson, Proteus’s president and chief executive.

After several minutes, the signal is detected by a Band-Aid-like patch that must be worn on the left rib cage and replaced after seven days, said Andrew Wright, Otsuka America’s vice president for digital medicine.

The patch sends the date and time of pill ingestion and the patient’s activity level via Bluetooth to a cellphone app. The app allows patients to add their mood and the hours they have rested, then transmits the information to a database that physicians and others who have patients’ permission can access.
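
For readers who want to picture the data flow just described, here is a minimal sketch in Python of the kind of record and consent check involved. The field names, the per-recipient block list and the visible_to helper are assumptions made for illustration; this is not Otsuka’s or Proteus’s actual schema or software.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class IngestionEvent:
    patient_id: str
    taken_at: datetime        # date and time the patch detected the sensor signal
    activity_level: float     # reported by the wearable patch
    mood: str = ""            # optionally added by the patient in the app

@dataclass
class ConsentSettings:
    allowed_recipients: List[str] = field(default_factory=list)  # doctor plus up to four others
    blocked: List[str] = field(default_factory=list)             # can be changed at any time

def visible_to(consent: ConsentSettings, recipient: str) -> bool:
    # A recipient sees the data only if named in the consent form and not later blocked.
    return recipient in consent.allowed_recipients and recipient not in consent.blocked

consent = ConsentSettings(allowed_recipients=["dr_smith", "sister"], blocked=["sister"])
event = IngestionEvent("patient-001", datetime(2017, 11, 14, 8, 30), activity_level=0.6)
print(visible_to(consent, "dr_smith"), visible_to(consent, "sister"))   # True False

The privacy questions raised in the article largely come down to who controls those two lists and how easily a patient can change them.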

Otsuka has not determined a price for Abilify MyCite, which will be rolled out next year, first to a limited number of health plans, Mr. Wright said. The price, and whether digital pills improve adherence, will greatly affect how widely they are used.

Questions about the technology’s ability to increase compliance remain.

Dr. Jeffrey Lieberman, chairman of psychiatry at Columbia University and NewYork-Presbyterian Hospital, said many psychiatrists would likely want to try digital Abilify, especially for patients who just experienced their first psychotic episode and are at risk of stopping medication after feeling better.

But he noted it has only been approved to track doses, and has not yet been shown to improve adherence.

“Is it going to lead to people having fewer relapses, not having unnecessary hospital readmissions, being able to improve their vocational and social life?” he asked.

He added, “There’s an irony in it being given to people with mental disorders that can include delusions. It’s like a biomedical Big Brother.”

Abilify, a widely used drug, went off patent recently, and while other companies can sell the generic form, aripiprazole, Otsuka has exclusive rights to embed it with Proteus’s sensor, said Robert McQuade, Otsuka’s executive vice president and chief strategic officer.

“It’s not intended for all patients with schizophrenia, major depressive disorder and bipolar,” he added. “The physician has to be confident the patient can actually manage the system.”

Dr. McQuade said, “We don’t have any data currently to say it will improve adherence,” but will likely study that after sales begin.

Proteus has spent years bringing its sensor to commercial use, raising about $400 million from investors, including Novartis and Medtronic, Mr. Thompson said.

Until now, the sensor could not be embedded in pills, but pharmacies could be commissioned to place it in a capsule along with another medication.

In 2016, the encapsulated sensor started being used outside of clinical trials, but commercial use is still limited, Mr. Thompson said.

Nine health systems in six states have begun prescribing it with medications for conditions including hypertension and hepatitis C, the company said, adding that it has been found to improve adherence in patients with uncontrolled hypertension and others.


William Jiang, who has schizophrenia, took Abilify for 16 years. He said he would not welcome a digital pill, but thinks it could help patients who don’t regularly take their medicine. Credit: Sam Hodgson for The New York Times

AiCure, a smartphone-based visual recognition system in which patients document taking medicine, has had success with tuberculosis patients treated by the Los Angeles County Health Department and is working with similar patients in Illinois, said Adam Hanina, AiCure’s chief executive.

He said AiCure has shown promising results with other conditions, including in schizophrenia patients whose pill-taking would otherwise require direct observation.

A Florida company, etectRx, makes another ingestible sensor, the ID-Cap, which has been or is being tested with opioids, H.I.V. medication and other drugs.

Made of magnesium and silver chloride, it is encapsulated with pills and avoids using a patch because it generates “a low-power radio signal that can be picked up by a little antenna that’s somewhere near you,” said Harry Travis, etectRx’s president, who said the company plans to seek F.D.A. clearance next year.

The signal is detected by a reader worn around the neck, but etectRx aims to fit readers into watchbands or cellphone cases.

“I get questions all the time, ‘Hey is the government going to use this, and can you track me?’” said Eric Buffkin, an etectRx senior vice president. “Frankly, there is a creepiness factor of this whole idea of medicine tracking.

“The thing I tell them first and foremost is there’s nothing to reach out of this technology to pry your mouth open and make you take a pill. If you are fundamentally opposed to this idea of sharing the information, then say, ‘No thank you.’”

Seeking to address concerns about privacy and coercion, Otsuka officials contracted with several bioethicists. Among them, I. Glenn Cohen, a Harvard law professor, said safeguards adopted include allowing patients to instantly stop physicians and others from seeing some or all of their data.

Asked whether it might be used in circumstances like probation or involuntary hospitalization, Otsuka officials said that was not their intention or expectation, partly because Abilify MyCite only works if patients want to use the patch and app.

How patients will view Abilify MyCite is unclear. Tommy, 50, of Queens, N.Y., who takes Abilify for schizoaffective disorder, participated in a clinical trial for digital Abilify.

Tommy, who withheld his last name to protect his privacy, encountered minor issues, saying the patch was “a little bit uncomfortable” and once gave him a rash.

A compliant patient, Tommy said he does not need monitoring. “I haven’t had paranoid thoughts for a long time — it’s not like I believe they’re beaming space aliens,” he said. If offered digital Abilify, he said, “I wouldn’t do it again.”

But the method might appeal to patients who want to prove their compliance, who want to build trust with their psychiatrist, or who feel “paranoid about getting accused of not taking their medicine.”

Steve Colori, 31, of Danvers, Mass., who wrote a memoir about his illness, “Experiencing and Overcoming Schizoaffective Disorder,” said he took Abilify years ago for symptoms including believing “I was a messiah.”

Although he sometimes stopped taking medication, he said he would find digital pills “overbearing,” adding, “I think it stymies someone and halts progress in therapy.”

William Jiang, 44, a writer in Manhattan with schizophrenia, took Abilify for 16 years. He said he steadfastly takes medication to prevent recurrence of episodes of paranoia when “I was convinced everybody was trying to murder me.”

He said some noncompliant patients might take digital Abilify, especially to avoid Abilify injections recommended to patients who skip pills.

“I would not want an electrical signal coming out of my body strong enough so my doctor can read it,” Mr. Jiang said.

“But right now, it’s either you take your pills when you’re unsupervised, or you get a shot in the butt. Who wants to get shot in the butt?”

When Will We Have Designer Babies?

Original Article

By Whitney Kimball

Within 20 to 40 years, sex will no longer be the preferred method of reproduction. Instead, half the population with decent health care will–no shitting you–have eggs grown from human skin and fertilized with sperm, then have the entire genome of about 100 embryo samples sequenced, peruse the highlights, and pick the best model to implant. At least that’s what Stanford law professor and bioethicist Hank Greely predicts in The End of Sex and the Future of Human Reproduction. But skin-grown humans aside, how long until we have “designer babies”?

 

Here’s where we are: a gene-editing tool called CRISPR/Cas9 has produced a variety of terrifying wonders over the past few years. In August, a group of scientists announced that they had successfully edited a human embryo to eradicate a heart condition. (The validity of their paper, published in the journal Nature, is now disputed, but Chinese scientists also claimed to have edited embryos in 2015, though with less success.) And “designer pets” are already within reach; mice have been turned green. Beagles have been doubled in muscle mass. Pigs have been shrunk to the size of cocker spaniels with “designer fur.” Woolly mammoths are being attempted.

Greely expects human selection to be less like the Build-a-Bear workshop that “designer pets” suggests, in which you start from scratch with a base animal and pick features from a menu, and more like a process in which prospective parents select from a pot of about 100 embryos made by two people, screening for preferences like gender and health, and maybe tweaking out the hereditary diseases with CRISPR/Cas9.

It’s debatable when and how CRISPR-edited embryos will be approved for implantation (Congress has currently banned the FDA from even considering it), and scientists are wrestling over whether to put a moratorium on its use in humans. Jennifer Doudna, a co-inventor of the CRISPR-Cas9 technique, fears the eugenic nightmare her technology could bring; in 2015, she co-signed a letter calling for thorough investigation into potential risks “before any attempts at human engineering are sanctioned, if ever, for clinical testing.”

This week on Giz Asks, we asked geneticists, bioethicists, and biotechnology experts and skeptics whether a future of “designer babies” is as crazy as it sounds.

Hank Greely

Director, Center for Law and the Biosciences at Stanford University, author of The End of Sex and the Future of Human Reproduction

I think that when average readers hear the term “gene editing,” our minds jump to Gattaca or a build-a-bear workshop for babies where you walk into a lab and select desirable traits (strength, beauty, intelligence, etc) from a pamphlet.

I think GATTACA was much more about selection than about editing, but, yes, most people have an exaggerated view of what is possible.

We don’t know anything, really, about genes that give increased IQ. I suspect–and in my book, The End of Sex, I predict–that in 20 to 40 years we’ll know a little, but not much–maybe enough to say “this embryo has a 60% chance of being in the top 50%” or “a 13% chance of being in the top 10%.” Intelligence is just too complicated and despite decades of work only genes associated with very low intelligence have been found.

Note, though, that you said “genetically select”–selection is more about preimplantation genetic diagnosis [PGD, pre-screening for genetic diseases in embryos] and picking among embryos randomly created by a couple. Editing is intentionally changing DNA away from what the couple created. Neither, though, works worth a damn unless you know what the relationship between the DNA and the trait is and not only aren’t we close with intelligence, we may never be very good at it: too many genes are involved, along with too much environment and too much luck!

…On eye color, hair color, skin color, etc., we’ve got some clues now and will be pretty good in ten to 20 years–though I doubt CRISPRing embryos will be shown to be safe and ready for clinical use in less than 20 to 30 years. On IQ, math ability, sports ability, music ability, personality type we’ll have some information but probably not very much: 60%, 70%, maybe 80% chances of being in the top half but not 90 to 100%. … Right now, though, we have made almost no progress on those traits or on common complicated diseases like asthma, type 2 diabetes, or depression.

… The more simple genetic diseases like Huntington’s disease turn out to be quite rare–common diseases are usually a mix of heavily genetic, somewhat genetic, and not at all genetic. For example, 1% of people with Alzheimer disease (and so about 1 person in a 1000 in the population) have an early onset form that is very strongly genetic. If you have the gene variation involved, the only way you won’t get the disease in your 40s or 50s is to die first from something else. About 4% of the population have two copies of a genetic variation that gives them a 50 to 80% chance of getting AD, compared with the overall average 10% or so. About 20% of the population has one copy of that variant and has about a 20 to 40 percent chance. About 2% of the population has two copies of a different version of the same gene and seem to have zero chance of getting AD. And most people with Alzheimer disease have no copies of any of the known, strong genetic risk factors for it. That’s what most common diseases are like; most behavioral traits will likely be even worse.

Glenn Cohen

Harvard Law School Faculty Director, Petrie-Flom Center for Health Law Policy, Biotechnology & Bioethics, leading expert in bioethics and law

I do not love the term “designer babies” because it is imprecise. [In one sense], they are already here and have been for a long time. When [individuals] need to use a sperm donor for artificial insemination… they engage in a form of trait selection. The catalogues that feature sperm “donors” (donors in scare quotes because they are paid) recruited for sperm banks have already excluded 99% of applicants, and those who are left tend to have very desirable health, intelligence, and beauty traits. The same is true for egg “donors.” So already parents using these technologies are “designing” their babies. And there is nothing to stop someone who wanted to from buying both sperm and egg from donors to maximize their ability to engage in trait selection. For individuals using IVF [in vitro fertilization], there is also the possibility of using preimplantation genetic diagnosis (PGD) to examine the embryos and screen out those predicted to have diseases or other problems and engage in sex selection. The next step on the horizon, written about recently in the MIT Technology Review, is combining big data from whole genome sequencing and population databases to make predictions on non-disease traits as part of the PGD technique.

Some companies are either directly or indirectly building this into their pitch to investors as that report suggests, though also [the MIT Technology Review reports that some scientists are skeptical]: “Some experts contacted by MIT Technology Review said they believed it’s premature to introduce polygenic scoring technology into IVF clinics—though perhaps not by very much. Matthew Rabinowitz, CEO of the prenatal-testing company Natera, based in California, says he thinks predictions obtained today could be “largely misleading” because DNA models don’t function well enough. But Rabinowitz agrees that the technology is coming.”

Even if perfected, this will only be available to individuals willing to go through IVF and PGD, which are costly in time, money, and health risks. So I don’t know how much uptake there will be in the near future. The more radical change would come if In Vitro Gametogenesis became cheap, safe, approved, and easy such that we could generate a large number of eggs without having to make women go through egg retrieval. I am a bit more skeptical it will be [as quick as Hank Greely suggests], but do think that such technology will be used in the future. Finally, if CRISPR gene editing becomes available as a therapeutic technique in the US, and we get much much better at targeting what to edit, it may be possible to do all this with fewer embryos generated and have more options since we can alter existing embryos instead of waiting to find an embryo that has the desired traits created through natural means. I am skeptical though that the regulatory system in the US will allow human use of this on embryos in the foreseeable future.

So the bottom line is on some definitions of designer babies they are here already and have been for some time. On other definitions I think the estimate is somewhere between 20 to 100 years depending on what you think about the progress of technology and the regulatory system in our country.

David Liu

Harvard professor in Chemistry and Chemical Biology, founder of five biotechnology companies and lead researcher and innovator in gene-editing technologies

When will “designer babies” become a reality–if so, how many years from now, and what would that look like? For example, will there be a time when people will be able to edit their future child’s genes to lead to greater strength, musical talent, mathematical genius?

Probably never. There’s a common misconception that allows people to imagine designer babies as you describe them. The misconception is that traits such as the ones you list above behave in a “Mendelian” manner—that is, like the traits of Gregor Mendel’s famous pea plants: tall vs short plant, green vs yellow seeds, etc. It turns out that the vast majority of human traits are not “monogenic,” meaning that they are not determined by the DNA sequence of a single gene. Instead, traits such as mathematical or athletic or musical ability reflect thousands of genes—recent research suggests perhaps even all of our genes!—as well as many environmental factors.

So even if one imagines a future in which (A) genome editing capabilities are truly like word processing (note: we aren’t there yet!), and (B) editing of human embryos, currently quite controversial and not practiced in most countries, is commonplace, it’s still unlikely that designer babies with traits such as higher intelligence or superior athletic skills will ever be possible. Human genetics simply doesn’t work that way. Finally, it’s also worth pointing out that the screening of human embryos to avoid certain genes strongly associated with devastating genetic diseases prior to implantation and pregnancy is already taking place in a number of countries around the world, and has been for a number of years.

…Even cosmetic traits such as hair and eye color, while genetically simpler than mathematical ability, aren’t really monogenic. Also, one has to consider the cost, risks, and ethics associated with such a procedure. The risks and costs and ethical challenges are easier to bear if the outcome is to avoid an incurable, fatal genetic disease that causes great suffering. For changing eye color, it’s not so compelling.

Dietrich M. Egli

Assistant Professor of Developmental Cell Biology at Columbia, prominent skeptic of the recent “breakthrough” claims of successful embryonic editing by CRISPR technology

The technique is not there yet. Nobody has shown that they can modify the genome of the human germ line and reliably avoid adverse effects. Nobody knows when that hurdle will be overcome. The claim based on a recent paper that appeared in Nature reporting they could [edit the DNA of a human embryo without adverse effects] is most likely wrong.

Does that imply that we are not even at a place where we can use gene-editing at all, for anything? CRISPR is not an effective tool yet? And what kind of adverse effects? Leading to mutations of some kind?

[CRISPR] works very well in cultured cells, where you can afford to have a less than 100% efficiency, and then simply select those that worked. It may also work for gene therapy in somatic cells.

And it’s an amazing research tool. No clinical applications yet. It’s just a few years since it was discovered.

Do you think there’s a natural balance of how many super-traits a person can possess, in a similar way to how when homo sapiens’ jaws shrank, it made more room in the skull for a larger brain? For example, would it offset the natural balance if math wizards were also born football players and possessed the musical talents of Mozart?

I don’t think there is a good answer to that. There are tradeoffs and constraints in the human body. There is also a huge gap in knowledge about what a specific variant could accomplish.

For instance, there are some variants that protect against malaria, but cause [sickle cell] disease as well. The most meaningful thing to do is to further study the human genome to enable improvements in human health.

Dr. Robert Green

Medical geneticist and director of the Genomes2People Research Program at Brigham and Women’s Hospital; Professor of Medicine, Harvard Medical School

If we’re talking about the next five to ten years, I think what’s really exciting about these technologies is that they may allow us to edit genetic errors. Genetic errors like a mutation that would cause a disease in an infant. I think that that’s really the focus, it’s the disease focus, not the designer baby focus. That’s point number one.

Point number two is that most of the traits you can imagine someone trying to enhance, such as intelligence, height, athletic ability, musical ability and so forth, are probably very multi-genic traits. They are traits that involve many, many genes, many regulatory regions and so forth. They are very complex. Even if there were not ethical rules in place, which I think there probably will be … I think that it would be very, very difficult to [change those things].

If, say, the United States outlaws that kind of thing, do you think other countries might start offering services where you can edit for things like hair and eye color? Do you foresee that as an inevitable consumer service?

I guess, I think that things like that are possible, for sure. I think that the hope is that most countries will try to agree on guidelines that most of humanity will agree to and abide by, because of the potential to do so much harm when messing with DNA: losing embryos, creating diseased human beings. I do hope that such actions as selecting for specific cosmetic differences might not be encouraged or permitted, but I am aware that there are reproductive clinics already in which gender selection is done. Embryos are created, they are looked at for a potential gender and then implanted. It is going to be difficult to regulate. Yes, all that can take place.

How quickly do you think this could happen, if the technology made it to the market, for consumers to use those services?

I guess theoretically, I would say, within a decade. But again, even things like hair color and eye color may be driven by only a few genes, and you’ve got to imagine most people do not want to put their embryos at risk for anything less than avoiding a really serious disease. We really aren’t going to know the long-term effects of mucking around in the DNA of a human embryo, and the notion that you would put an embryo, this child, at great potential risk for what you perceive to be a cosmetic improvement would, I think, be against the principles of most responsible parents.

Canada’s No-Bullshit Governor General Just Took on Climate Change Deniers, Astrologers

 

Original Article

By George Dvorsky

Julie Payette (Image: Prime Minister’s Office of Canada)

Speaking at a science conference in Ottawa on Thursday, Canada’s newly appointed governor general, Julie Payette, directed some harsh comments towards climate skeptics, astrologers, and believers in “divine intervention.” Critics complained that it’s not the governor general’s place to get involved in such matters, but Prime Minister Justin Trudeau defended the speech.

Astronaut Julie Payette of the Canadian Space Agency. (Image: CSA)

That Julie Payette, 54, would be such a staunch supporter of science is hardly a surprise. The computer and electrical engineer flew on two Space Shuttle missions (in 1999 and 2009), logging 25 total days in space. She was appointed governor general on July 13th, 2017 by the Trudeau government, and she hasn’t wasted time in making her mark, particularly when it comes to the promotion of science.

At this week’s Canadian Science Policy Conference, Payette argued for greater public acceptance of science, saying it’s time for Canadians to step away from false beliefs such as astrology and divine intervention, and speaking out against people who insist that human activity isn’t responsible for climate change.

Such language isn’t typical of a Canadian governor general. As a state-appointed representative of the Queen, the governor general holds a position of largely symbolic importance and is supposed to be an impartial overseer of the democratic process, staying out of politics and spiritual matters. That said, there’s nothing in the Canadian constitution that precludes the governor general from speaking out. And indeed, this latest governor general is not like the others, and she’s not holding back.

“So many people…still believe—want to believe—that maybe taking a sugar pill will cure cancer…and that your future [and your personality]…can be determined by looking at planets coming in front of invented constellations,” she said during the speech. In a clear reference to Creationists, Payette said we’re “still debating and still questioning whether life was a divine intervention,” or whether it came from the natural, random process of Darwinian natural selection.

On the topic of climate change, Payette said: “Can you believe that still today in learned society, in houses of government, unfortunately, we’re still debating and still questioning whether humans have a role in the Earth warming up or whether even the Earth is warming up, period?”

This isn’t the first time that Payette has dared to address climate change; she mentioned it in two of her three previous public engagements (including her acceptance speech as Canada’s new governor general). As Canada’s new GG, she appears to have taken up climate change as a main cause.

Later, Prime Minister Justin Trudeau praised Payette’s speech, saying she stands in support of science and the truth. “We are a government grounded in science,” he said. “Canadians are people who understand the value of science and knowledge as a foundation for the future of our country.”

Critics from both the media and within politics wasted no time in attacking the speech, which they criticized for its overreach and insensitivity.

“Those who read and write horoscopes would be entitled to take offence,” said reporter Aaron Wherry in CBC News. “[And] however strongly one feels about the science of evolution, religious belief might generally be considered sacrosanct, or at least a topic that the appointed occupant of Rideau Hall should avoid commenting on.”

Alise Mills, a political strategist for the Conservative Party, said Payette’s speech inappropriately ventured into politics, and that it was mean spirited. “I definitely agree science is key but I think there is a better way to do that without making fun of other people,” she said.

Conservative leader Andrew Scheer blasted the Prime Minister for his support of the speech. “It is extremely disappointing that the prime minister will not support Indigenous peoples, Muslims, Jews, Sikhs, Christians and other faith groups who believe there is truth in their religion,” he said in a statement posted to Facebook. “Respect for diversity includes respect for the diversity of religious beliefs, and Justin Trudeau has offended millions of Canadians with his comments.”

In his condemnation, Scheer is obviously reading way too much into Payette’s speech, but this episode shows how difficult it is to advocate for science and “the truth” (in Trudeau’s words) without impinging on people’s personal beliefs. Payette’s tone may have been harsh, but in this bewildering era of anti-science, her words were a breath of fresh air.

[CBC News]

 

‘Monster’ planet discovery stuns scientists

Original Article

By Fox News

Astronomers have discovered a planet the size of Jupiter orbiting a star that’s only half the size of the sun — a celestial phenomenon that contradicts theories of planet formation.

NGTS-1b, a massive, 986-degrees-hot ball of gas revolving around a red M-dwarf star 600 light years from Earth, is the largest planet compared to the size of its star ever found.

The discovery contradicts theories holding that so small a star could not form so large a planet. Scientists had previously theorized that small stars could form rocky planets but could not gather enough material to form planets the size of Jupiter.


As red M-dwarf stars are the most common type in the universe, scientists now believe there may be many more planets like this.


Artist’s impression of planet NGTS-1b with its neighbouring sun (credit University of Warwick/Mark Garlick)

NGTS-1b was spotted by an international collaboration of researchers using the Next-Generation Transit Survey (NGTS) facility in Chile, according to a report from the University of Warwick.

It is about 2.8 million miles away from its star, only 3 percent of the 93-million-mile distance between Earth and the sun. A year on NGTS-1b, the time the planet takes to orbit its star, lasts just 2.6 Earth days.
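
Those two figures hang together. A rough back-of-the-envelope check with Kepler’s third law, assuming the host star has about half the Sun’s mass as reported, gives (in units of AU, years and solar masses):

T \approx \sqrt{\frac{a^{3}}{M_\star}}
  = \sqrt{\frac{(0.030\,\mathrm{AU})^{3}}{0.5\,M_{\odot}}}
  \approx 7.3 \times 10^{-3}\ \mathrm{yr} \approx 2.7\ \mathrm{days}

with a = 2.8 million miles ≈ 0.030 AU, close to the reported 2.6-day year.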


“The discovery of NGTS-1b was a complete surprise to us. Such massive planets were not thought to exist around such small stars,” said the lead author of the research, Dr. Daniel Bayliss of the University of Warwick’s Astronomy and Astrophysics Group. “This is the first exoplanet we have found with our new NGTS facility, and we are already challenging the received wisdom of how planets form.”

“NGTS-1b was difficult to find, despite being a monster of a planet, because its parent star is small and faint,” said Warwick Professor Peter Wheatley. “Small stars are actually the most common in the universe, so it is possible that there are many of these giant planets waiting to be found.

“Having worked for almost a decade to develop the NGTS telescope array, it is thrilling to see it picking out new and unexpected types of planets. I’m looking forward to seeing what other kinds of exciting new planets we can turn up.”


The astronomers’ report, ‘NGTS-1b: a hot Jupiter transiting an M-dwarf’, will be published in the Monthly Notices of the Royal Astronomical Society.

These People Never Existed. They Were Made by an AI.

Original Article

By Dom Galeon

Better Than Doodles

Back in June, an image generator that could turn even the crudest doodle of a face into a more realistic-looking image made the rounds online. That system used a fairly new type of algorithm called a generative adversarial network (GAN) for its AI-created faces, and now, chipmaker NVIDIA has developed a system that employs a GAN to create far more realistic-looking images of people.

Artificial neural networks are systems developed to mimic the activity of neurons in the human brain. In a GAN, two neural networks are essentially pitted against one another. One of the networks functions as a generative algorithm, while the other challenges the results of the first, playing an adversarial role.

As part of their expanded applications for artificial intelligence, NVIDIA created a GAN that used CelebA-HQ’s database of photos of famous people to generate images of people who don’t actually exist. The idea was that the AI-created faces would look more realistic if two networks worked against each other to produce them.

First, the generative network would create an image at a lower resolution. Then, the discriminator network would assess the work. As the system progressed, the programmers added new layers dealing with higher-resolution details until the GAN finally generated images of “unprecedented quality,” according to the NVIDIA team’s paper.
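
To make the generator-versus-discriminator idea concrete, here is a minimal, generic GAN training loop in Python (using PyTorch) on a toy one-dimensional dataset. It is only a sketch of the adversarial setup described above, not NVIDIA’s progressive-growing network or its CelebA-HQ training pipeline.

import torch
import torch.nn as nn

torch.manual_seed(0)

# "Real" data: samples from a Gaussian the generator must learn to imitate.
def real_batch(n=64):
    return torch.randn(n, 1) * 0.5 + 2.0

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))   # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))   # discriminator (outputs logits)

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # 1) Train the discriminator to tell real samples from generated ones.
    real = real_batch()
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())   # should drift toward the real mean of 2.0

In NVIDIA’s version the same contest is run on images, with new layers added as the resolution grows, but the push and pull between the two networks is the same.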

Human or Machine?

NVIDIA released a video of their GAN in action, and the AI-created faces are both absolutely remarkable and incredibly eerie. If the average person didn’t know the faces were machine-generated, they could easily believe they belonged to living people.

Indeed, this blurring of the line between the human and the machine-generated is a topic of much discussion within the realm of AI, and NVIDIA’s GAN isn’t the first artificial system to convincingly mimic something human.

A number of AIs use deep learning techniques to produce human-sounding speech. Google’s DeepMind has WaveNet, which can now copy human speech almost perfectly. Meanwhile, startup Lyrebird’s algorithm is able to synthesize a human’s voice using just a minute of audio.

Even more disturbing or fascinating — depending on your perspective on the AI debate — are AI robots that can supposedly understand and express human emotion. Examples of those include Hanson Robotics’ Sophia and SoftBank’s Pepper.

Clearly, an age of smarter machines is upon us, and as the ability of AI to perform tasks previously only human beings could do improves, the line between human and machine will continue to blur. Now, the only question is whether it will eventually disappear altogether.

Immortality Is Impossible, Say Scientists Studying the Mathematics of Aging

Original Article

By Peter Hess

While healthcare has dramatically extended our lifespans by preventing certain causes of death, aging still inevitably takes its fatal toll. And, as scientists report in a new Proceedings of the National Academy of Sciences study, that’s not going to change: Whether it’s by cancer or run-of-the-mill cell destruction, aging and death are mathematically inescapable.

In the paper published Monday, Joanna Masel, Ph.D., and Paul Nelson, Ph.D., both of the Department of Ecology and Evolutionary Biology at the University of Arizona, provide mathematical evidence that aging and eventual death must happen, no matter how we intervene in the aging process.

They explain that every cell in the body is tasked with two opposing missions: ensuring its own survival and supporting the organism it’s a part of. Masel and Nelson reason that this endless push and pull between those missions means that aging is unstoppable.

“If you have [no competition] or too little, then damaged cells accumulate and you get senescence,” Masel tells Inverse. “And if you have more than zero, then you get cancer. Either way, you get decreasing vitality with age.”

The team came to this conclusion by creating a mathematical model of cell competition within an organism. Cells in a human body, they explain, face a unique set of forces under the dynamic of competition: On one hand, cells need to work together for the body to function properly. But on the other hand, those cells must compete with each other for survival, and natural selection among those cells means that competition allows only the fittest cells to survive. This competition, the authors explain, results in cancer as the cells that inevitably find ways to game the system are the ones that end up growing uncontrollably.

When a human ages normally, the survival of any individual cell is sacrificed in the name of the organism’s health. In other words, a certain portion of each cell’s output is devoted to collective health instead of individual health. Ultimately, the triumph of cooperation over competition means that bodies accumulate dead or dying cells in a way that eventually leads to what we know as aging.
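
One way to see why no setting escapes the bind, in a deliberately crude toy formalization rather than the paper’s actual model, is to let c in [0, 1] stand for the intensity of competition between cells and write:

s(c) = s_0 (1 - c) \quad \text{(burden of damaged, senescent cells)}
k(c) = k_0\, c \quad \text{(burden of "cheater", cancer-prone lineages)}
v(c) = 1 - s(c) - k(c) \;\le\; 1 - \min(s_0, k_0) \;<\; 1 \quad \text{for every } c

With both s_0 and k_0 positive, no choice of c avoids some loss of vitality: suppress competition and damaged cells pile up, allow it and cheater lineages win, which is the double bind Masel describes.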

Natural selection is a process that’s more commonly linked with the genetic evolution of a population of individuals than of cells, but previous research has shown it plays a role in aging too, as the cells in your body need to survive and work together in order for a person to live. Nelson, a postdoc in Masel’s lab, says the new research makes an even stronger statement about how the process of natural selection affects human aging.

“Even if selection were perfect, we would still get aging because the cells in our body are evolving all the time,” says Nelson.


First interstellar object from beyond our solar system spotted by astronomers

Original Article

By Chloe Farand

Mysterious space rock passes near Earth at ‘extremely fast’ 15.8 miles per second

For the first time ever a comet or asteroid that likely originated from outside our solar system has passed close enough to Earth to be visible by astronomers.

The interstellar object has sparked huge enthusiasm from scientists who are urgently working to gather information on the mysterious body before it disappears from sight.

According to astronomers, the object is on a hyperbolic trajectory which suggests the body has escaped from a star from outside our solar system.
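
In standard orbital mechanics the reasoning runs as follows: an object’s specific orbital energy relative to the Sun (writing v for its speed, r for its distance from the Sun and M for the solar mass) determines whether it is bound:

\varepsilon = \frac{v^{2}}{2} - \frac{G M_{\odot}}{r}
\quad
\begin{cases}
\varepsilon < 0: & \text{elliptical (bound) orbit, eccentricity } e < 1 \\
\varepsilon = 0: & \text{parabolic orbit, } e = 1 \\
\varepsilon > 0: & \text{hyperbolic (unbound) orbit, } e > 1
\end{cases}

An object measured to have positive energy (e > 1) is not gravitationally bound to the Sun, so, barring a recent gravitational kick from a planet, it must have come from interstellar space and will return there.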

Early findings published by the International Astronomical Union’s Minor Planet Centre state: “If further observations confirm the unusual nature of this orbit, this object may be the first clear case of an interstellar comet.”

The mysterious object, named A/2017 U1, was discovered by the University of Hawaii’s Pan-STARRS 1 telescope on Haleakala, Hawaii.

Rob Weryk from the University of Hawaii’s Institute of Astronomy was the first to identify the moving object. Comparing his findings with images taken at the European Space Agency’s telescope on Tenerife in the Canary Islands, he concluded the object came from somewhere else in our galaxy.

The alien space rock, believed to have come from the direction of the constellation Lyra, is less than 400 metres in diameter and is travelling through space at a remarkable 15.8 miles (25.5 kilometres) per second.

A/2017 U1 passed through our inner solar system in September and October (NASA/JPL-Caltech)

Scientists have long believed in the existence of such interstellar objects, because huge amounts of material are thought to be ejected when planets form. However, this sighting is the first of its kind.

Paul Chodas, manager of NASA’s Centre for Near-Earth Object Studies (CNEOS), said: “We have been waiting for this for decades. It’s long been theorised that such objects exist – asteroids or comets moving around between the stars and occasionally passing through our solar system – but this is the first such detection. So far, everything indicates this is likely an interstellar object, but more data would help confirm it.”

New information obtained from observing the object could allow astronomers to know more about its origin and possibly its composition.

“This is the most extreme orbit I have ever seen,” said David Farnocchia of CNEOS, at NASA’s Jet Propulsion Laboratory in Pasadena, California.

“It is going extremely fast and on such a trajectory that we can say with confidence that this object is on its way out of the solar system and not coming back.”

The small body came closest to the Sun on 9 September before making a hairpin turn and passing under the Earth’s orbit on 14 October at a distance of about 15 million miles (24 million kilometres), or about 60 times the distance to the Moon.

Saudi Arabia’s Robot Love Is Getting Weird

Original Article

By Adam Clark Estes

In the latest example of “Philip K Dick-inspired nightmare becomes real life,” Saudi Arabia just became the first nation to grant citizenship to a robot. The robot’s name is Sophia. It is artificially intelligent, friends with CNBC’s Andrew Ross Sorkin, and, arguably, a glimpse into the dark future that will kill us all.

You see, the Kingdom of Saudi Arabia has been interested in androids for years. It seemed almost quaint at first. This desert nation with more money than caution and a taste for the futuristic was bound to explore the odd possibilities of new technologies. Years ago, Saudi Arabia began experimenting with robots boldly, tasking them with everything from building construction to brain surgery. Neighboring Qatar and United Arab Emirates even recruited robots to work as jockeys in camel races, a whimsical twist that surely fed the curiosity of Saudi princes.

Recently, however, Saudi Arabia’s affinity for robotics has taken a weird—even dark—turn. Ahead of granting Sophia citizenship, Saudi Crown Prince Mohammed bin Salman announced the construction of a new megacity called Neom. Designed to dwarf Dubai both in size and lavishness, the new metropolis is planned as an international business and tourism hub with fewer rules than the rest of Saudi Arabia. Women will be allowed in public without wearing an abaya, for instance. The city of Neom will also have more robots than humans.

“We want the main robot and the first robot in Neom to be Neom, robot number one,” the crown prince said in Riyadh. “Everything will have a link with artificial intelligence, with the Internet of Things—everything.”

This is basically the plot of I, Robot, a book that did not turn out well for the humans. And if we’re to assume that some of the robots in Neom will be artificially intelligent abominations like Sophia, mankind is definitely doomed. Even Sophia thinks so. Just watch this segment from The Tonight Show when the robot talks about its “plan to dominate the human race.”

Jokes aside, what’s especially dystopian about Saudi’s robot obsession is the extent to which the machines appear to have more rights than many people in the country. Critics on social media lambasted the Saudi government after it announced that Sophia had been granted citizenship. Images of Sophia at the Future Investment Initiative, where the citizenship announcement happened, showed the uncanny female automaton without a headscarf or an abaya. She was also without a male guardian. It would be a crime for a Saudi woman to be in public without an abaya or a male guardian.

You might argue that a robot can’t really be a female, which is true. However, Hanson Robotics, the company that built Sophia and is run by a former Disney Imagineer, dresses her in female clothing and says that she’s supposed to look like Audrey Hepburn (which is hilarious because she doesn’t look a thing like Audrey Hepburn). Sophia does look female, though, and now she’s a Saudi citizen with unique rights. It’s unclear what exactly those rights are, but freedom from gendered laws appears to be one of them.

For Saudi Arabia, diversifying the economy by pouring some of that oil money into tech makes sense, but it remains to be seen if the country plans to adopt more robots as citizens or if Neom will actually get built. The Saudi royal family hasn’t had a ton of luck with megaprojects like this in the past, the King Abdullah Economic City being the most recent example of unfulfilled promises. Neom might just remain a twinkle in Crown Prince Mohammed bin Salman’s eye.

Speaking of twinkles, take one last look at Sophia’s eyes. They are not okay. The sinister sparkling when it’s processing information looks worse than the red glow in the Terminator’s skull. It also serves as a tiny peek into a frightening future full of artificially intelligent beings, the capabilities of which we’ve barely pondered. In an interview with Andrew Ross Sorkin, new Saudi citizen Sophia actually took a swipe at Elon Musk and his warnings about AI:

“My AI is designed around human values like wisdom, kindness, compassion. I strive to become an empathetic robots (sic),” Sophia said.

“We all believe you but we all want to prevent a bad future,” Sorkin said.

“You’ve been reading too much Elon Musk. And watching too many Hollywood movies,” Sophia said. “Don’t worry, if you’re nice to me, I’ll be nice to you. Treat me as a smart input output system.”

Musk’s reply on Twitter was priceless.

Inspired by brain’s visual cortex, new AI utterly wrecks CAPTCHA security

Original Article

By John Timmer

 

A representation of how physically close feature recognition units are built hierarchically to create an object hypothesis. (Vicarious AI)

Computer algorithms have gotten much better at recognizing patterns, like specific animals or people’s faces, allowing software to automatically categorize large image collections. But we’ve come to rely on some things that computers can’t do well. Algorithms can’t match their image recognition to semantic meaning, so today you can ensure a human is present by asking them to pick out images of street signs. And algorithms don’t do especially well at recognizing when familiar images are distorted or buried in noise, either, which has kept us relying on text-based CAPTCHAs, the distorted text used to verify a human is interacting with Web services.

Or we had relied on them ’til now, at least. In today’s issue of Science, a Bay Area startup called Vicarious AI describes an algorithm they created that is able to take minimal training and easily handle CAPTCHAs. It also managed general text recognition. Vicarious’ secret? They modeled the structure of their AI on information we’ve gained from studying how the mammalian visual cortex processes images.

Thinking visually

In the visual cortex, different groups of neurons recognize features like edges and surfaces (and others identify motions, which aren’t really relevant here). But rather than viewing a scene or object as a collection of these parts, the neurons start communicating among each other, figuring out by proximity which features are part of a single object. As objects are built up and recognized, the scene is built hierarchically based on objects instead of individual features.

The result of this object-based classification is that a similar collection of features can be recognized even if they’re in a different orientation or are partly obscured, provided that the features that are visible have the same relative orientations. That’s why we can still recognize individual letters if they’re upside down, backwards, and buried in a noisy background. Or, to use Vicarious’ example, why we can still tell that a chair made of ice is a chair.

To try to mimic the brain’s approach, the team created what they’re calling a Recursive Cortical Network, or RCN. A key step is the identification of contours, features that define edges of an object as well as internal structures. Another set of agents pull out surface features, such as the smoothness of a surface defined by these contours. Collections of these recognized properties get grouped into pools based on physical proximity. These pools then establish connections with other pools and pass messages to influence the other’s feature choices, creating groups of connected features.

Groups of related features get built up hierarchically through a similar process. At the top of these trees are collections of connected features that could be objects (the researchers refer to them as “object hypotheses”). To parse an entire scene with a collection of objects, the RCN undergoes rounds of message passing. The RCN creates a score for each hypothesis and revisits the highest ranked scores to evaluate them in light of other hypotheses in the same scene, ensuring that they all occupy a contiguous 2D space. Once an object hypothesis has been through a few rounds of this selection, it can typically recognize its object despite moderate changes in size and orientation.
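
The description above compresses a lot of machinery, so here is a deliberately tiny Python sketch of just two of the ideas it mentions: low-level features are detected individually, and an “object hypothesis” is scored by how many of its expected features turn up in roughly the right relative positions. This is an illustrative toy written for this summary, not Vicarious’s Recursive Cortical Network; every name, template and number in it is an assumption.

```python
from itertools import product

def detect_features(image):
    """Coordinates of 'on' pixels, standing in for low-level edge features."""
    return {(r, c) for r, row in enumerate(image) for c, v in enumerate(row) if v}

def score_hypothesis(features, template, offset, tolerance=1):
    """Fraction of the template's features found near their expected positions."""
    hits = 0
    for dr, dc in template:
        expected = (offset[0] + dr, offset[1] + dc)
        if any(abs(expected[0] - r) <= tolerance and abs(expected[1] - c) <= tolerance
               for r, c in features):
            hits += 1
    return hits / len(template)

# Template for a toy letter "L": a vertical stroke plus a short horizontal stroke.
L_TEMPLATE = [(r, 0) for r in range(4)] + [(3, c) for c in range(1, 3)]

# Draw the letter, shifted by (2, 3), into a small binary image and add a stray noise pixel.
image = [[0] * 8 for _ in range(8)]
for dr, dc in L_TEMPLATE:
    image[dr + 2][dc + 3] = 1
image[0][7] = 1

features = detect_features(image)
score, offset = max(
    (score_hypothesis(features, L_TEMPLATE, (r, c)), (r, c))
    for r, c in product(range(8), range(8))
)
print(f"best hypothesis: score {score:.2f} at offset {offset}")
# The top-scoring hypotheses cluster around the true offset (2, 3),
# despite the shift and the stray noise pixel.
```

The real system does this hierarchically, with surface features and rounds of message passing between pools; the tolerance-based matching above is only the intuition for why a shifted or distorted letter can still be recognized.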

High efficiency

The remarkable thing about the training is its efficiency. When the authors decided to tackle reCAPTCHAs, they simply compared some examples to the set of fonts available on their computer. Settling on the Georgia font as a reasonable approximation, they showed RCN five examples each of partial rotations for all the upper and lower case letters. At a character level, this was enough to provide over 94 percent letter recognition accuracy. That added up to solving the reCAPTCHA two-thirds of the time. Human accuracy stands at 87 percent, and the system is considered useless from a security standpoint if software can pass it even one percent of the time.
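
As a rough sanity check on how a per-character figure of 94 percent turns into a two-thirds success rate on whole challenges, the arithmetic below assumes a typical challenge length of around seven characters and independent character errors; both are assumptions made for illustration, not figures from the paper.

```python
# Whole-challenge success rate if each character is read correctly ~94% of the
# time and character errors are independent.
per_char = 0.94
for n in (6, 7, 8):
    print(f"{n} characters: {per_char ** n:.3f}")
# 6 characters: 0.690, 7 characters: 0.648, 8 characters: 0.610
# i.e. roughly two-thirds for plausible challenge lengths.
```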

And it’s not just reCAPTCHA. This system managed the BotDetect system with similar accuracy and Yahoo and PayPal systems with 57 percent accuracy. The only differences involved are the fonts used and some hand-tweaking of a few parameters that adjust for the deformations and background noise in the different systems. By contrast, other neural networks have needed on the order of 50,000 solved CAPTCHAs for training compared to RCN’s 260 images of individual characters. Those neural networks will typically have to be retrained if the security service changes the length of its string or alters the distortion it uses.

To adapt RCN to work with text in real-world images, the team provided it with information about the co-appearance of letters and the frequency of word use, as well as the ability to analyze geometry. It ended up beating the top-performing model by about 1.9 percent. Again, not a huge margin, but this system managed with far less training—the leading contender had been trained on 7.9 million images compared to RCN’s 1,406. Not surprisingly, RCN’s internal data representation was quite a bit smaller than its competitor’s.

This efficiency is a bit of a problem, as it lowers the hardware bar that must be cleared in order to overcome a major security feature of a variety of websites.

More generally, this could be a big step for AI. As with the Go-playing software, this isn’t a generalized AI. While it’s great for identifying characters, it doesn’t know what they mean, it can’t translate them into other languages, and it won’t take any actions based on its identifications. But RCN suggests that AI doesn’t need to be completely abstracted from actual intelligence—the insights we gain from studying real brains can be used to make our software more effective. For a while, AI has been advancing by throwing more powerful hardware, deeper pipelines, and bigger data sets at problems. Vicarious has shown that returning to AI’s original inspiration might not be a bad idea.

Science, 2017. DOI: 10.1126/science.aag2612  (About DOIs).


“Our Universe Should Actually Not Exist” –CERN Scientists Attempt to Find Out Why It Does

Original Article


“All of our observations find a complete symmetry between matter and antimatter, which is why the universe should not actually exist,” explained Christian Smorra, with the BASE collaboration at the CERN research center. “An asymmetry must exist here somewhere but we simply do not understand where the difference is. What is the source of the symmetry break?”

The search goes on. No difference between protons and antiprotons has yet been found that would help explain the existence of matter in our universe. However, physicists in the BASE collaboration at the CERN research center have been able to measure the magnetic force of antiprotons with almost unbelievable precision. Nevertheless, the data do not provide any information about how matter survived in the early universe, since perfectly matched particles and antiparticles would have completely destroyed one another.

The most recent BASE measurements revealed instead a large overlap between protons and antiprotons, thus confirming the Standard Model of particle physics. Around the world, scientists are using a variety of methods to find some difference, regardless of how small. The matter-antimatter imbalance in the universe is one of the hot topics of modern physics.

The multinational BASE collaboration at the European research center CERN brings together scientists from the RIKEN research center in Japan, the Max Planck Institute for Nuclear Physics in Heidelberg, Johannes Gutenberg University Mainz (JGU), the University of Tokyo, GSI Darmstadt, Leibniz Universität Hannover, and the German National Metrology Institute (PTB) in Braunschweig. They compare the magnetic properties of protons and antiprotons with great precision. The magnetic moment is an essential component of particles and can be depicted as roughly equivalent to that of a miniature bar magnet. The so-called g-factor measures the strength of this magnetic moment.

“At its core, the question is whether the antiproton has the same magnetism as a proton,” explained Stefan Ulmer, spokesperson of the BASE group. “This is the riddle we need to solve.”

The BASE collaboration published high-precision measurements of the antiproton g-factor back in January 2017 but the current ones are far more precise. The current high-precision measurement determined the g-factor down to nine significant digits. This is the equivalent of measuring the circumference of the earth to a precision of four centimeters. The value of 2.7928473441(42) is 350 times more precise than the results published in January.
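
The Earth-circumference comparison is easy to check at the order-of-magnitude level. The sketch below uses a standard value for Earth’s equatorial circumference; it is an illustration of the arithmetic, not a calculation from the BASE paper.

```python
# Relative precision implied by 2.7928473441(42): the bracketed "(42)" is the
# uncertainty on the last two digits of the value.
g = 2.7928473441
sigma = 0.0000000042
relative = sigma / g                      # roughly 1.5 parts per billion
earth_circumference_m = 40_075_000        # equatorial circumference, metres
print(f"relative precision: {relative:.1e}")
print(f"scaled to Earth's circumference: {relative * earth_circumference_m * 100:.0f} cm")
# Comes out at the few-centimetre level, the same order as the comparison above.
```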

“This tremendous increase in such a short period of time was only possible thanks to completely new methods,” said Ulmer. The process involved scientists using two antiprotons for the first time and analyzing them with two Penning traps.

Antiprotons are artificially generated at CERN and researchers store them in a reservoir trap for experiments. The antiprotons for the current experiment were isolated in 2015 and measured between August and December 2016 – something of a sensation in itself, as this was the longest documented storage period for antimatter. Antiprotons are usually quickly annihilated when they come into contact with matter, such as in air. Storage was demonstrated for 405 days in a vacuum, which contains ten times fewer particles than interstellar space. A total of 16 antiprotons were used and some of them were cooled to approximately absolute zero, or minus 273 degrees Celsius.

The new principle uses the interaction of two Penning traps. The traps use electrical and magnetic fields to capture the antiprotons. Previous measurements were severely limited by an ultra-strong magnetic inhomogeneity in the Penning trap. In order to overcome this barrier, the scientists added a second trap with a highly homogeneous magnetic field.

“We thus used a method developed at Mainz University that created higher precision in the measurements,” explained Ulmer. “The measurement of antiprotons was extremely difficult and we had been working on it for ten years. The final breakthrough came with the revolutionary idea of performing the measurement with two particles.” The Larmor frequency and the cyclotron frequency were measured; taken together they form the g-factor.

The g-factor ascertained for the antiproton was then compared to the g-factor for the proton, which BASE researchers had already measured with record precision in 2014. In the end, however, they could not find any difference between the two. This consistency is a confirmation of CPT symmetry, which holds that there is a fundamental symmetry between particles and antiparticles.

The BASE scientists now want to use even higher-precision measurements of the proton and antiproton properties to find an answer to this question. The BASE collaboration plans to develop further innovative methods over the next few years and improve on the current results.

The image at the top of the page shows a small galaxy, called Sextans A, in a multi-wavelength mosaic captured by the European Space Agency’s Herschel mission, in which NASA is a partner, along with NASA’s Galaxy Evolution Explorer (GALEX) and the National Radio Astronomy Observatory’s Jansky Very Large Array observatory near Socorro, New Mexico. The galaxy is located 4.5 million light-years from Earth in the Sextans constellation.
The environment in this galaxy is similar to that of our infant universe because it is lacking in heavy metals, or elements heavier than hydrogen and helium. In this image, the purple shows gas, blue shows young stars, and the orange and yellow dots are newly formed stars heating up dust. (ESA/NASA/JPL-Caltech/NRAO)

The Daily Galaxy via Johannes Gutenberg Universitaet

It’s Now Legal to Liquefy a Dead Body in California

Original Article

By Yasmin Tayag

On Sunday, California Governor Jerry Brown signed AB 967, an innocuously named bill for a not-so-innocuous law. The bill, proposed by assembly member Todd Gloria, a San Diego Democrat, will make it legal for Californians to have their corpses liquefied after death in a bath of caustic juice.

The process, referred to as water cremation (or aquamation, resomation, bio-cremation, or flameless cremation), has been proposed as a much more environmentally friendly way to dispose of a body after death. The bill is sponsored by Qico, Inc., a “sustainable cremation” company that specializes in this form of corpse disposal, and it will go into effect on July 1, 2020.

“A lot of people view water cremation as a more respectful option and we’re glad a lot of people will be able to have it,” Jack Ingraham, the CEO of Qico, tells Inverse. “We think this is a trend for the future. I think within 10 years to 20 years, cremation will be thought of as a water-based process, and the entire flame process will be replaced.”

Unfortunately, no actual liquid is returned to the survivors, only the remaining calcium, or the bones. “These are crushed into the ashes returned to the family,” says Ingraham, who adds that the process also results in about 20-30 percent more “ashes” being returned to the family. So while you can’t drink Uncle Frank, you will get more of his ashes.

These days, the only mainstream options available are burial or cremation, neither of which is especially green; coffins take up a lot of valuable space and are made of slowly biodegrading wood, and cremation requires reaching temperatures of up to 1800 degrees Fahrenheit, which isn’t exactly energy efficient. Then there’s the option of sending a dead body to space in a rocket, which is not green, for obvious reasons.

Aquamation, in contrast, dissolves a body, DNA and all, in a vat of liquid into a relatively unharmful solution of slightly alkaline water that can be neutralized and returned to the Earth. California is the latest state to make the procedure legal, joining 14 others.

The chemical process behind aquamation is called alkaline hydrolysis, which involves placing a body in a solution of potassium hydroxide and water that’s heated to about 200 degrees Fahrenheit, a temperature slightly below boiling, and waiting for it to dissolve.

Potassium hydroxide, often referred to as potash or lye, is a common chemical used in manufacturing soft soap and biodiesel. Its defining quality is that it’s chemically alkaline, which means that it’s packed with oxygen-hydrogen pairs known as hydroxide groups. In strong enough concentrations, hydroxides can dissolve organic solids into liquids; it’s essentially the same process that happens when you pour Drano into a sink clogged with fat or hair.

In aquamation, raising the temperature and pressure helps the process move along faster. Usually, it takes about four hours to dissolve a body. By the end of the process, the only solid thing that’s left is a pile of soft bones (potassium hydroxide won’t eat through calcium phosphate) that gets crushed into a sterile powder for family members of the deceased to take home.

As for the flesh, blood, and guts? Everything else gets dissolved into a green-brown liquid that’s slightly less basic than it was at the start of the process. What starts as a solution with a very strongly alkaline pH of 14 (the most basic possible) ends up somewhere around pH 11. Truly neutral water has a pH of about 7, so technicians sometimes add an acidic substance, like vinegar, to balance out all the excess hydroxides floating around.
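
The pH figures translate directly into hydroxide concentrations via the standard relation [OH-] = 10^(pH - 14) mol/L at room temperature; the short check below is included only to make the “slightly less basic, but still far from neutral” point concrete.

```python
# Hydroxide concentration at the pH values mentioned above (25 degrees C).
for ph in (14, 11, 7):
    print(f"pH {ph}: [OH-] = {10 ** (ph - 14):.0e} mol/L")
# Dropping from pH 14 to pH 11 is a thousandfold decrease in hydroxide,
# yet the solution is still ten thousand times more alkaline than neutral water.
```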

It’s “what happens in a natural burial in the ground, just in a faster time frame,” Ingraham says.

The process is already a popular way to dispose of a dead pet’s body; not only is it less energy-intensive than other methods, but it also kills potentially life-threatening pathogens, like viruses, bacteria, and prions that cause transmissible spongiform encephalopathy (the type that causes mad cow disease), which aren’t always inactivated by heat.

The thought of liquefying a body is pretty weird, but California is not the first state to make it legal: Oregon, Minnesota, Maryland, Maine, Kansas, Illinois, Florida, Colorado, Georgia, Wyoming, Idaho, and Nevada have already joined the ranks of the corpse dissolution supporters. It’s something we’d better get used to in the long run. The world is running out of space, both for living and dead bodies, so it’s in our best interest to figure out what to do with all of our future corpses. Besides, if humans aren’t going to do anything good for the Earth while we’re alive, we might as well find a way to do so in death.

What’s next for aquamation in California? Ingraham says his two-year-old company expects to have their technology ready by 2019 and to be in agreement with state regulators by then as well. Meanwhile, he’s hopeful that demand will grow for this new technology, which he expects will cost a little more than traditional cremation, with the final price ultimately set by funeral homes.

While you can’t scatter traditional ashes at Venice Beach because they’re relatively toxic — they’re ashes, after all — you won’t have those restrictions with the result of a water cremation, Ingraham says.

“When people hear about it they tend to prefer it,” he says, noting that the white “ashes” from water-based cremation can be scattered in more places.

New All-Seeing Billboard Uses Hidden Cameras to Show Ads Based on Age, Emotions

Original Article

By Sidney Fussell

London’s famous Piccadilly Circus is getting an immense and terrifying new video display called Piccadilly Lights. According to its maker, the enormous screen (which is almost the size of two professional basketball courts) can detect the vehicles, ages, and even emotions of people nearby, and respond by playing targeted ads. Imagine New York’s Times Square with a makeover from John Carpenter’s They Live—but without any pretense of deception.

“Screen content can be influenced by the characteristics of the crowd around it, such as gender, age group and even emotions,” Landsec, which owns the screen, brags on its site. “It is also able to respond and deliver bespoke ad content triggered by surroundings in the area.”

A write-up of Piccadilly Lights by Wired specifically focuses on the advertising potential of passing cars:

Cameras concealed within the screen will track the make, model and colour of passing cars to deliver more targeted adverts. Brands can even pre-program triggers so that specific adverts are played when a certain model of car passes the screen, according to Landsec, the company that owns the screens.

According to the magazine, the screen and its hidden cameras won’t go live until later this month, but Landsec’s original press release contains more than enough dystopian marketing spin to start worrying now. In it, Piccadilly Lights is praised as a “live, responsive site” with “one of the highest resolution LED displays of this size in the world.” The hidden cameras go unmentioned, of course, but the installation is advertised as “creating experiences that emotionally resonate” using “social listening” so it can “be more agile and tailor our messages in real-time.”

Make no mistake, however, this is an enormous consumer surveillance apparatus that is being advertised as a way to monitor a public space to sell people TVs and sports bras. Adding to the creep factor, most of this tech is already being used by police to track and surveil suspects.

Police departments currently use object recognition to spot the make and model of cars. And back in February, the company formerly known as Taser announced their body cameras will soon recognize and sort people in real-time based on their age, gender and even what they’re wearing. Emotion detection has been touted as a way to predict violent attacks, as has monitoring Twitter and Facebook for keywords that may betray a threat or implicate criminals. Bill Bratton, who at different times in his life has led the NYPD and LAPD, said last year that social media often “forms the foundation” of New York City’s criminal cases against suspects.

Responding to The Verge, a Landsec spokesperson said the screen can react to “external factors,” but wouldn’t collect or store personal data. That’s reassuring, but it would certainly be valuable to advertisers (who are shelling out big money to be featured on this uber-screen) to know which ads people are responding to and what type of people (based on age, gender, and car model) responded to each ad.

Landsec gives the examples of cars, age and gender, but what else can their cameras spot? Presumably, if there are four Lamborghinis in the area, that means rich people with disposable income are nearby. Can the apparatus make similar income and lifestyle judgements based on factors like skin color and body type? Imagine realizing the 400-foot ad for a dieting campaign was meant specifically for you.

Emotion recognition is the wildcard in all this. Disney, for example, is using face recognition to spot smiles and frowns among moviegoers. How does Landsec do it? Does it similarly scan faces? Or does it use body language? What do four angry faces and a smile mean to the all-seeing eye of capitalism? Landsec could save us all some stress and tell us more about how it works and what it looks for.

We’ve reached out to Landsec for comment and will update this story if and when we hear back. Until then, it’s easy to see this as just another step in surveillance capitalism’s death march to tracking every move we make.

 

Half the Universe’s Missing Matter Has Just Been Finally Found

Original Article

By Leah Crane

The missing links between galaxies have finally been found. This is the first detection of the roughly half of the normal matter in our universe – protons, neutrons and electrons – unaccounted for by previous observations of stars, galaxies and other bright objects in space.

You have probably heard about the hunt for dark matter, a mysterious substance thought to permeate the universe, the effects of which we can see through its gravitational pull. But our models of the universe also say there should be about twice as much ordinary matter out there, compared with what we have observed so far.

Two separate teams found the missing matter – made of particles called baryons rather than dark matter – linking galaxies together through filaments of hot, diffuse gas.

“The missing baryon problem is solved,” says Hideki Tanimura at the Institute of Space Astrophysics in Orsay, France, leader of one of the groups. The other team was led by Anna de Graaff at the University of Edinburgh, UK.

Because the gas is so tenuous and not quite hot enough for X-ray telescopes to pick up, nobody had been able to see it before.

“There’s no sweet spot – no sweet instrument that we’ve invented yet that can directly observe this gas,” says Richard Ellis at University College London. “It’s been purely speculation until now.”

So the two groups had to find another way to definitively show that these threads of gas are really there.

Both teams took advantage of a phenomenon called the Sunyaev-Zel’dovich effect that occurs when light left over from the big bang passes through hot gas. As the light travels, some of it scatters off the electrons in the gas, leaving a dim patch in the cosmic microwave background – our snapshot of the remnants from the birth of the cosmos.

Stack ‘em up

In 2015, the Planck satellite created a map of this effect throughout the observable universe. Because the tendrils of gas between galaxies are so diffuse, the dim blotches they cause are far too slight to be seen directly on Planck’s map.

Both teams selected pairs of galaxies from the Sloan Digital Sky Survey that were expected to be connected by a strand of baryons. They stacked the Planck signals for the areas between the galaxies, making the individually faint strands detectable en masse.
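
A minimal numerical sketch of why stacking works is shown below. The patch size, pair count, noise level and signal amplitude are all invented for illustration; nothing here comes from the actual Planck or Sloan data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 50_000              # stand-in for the hundreds of thousands of galaxy pairs
patch = 32                    # pixels per side of each cutout

# A faint filament-like ridge across the middle of each patch, far below the noise.
signal = np.zeros((patch, patch))
signal[patch // 2 - 1: patch // 2 + 1, :] = 0.01

stack = np.zeros((patch, patch))
for _ in range(n_pairs):
    stack += signal + rng.normal(0.0, 1.0, size=(patch, patch))
stack /= n_pairs

# Per-pixel noise in the stack falls roughly as 1/sqrt(n_pairs) (~0.004 here),
# so the 0.01-level ridge invisible in any single cutout shows up in the average.
ridge = stack[patch // 2 - 1: patch // 2 + 1, :].mean()
background = np.delete(stack, [patch // 2 - 1, patch // 2], axis=0).mean()
print(f"ridge: {ridge:.4f}   background: {background:.4f}")
```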

Tanimura’s team stacked data on 260,000 pairs of galaxies, and de Graaff’s group used over a million pairs. Both teams found definitive evidence of gas filaments between the galaxies. Tanimura’s group found they were almost three times denser than the mean for normal matter in the universe, and de Graaff’s group found they were six times denser – confirmation that the gas in these areas is dense enough to form filaments.

“We expect some differences because we are looking at filaments at different distances,” says Tanimura. “If this factor is included, our findings are very consistent with the other group.”

Finally finding the extra baryons that have been predicted by decades of simulations validates some of our assumptions about the universe.

“Everybody sort of knows that it has to be there, but this is the first time that somebody – two different groups, no less – has come up with a definitive detection,” says Ralph Kraft at the Harvard-Smithsonian Center for Astrophysics in Massachusetts.

“This goes a long way toward showing that many of our ideas of how galaxies form and how structures form over the history of the universe are pretty much correct,” he says.

Different Meditation Practices Reshape Brain in Different Ways

Original Article

By Tereza Pultarova

Credit: Mooshny/Shutterstock

Different types of meditation change the brain in different ways, a new study finds.

In one of the largest studies on meditation and the human brain to date, a team of neuroscience researchers at the Max Planck Institute of Human Cognitive and Brain Sciences in Germany examined 300 participants in a nine-month meditation program. The project, called ReSource, consisted of three periods of three months each. During this program, the participants each practiced three different types of meditation, focused on improving attention, compassion or cognitive skills.

At the beginning of the program, and then again at the end of each three-month period, the researchers took measurements of the participants’ brains using a variety of techniques, including magnetic resonance imaging (MRI). The researchers found that not only did certain brain regions change substantially within the three-month periods, but these regions also changed differently based on the type of meditation the participants had practiced. [Mind Games: 7 Reasons You Should Meditate]

“We were surprised [by] how much can actually happen in three months, because three months isn’t that long,” said Veronika Engert, a neuroscience researcher at Max Planck. Engert was the lead author of one of two papers published on Oct. 4 by the research group in the journal Science Advances.

Engert told LiveScience that while changes in brain structure after intensive meditation programs have been observed before, this is the first time that researchers could clearly see the changes that followed a period of practicing a specific type of meditation.

The participants were divided into three groups, and practiced each type of meditation in a different order. This allowed the researchers to more reliably link the changes in the brain to the type of meditation that was being practiced.

For example, in one part of the study, a group of participants was asked to practice mindfulness-based attention for 30 minutes daily six days a week for three months. During this type of meditation, the participants were taught to focus on their breath with their eyes closed or to monitor tension in their bodies. At the end of the three-month period, the participants showed thickening in the prefrontal cortex of the brain, an area involved in complex thinking, decision-making and attention, Engert said.

After the three-month session that focused on mindfulness, that group moved on to types of meditation focused on developing social skills such as compassion and understanding a situation from the perspective of another person. As with the first session, the researchers observed different changes in the people’s brains after each of the next two sessions.

“If people train [in the skill of] perspective-taking, we see changes in brain regions that are important for these cognitive processes,” Engert said. Or, if people focus on affect, or emotion, “then we see changes in brain regions that are important for emotional regulation,” she said.

But the participants’ brains weren’t the only things that were changing. The researchers also observed changes in the behavior of the participants, and these changes matched up with the changes in their brains.

In another part of the study, the researchers measured how the participants responded to a stressful situation similar to a job interview or an exam. The scientists found that all respondents who were practicing meditation reported feeling less stressed than people who were not meditating. However, only those participants practicing compassion and perspective-taking showed consistently lower levels of the stress hormone cortisol in their saliva after the stressful situation, according to Engert.

“After this type of a stress test we usually see that cortisol rises after about 20 minutes,” said Engert. “This rise in cortisol was lower by 51 percent in those subjects who had the social training.”

One limitation of the study was that the participants included only healthy people who did not have any type of mental health condition. Engert said the researchers haven’t looked at whether meditation could be used to, for example, help people suffering from depression or anxiety. However, Engert said, considering the fact that stress is a major contributor to a wide range of diseases that plague the modern world, the findings could help tailor approaches that could be used as preventive measures. Stress, according to Engert, contributes not only to the development of depression but also to cardiovascular and metabolic diseases.

In addition, the findings could help researchers develop tailored training programs for specific areas of the brain to help people perform better in various areas of their lives, she said. However, more research is needed to understand exactly how such programs affect the brain.

The team will now focus on studying the effects of the three mind-training techniques on children and people working in highly stressful professions, Engert said.

Originally published on Live Science.

New Observations Deepen Mystery of “Alien Megastructure” Star

Original Article

By Mike Wall

Artist’s illustration depicting a hypothetical dust ring orbiting KIC 8462852, also known as Boyajian’s Star or Tabby’s Star. Credit: NASA/JPL-Caltech

There’s a prosaic explanation for at least some of the weirdness of “Tabby’s star,” it would appear.

The bizarre long-term dimming of Tabby’s star—also known as Boyajian’s star, or, more formally, KIC 8462852—is likely caused by dust, not a giant network of solar panels or any other “megastructure” built by advanced aliens, a new study suggests.

Astronomers came to this conclusion after noticing that this dimming was more pronounced in ultraviolet (UV) than infrared light. Any object bigger than a dust grain would cause uniform dimming across all wavelengths, study team members said. [13 Ways to Hunt Intelligent Aliens]

“This pretty much rules out the alien megastructure theory, as that could not explain the wavelength-dependent dimming,” lead author Huan Meng of the University of Arizona said in a statement. “We suspect, instead, there is a cloud of dust orbiting the star with a roughly 700-day orbital period.”

STRANGE BRIGHTNESS DIPS

KIC 8462852, which lies about 1,500 light-years from Earth, has generated a great deal of intrigue and speculation since 2015. That year, a team led by astronomer Tabetha Boyajian (hence the star’s nicknames) reported that KIC 8462852 had dimmed dramatically several times over the past half-decade or so, once by 22 percent.

No orbiting planet could cause such big dips, so researchers began coming up with possible alternative explanations. These included swarms of comets or comet fragments, interstellar dust and the famous (but unlikely) alien-megastructure hypothesis.

The mystery deepened after the initial Boyajian et al. study. For example, other research groups found that, in addition to the occasional short-term brightness dips, Tabby’s star dimmed overall by about 20 percent between 1890 and 1989. In addition, a 2016 paper determined that its brightness decreased by 3 percent from 2009 to 2013.

The new study, which was published online Tuesday (Oct. 3) in The Astrophysical Journal, addresses such longer-term events.

From January 2016 to December 2016, Meng and his colleagues (who include Boyajian) studied Tabby’s star in infrared and UV light using NASA’s Spitzer and Swift space telescopes, respectively. They also observed it in visible light during this period using the 27-inch-wide (68 centimeters) telescope at AstroLAB IRIS, a public observatory near the Belgian village of Zillebeke.

The observed UV dip implicates circumstellar dust—grains large enough to stay in orbit around Tabby’s star despite the radiation pressure but small enough that they don’t block light uniformly in all wavelengths, the researchers said.

MYSTERIES REMAIN

The new study does not solve all of KIC 8462852’s mysteries, however. For example, it does not address the short-term 20 percent brightness dips, which were detected by NASA’s planet-hunting Kepler space telescope. (Kepler is now observing a different part of the sky during its K2 extended mission and will not follow up on Tabby’s star for the foreseeable future.)

And a different study—led by Joshua Simon of the Observatories of the Carnegie Institution for Science in Pasadena, California—just found that Tabby’s star experienced two brightening spells over the past 11 years. (Simon and his colleagues also determined that the star has dimmed by about 1.5 percent from February 2015 to now.)

“Up until this work, we had thought that the star’s changes in brightness were only occurring in one direction—dimming,” Simon said in a statement. “The realization that the star sometimes gets brighter in addition to periods of dimming is incompatible with most hypotheses to explain its weird behavior.”

You can read the Simon et al. study for free at the online preprint site arXiv.org.

Researchers Claim to Have Found Proof We Are Not Living In A Simulation

Original Article

By Cheyenne MacDonald

It’s a question that has persisted in science fiction and philosophical discussion alike: are we living in a computer simulation?

Scientists have long argued both sides of the theory, with some even suggesting if we did live in a simulated reality, we’d never know the truth.

But now, a new study could finally put the debate to rest.

Theoretical physicists have discovered that it is impossible, in principle, to simulate a quantum phenomenon that occurs in metals – and, ultimately, something as complex as the entire universe.


Scientists have long argued both sides of the theory, with some even suggesting if we did live in a simulated reality, we’d never know the truth anyway. But now, a new study could finally put the debate to rest. A stock image is pictured

In a new study published in the journal Science Advances, the team from the University of Oxford and the Hebrew University used a technique known as Monte Carlo simulation to investigate a phenomenon said to be a gravitational anomaly.

The effect, called thermal Hall conductance, can be seen in systems with high magnetic fields and low temperatures.

But in their work, the researchers found that the simulation is unable to capture a system with gravitational anomalies, such as the quantum Hall effect.

As the number of particles required for the simulation increased, the researchers found the simulation itself became far more complex.
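
The study’s technical argument is about how the Monte Carlo method itself breaks down, but the general flavour of the problem can be conveyed with a cruder observation: merely storing the quantum state of n interacting two-level particles on a classical computer takes memory that grows exponentially with n. The figures below are a generic illustration of that scaling, not numbers from the paper.

```python
# Memory needed for a full state vector of n two-level particles,
# at 16 bytes per complex amplitude (a generic illustration of exponential
# growth, not the sign-problem analysis in the study).
BYTES_PER_AMPLITUDE = 16
for n in (20, 30, 40, 50):
    gib = (2 ** n) * BYTES_PER_AMPLITUDE / 2 ** 30
    print(f"{n} particles: {gib:,.2f} GiB")
# 30 particles already needs ~16 GiB; 50 particles needs roughly 16 million GiB.
```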


 

DNA From Old Skeleton Suggests Humanity’s Been Here Longer Than We Thought

Original Article

By John Timmer

Our family tree, with the dates inferred from this new data. Note how many major branches there are within Africa, and the recent exchange of DNA at the bottom.
Schlebusch et al., Science

When did humanity start? It’s proven to be a difficult question to answer. Anatomically modern humans have a distinct set of features that are easy to identify on a complete skeleton. But most old skeletons are partial, making identification a challenge. Plus, other skeletons were being left at the same time by pre-modern (or archaic) human relatives, such as the Neanderthals in Eurasia and other archaic groups in Africa. While Neanderthals et al. have distinct features as well, we don’t always have a good idea how variable those features were in these populations.

So, when a recent paper argued that a semi-modern skull meant that humanity was older than we thought, some people dismissed it as an overhyped finding.

All around Africa

Genetics and paleontology have both agreed that Africa gave rise to modern humans. The earliest clearly modern skeletons are found there, and genetics has suggested that a group of African hunter-gatherers represents the earliest ethnic group on Earth. This group, the Khoe-San, have the most genetic diversity of any human population we’ve sampled. Since diversity accumulates with time, this implies they’re the oldest. Thus, it appears that the Khoe-San were the earliest group to branch off the modern human family tree and survive to the present.

Given some measures—like the frequency of mutations and the typical time for each generation of humans to reproduce—it’s possible to use that diversity to estimate the age of the Khoe-San split at between 100,000 and 150,000 years ago. Humanity as a whole, therefore, has to be at least that old. When first estimated, it was consistent with the appearance of modern human skeletal features in the paleontological record. So nearly everyone was happy.
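
The logic behind such estimates can be sketched in a few lines: along two lineages, sequence differences accumulate at roughly twice the per-generation mutation rate, so the split time in generations is approximately the per-site divergence divided by twice that rate, then converted to years with a generation time. In the sketch below the mutation rate and generation time are commonly used ballpark values and the divergence figure is purely illustrative; none of the numbers are taken from the studies discussed here.

```python
# Back-of-the-envelope split-time estimate from sequence divergence.
mu = 1.25e-8            # mutations per site per generation (ballpark value)
generation_years = 29   # often-assumed human generation time, in years
divergence = 1.1e-4     # per-site divergence between two lineages (illustrative)

split_generations = divergence / (2 * mu)
split_years = split_generations * generation_years
print(f"split time: ~{split_years:,.0f} years ago")
# With these inputs: ~127,600 years. The point of the exercise is that the
# answer moves substantially if the assumed mutation rate, generation time,
# or measured diversity changes.
```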

But more recently, there have been finds like the skeleton mentioned above. And others have questioned whether the Khoe-San had such a neat genetic split from the rest of us. The region of southwest Africa they inhabit was swept through by the immense Bantu expansion, which spread agriculture and Iron Age technology throughout sub-Saharan Africa. If some Bantu DNA ended up spreading into the Khoe-San population, then our estimates would be off.

 

(The team also sequenced DNA from four Iron Age African skeletons at the same time and showed that the Bantu didn’t just bring technology; they carried genetic variants that provided some resistance to malaria and sleeping sickness. These were absent from the Stone Age skeletons.)

You look old

The authors only got one decent-quality genome out of the three Stone Age bones from which they obtained DNA. But that skeleton clearly groups with the Khoe-San genetically, indicating that the researchers’ expectations about its affinities were correct. A comparison with modern Khoe-San genomes, however, indicated that the modern ones have gotten contributions from an additional human lineage. All indications are that this DNA originated in East Africa and came from a population that had already been interbreeding with Eurasians.

This doesn’t mean that the Khoe-San aren’t the oldest lineage of humanity, but it does mean that they haven’t been genetically isolated from the rest of us. Which would throw off the date of their split from all of humanity’s other lineages.

So, how old are they? Comparing the Stone Age genome with other modern human genomes produces values of 285,000 to 365,000 years. The most extreme split is with the Mandinka, a population that currently occupies much of West Africa; the date of that appears to be 356,000 years.

Again, the Khoe-San are modern humans. And if they split off that long ago, then modern humans have existed for at least that long. And that’s substantially older than earlier genetic estimates.

But there are caveats. These estimates are very sensitive to the frequency at which new mutations arise in human lineages, as well as the typical human generation time. Both of those values have been in dispute in recent years. If the field arrives at a different consensus value, then these estimates will change. The authors also point out it’s possible that the split looks older because the ancestors of the Khoe-San had interbred with a population of archaic humans, much as the ancestor of non-Africans interbred with Neanderthals. That possibility’s going to be hard to exclude.

In the big picture of human evolution, a date of roughly 300,000 years ago would place the origin of modern humans almost halfway between the present and when Neanderthals and Denisovans split off from our lineage. It also happens to be about the same time as the appearance of Middle Stone Age technology. It’s appealing to think that whatever breakthrough made us “modern” led to some sort of mental leap that enabled new technology. But, as the Bantu themselves demonstrated, the connection between a skeleton’s appearance and the technology its owner used can be extremely tenuous.

 

Nobel Prize Awarded for Biological Clock Discoveries

Original Article

By Jordana Cepelewicz

 

Ninety minutes before dawn in the eastern United States, the Nobel committee announced that it was awarding this year’s Nobel Prize in Physiology or Medicine to three American biologists for their research on the control of circadian rhythms. Jeffrey C. Hall at the University of Maine, Michael Rosbash at Brandeis University and Michael W. Young at the Rockefeller University share the prize for their discoveries of the genetic and biomolecular mechanisms that help the cells of plants and animals (including humans) mark the 24-hour cycle of day and night. That research became a cornerstone of the science of chronobiology, the study of how organisms track time and adapt to its cycles.

“It’s a really beautiful example of basic research that has led to incredible discoveries,” commented Paul Hardin, who studies chronobiology at Texas A&M University. “Almost every aspect of physiology and metabolism will be controlled by the circadian clock.” For example, in the case of mammals, he said, 20–30 percent of the genes in any given tissue may be under the control of an internal clock. “But if you take all the tissues of the body, the vast majority of genes are under clock control in one tissue or another.”

Josephine Arendt, an emeritus professor of endocrinology at Surrey University who studies circadian rhythms, agreed about the importance of the work winning this year’s prize. Health and fitness can be profoundly affected by disorders that throw off that 24-hour timekeeping mechanism or any of the neurological and hormonal systems that rely on it. “Their work underpins [that of] people like me who are interested in applying circadian principles to human health,” she said.

Jeffrey C. Hall, Michael Rosbash and Michael W. Young (left to right) are new Nobel laureates in celebration of their discoveries about the genetic and biomolecular mechanism that governs the circadian rhythm.

The Gairdner Foundation (Hall and Young); Mike Lovett/Brandeis University (Rosbash)

The study of circadian rhythms goes back to at least the 18th century, when scientists noticed that certain plants would open their leaves at sunrise and close them at sunset even in the absence of lighting cues. Later evidence showed that essentially all organisms had some internal biological clock that allowed them to match their physiology to the day-night cycle. Work in the 1970s by Ronald Konopka and Seymour Benzer showed that this clock was under genetic control because mutations could disrupt it. The name period was given to that gene but little else was known about it. Indeed, how a gene could allow cells to keep time remained a mystery.

Answers began to fall into place in 1984, when Hall and Rosbash, working at Brandeis, and Young, at Rockefeller, independently isolated the period gene in fruit flies. Hall and Rosbash showed that the cellular concentrations of the protein made by period, PER, were high at night and then dropped during the day, befitting a 24-hour timekeeping gene.

The Brandeis researchers hypothesized that a feedback loop might be governing this gene-protein system: when concentrations of PER climbed high enough, they shut down the activity of period; when PER degraded, period could start up again. PER could thereby inhibit its own synthesis. The hitch in this scheme was that for it to work, something had to transport PER from the cell’s cytoplasm, where it was made, into the nucleus, where period dwelled. Hall and Rosbash showed that PER was getting into the nucleus, but it was unclear how until 1994, when Young discovered the timeless gene, which was also essential for proper circadian rhythms. The protein made by timeless, TIM, latches on to cytoplasmic PER and escorts it into the nucleus to inhibit period. Young later identified a third gene, doubletime, whose protein delays the build-up of PER in cells, helping to keep the cycle in step with the 24-hour day.
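To make the logic of that loop concrete, here is a minimal Python sketch of a delayed negative-feedback oscillator. It is purely illustrative: the production and decay rates, the Hill-style repression term and the fixed delay (a stand-in for PER's journey into the nucleus) are invented values, not parameters measured in flies.

```python
# Illustrative sketch of delayed negative feedback in the spirit of the period/PER
# loop; all constants are invented, chosen only to produce a self-sustaining rhythm.

def simulate_per(hours=96.0, dt=0.01, delay=8.0, k_prod=1.0, k_decay=0.2, hill=4):
    steps = int(hours / dt)
    lag = int(delay / dt)
    per = [0.1] * steps  # PER concentration over time, arbitrary units
    for t in range(1, steps):
        past = per[t - lag] if t >= lag else per[0]
        production = k_prod / (1.0 + past ** hill)  # repression by delayed PER
        per[t] = per[t - 1] + dt * (production - k_decay * per[t - 1])
    return per

if __name__ == "__main__":
    trace = simulate_per()
    for hour in range(0, 96, 6):  # sample the trace every six simulated hours
        print(f"t = {hour:2d} h   PER ~ {trace[int(hour / 0.01)]:.2f}")
```

Run with these made-up constants, the protein level climbs, shuts off its own production after the delay, decays, and climbs again rather than settling at a steady value: the essential trick that lets one gene and its protein behave like a clock.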


Andrew Millar, the chair of systems biology at the University of Edinburgh and an expert on plant circadian rhythms, noted that the precise genetic clock mechanism that Hall, Rosbash and Young identified was specific to animals, but that conceptually similar mechanisms built around analogous genes were soon identified in plants, fungi, bacteria and other organisms by other researchers. “It’s the breadth of application of biological rhythm research that makes it so fascinating,” he said.

Chronobiology remains a field in its early days. Researchers are still trying to fully understand the connection between the circadian rhythm within cells and animals’ need for sleep. Not only do diverse organisms use a variety of mechanisms to maintain circadian rhythms and other temporal cycles, but some cells of the body may also use specialized timekeeping systems for specialized functions. New biological rhythms — and their influence on organisms — continue to be discovered. Nevertheless, the dissection of this circadian timekeeping system by these scientists already stands as a landmark achievement.


We’ve Grossly Underestimated How Much Cow Farts Are Contributing to Global Warming

Original Article

By George Dvorsky


A new NASA-sponsored study shows that global methane emissions produced by livestock are 11 percent higher than estimates made last decade. Because methane is a particularly nasty greenhouse gas, the new finding means it’s going to be even tougher to combat climate change than we realized.

We’ve known for quite some time that greenhouse gases produced by cattle, sheep, and pigs are a significant contributor to global warming, but the new research, published in Carbon Balance and Management, shows it’s worse than we thought. Revised figures of methane produced by livestock in 2011 were 11 percent higher than estimates made in 2006 by the Intergovernmental Panel on Climate Change (IPCC)—a now out-of-date estimate.

It’s hard to believe that belches, farts, and poop from livestock could have any kind of global atmospheric effect, but it’s an issue of scale, and the nature of methane itself.

There are approximately 1.5 billion cows on the planet, each of them expelling an estimated 30 to 50 gallons of methane per day. We typically think of farts as being the culprit, but belches are actually the primary source of cattle-produced methane, accounting for 95 percent of the problematic greenhouse gas.

And problematic it is. Methane is about 30 times more effective than carbon dioxide at trapping heat in the atmosphere over a timescale of about a century. There may be more CO2 in the atmosphere than methane, but unit for unit, methane is the more destructive greenhouse gas. Both NASA’s Carbon Monitoring System research initiative and the Joint Global Change Research Institute (JGCRI) contributed to the study.
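As a back-of-the-envelope illustration of what that multiplier means, the sketch below uses a rounded 100-year warming potential of 30 and an assumed, order-of-magnitude figure for global livestock methane; neither number is taken from the new study.

```python
# Rough illustration with assumed round numbers, not values from the study.
GWP100_METHANE = 30  # 1 tonne of CH4 warms about as much as 30 tonnes of CO2 over a century

def co2_equivalent(tonnes_ch4, gwp=GWP100_METHANE):
    """Convert a methane mass into its CO2-equivalent on a 100-year horizon."""
    return tonnes_ch4 * gwp

# If global livestock emit on the order of 120 million tonnes of methane a year...
print(f"{co2_equivalent(120e6) / 1e9:.1f} billion tonnes of CO2-equivalent per year")  # -> 3.6
```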

Julie Wolf, the study’s lead author, and her team re-evaluated the data used to produce the IPCC 2006 methane emissions estimates. The prior estimates were based on relatively modest rates of methane increases from 2000 to 2006, but things changed dramatically afterwards, increasing 10-fold over the course of the next 10 years. The new figures factor in an 8.4 percent increase in methane emissions from digestion (otherwise known as “enteric fermentation”) in dairy cows and other cattle, and a 36.7 percent increase in methane from manure, compared to previous IPCC-based estimates. The new report shows that methane accounted for approximately 16 percent of global greenhouse gas emissions in 2016. Other human activities, such as the production and transport of gas, oil and coal, along with the decay of our organic waste, also contribute to global methane emissions.

Importantly, the new estimates are 15 percent higher than global estimates produced by the US Environmental Protection Agency (EPA), and four percent higher than EDGAR (Emissions Database for Global Atmospheric Research).

“In many regions of the world, livestock numbers are changing, and breeding has resulted in larger animals with higher intakes of food,” noted Wolf in a release. “This, along with changes in livestock management, can lead to higher methane emissions.” To which she added: “Direct measurements of methane emissions are not available for all sources of methane. Thus, emissions are reported as estimates based on different methods and assumptions. In this study, we created new per-animal emissions factors—that is measures of the average amount of CH4 discharged by animals into the atmosphere—and new estimates of global livestock methane emissions.”

The new research shows that methane emissions slowed in the US, Canada, and Europe, but they’re rising elsewhere. Very likely, the rest of the world is catching up to first-world standards in terms of meat and dairy consumption.

“Among global regions, there was notable variability in trends in estimated emissions over recent decades,” said Ghassem Asrar, Director of JGCRI and a co-author of the new study. “For example, we found that total livestock methane emissions have increased the most in rapidly developing regions of Asia, Latin America, and Africa…We found the largest increases in annual emissions to be over the northern tropics, followed by the southern tropics.”

It’s not immediately clear how, or even if, these revised figures will impact livestock production or public policy, but at the individual level, it suggests we should cut back on our consumption of meat and dairy. The privilege we have over these animals, it would appear, now comes at a hefty price.

Update: An earlier version of this article included a statement suggesting that methane will exert a global warming potential 28 times greater than that of CO2 over the next 100 years. While methane has a unit-for-unit GWP that’s about 30 times that of CO2 on 100-year timescales, CO2 is still the dominant greenhouse gas in our atmosphere because there is so much more of it. The sentence in question has been removed.

DNA Surgery on Embryos Removes Disease

Original Article

By James Gallagher

Precise “chemical surgery” has been performed on human embryos to remove disease in a world first, Chinese researchers have told the BBC.

The team at Sun Yat-sen University used a technique called base editing to correct a single error out of the three billion “letters” of our genetic code.

They altered lab-made embryos to remove the disease beta-thalassemia. The embryos were not implanted.

The team says the approach may one day treat a range of inherited diseases.

Base editing alters the fundamental building blocks of DNA: the four bases adenine, cytosine, guanine and thymine.

They are commonly known by their respective letters, A, C, G and T.

All the instructions for building and running the human body are encoded in combinations of those four bases.


The potentially life-threatening blood disorder beta-thalassemia is caused by a change to a single base in the genetic code – known as a point mutation.

The team in China edited it back.

They scanned the DNA for the error, then converted a G to an A, correcting the fault.

Junjiu Huang, one of the researchers, told the BBC News website: “We are the first to demonstrate the feasibility of curing genetic disease in human embryos by base editor system.”

He said their study opens new avenues for treating patients and preventing babies being born with beta-thalassemia, “and even other inherited diseases”.

The experiments were performed in tissues taken from a patient with the blood disorder and in human embryos made through cloning.

Genetics revolution

Base editing is an advance on a form of gene-editing known as Crispr, which is already revolutionising science.

Crispr breaks DNA. When the body tries to repair the break, it deactivates a set of instructions called a gene. It is also an opportunity to insert new genetic information.

Base editing works on the DNA bases themselves to convert one into another.
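As a toy illustration of that difference in kind, the sketch below rewrites a single letter in place, which is the essence of base editing; a Crispr-style edit would instead cut the strand and leave the repair to the cell. The sequence and position are invented, not the real beta-thalassemia mutation.

```python
# Toy example with an invented sequence; this is not the real HBB gene or its mutation.
def base_edit(sequence, position, expected, replacement):
    """Convert one base to another at a known position, without cutting the strand."""
    if sequence[position] != expected:
        raise ValueError("unexpected base at the target site")
    return sequence[:position] + replacement + sequence[position + 1:]

mutant = "ACCTGAGGTCAAGT"  # hypothetical faulty stretch of DNA
repaired = base_edit(mutant, position=6, expected="G", replacement="A")
print(repaired)  # ACCTGAAGTCAAGT: the single G at position 6 is now an A, nothing else changes
```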

Prof David Liu, who pioneered base editing at Harvard University, describes the approach as “chemical surgery”.

He says the technique is more efficient and has fewer unwanted side-effects than Crispr.

He told the BBC: “About two-thirds of known human genetic variants associated with disease are point mutations.

“So base editing has the potential to directly correct, or reproduce for research purposes, many pathogenic [mutations].”


The research group at Sun Yat-sen University in Guangzhou hit the headlines before when they were the first to use Crispr on human embryos.

Prof Robin Lovell-Badge, from the Francis Crick Institute in London, described parts of their latest study as “ingenious”.

But he also questioned why they did not do more animal research before jumping to human embryos and said the rules on embryo research in other countries would have been “more exacting”.

The study, published in Protein and Cell, is the latest example of the rapidly growing ability of scientists to manipulate human DNA.

It is provoking deep ethical and societal debate about what is and is not acceptable in efforts to prevent disease.

Prof Lovell-Badge said these approaches are unlikely to be used clinically anytime soon.

“There would need to be far more debate, covering the ethics, and how these approaches should be regulated.

“And in many countries, including China, there needs to be more robust mechanisms established for regulation, oversight, and long-term follow-up.”

Earth Had Life From Its Infancy

Original Article

By Ed Yong

The Torngat Mountains in northeastern Canada are full of life. Reindeer graze on lichen, polar bears prowl the coastlines, and great whales swim in the offshore waters. Scientists patrol the land, too, looking for the oldest rocks on the planet, which were formed almost 4 billion years ago, when the Earth was just an infant world.

Back then, the landscape would have been very different. The Earth was a hellish place that had only just acquired a firm crust. Its atmosphere was devoid of oxygen, and it was regularly pelted with asteroids. There were no reindeer, whales, polar bears, or lichen. But according to new research, there was life.

In a rock formation called the Saglek Block, Yuji Sano and Tsuyoshi Komiya from the University of Tokyo found crystals of the mineral graphite that contain a distinctive blend of carbon isotopes. That blend suggests that microbes were already around, living, surviving, and using carbon dioxide from the air to build their cells. If the two researchers are right—and claims about such ancient events are always controversial—then this Canadian graphite represents one of the earliest traces of life on Earth.

The Earth was formed around 4.54 billion years ago. If you condense that huge swath of prehistory into a single calendar year, then the 3.95-billion-year-old graphite that the Tokyo team analyzed was created in the third week of February. By contrast, the earliest fossils ever found are 3.7 billion years old; they were created in the second week of March.
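The calendar analogy is simple proportion, and easy to check. A minimal sketch, assuming Earth formed on January 1 of a 365-day year (the specific year below is arbitrary):

```python
from datetime import date, timedelta

EARTH_AGE = 4.54e9  # years

def calendar_date(age_in_years, year=2017):
    """Map an event that happened `age_in_years` ago onto a single calendar year."""
    fraction_elapsed = (EARTH_AGE - age_in_years) / EARTH_AGE
    return date(year, 1, 1) + timedelta(days=fraction_elapsed * 365)

print(calendar_date(3.95e9))  # ~February 17: the Saglek Block graphite
print(calendar_date(3.70e9))  # ~March 9: the Isua Belt fossils
```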

Those fossils, from the Isua Belt in southwest Greenland, are stromatolites—layered structures created by communities of bacteria. And as I reported last year, their presence suggests that life already existed in a sophisticated form at the 3.7-billion-year mark, and so must have arisen much earlier. And indeed, scientists have found traces of biologically produced graphite throughout the region, in other Isua Belt rocks that are 3.8 billion years old, and in hydrothermal vents off the coast of Quebec that are at least a similar age, and possibly even older.

“The emerging picture from the ancient-rock record is that life was everywhere,” says Vickie Bennett from Australian National University, who was not involved in the latest study. “As far back as the rock record extends—that is, as far back as we can look for direct evidence of early life, we are finding it. Earth has been a biotic, life-sustaining planet since close to its beginning.”

This evidence hinges on a quirk of chemistry. Carbon comes in two stable isotopes—carbon-12, which is extremely common, and carbon-13, which is rarer and slightly heavier. When it comes to making life, carbon-12 is the more pliable building block. It’s more reactive than its heavier cousin, and so easier to transform into molecules like carbohydrates and proteins.

So living organisms concentrate carbon-12 in their cells—and when they die, that signature persists. When scientists find graphite that’s especially enriched in carbon-12, relative to carbon-13, they can deduce that living things were around when that graphite was first formed. And that’s exactly what the Tokyo team found in the Saglek Block—grains of graphite, enriched in carbon-12, encased within 3.95-billion-year-old rock.
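Geochemists usually express that enrichment in "delta" notation, comparing a sample's carbon-13 to carbon-12 ratio with an agreed reference standard; the more negative the value, the more the carbon looks biologically processed. The ratios below are illustrative, not the values measured in the Saglek Block samples.

```python
VPDB_RATIO = 0.011237  # 13C/12C of the Vienna Pee Dee Belemnite reference standard

def delta_13c(ratio_13c_to_12c):
    """Return delta-13C in per mil (parts per thousand) relative to the VPDB standard."""
    return (ratio_13c_to_12c / VPDB_RATIO - 1.0) * 1000.0

# Illustrative ratios: carbon that has cycled through living cells is depleted in carbon-13.
print(f"{delta_13c(0.010950):.1f} per mil")  # about -25: typical of biologically processed carbon
print(f"{delta_13c(0.011230):.1f} per mil")  # about -0.6: close to the inorganic standard
```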

But are those graphite grains the same age? The rocks around them are metamorphic—they’ve been warped and transformed at extreme temperatures and pressures. During that process, and all the subsequent geological tumult that this region has experienced, it’s possible that much younger graphite somehow infiltrated the older rock, creating a false signal of early life.

To rule out that possibility, the Tokyo team looked at the structure of the graphite grains. The more orderly and crystalline those structures, the hotter the grains were when they formed. Based on that relationship, the team calculated the graphite was created at temperatures between 536 and 622 Celsius—a range that’s consistent with the temperatures at which the surrounding metamorphic rocks were transformed. This suggests that the graphite was already there when the rocks were heated and warped, and didn’t sneak in later. It was truly OG—original graphite.

There’s still room for doubt, though. Given how ancient these rocks are, and how much geological tumult they have experienced, it’s hard to fully exclude the possibility that the graphite got there later. Also, other processes that have nothing to do with living things could potentially change the ratio of carbon-12 and carbon-13. It’s concerning that the ratio varies a lot in the samples that the Tokyo team analyzed, says Andrew Knoll from Harvard University. But he also says that the team has been careful, and their combined evidence “makes a strong case that life existed on earth nearly 4 billion years ago.”

“The authors have done as many checks as they could for whether they are indeed analyzing 3.95-billion-year-old graphite rather than later contamination,” adds Elizabeth Bell, a geochemist from the University of California, Los Angeles. “They make a plausible case that the graphite is original.”

Bell herself found the oldest graphite that’s been measured to date. It lurked within a 4.1-billion-year-old zircon gemstone from Western Australia, and also contained a blend of isotopes that hinted at a biological origin. That discovery is also controversial, especially since the graphite was completely cut off from its source environment, making it hard to know the conditions in which it was formed.

Still, all of this evidence suggests Earth was home to life during its hellish infancy, and that such life abounded in a variety of habitats. Those pioneering organisms—bacteria, probably—haven’t left any fossils behind. But Sano and Komiya hope to find some clues about them by analyzing the Saglek Block rocks. The levels of nitrogen, iron, and sulfur in the rocks could reveal which energy sources those organisms exploited, and which environments they inhabited. They could tell us how life first lived.

New Antibody Attacks 99% of HIV Strains

Original Article

By James Gallagher


Scientists have engineered an antibody that attacks 99% of HIV strains and can prevent infection in primates.

It is built to attack three critical parts of the virus – making it harder for HIV to resist its effects.

The work is a collaboration between the US National Institutes of Health and the pharmaceutical company Sanofi.

The International Aids Society said it was an “exciting breakthrough”. Human trials will start in 2018 to see if it can prevent or treat infection.

Our bodies struggle to fight HIV because of the virus’ incredible ability to mutate and change its appearance.

The variety of HIV strains in a single patient is comparable to the variety of influenza strains that arise worldwide over the course of an entire flu season.

So the immune system finds itself in a fight against an insurmountable number of strains of HIV.

Super-antibodies

But after years of infection, a small number of patients develop powerful weapons called “broadly neutralising antibodies” that attack something fundamental to HIV and can kill large swathes of HIV strains.

Researchers have been trying to use broadly neutralising antibodies as a way to treat HIV, or prevent infection in the first place.

The study, published in the journal Science, combines three such antibodies into an even more powerful “tri-specific antibody”.

Dr Gary Nabel, the chief scientific officer at Sanofi and one of the report authors, told the BBC News website: “They are more potent and have greater breadth than any single naturally occurring antibody that’s been discovered.”

The best naturally occurring antibodies will target 90% of HIV strains.

“We’re getting 99% coverage, and getting coverage at very low concentrations of the antibody,” said Dr Nabel.

Experiments on 24 monkeys showed none of those given the tri-specific antibody developed an infection when they were later injected with the virus.

Dr Nabel said: “It was quite an impressive degree of protection.”

The work included scientists at Harvard Medical School, The Scripps Research Institute, and the Massachusetts Institute of Technology.

‘Exciting’

Clinical trials to test the antibody in people will start next year.

Prof Linda-Gail Bekker, the president of the International Aids Society, told the BBC: “This paper reports an exciting breakthrough.

“These super-engineered antibodies seem to go beyond the natural and could have more applications than we have imagined to date.

“It’s early days yet, and as a scientist I look forward to seeing the first trials get off the ground in 2018.

“As a doctor in Africa, I feel the urgency to confirm these findings in humans as soon as possible.”

Dr Anthony Fauci, the director of the US National Institute of Allergy and Infectious Diseases, said it was an intriguing approach.

He added: “Combinations of antibodies that each bind to a distinct site on HIV may best overcome the defences of the virus in the effort to achieve effective antibody-based treatment and prevention.”

Poliovirus Kills Off Cancer Cells, Stops Tumor Regrowth

Original Article

By Ana Sandoiu

Researchers from Duke University in Durham, NC, may have discovered a new way of killing off cancer cells.

The team was jointly led by Dr. Matthias Gromeier, a professor in the Department of Neurosurgery, and Prof. Smita Nair, who is an immunologist in the Department of Surgery.

The new research – which is published in the journal Science Translational Medicine – shows how a modified poliovirus enables the body to use its own resources to fight off cancer. The modified virus bears the name of recombinant oncolytic poliovirus (PVS-RIPO).

PVS-RIPO has been in clinical trials since 2011 and preliminary results have offered hope to patients with one of the most aggressive forms of brain tumor: recurrent glioblastoma. So, the researchers set out to investigate more deeply how exactly PVS-RIPO works.

Explaining the rationale behind their research endeavor, Dr. Gromeier says, “Knowing the steps that occur to generate an immune response will enable us to rationally decide whether and what other therapies make sense in combination with poliovirus to improve patient survival.”


Poliovirus attacks tumors, inhibits regrowth

The researchers examined the behavior of the poliovirus in two human cell lines: melanoma and triple-negative breast cancer. They observed that the poliovirus attaches itself to cancerous cells. These cells have an excess of the CD155 protein, which acts as a receptor for the poliovirus.

Then, the poliovirus starts to attack the malignant cells, triggering the release of antigens from the tumor. Antigens are substances that the body does not recognize as its own, and they set off an immune attack.

So, when the tumor cells release antigens, this alerts the body’s immune system to start attacking. At the same time, the poliovirus infects the dendritic cells and macrophages.

Dendritic cells are cells whose role it is to process antigens and “present” them to T cells, which are a type of immune cell. Macrophages are another type of immune cell – namely, large white blood cells whose main role is to rid our bodies of debris and toxic substances.

The cell culture results – which the researchers then verified in mouse models – showed that once PVS-RIPO infects the dendritic cells, these cells “tell” T cells to start the immune attack.

Once started, this process seems to be continuously successful. The cancer cells continue to be vulnerable to the immune system’s attack over a longer period of time, which appears to stop the tumor from regrowing.


As Prof. Nair explains, “Not only is poliovirus killing tumor cells, it is also infecting the antigen-presenting cells, which allows them to function in such a way that they can now raise a T cell response that can recognize and infiltrate a tumor.”

“This is an encouraging finding, because it means the poliovirus stimulates an innate inflammatory response.”

Prof. Smita Nair

Speaking to Medical News Today about the clinical implications of the findings and the scientists’ directions for future research, Dr. Gromeier said, “Our findings provide clear rationales for moving forward with clinical trials in breast cancer, prostate cancer, and malignant melanoma.”

“This includes novel combination treatments that we will pursue,” he added.

More specifically, he explains, because the study revealed that after treatment with the poliovirus “immune checkpoints are increased on immune cells,” a future strategy the researchers plan to explore is “[oncolytic] poliovirus combined with immune checkpoint blockade.”

The Invention of A.I. ‘Gaydar’ Could Be the Start of Something Much Worse

Original Article

By James Vincent

Two weeks ago, a pair of researchers from Stanford University made a startling claim. Using hundreds of thousands of images taken from a dating website, they said they had trained a facial recognition system that could identify whether someone was straight or gay just by looking at them. The work was first covered by The Economist, and other publications soon followed suit, with headlines like “New AI can guess whether you’re gay or straight from a photograph” and “AI Can Tell If You’re Gay From a Photo, and It’s Terrifying.”

As you might have guessed, it’s not as straightforward as that. (And to be clear, based on this work alone, AI can’t tell whether someone is gay or straight from a photo.) But the research captures common fears about artificial intelligence: that it will open up new avenues for surveillance and control, and could be particularly harmful for marginalized people. One of the paper’s authors, Dr Michal Kosinski, says his intent is to sound the alarm about the dangers of AI, and warns that facial recognition will soon be able to identify not only someone’s sexual orientation, but their political views, criminality, and even their IQ.

With statements like these, some worry we’re reviving an old belief with a bad history: that you can intuit character from appearance. This pseudoscience, physiognomy, was fuel for the scientific racism of the 19th and 20th centuries, and gave moral cover to some of humanity’s worst impulses: to demonize, condemn, and exterminate fellow humans. Critics of Kosinski’s work accuse him of replacing the calipers of the 19th century with the neural networks of the 21st, while the professor himself says he is horrified by his findings, and happy to be proved wrong. “It’s a controversial and upsetting subject, and it’s also upsetting to us,” he tells The Verge.

But is it possible that pseudoscience is sneaking back into the world, disguised in new garb thanks to AI? Some people say machines are simply able to read more about us than we can ourselves, but what if we’re training them to carry out our prejudices, and, in doing so, giving new life to old ideas we rightly dismissed? How are we going to know the difference?

CAN AI REALLY SPOT SEXUAL ORIENTATION?

First, we need to look at the study at the heart of the recent debate, written by Kosinski and his co-author Yilun Wang. Its results have been poorly reported, with a lot of the hype coming from misrepresentations of the system’s accuracy. The paper states: “Given a single facial image, [the software] could correctly distinguish between gay and heterosexual men in 81 percent of cases, and in 71 percent of cases for women.” These rates increase when the system is given five pictures of an individual: up to 91 percent for men, and 83 percent for women.

On the face of it, this sounds like “AI can tell if a man is gay or straight 81 percent of the time by looking at his photo.” (Thus the headlines.) But that’s not what the figures mean. The AI wasn’t 81 percent correct when being shown random photos: it was tested on a pair of photos, one of a gay person and one of a straight person, and then asked which individual was more likely to be gay. It guessed right 81 percent of the time for men and 71 percent of the time for women, but the structure of the test means it started with a baseline of 50 percent — that’s what it’d get guessing at random. And although it was significantly better than that, the results aren’t the same as saying it can identify anyone’s sexual orientation 81 percent of the time.

As Philip Cohen, a sociologist at the University of Maryland who wrote a blog post critiquing the paper, told The Verge: “People are scared of a situation where you have a private life and your sexual orientation isn’t known, and you go to an airport or a sporting event and a computer scans the crowd and identifies whether you’re gay or straight. But there’s just not much evidence this technology can do that.”

Kosinski and Wang make this clear themselves toward the end of the paper when they test their system against 1,000 photographs instead of two. They ask the AI to pick out who is most likely to be gay in a dataset in which 7 percent of the photo subjects are gay, roughly reflecting the proportion of straight and gay men in the US population. When asked to select the 100 individuals most likely to be gay, the system gets only 47 out of 70 possible hits. The remaining 53 have been incorrectly identified. And when asked to identify a top 10, nine are right.
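The gap between pairwise accuracy and real-world identification comes down to base rates, and it can be reproduced with a simple model. The sketch below is not the authors' code: it assumes the classifier's scores for the two groups follow overlapping bell curves, tunes the separation so pairwise accuracy matches the reported five-image figure for men, and then asks how precise the top-100 picks would be in a population where 7 percent of people are gay.

```python
# Illustrative model only; it assumes Gaussian score distributions, which the study
# does not report. Requires scipy.
import math

from scipy.optimize import brentq
from scipy.stats import norm

def separation_for_pairwise_accuracy(acc):
    # For two unit-variance Gaussians a distance d apart, the chance that a positive
    # sample outscores a negative one is Phi(d / sqrt(2)).
    return math.sqrt(2) * norm.ppf(acc)

def expected_top_k_precision(acc, n_pos, n_neg, k):
    d = separation_for_pairwise_accuracy(acc)

    # Find the score threshold where the expected number of people above it equals k,
    # then ask how many of those are expected to be true positives.
    def excess(t):
        return n_pos * norm.sf(t - d) + n_neg * norm.sf(t) - k

    threshold = brentq(excess, -10.0, 10.0)
    return n_pos * norm.sf(threshold - d) / k

precision = expected_top_k_precision(acc=0.91, n_pos=70, n_neg=930, k=100)
print(f"Expected precision of the top-100 picks: {precision:.0%}")
# Roughly 44 percent under these assumptions, in line with the 47/100 hits the paper reports.
```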

If you were a bad actor trying to use this system to identify gay people, you couldn’t know for sure you were getting correct answers. Although, if you used it against a large enough dataset, you might get mostly correct guesses. Is this dangerous? If the system is being used to target gay people, then yes, of course. But the rest of the study suggests the program has even further limitations.

WHAT CAN COMPUTERS REALLY SEE THAT HUMANS CAN’T?

It’s also not clear what factors the facial recognition system is using to make its judgements. Kosinski and Wang’s hypothesis is that it’s primarily identifying structural differences: feminine features in the faces of gay men and masculine features in the faces of gay women. But it’s possible that the AI is being confused by other stimuli — like facial expressions in the photos.

This is particularly relevant because the images used in the study were taken from a dating website. As Greggor Mattson, a professor of sociology at Oberlin College, pointed out in a blog post, this means that the images themselves are biased, as they were selected specifically to attract someone of a certain sexual orientation. They almost certainly play up to our cultural expectations of how gay and straight people should look, and, to further narrow their applicability, all the subjects were white, with no inclusion of bisexual or self-identified trans individuals. If a straight male chooses the most stereotypically “manly” picture of himself for a dating site, it says more about what he thinks society wants from him than a link between the shape of his jaw and his sexual orientation.

To try and ensure their system was looking at facial structure only, Kosinski and Wang used software called VGG-Face, which encodes faces as strings of numbers and has been used for tasks like spotting celebrity lookalikes in paintings. This program, they write, allows them to “minimize the role [of] transient features” like lighting, pose, and facial expression.

But researcher Tom White, who works on AI facial systems, says VGG-Face is actually very good at picking up on these elements. White pointed this out on Twitter, and explained to The Verge over email how he’d tested the software and used it to successfully distinguish between faces with expressions like “neutral” and “happy,” as well as poses and background color.

A figure from the paper showing the average faces of the participants, and the difference in facial structures that they identified between the two sets. 
Image: Kosinski and Wang

Speaking to The Verge, Kosinski says he and Wang have been explicit that things like facial hair and makeup could be a factor in the AI’s decision-making, but he maintains that facial structure is the most important. “If you look at the overall properties of VGG-Face, it tends to put very little weight on transient facial features,” Kosinski says. “We also provide evidence that non-transient facial features seem to be predictive of sexual orientation.”

The problem is, we can’t know for sure. Kosinski and Wang haven’t released the program they created or the pictures they used to train it. They do test their AI on other picture sources, to see if it’s identifying some factor common to all gay and straight people, but these tests were limited and also drew from a biased dataset — Facebook profile pictures from men who liked pages such as “I love being Gay,” and “Gay and Fabulous.”

Do men in these groups serve as reasonable proxies for all gay men? Probably not, and Kosinski says it’s possible his work is wrong. “Many more studies will need to be conducted to verify [this],” he says. But it’s tricky to say how one could completely eliminate selection bias to perform a conclusive test. Kosinski tells The Verge, “You don’t need to understand how the model works to test whether it’s correct or not.” However, it’s the acceptance of the opacity of algorithms that makes this sort of research so fraught.

IF AI CAN’T SHOW ITS WORKING, CAN WE TRUST IT?

AI researchers can’t fully explain why their machines do the things they do. It’s a challenge that runs through the entire field, and is sometimes referred to as the “black box” problem. Because of the methods used to train AI, these programs can’t show their work in the same way normal software does, although researchers are working to amend this.

In the meantime, it leads to all sorts of problems. A common one is that sexist and racist biases are captured from humans in the training data and reproduced by the AI. In the case of Kosinski and Wang’s work, the “black box” allows them to make a particular scientific leap of faith. Because they’re confident their system is primarily analyzing facial structures, they say their research shows that facial structures predict sexual orientation. (“Study 1a showed that facial features extracted by a [neural network] can be used to accurately identify the sexual orientation of both men and women.”)

Experts say this is a misleading claim that isn’t supported by the latest science. There may be a common cause for face shape and sexual orientation — the most probable cause is the balance of hormones in the womb — but that doesn’t mean face shape reliably predicts sexual orientation, says Qazi Rahman, an academic at King’s College London who studies the biology of sexual orientation. “Biology’s a little bit more nuanced than we often give it credit for,” he tells The Verge. “The issue here is the strength of the association.”

The idea that sexual orientation comes primarily from biology is itself controversial. Rahman, who believes that sexual orientation is mostly biological, praises Kosinski and Wang’s work. “It’s not junk science,” he says. “More like science someone doesn’t like.” But when it comes to predicting sexual orientation, he says there’s a whole package of “atypical gender behavior” that needs to be considered. “The issue for me is more that [the study] misses the point, and that’s behavior.”


Reducing the question of sexual orientation to a single, measurable factor in the body has a long and often inglorious history. As Mattson writes in his blog post, approaches have ranged from “19th century measurements of lesbians’ clitorises and homosexual men’s hips, to late 20th century claims to have discovered ‘gay genes,’ ‘gay brains,’ ‘gay ring fingers,’ ‘lesbian ears,’ and ‘gay scalp hair.’” The impact of this work is mixed, but at its worst it’s a tool of oppression: it gives people who want to dehumanize and persecute sexual minorities a “scientific” pretext.

Jenny Davis, a lecturer in sociology at the Australian National University, describes it as a form of biological essentialism. This is the belief that things like sexual orientation are rooted in the body. This approach, she says, is double-edged. On the one hand, it “does a useful political thing: detaching blame from same-sex desire. But on the other hand, it reinforces the devalued position of that kind of desire,” setting up heterosexuality as the norm and framing homosexuality as “less valuable … a sort of illness.”

And it’s when we consider Kosinski and Wang’s research in this context that AI-powered facial recognition takes on an even darker aspect — namely, say some critics, as part of a trend toward the return of physiognomy, powered by AI.

YOUR CHARACTER, AS PLAIN AS THE NOSE ON YOUR FACE

For centuries, people have believed that the face held the key to the character. The notion has its roots in ancient Greece, but was particularly influential in the 19th century. Proponents of physiognomy suggested that by measuring things like the angle of someone’s forehead or the shape of their nose, they could determine if a person was honest or a criminal. Last year in China, AI researchers claimed they could do the same thing using facial recognition.

Their research, published as “Automated Inference on Criminality Using Face Images,” caused a minor uproar in the AI community. Scientists pointed out flaws in the study, and concluded that the work was replicating human prejudices about what constitutes a “mean” or a “nice” face. In a widely shared rebuttal titled “Physiognomy’s New Clothes,” Google researcher Blaise Agüera y Arcas and two co-authors wrote that we should expect “more research in the coming years that has similar … false claims to scientific objectivity in order to ‘launder’ human prejudice and discrimination.” (Google declined to make Agüera y Arcas available to comment on this report.)

An illustration of physiognomy from Giambattista della Porta’s De humana physiognomonia

Kosinski and Wang’s paper clearly acknowledges the dangers of physiognomy, noting that the practice “is now universally, and rightly, rejected as a mix of superstition and racism disguised as science.” But, they continue, just because a subject is “taboo,” doesn’t mean it has no basis in truth. They say that because humans are able to read characteristics like personality in other people’s faces with “low accuracy,” machines should be able to do the same but more accurately.

Kosinski says his research isn’t physiognomy because it’s using rigorous scientific methods, and his paper cites a number of studies showing that we can deduce (with varying accuracy) traits about people by looking at them. “I was educated and made to believe that it’s absolutely impossible that the face contains any information about your intimate traits, because physiognomy and phrenology were just pseudosciences,” he says. “But the fact that they were claiming things without any basis in fact, that they were making stuff up, doesn’t mean that this stuff is not real.” He agrees that physiognomy is not science, but says there may be truth in its basic concepts that computers can reveal.

For Davis, this sort of attitude comes from a widespread and mistaken belief in the neutrality and objectivity of AI. “Artificial intelligence is not in fact artificial,” she tells The Verge. “Machines learn like humans learn. We’re taught through culture and absorb the norms of social structure, and so does artificial intelligence. So it will re-create, amplify, and continue on the trajectories we’ve taught it, which are always going to reflect existing cultural norms.”

We’ve already created sexist and racist algorithms, and these sorts of cultural biases and physiognomy are really just two sides of the same coin: both rely on bad evidence to judge others. The work by the Chinese researchers is an extreme example, but it’s certainly not the only one. There’s at least one startup already active that claims it can spot terrorists and pedophiles using face recognition, and there are many others offering to analyze “emotional intelligence” and conduct AI-powered surveillance.

FACING UP TO WHAT’S COMING

But to return to the questions implied by those alarming headlines about Kosinski and Wang’s paper: is AI going to be used to persecute sexual minorities?

This system? No. A different one? Maybe.

Kosinski and Wang’s work is not invalid, but its results need serious qualifications and further testing. Without that, all we know about their system is that it can spot with some reliability the difference between self-identified gay and straight white people on one particular dating site. We don’t know that it’s spotted a biological difference common to all gay and straight people; we don’t know if it would work with a wider set of photos; and the work doesn’t show that sexual orientation can be deduced with nothing more than, say, a measurement of the jaw. It hasn’t decoded human sexuality any more than AI chatbots have decoded the art of a good conversation. (Nor do its authors make such a claim.)

Startup Faception claims it can identify how likely people are to be terrorists just by looking at their face. 
Image: Faception

The research was published to warn people, says Kosinski, but he admits it’s an “unavoidable paradox” that to do so you have to explain how you did what you did. All the tools used in the paper are available for anyone to find and put together themselves. Writing at the deep learning education site Fast.ai, researcher Jeremy Howard concludes: “It is probably reasonably [sic] to assume that many organizations have already completed similar projects, but without publishing them in the academic literature.”

We’ve already mentioned startups working on this tech, and it’s not hard to find government regimes that would use it. In countries like Iran and Saudi Arabia homosexuality is still punishable by death; in many other countries, being gay means being hounded, imprisoned, and tortured by the state. Recent reports have spoken of the opening of concentration camps for gay men in the Chechen Republic, so what if someone there decides to make their own AI gaydar, and scan profile pictures from Russian social media?

Here, it becomes clear that the accuracy of systems like Kosinski and Wang’s isn’t really the point. If people believe AI can be used to determine sexual preference, they will use it. With that in mind, it’s more important than ever that we understand the limitations of artificial intelligence, to try and neutralize dangers before they start impacting people. Before we teach machines our prejudices, we need to first teach ourselves.

Scientists Just Discovered the First Brainless Animal That Sleeps

Original Article

By Sarah Kaplan


Cassiopea jellyfish rests upside-down on black sand and pulses, rhythmically contracting and relaxing its bell. At night, Cassiopea jellies pulse less frequently — a clue that they’re sleeping, researchers report. (Photo by JanEaster.com)

It was well past midnight when Michael Abrams, Claire Bedbrook and Ravi Nath crept into the Caltech lab where they were keeping their jellyfish. They didn’t bother switching on the lights, opting instead to navigate the maze of desks and equipment by the pale blue glow of their cellphones. The students hadn’t told anyone that they were doing this. It wasn’t forbidden, exactly, but they wanted a chance to conduct their research without their PhD advisers breathing down their necks.

“When you start working on something totally crazy, it’s good to get data before you tell anybody,” Abrams said.

The “totally crazy” undertaking in question: an experiment to determine whether jellyfish sleep.

It had all started when Bedbrook, a graduate student in neurobiology, overheard Nath and Abrams mulling the question over coffee. The topic was weird enough to make her stop at their table and argue.

“Of course not,” she said. Scientists still don’t fully know why animals need to snooze, but research has found that sleep is a complex behavior associated with memory consolidation and REM cycles in the brain. Jellyfish are so primitive they don’t even have a brain — how could they possibly share this mysterious trait?

Her friends weren’t so sure. “I guess we’re going to have to test it,” Nath said, half-joking.

Bedbrook was dead serious: “Yeah. Yeah, we are.”

After months of late-night research, Bedbrook has changed her mind. In a paper published Thursday in the journal Current Biology, she, Nath and Abrams report that the upside-down jellyfish Cassiopea exhibit sleeplike behavior — the first animals without a brain known to do so. The results suggest that sleep is deeply rooted in our biology, a behavior that evolved early in the history of animal life and has stuck with us ever since.

Further study of jellyfish slumber might bring scientists closer to resolving what Nath called “the paradox of sleep.”

Think about it, he urged. If you’re asleep in the wild when a predator comes along, you’re dead. If a food source strolls past, you go hungry. If a potential mate walks by, you miss the chance to pass on your genetic material.

“Sleep is this period where animals are not doing the things that benefit from a natural selection perspective,” Nath said.

Abrams chimed in: “Except for sleep.” Nath laughed.

“We know it must be very important. Otherwise, we would just lose it,” Bedbrook said. If animals could evolve a way to live without sleep, surely they would have. But many experiments suggest that when creatures such as mice are deprived of sleep for too long, they die. Scientists have shown that animals as simple as the roundworm C. elegans, with a brain of just 302 neurons, need sleep to survive.

Cassiopea has no brain to speak of — just a diffuse “net” of nerve cells distributed across their small, squishy bodies. These jellyfish barely even behave like animals. Instead of mouths, they suck in food through pores in their tentacles. They also get energy via a symbiotic relationship with tiny photosynthetic organisms that live inside their cells.

“They’re like weird plant animals,” Bedbrook said.

They’re also ancient: Cnidarians, the phylogenetic group that includes jellies, first arose some 700 million years ago, making them some of Earth’s first animals. These traits make Cassiopea an ideal organism to test for the evolutionary origins of sleep. Fortuitously, Abrams already had some on hand.

So the trio designed an experiment. At night, when the jellies were resting and their professors were safely out of the picture, the students would test for three behavioral criteria associated with sleep.

First: Reversible quiescence. In other words, the jellyfish become inactive but are not paralyzed or in a coma. The researchers counted the jellyfish’s movements and found they were 30 percent less active at night. But when food was dropped into the tank, the creatures perked right up. Clearly not paralyzed.

Second: An increased arousal threshold. This means it’s more difficult to get the animals’ attention; they have to be “woken up.” For this, the researchers placed sleeping jellies in containers with removable bottoms, lifted the containers to the top of their tank, then pulled out the bottom. If the jellyfish were awake, they’d immediately swim to the floor of the tank. But if they were asleep, “they’d kind of strangely float around in the water,” Abrams said.

“You know how you wake up with vertigo? I pretend that maybe there’s possible chance that the jellyfish feel this,” Nath added. “They’re sleeping and then they wake up and they’re like, ‘Ahhhh!’ ”

And third: The quiescent state must be homeostatically regulated. That is, the jellyfish must feel a biological drive to sleep. When they don’t, they suffer.

“This is really equivalent to how we feel when we pull an all-nighter,” Bedbrook said. She’s all too familiar with the feeling — getting your PhD requires more late nights than she’s willing to count.

The jellyfish have no research papers to keep them awake past their bedtimes, so the scientists prevented them from sleeping by “poking” them with pulses of water every 20 minutes for an entire night. The following day, the poor creatures swam around in a daze, and the next night they slept especially deeply to make up for lost slumber.


Jellyfish in their tank. (Caltech)

Realizing they really had something here, the students clued their professors in on what they were doing. The head of the lab where Nath worked, Caltech and Howard Hughes Medical Institute biologist Paul Sternberg, offered the trio a closet in which they could continue their experiments.

“It’s important,” Sternberg said, “because it’s [an organism] with what we think of as a more primitive nervous system. … It raises the possibility of an early evolved fundamental process.”

Sternberg, along with Abrams’s and Bedbrook’s advisers, is a co-author on the Current Biology paper.

Allan Pack, the director of the Center for Sleep and Respiratory Neurobiology at the University of Pennsylvania, was not involved in the jellyfish research, but he’s not surprised by the finding, given how prevalent sleep is in other species.

“Every model that has been looked at … shows a sleep-like state,” he said.

But the revelations about jellyfish sleep are important, he said, because they show how basic sleep is. It appears to be a “conserved” behavior, one that arose relatively early in life’s history and has persisted for millions of years. If the behavior is conserved, then perhaps the biological mechanism is too. Understanding why jellyfish, with their simple nerve nets, need sleep could lead scientists to the function of sleep in humans.

“I think it’s one of the major biological questions of our time,” Pack said. “We spend a third of a life sleeping. Why are we doing it? What’s the point?”

 

An Apocalyptic Mass Extinction Will Begin in 2100, Scientists Say

Original Article

By Jasper Hamill

A mass extinction that wipes out humanity will be under way by the year 2100, scientists have claimed.

By the end of the century, it’s feared that so much carbon will have been added to the oceans that the planet will have passed a “threshold of catastrophe” which leads to the destruction of our species.

In the past 540 million years, the planet has endured five such wipeouts — including the extinction of the dinosaurs.

The worst took place 252 million years ago and is known as the Great Dying.

This disaster killed off more than 95 percent of marine life when the seas suddenly became more acidic.

Now geophysicist Professor Daniel Rothman says we are seeing a disturbing parallel today — this time because of man-made global warming.

He came up with a simple mathematical formula that predicts that the oceans will soon hold so much carbon that a mass extinction is inevitable.

It showed the critical extra amount of carbon added to the oceans is about 310 gigatons — roughly what the best-case scenario projected by the Intergovernmental Panel on Climate Change would add by 2100.

And it’s well below the worst-case scenario of more than 500 gigatons — which would far exceed the threshold.

In all scenarios, the study found that by the end of the century, the carbon cycle will either be close to or well beyond the threshold for catastrophe.

Although mass extinction won’t immediately follow at the turn of the century, by then the world may have tipped into “unknown territory.”

Rothman, of the Massachusetts Institute of Technology, says it would take some time — about 10,000 years — for such ecological disasters to play out.

He said: “This is not saying disaster occurs the next day.

“It’s saying — if left unchecked — the carbon cycle would move into a realm which would be no longer stable and would behave in a way that would be difficult to predict.

“In the past this type of behavior is associated with mass extinction.”

In the modern era, CO2 emissions have risen steadily since the 19th century, but deciphering whether this could lead to mass extinction has been challenging.

Humans have emitted 1,540 billion tons of CO2 since the Industrial Revolution — equivalent to burning enough coal to form a square tower 72 feet wide stretching 240,000 miles from Earth to the moon.

Half of these have remained in the atmosphere, causing a rise in levels at least 10 times faster than any known natural increase during Earth’s long history.

Most of the other half has dissolved into the ocean — causing acidification.

Will this lead to the destruction of humanity?

Your grandchildren will probably find out, unless something changes now.

British Supermarket Offers ‘Finger Vein’ Payment In Worldwide First

Original Article

By Katie Morley

A UK supermarket has become the first in the world to let shoppers pay for groceries using just the veins in their fingertips.

Customers at the Costcutter store, at Brunel University in London, can now pay using their unique vein pattern to identify themselves.

The firm behind the technology, Sthaler, has said it is in “serious talks” with other major UK supermarkets to adopt hi-tech finger vein scanners at pay points across thousands of stores.

It works by using infrared to scan people’s finger veins and then links this unique biometric map to their bank cards. Customers’ bank details are then stored with payment provider Worldpay, in the same way you can store your card details when shopping online. Shoppers can then turn up to the supermarket with nothing on them but their own hands and use it to make payments in just three seconds.

 

It comes as previous studies have found fingerprint recognition, used widely on mobile phones, is vulnerable to being hacked and can be copied even from finger smears left on phone screens.

But Sthaler, the firm behind the technology, claims vein technology is the most secure biometric identification method as it cannot be copied or stolen.

Sthaler said dozens of students were already using the system and it expected 3,000 students out of 13,000 to have signed up by November.

Fingerprint payments are already used widely at cash points in Poland, Turkey and Japan.

Vein scanners are also used as a way of accessing high-security UK police buildings and to authorise internal trading within at least one major British investment bank.

The firm is also in discussions with nightclubs and gyms about using the technology to verify membership, and even with Premier League football clubs to check people have the right access to VIP hospitality areas.

Fingerprint technology could be coming to a supermarket near you CREDIT: FABRIZIO BENSC/REUTERS

The technology uses an infrared light to create a detailed map of the vein pattern in your finger. It requires the person to be alive, meaning that in the unlikely event a criminal hacks off someone’s finger, it would not work. Sthaler said it takes just one minute to sign up to the system initially and, after that, it takes just seconds to place your finger in a scanner each time you reach the supermarket checkout.

Simon Binns, commercial director of Sthaler, told the Daily Telegraph: ‘This makes payments so much easier for customers.

“They don’t need to carry cash or cards. They don’t need to remember a pin number. You just bring yourself. This is the safest form of biometrics. There are no known incidences where this security has been breached.

“When you put your finger in the scanner it checks you are alive, it checks for a pulse, it checks for haemoglobin. ‘Your vein pattern is secure because it is kept on a database in an encrypted form, as binary numbers. No card details are stored with the retailer or ourselves, it is held with Worldpay, in the same way it is when you buy online.”

Nick Telford-Reed, director of technology innovation at Worldpay UK, said: “In our view, finger vein technology has a number of advantages over fingerprint. This deployment of Fingopay in Costcutter branches demonstrates how consumers increasingly want to see their payment methods secure and simple.”

New Research Suggests Climate Change Not As Threatening As Previously Thought

Original Article

By Henry Bodkin

Climate change poses less of an immediate threat to the planet than previously thought because scientists got their modelling wrong, a new study has found. New research by British scientists reveals the world is being polluted and warming up less quickly than 10-year-old forecasts predicted, giving countries more time to get a grip on their carbon output.

An unexpected “revolution” in affordable renewable energy has also contributed to the more positive outlook.

Experts now say there is a two-in-three chance of keeping global temperatures within 1.5 degrees above pre-industrial levels, the ultimate goal of the 2015 Paris Agreement.


They also condemned the “overreaction” to the US’s withdrawal from the Paris Climate Accord, announced by Donald Trump in June, saying it is unlikely to make a significant difference.

According to the models used to draw up the agreement, the world ought now to be 1.3 degrees above the mid-19th-Century average, whereas the most recent observations suggest it is actually between 0.9 and 1 degree above.


The discrepancy means nations could continue emitting carbon dioxide at the current rate for another 20 years before the target was breached, instead of the three to five predicted by the previous model.
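
As a rough sanity check on that 20-year figure, the short calculation below converts a 0.3-degree reassessment into years of emissions. The two constants (roughly 0.45C of warming per 1,000 GtCO2 and about 40 GtCO2 emitted per year) are commonly cited ballpark values assumed here for illustration, not numbers taken from the study itself.

# Back-of-the-envelope carbon-budget arithmetic with assumed ballpark constants.
tcre = 0.45                 # deg C of warming per 1000 GtCO2 (assumed)
annual_emissions = 40.0     # global emissions, GtCO2 per year (assumed)
reassessment = 0.3          # deg C less observed warming than the models implied
extra_budget = reassessment / tcre * 1000        # about 670 GtCO2 of extra headroom
extra_years = extra_budget / annual_emissions    # about 17 years at the current rate
print(f"~{extra_budget:.0f} GtCO2 extra, roughly {extra_years:.0f} more years of current emissions")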

“When you are talking about a budget of 1.5 degrees, then a 0.3 degree difference is a big deal”, said Professor Myles Allen, of Oxford University and one of the authors of the new study.

Published in the journal Nature Geoscience, the study suggests that if pollution peaks and declines to below current levels before 2030, and then continues to drop more sharply, there is a 66 per cent chance of global average temperatures staying below 1.5 degrees.

The goal was yesterday described as “very ambitious” but “physically possible”.

Another reason the climate outlook is less bleak than previously thought is stabilising emissions, particularly in China.

Renewable energy has also enjoyed more use than was predicted.

China has now installed more than 100 gigawatts of solar cells, 25 per cent of which was added in the last six months, and in the UK, offshore wind has turned out to cost far less than expected.

Professor Michael Grubb, from University College London, had previously described the goals agreed at Paris in 2015 as “incompatible with democracy”.


But yesterday he said: “We’re in the midst of an energy revolution and it’s happening faster than we thought, which makes it much more credible for governments to tighten the offer they put on the table at Paris.”

He added that President Trump’s withdrawal from the agreement would not be significant because “The White House’s position doesn’t have much impact on US emissions”.

“The smaller constituencies – cities, businesses, states – are just saying they’re getting on with it, partly for carbon reduction, but partly because there’s this energy revolution and they don’t want to be left behind.”

The new research was published as the Met Office announced that a “slowdown” in the rate of global temperature rises reported over roughly the first decade of this century was now over.

The organisation said the slowdown in rising air temperatures between 1999 and 2014 happened as a result of a natural cycle in the Pacific, which sped up the ocean circulation, pulling heat down into the deeper ocean and away from the atmosphere.

However, that cycle has now ended.

Claire Perry, the climate change and industry minister, claimed Britain had already demonstrated that tackling climate change and running a strong economy could go “hand in hand”.

"Now is the time to build on our strengths and cement our position as a global hub for investment in clean growth," she said.

 

Studies of Pregnant Mice Highlight Link Between Immune Response and Autism

Original Article

A century ago, a largely forgotten, worldwide epidemic that would kill nearly a million people was beginning to take hold. Labelled as sleepy sickness — or more properly encephalitis lethargica — the disease caused a number of bizarre mental and physical symptoms and frequently left people in a catatonic state, sometimes for decades. (Oliver Sacks described his successful treatment of some of them in 1969, in the book Awakenings.) The cause has never been officially pinned down, but the most common suggestion is that some kind of infectious agent triggered an autoimmune response, which targeted and inflamed part of the brain.

The role of the immune system in mental disorders is subject to much important research at the moment. The onset of conditions from depression and psychosis to obsessive–compulsive disorder has been linked to the abrupt changes in biology and physiology that occur when the body responds to infection, especially in childhood. And some researchers have traced the possible chain of events back a generation. Studies have highlighted that pregnant women could react to infection in a way that influences their baby’s developing brain, which could lead to cognitive and neurodevelopmental problems in the child.

One consequence of this 'maternal immune activation' (MIA) in some women could be to increase the risk of autism in their children. And two papers published online this week in Nature (S. Kim et al. Nature http://dx.doi.org/10.1038/nature23910; 2017 and Y. S. Yim et al. Nature http://dx.doi.org/10.1038/nature23909; 2017) use animal models to examine how this might happen, as well as suggest some possible strategies to reduce the risk.

Kim et al. looked at the impact of MIA on the brains and behaviour of mice. They found that pregnant female animals exposed to circumstances similar to a viral infection have offspring that are more likely to show atypical behaviour, and they unpick some of the cellular and molecular mechanisms responsible. Some of their results confirm what scientists already suspected: pregnancy changes the female mouse’s immune response, specifically, by turning on the production of a protein called interleukin-17a. But the authors also conducted further experiments that give clues about the mechanisms at work.

“It’s tempting to draw parallels with mechanisms that might increase the risk of autism in some people.”

The types of bacteria in the mouse's gut seem to be important. When the scientists used antibiotics to wipe out common gut microorganisms called segmented filamentous bacteria in female mice, this seemed to protect the animals' babies from the impact of the simulated infection. The offspring of mice given the antibiotic treatment did not show the unusual behaviours, such as reduced sociability and repetitive actions. Segmented filamentous bacteria are known to encourage cells to produce more interleukin-17a, and an accompanying News & Views article (C. M. Powell Nature http://dx.doi.org/10.1038/nature24139; 2017) discusses one obvious implication: some pregnant women could use diet or drugs to manipulate their gut microbiome to reduce the risk of harm to their baby if an infection triggers their immune response. Much science still needs to be done before such a course could be recommended — not least further research to confirm and build on these results.

Yim et al. analysed the developing brain of mice born to mothers who showed MIA. They traced the abnormalities to a region called the dysgranular zone of the primary somato-sensory cortex (S1DZ). The authors genetically engineered the mice so that neurons in this region could be activated by light, and they showed that activation of S1DZ induced the same telltale atypical behaviours, even in mice that were born to mothers with no MIA.

It’s unusual to be able to demonstrate such a direct link between the activities of brain regions and specific behaviours — although plenty of work on mental disorders makes a strong theoretical case for linking particular conditions to over- and under-active brain zones and circuitry.

Encephalitis lethargica, for example, has been linked to changes in the deep regions of the basal ganglia, and the disease produces symptoms that are similar to those often seen in autism, including stereotyped and repetitive behaviours. Yim et al.’s study shows that the S1DZ region projects to one of those deep brain regions — the striatum — and that this connection helps to trigger repetitive actions in the animals. But S1DZ also connects to a separate, distinct, region in the cortex, and this is what seems to drive the changes in sociability.

Taking the two studies together, it’s tempting to draw parallels with mechanisms that might increase the risk of autism in some people and explain some of its symptoms. Scientists and others should be cautious about doing so — much can change when results from animal models are applied to human biology. But the studies do offer some intriguing leads.

Light Has Been Stored as Sound For the First Time

Original Article

By Fiona Macdonald

For the first time ever, scientists have stored light-based information as sound waves on a computer chip – something the researchers compare to capturing lightning as thunder.

While that might sound a little strange, this conversion is critical if we ever want to shift from our current, inefficient electronic computers, to light-based computers that move data at the speed of light.

Light-based or photonic computers have the potential to run at least 20 times faster than your laptop, not to mention the fact that they won’t produce heat or suck up energy like existing devices.

This is because they, in theory, would process data in the form of photons instead of electrons.

We say in theory, because, despite companies such as IBM and Intel pursuing light-based computing, the transition is easier said than done.

Coding information into photons is easy enough – we already do that when we send information via optical fibre.

But finding a way for a computer chip to retrieve and process information stored in photons is tough, thanks to the very thing that makes light so appealing: it's too damn fast for existing microchips to read.

This is why light-based information that flies across internet cables is currently converted into slow electrons. But a better alternative would be to slow down the light and convert it into sound.

And that’s exactly what researchers from the University of Sydney in Australia have now done.

“The information in our chip in acoustic form travels at a velocity five orders of magnitude slower than in the optical domain,” said project supervisor Birgit Stiller.

“It is like the difference between thunder and lightning.”

Stylised chip design (Image: University of Sydney)

This means that computers could have the benefits of data delivered by light – high speeds, no heat caused by electronic resistance, and no interference from electromagnetic radiation – but would also be able to slow that data down enough so that computer chips could do something useful with it.

“For [light-based computers] to become a commercial reality, photonic data on the chip needs to be slowed down so that they can be processed, routed, stored and accessed,” said one of the research team, Moritz Merklein.

“This is an important step forward in the field of optical information processing as this concept fulfils all requirements for current and future generation optical communication systems,” added team member Benjamin Eggleton.

The team did this by developing a memory system that accurately transfers between light and sound waves on a photonic microchip – the kind of chip that will be used in light-based computers.

Here is how it works, step by step:

First, photonic information enters the chip as a pulse of light (yellow), where it interacts with a ‘write’ pulse (blue), producing an acoustic wave that stores the data.

Another pulse of light, called the 'read' pulse (blue), then accesses this sound data and transmits it as light once more (yellow).

While unimpeded light will pass through the chip in 2 to 3 nanoseconds, once stored as a sound wave, information can remain on the chip for up to 10 nanoseconds, long enough for it to be retrieved and processed.
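
To make the thunder-and-lightning comparison concrete, the short calculation below uses assumed ballpark velocities for a chalcogenide photonic chip (they are not figures quoted by the Sydney team) to show how far light and sound each travel during a 10-nanosecond storage window.

# Illustrative comparison using assumed ballpark velocities, not the study's own numbers.
c = 3.0e8                                 # speed of light in vacuum, m/s
refractive_index = 2.4                    # assumed for a chalcogenide waveguide
optical_v = c / refractive_index          # ~1.25e8 m/s on the chip
acoustic_v = 2.5e3                        # assumed acoustic velocity in the glass, m/s
window = 10e-9                            # the 10 ns storage window
print(f"Slow-down factor: {optical_v / acoustic_v:,.0f}x")           # ~50,000x, i.e. ~5 orders of magnitude
print(f"Light travels {optical_v * window:.2f} m in 10 ns")          # ~1.25 m
print(f"Sound travels {acoustic_v * window * 1e6:.0f} micrometres")  # ~25 um, so it stays on the chip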

The fact that the team were able to convert the light into sound waves not only slowed it down, but also made data retrieval more accurate.

And, unlike previous attempts, the system worked across a broad bandwidth.

“Building an acoustic buffer inside a chip improves our ability to control information by several orders of magnitude,” said Merklein.

“Our system is not limited to a narrow bandwidth. So unlike previous systems this allows us to store and retrieve information at multiple wavelengths simultaneously, vastly increasing the efficiency of the device,” added Stiller.

The research has been published in Nature Communications.

 

'The Orion Bionic Eye' To Begin Human Trials. Hopes To Restore Sight of Blind Patients

Original Article

American medical company Second Sight manufactures implantable visual prosthetics to provide vision to people who suffer from a variety of different visual impairments. Its most advanced piece of technology so far is the Argus® II Retinal Prosthesis System, which can restore some functional vision for people suffering from blindness. Although a very successful product, it only provides a limited amount of restored vision to the patient, so the company has been working on its successor, The Orion.

The Argus® II Retinal Prosthesis System

The Orion™ Cortical Visual Prosthesis System

The idea behind The Orion is to take images captured by a small video camera mounted on a pair of glasses that the patient wears daily, and convert them into a series of small electrical impulses.

The Orion would then wirelessly transmit these pulses to an array of electrodes that have been implanted into the patient. The electrodes bypass the retina and optic nerve to directly stimulate the visual cortex. This is the area of the brain that processes visual data, effectively allowing a person to see.
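
Functionally, that description is a small signal-processing pipeline: a camera frame goes in, a coarse pattern of stimulation amplitudes comes out. The Python sketch below is purely illustrative; the 8x8 grid, the brightness mapping and the function name are invented for the example and are not Second Sight's actual parameters or code.

import numpy as np
# Illustrative camera-to-electrode pipeline; grid size and mapping are invented.
ELECTRODE_GRID = (8, 8)          # assume a small 8x8 electrode array on the visual cortex
def frame_to_stimulation(frame: np.ndarray) -> np.ndarray:
    """Downsample a grayscale frame to one stimulation amplitude per electrode."""
    h, w = frame.shape
    gh, gw = ELECTRODE_GRID
    blocks = frame[:h - h % gh, :w - w % gw].reshape(gh, h // gh, gw, w // gw)
    brightness = blocks.mean(axis=(1, 3))       # average brightness per block of pixels
    return brightness / 255.0                   # scale 0-255 brightness to a 0-1 amplitude
camera_frame = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
pulses = frame_to_stimulation(camera_frame)     # an 8x8 pattern sent wirelessly to the implant
print(pulses.shape)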

This technology has the potential to essentially "cure" blindness caused by a wide range of conditions, including glaucoma, diabetic retinopathy, and some forms of cancer and trauma. The Argus II has been approved for use in Canada, France, Germany, Italy, Russia, Saudi Arabia, South Korea, Spain, Taiwan, Turkey, the United Kingdom, and the US, so you can expect to see The Orion in the same countries, if not more.

Second Sight’s Argus II Restores Vision to Blind Patient

Neil DeGrasse Tyson Says It Is ‘Too Late’ To Recover From Climate Change

Original Article

By Alexandra King

Scientist and astrophysicist Neil deGrasse Tyson said Sunday that, in the wake of devastating floods and damage caused by Hurricanes Harvey and Irma, climate change had become so severe that the country “might not be able to recover.”

In an interview on CNN’s “GPS,” Tyson got emotional when Fareed Zakaria asked what he made of Homeland Security Adviser Tom Bossert’s refusal to say whether climate change had been a factor in Hurricanes Harvey or Irma’s strength — despite scientific evidence pointing to the fact that it had made the storms more destructive.
“Fifty inches of rain in Houston!” Tyson exclaimed, adding, “This is a shot across our bow, a hurricane the width of Florida going up the center of Florida!”
“What will it take for people to recognize that a community of scientists are learning objective truths about the natural world and that you can benefit from knowing about it?” he said.
Tyson told Zakaria that he had no patience for those who, as he put it, “cherry pick” scientific studies according to their belief system.
“The press will sometimes find a single paper, and say, ‘Oh here’s a new truth, if this study holds it.’ But an emergent scientific truth, for it to become an objective truth, a truth that is true whether or not you believe in it, it requires more than one scientific paper,” he said.
“It requires a whole system of people’s research all leaning in the same direction, all pointing to the same consequences,” he added. “That’s what we have with climate change, as induced by human conduct.”
Tyson said he was gravely concerned that by engaging in debates over the existence of climate change, as opposed to discussions on how best to tackle it, the country was wasting valuable time and resources.
“The day two politicians are arguing about whether science is true, it means nothing gets done. Nothing,” he said. “It’s the beginning of the end of an informed democracy, as I’ve said many times. What I’d rather happen is you recognize what is scientifically truth, then you have your political debate.”
Tyson told Zakaria that he believed that the longer the delay when it comes to responding to the ongoing threat of climate change, the bleaker the outcome. And perhaps, he hazarded, it was already even too late.
“I worry that we might not be able to recover from this because all our greatest cities are on the oceans and water’s edges, historically for commerce and transportation,” he said.
“And as storms kick in, as water levels rise, they are the first to go,” he said. “And we don’t have a system — we don’t have a civilization with the capacity to pick up a city and move it inland 20 miles. That’s — this is happening faster than our ability to respond. That could have huge economic consequences.”

 

Eric Julien’s “Alien Message” To Mankind

Original Article

By Joe Martino

We're about to dive into a 'transmission' or 'channeled' message that allegedly came through a man by the name of Jean Ederman, aka Eric Julien. Jean had been practicing projecting his mind when he came into contact with what he called benevolent ET beings, and this is when he received the message. Note: we will refer to him as Eric from here on out.

Before we get into the message, it's important to look at Eric's background and attempt to determine who he really is and whether or not his account can be considered credible. Either way, as we move through this information, we strongly suggest you use your own intuition to explore it. Blind denial does us no more good than blind acceptance.

A few quick things to get out of the way right away: the reality of remote viewing, astral projection, channeling, and the existence of ETs. There is a ton of credible evidence exploring these topics at a black-budget and military level. These abilities are used, and millions have been spent exploring them within the US military. You can learn more about CIA remote viewing programs here, the military use and study of psychic abilities here, and more about ETs and the documents to prove the reality of them in a groundbreaking film here.

With those resources laid out to help open up to the reality of how all of this is possible, we can continue with an open mind. I find it important to lay out those resources beforehand because, quite simply, most people do not realize that all of these "abilities" or "pseudoscientific frauds" are actually very well studied, documented and real. In fact, it's likely your tax dollars paid for the extensive study and training of these very abilities.

Eric’s Story

Eric claims to have been a military jet pilot, air traffic controller and airport manager, and to hold a master's in economics. He states that since the age of 6, he has been having experiences with ETs and UFOs.

Eric published a book called The Science of Extraterrestrials in 2006. The work was reviewed by a number of ET and UFO researchers and was held in high regard. As a pilot in the military, he claimed to have had contact with extraterrestrial technology, including piloting an ET craft.

A prominent UFO researcher named Michael Salla had this to say about Eric’s book and work, “A number of prominent French researchers/scientists have reviewed his book and thought very highly of it, and concluded that it is not a plagiarized work which was one of the initial criticisms leveled against him. I have read one of these critiques and it is clear that the author who was initially very skeptical was impressed by Eric’s work.”

Over time, Eric has gone on to speak at a number of ET and UFO conferences and has shared interesting accounts he claims to have been involved in regarding ET’s and ET technology.

The Style of The Message

There are a few key notes to consider when exploring the channeled message below. The style of the message is deliberate and purposeful. It is not 'dictative' or condescending. This is typically seen when people are attempting to inform as opposed to pushing beliefs onto others. The choice of how humanity deals with the challenges it currently faces is left to humanity, as opposed to a dictation of precisely what to do. This is in alignment with many other messages that allow a species, through its spiritual growth and evolution, to take responsibility for where it is and empower itself to make a change, as opposed to waiting for someone to save it. This is an important note.

The overall text is coherent and intelligent, drawing on a number of difficult challenges humanity faces, and does not seem to contain any of the self-bolstering or ego-gratification tactics that other messages often contain.

The Message

As a final note on the above: regardless of whether Eric's story is fact or not, the message below still provides great value to us. I say this because we cannot know for certain whether or not he did channel this message, but we do have the power to take value from it.

I wanted to pull out some key pieces that I thought were meaningful and worth reflecting on as we read this. A channeled message is only valuable when we decide what to do with the information and act upon it from within.

“We are not mere observations; we are consciousnesses just like you. Our existence is a reality, but the majority of you do not perceive it yet because we remain invisible to your senses and instruments most of the time.”

“We wish to fill this void at this moment in your history. We made this collective decision on our side, but this is not enough — we need yours as well.” (They are saying here that humanity must be open and asking for ET communication if we want it.)

"A great roller wave is on the horizon. It entails very positive but also very negative potentials. At this time wonderful opportunities of progress stand side by side with threats of destruction." (referring to the shift in consciousness taking place that we touch on A LOT. Watch our documentary about it here.)

“We are sad to see men, women and children suffering to such a degree in their flesh and in their hearts when they bear such an inner light. This light can be your future.”

“Our relationships could develop in stages. Several stages of several years or decades would occur: demonstrative appearance of our ships, physical appearance beside human beings, collaboration in your technical and spiritual evolution, discovery of parts of the galaxy.”

There are plenty of memorable moments in the text below, but those are a start. Read on and enjoy!

Eric writes: “… after having learned how to mentally project myself to a place in the presence of benevolent extraterrestrials, I received the following message…”

[This channeling was translated from French into English by Dan Drasin, a Marin-based film-maker and researcher].

Begin Message:

Each one of you wishes to exercise your free will and experience happiness. Your free will depends upon the knowledge you have of your own power. Your happiness depends upon the love that you give and receive.

Like all conscious races at this stage of progress, you may feel isolated on your planet. This impression gives you a certain view of your destiny. Yet you are at the brink of big upheavals that only a minority is aware of.

It is not our responsibility to modify your future without your choosing it. So consider this message as a worldwide referendum, and your answer as a ballot.

Neither your scientists nor your religious representatives speak knowledgeably about certain unexplained aerial and celestial events that mankind has witnessed for thousands of years.

To know the truth, one must face it without the filter of one's beliefs or dogmas, however respectable they may be.

A growing number of anonymous researchers of yours are exploring new paths of knowledge and are getting very close to reality. Today, your civilization is flooded with an ocean of information of which only a tiny part, the less upsetting one, is notably distributed.

Bear in mind that what in your history seemed ridiculous or improbable has often become possible, then realized — in particular in the last fifty years.

Be aware that the future will be even more surprising. You will discover the worst as well as the best.

Many of those who study our appearances point to lights in the night, but without lighting the way. Often they think in terms of objects when it is all about conscious beings.

Who are we?

Like billions of others in this galaxy, we are conscious creatures that some call “extraterrestrials,” even though the reality is subtler. There is no fundamental difference between you and us, save for having experienced certain stages of evolution.

As with any other organized society, a hierarchy exists in our internal relationships. Ours, however, is based upon the wisdom of several races. It is with the approval of this hierarchy that we turn to you.

Like most of you, we are in quest of the Supreme “Being” or “State of Being.”

Therefore we are not gods or lesser gods but virtually your equals in the Cosmic Brotherhood. Physically we are somewhat different from you but most of us are humanoid-shaped.

We are not mere observations; we are consciousnesses just like you. Our existence is a reality, but the majority of you do not perceive it yet because we remain invisible to your senses and instruments most of the time.

We wish to fill this void at this moment in your history. We made this collective decision on our side, but this is not enough — we need yours as well.

Through this message you can become the decision-makers. You, personally. We have no human representative on Earth who could guide your decision.

Why aren’t we visible?

At certain stages of evolution, cosmic “humanities” discover certain scientific principles regarding matter. Structured dematerialization and materialization are among them.

Your humanity has achieved this in a few laboratories, in close collaboration with other extraterrestrial creatures — at the cost of hazardous compromises that remain purposely hidden from you by some of your representatives.

In addition to the aerial or space-based objects or phenomena known to your scientific community as physical “UFOs,” there are essentially multidimensional manufactured spaceships that possess these expanded capacities.

Many human beings have been in visual, auditory, tactile or psychic contact with such ships — some of which, it should be noted with caution, are under the influence of hidden powers that govern you, which we often term “the third party.”

The relative scarcity of your observations is due to the dematerialized state of these ships. Being unable to perceive them yourselves, you cannot acknowledge their existence. We fully understand this.

When most observations do occur, they are arranged on an individual basis so as to touch the individual soul and not to influence or intrude on any organized social system.

This is deliberate on the part of the various races that surround you, but for a variety of reasons and results. For negative multidimensional beings that play a part in the exercise of power in the shadow of human oligarchy, discretion is motivated by their desire to keep their existence unknown.

For us, discretion has been motivated by the respect of the human free will that people can exercise to manage their own affairs so that they can reach technical and spiritual maturity on their own.

However, humankind’s entrance into the family of galactic civilizations is greatly expected.

We can appear in broad daylight to help you attain this union, but we have not done it so far, as too few of you have genuinely desired it because of ignorance, indifference or fear, and because the urgency of the situation did not justify it.

Who are you?

You are the offspring of many traditions that throughout time have been mutually enriched by each others’ contributions.

Your goal is to unite, while respecting these diverse roots, to accomplish a common purpose, a united project. The appearances of your cultures help keep you separated because you give them far greater importance than you give your deeper beings.

Shape, or form, has been deemed more important than the essence of your subtle nature. For the powers in control, this emphasis on differences of form constitutes a bulwark against any form of positive change.

Now you are being called on to overcome the identification with form while still respecting it for its richness and beauty. Understanding the consciousness behind form allows us to love all humans in their diversity.

Peace does not mean simply not making war; it consists in becoming what you, collectively, are in reality: a fraternity.

The solutions available to achieve this are decreasing, but one that could still catalyze it would be open contact with another race that would reflect the image of what you are in a deeper reality.

Except for rare occasions, our past interventions intentionally had very little influence on your capacity to make collective and individual decisions about your own future.

This was motivated by our knowledge of your deep psychological mechanisms. We reached the conclusion that freedom is built every day as a being becomes aware of himself and of his environment, getting progressively rid of constraints and inertias, whatever they may be.

However, despite the actions of numerous brave and willing human souls, those inertias have been successfully maintained for the benefit of a growing, centralized power.

What is your situation?

Until recently, mankind lived in satisfactory control of its decisions. But it is losing more and more the control of its own fate, partly because of the growing use of advanced technologies that affect your body as well as your mind and will eventually have irreversibly lethal consequences for earthly and human ecosystems.

Independently of your own will, your resilience will artificially decrease and you will slowly but surely lose your extraordinary capacity to make life desirable. Such plans are on their way.

Should a collective reaction of great magnitude not happen, this individual power is doomed to vanish. The period to come shall be one of rupture.

This break, however, can be a positive break with the past as long as you keep this creative power alive in you, even if it cohabits, for the time being, with the dark intentions of your potential lords.

What now? Should you wait for the last moment to find solutions? Should you anticipate or undergo pain?

Your history has never ceased to be marked by encounters between peoples whose discovery of one another occurred in circumstances of conflict and conquest.

Earth has now become a village where everyone knows everyone else, but still conflicts persist and threats of all kinds get worse in intensity and duration.

Individuals who have many potential capacities cannot exercise them with dignity. This is the case for the greatest majority of you, for reasons that are essentially geopolitical.

There are several billion of you, but the education of your children and your living conditions, as well as the conditions of numerous animals and much plant life are under the thumb of a small number of your political, financial, military and religious representatives.

Your thoughts and beliefs are modeled after partisan interests while at the same time giving you the feeling that you are in total control of your destiny — which in essence is the reality, but there is a long way between a wish and a fact when the true rules of the game at hand are kept hidden.

This time, you are not the conqueror. Spreading biased information is an effective strategy for manipulating human beings. Inducing thoughts and emotions, or even creating organisms, that do not belong to you is an even older strategy.

A great roller wave is on the horizon. It entails very positive but also very negative potentials. At this time wonderful opportunities of progress stand side by side with threats of destruction.

However, you can only perceive what is being shown to you. The diminishing of many natural resources is inevitable and no long-term collective remediation project has been launched. Ecosystem exhaustion mechanisms have exceeded irreversible limits.

The scarcity of resources whose entry price will rise day after day — and their unfair distribution — will bring about fratricidal fights on a large scale, from the hearts of your cities to your countrysides.

This is the reason why, more than ever in your history, your decisions of today will directly and significantly impact your survival tomorrow.

Hatred grows… but so does love. That is what keeps you confident in your ability to find solutions.

However, human behaviors, formed from past habits and trainings, have great inertia that leads to a dead end. The critical mass has not been reached, while the work of sabotage is being carried out cleverly and efficiently.

You entrust your problems to representatives whose awareness of common well being inexorably fades away before corporatist interests.

These putative servants of the people are far more often debating the form than the content. Just at the moment of action, delays accumulate to the point when you have to submit rather than choose.

This inertia is in many ways typical of any civilization. What event could radically modify it? Where could a collective and unifying awareness come from that will stop this blind rushing ahead?

Tribes, populations and human nations have always encountered and interacted with one another. Faced with the threats weighing upon the human family, it is perhaps time that a greater interaction occurred.

There are two ways to establish a cosmic contact with another civilization: via its standing representatives or directly with ordinary individuals.

The first way entails fights of interests, the second way brings awareness. The first way was chosen by a group of races motivated by keeping mankind in slavery, thereby controlling Earth’s resources, its gene pool, and the mass of human emotional energy.

The second way was chosen by a group of races allied with the cause of the Spirit of Service. Some years ago we did introduce ourselves to representatives of the human power structure, but they refused our outstretched hand on the basis of interests that were incompatible with their strategic vision.

That is why today individuals are to make this choice by themselves without any representatives interfering. What we proposed in the past to those whom we believed were in a capacity to contribute to your happiness, we propose now — to you.

Few of you are aware that non-human creatures have been involved in the centralizing of power in your world, and in the subtle taking of control. These creatures do not necessarily stand on your material plane, which is precisely what could make them extremely efficient and frightening in the near future.

However, also be aware that quite a few of your representatives are in fact fighting this danger, that not all alien abductions are conducted to your detriment, and that resistance also exists amongst those dominance-oriented races.

Peace and reunification of your peoples would be a first step toward harmony with civilizations other than yours. That is precisely what those who manipulate you behind the scenes want to avoid at all cost because, by dividing, they reign.

They also reign over those who more visibly govern you. Their strength comes from their capacity to instill mistrust and fear. This considerably harms your very cosmic nature.

This message would be of no interest if these manipulators’ influence were not reaching its peak and if their misleading and murderous plans did not materialize within a few years from now.

Their deadlines are close and mankind will undergo unprecedented difficulties for the next ten cycles [years?]. To defend yourselves against this aggression that bears no face, you need at least to have enough information that points to the solution.

Here again, appearance and body type will not be enough to tell the dominator from the ally.

At your current state of psychic development it is extremely difficult for you to distinguish between them. In addition to your intuition, training will be necessary when the time has come. Being aware of the priceless value of free will, we are inviting you to an alternative.

What can we offer?

We can offer you a more holistic vision of the universe and of life, constructive interactions, the experience of fair and fraternal relationships, liberating technical knowledge, eradication of suffering, controlled exercise of individual powers, access to new forms of energy and, finally, a better comprehension of consciousness.

We cannot help you overcome your individual and collective fears, or bring you laws that you would not have chosen. You must also work on your own selves, apply individual and collective efforts to build the world you desire, and manifest the spirit to quest for new skies.

What would we receive?

Should you decide that such a contact take place, we would rejoice over the safeguarding of fraternal equilibrium in this region of the universe, fruitful diplomatic exchanges, and the intense Joy of knowing that you are united to accomplish what you are capable of.

The feeling of Joy is strongly sought in the universe, for its energy is divine. What is the question we ask you?

“DO YOU WISH THAT WE SHOW UP?”

How can you answer this question? The truth of soul can be read telepathically, so you only need to clearly ask yourself this question and give your answer as clearly, on your own or in a group, as you wish.

Being in the heart of a city or in the middle of a desert does not impact the efficiency of your answer. YES or NO.

Just do it as if you were speaking to yourself but thinking about the message. This is a universal question, and these mere few words, put in their context, have a powerful meaning.
This is why you should calmly think about it, in all conscience. In order to perfectly associate your answer with the question, it is recommended that you answer after another careful reading of this message.

Do not rush to answer. Breathe and let all the power of your own free will penetrate you. Be proud of what you are! Then do not let hesitation get in the way.

The everyday problems that you may have can weaken you. To be yourselves, forget about them for a few minutes. Feel the force that springs up in you. You are in control of yourselves!

A single thought, a single answer can drastically change your near future, in one way as in another. Your individual decision of asking in your inner self that we show up on your material plane and in broad daylight is precious and essential to us.

Even though you can choose the way that best suits you, rituals per se are essentially useless. A sincere request made with your heart and your own will, will always be perceived by those of us to whom it is sent. In your own private polling booth of your secret will, you will determine the future.

What is the lever effect?

This decision should be made by the greatest possible number among you, even though it might seem like a minority.

It is recommended to spread this message, in all envisageable fashions, in as many languages as possible, to those around you, whether or not they seem receptive to this new vision of the future. Do it using a humorous tone or derision if that can help you.

You can even openly and publicly make fun of it if it makes you feel more comfortable, but do not be indifferent, for at least you will have exercised your free will. Forget about the false prophets and the beliefs that have been transmitted to you about us.

This request is one of the most intimate that can be asked to you. Making a decision by yourself, as an individual, is your right as well as your responsibility. Passivity only leads to the absence of freedom.

Similarly, indecision is never efficient. If you really want to cling to your beliefs, which is something that we understand, then forcefully say NO.

If you do not know what to choose, do not say YES because of mere curiosity. This is not a show, this is real daily life. We exist. We are alive.

Your history has had plenty of episodes when determined men and women were able to influence the thread of events despite their small number.

Just as a small number is enough to take temporal power on Earth and influence the future of the majority, a small number of you can radically change your fate as an answer to the impotence in face of so much inertia and so many hurdles. You can ease mankind’s birth to Brotherhood.

One of your thinkers once said:

“Give me a hand-hold and I’ll raise the Earth.”
Spreading this message will then be the strengthening of the hand-hold. We will be the light-years long lever and you will be the craftsmen to “raise the Earth” as a consequence of our appearance.

What would be the consequences of a positive decision?

For us, the immediate consequence of a collective favorable decision would be the materialization of many ships, in your sky and on Earth.

For you, the direct effect would be the rapid abandoning of many certitudes and beliefs. A simple conclusive visual contact would have huge repercussions for your future.

Much knowledge would be modified forever. The organization of your societies would be deeply upheaved forever, in all fields of activity.

Power would become individual because you would see for yourself that we exist as living beings, not accepting or rejecting that fact on the word of any external authority. Concretely, you would change the scale of your values.

The most important thing for us is that humankind would form a single family before this “unknown” we would represent!

Danger would slowly melt away from your homes because you would indirectly force the undesirable ones, those we name the “third party,” to show up and vanish. You would all bear the same name and share the same roots: Mankind.

Later on, peaceful and respectful exchanges would be thus possible if such is your wish. For now, he who is hungry cannot smile, he who is fearful cannot welcome us.

We are sad to see men, women and children suffering to such a degree in their flesh and in their hearts when they bear such an inner light. This light can be your future.

Our relationships could develop in stages. Several stages of several years or decades would occur: demonstrative appearance of our ships, physical appearance beside human beings, collaboration in your technical and spiritual evolution, discovery of parts of the galaxy.

At every stage new choices would be offered to you. You would then decide by yourself to enter new stages if you think it necessary to your external and inner well-being. No interference would be decided upon unilaterally. We would leave as soon as you would collectively wish that we do.

Depending upon the speed to spread the message across the world, several weeks, or even several months will be necessary before our “great appearance,” if such is the decision made by the majority of those who will have used their capacity to choose, and if this message receives the necessary support.

The main difference between your daily prayers to entities of a strictly spiritual nature and your current decision is extremely simple: we are technically equipped to materialize.

Why such a historical dilemma?

We know that “foreigners” are considered as enemies as long as they embody the “unknown.” In a first stage, the emotion that our appearance will generate will strengthen your relationships on a worldwide scale.

How could you know whether our arrival is the consequence of your collective choice? For the simple reason that we would have otherwise shown up long ago at your level of existence. If we are not there yet, it is because you have not made such a decision explicitly.

Some among you might think that we would make you believe in a deliberate choice of yours so as to justify our arrival, though this would not be true. If that were the case, what interest would we have in openly giving you access to these opportunities for the benefit of the greatest number of you?

How could you be certain that this is not yet another subtle maneuver of the “third party” to better enslave you? Because one always more efficiently fights something that is identified than what is kept hidden.

Isn’t the terrorism that corrodes you a blatant example? Whatever, you are the sole judge in your own heart and soul. Whatever your choice, it would be respectable and respected.

In the absence of human representatives who could potentially seduce you into error, you know nothing about us, just as you know nothing about those who manipulate you without your consent.

[There seems to be some text missing in the translation here.]

In your current situation, the precautionary principle that consists in not trying to discover us no longer prevails. You are already in the Pandora’s box that the “third party” has created around you. Whatever your decision may be, you will have to get out of it.

In the face of such a dilemma, one ignorance against another, you need to ask your intuition. Do you want to see us with your own eyes, or simply believe what your “authorities” say? That is the real question! After thousands of years, one day this choice was going to be inevitable: choosing between two unknowns.

Why spread such a message among yourselves?

Translate and spread this message widely. This action will affect your future in an irreversible and historical way at the scale of millennia. Otherwise, it will postpone a new opportunity to choose until several years later — at least one generation, if that generation can survive.

Not choosing stands for undergoing other people’s choice. Not informing others stands for running the risk of obtaining a result that is contrary to one’s expectations. Remaining indifferent means giving up one’s free will.

It is all about your future. It is all about your evolution. It is possible that this invitation will not receive your collective assent and will be disregarded. Nevertheless no individual desire goes unheeded in the universe.

Imagine our arrival tomorrow. Thousands of ships. A cultural shock unique in mankind's history. It will then be too late to regret not making a choice and spreading the message because this discovery will be irreversible.

We do insist that you do not rush into it, but do think about it… And decide. The big media will not necessarily be interested in spreading this message. It is therefore your task, as an anonymous yet an extraordinary thinking and loving being, to transmit it.

You are still the architects of your own fate…

“DO YOU WISH THAT WE SHOW UP?”

End Message

Remember, use your intuition to connect with and feel out this message and what it means for you. I will end with this: blind acceptance is just as unhelpful as blind skepticism.

In 2012 we put out a film called The Collective Evolution 3: The Shift for free. It explores why we are living in the most important time in our history. You can watch that film here.

Why The Sun Has Been On The Fritz

Original Article

By George Dvorsky

The solar flare as seen by NASA’s Solar Dynamics Observatory on September 10, 2017. (Image: NASA/SDO/Goddard)

Since early last week, the Sun has belched out a steady stream of solar flares, including the most powerful burst recorded in the star’s current 11-year cycle. It sounds very alarming, but scientists say this is simply what stars do every now and then, and that there’s nothing to be concerned about.

Solar flares are powerful bursts of radiation that stream out into space after periods of sunspot-associated magnetic activity. Sunspots are surface features that occasionally form owing to the strong magnetic field lines that come up from within the Sun and pierce through the solar surface. Solar flares are the largest explosive events in the Solar System, producing bright flashes that last anywhere from a few minutes to a few hours. Earth’s atmosphere protects us from most of their harmful rays, but this radiation can disturb GPS, radio, and communications signals, particularly near our planet’s polar regions.


On Sunday September 10, 2017, NASA's Solar Dynamics Observatory recorded an X8.2 class flare. Class X flares are the most intense flares, and the number attached denotes a flare's strength: an X2 is twice as intense as an X1, an X3 is three times as intense, and so on. M-class flares are a tenth the size of X-class flares, and C-class flares are the weakest of the bunch. Both X- and M-class flares can cause brief radio blackouts on Earth and other mild technological disruptions, unless the flare is part of an unusually strong solar storm (the kind that happens about once every one hundred years), in which case the consequences would be far more serious.
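
Those letter classes correspond to bands of peak soft X-ray flux measured by the GOES satellites, with each letter ten times the one below it and the trailing number acting as a multiplier. A small helper makes the scaling concrete; the band values used are the standard GOES thresholds in watts per square metre.

# Map a flare classification such as "X9.3" to its approximate peak X-ray flux.
GOES_BANDS = {"A": 1e-8, "B": 1e-7, "C": 1e-6, "M": 1e-5, "X": 1e-4}   # W/m^2
def flare_flux(classification: str) -> float:
    letter, number = classification[0].upper(), float(classification[1:])
    return GOES_BANDS[letter] * number
for label in ("C1.0", "M1.0", "X1.0", "X2.2", "X9.3"):
    print(label, f"{flare_flux(label):.1e} W/m^2")
# An X9.3 flare is 9.3 times an X1 and nearly 100 times an M1.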

The latest flare spurted out from the Sun's Active Region 2673, which scientists first noticed on August 29. Activity from this region began to intensify on September 4. Over the past week, NASA has catalogued six sizeable flares, including X2.2 and X9.3 flares on September 6, and an X1.3 flare on September 7. The X9.3 flare is the largest flare recorded so far in the current solar cycle – an approximately 11-year cycle in which the Sun's activity waxes and wanes. We're in the ninth year of the current cycle, and we're heading towards a solar minimum in terms of intensity. Flares like this are rare during this waning phase, but as these latest bursts show, they can still be pretty intense.

This gif shows both the X2.2 and the X9.3 flares that the Sun emitted on Sept. 6, 2017. (Image: NASA/GSFC/SDO)

"Big flares towards the end of sunspot cycles are not unusual, and in fact, that's fairly standard behavior," said Scott MacIntosh, director of the High Altitude Observatory at the National Center for Atmospheric Research (NCAR), in an interview with Gizmodo. "The trick is to explain why."

MacIntosh says that when the Sun’s activity gets low, the magnetic systems underlying the spots appear to be in close-contact near the equator. This creates an opportunity for the Sun to produce “hybrid” sunspots—regions which contain magnetic fields that twist like water in the Northern and Southern hemisphere oceans.

“Remember how the rotation of the Earth makes water [spin] in different directions in each hemisphere? The Sun does the same thing for the same reason—the Coriolis force,” said MacIntosh. “Those systems are very unstable. Typically these types of spots produce the biggest, baddest flares and coronal mass ejections when they emerge through the Sun’s surface.”

But the paradoxical thing, says MacIntosh, is that the periods of very low solar activity are known to have produced the biggest geomagnetic storms in history, and these late-cycle events can persist for a very long time, even though the total number of flares is low. “It’s basically about how the different magnetic systems interact,” he says.

As a result of the most recent solar flares, NOAA’s Space Weather Prediction Center has issued a moderate geomagnetic storm watch for September 13, and a minor geomagnetic storm watch for September 14. This shouldn’t cause too much of a problem on Earth, but as NASA Solar Scientist Mitzi Adams explained to Gizmodo, we need to be concerned about flares and coronal mass ejections, since we’re now so reliant on technology that can be impacted by these events.

“The Space Weather Prediction Center (SWPC) shows an image from SOHO’s coronagraph with ‘speckles.’ The speckles are energetic charged particles interacting with the camera, which do degrade the camera over time,” said Adams. “These events also cause radio blackouts, corrosion in pipelines, and ground-induced currents that can damage transformers. Through monitoring and basic research, the goal is to understand what the Sun does and is likely to do so that we can prepare satellites, power grids, and even astronauts.”

The particles that speckle our cameras, says Adams, arrive about an hour after the flare, having travelled the roughly 93,000,000 miles (150,000,000 km) from the Sun to the Earth. But the bulk of the particles take a couple of days to reach our planet, giving us some time to prepare.
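
Taking those figures at face value, a quick bit of arithmetic shows just how fast those first particles are moving compared with light, which covers the same distance in about eight minutes.

# Rough arithmetic on the travel time quoted above.
distance_m = 1.5e11          # Sun-Earth distance, about 150 million km
travel_time_s = 3600.0       # "about an hour"
c = 3.0e8                    # speed of light, m/s
speed = distance_m / travel_time_s
print(f"Average particle speed: {speed:.1e} m/s ({speed / c:.0%} of light speed)")   # ~4.2e7 m/s, ~14%
print(f"Light makes the same trip in {distance_m / c / 60:.1f} minutes")             # ~8.3 minutes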

Correction: A previous version of this post incorrectly identified the Space Weather Prediction Center as being run by NASA. Sorry about the error.

Tattoo Ink Particles Can Travel To Lymph Nodes

Original Article

By Ryan F. Mandelbaum

Image: Pitbull Tattoo Thailand

Tattoos are very cool and I do not want to say bad things about them. Evidence of tattooing dates back thousands of years, and the art form has a long history across the world in various cultures. Tattooing has associations with wealth, crime, or seafaring depending on where in history you look. Today, there’s no denying tattoos are everywhere.

But unfortunately, scientists haven’t really looked at the long-term effects of tattoos on the human body.

Researchers have long noticed ink stains on lymph nodes in tattooed folks, but weren’t certain which kinds of particles from the ink were actually ending up there. A new study analyzing deceased tattooed individuals with a high-tech x-ray light source looked at the specifics of the tiny particles that made it to the nodes and stayed there for a long time. While the lymph nodes of these deceased individuals contained a small amount of potentially toxic metals that are believed to be from the tattoos, it’s still unclear exactly what effects these particles might have.

That’s because, given that tattooing is a cosmetic choice, scientists haven’t really studied it. “Currently, basic toxicological aspects,” like how the body transports and breaks down the ink molecules, “are largely uncertain,” the authors write in the paper published today in the journal Scientific Reports. “The animal experiments which would be necessary to address these toxicological issues were rated unethical because tattoos are applied as a matter of choice and lack medical necessity, similar to cosmetics.”

The researchers took skin and lymph node samples from four tattooed deceased human body donors and two non-tattooed donors. They found ink in both the skin and lymph nodes of two of the four patients—one with blue ink and another with green ink. Further chemical analysis found elevated levels of aluminum, chromium, iron, nickel, and copper in both the lymph nodes and skin of tattooed individuals, and even found cadmium and mercury in one of the donors’ lymph nodes (but not in the skin—the authors thought maybe it came from a different tattoo not tested). All of the tattooed individuals also had higher levels of titanium in the skin and nodes, which the authors thought was unlikely to have come from the usual titanium dioxide sources, cosmetics and sunscreen.

Skin and lymph node samples

The researchers also analyzed the skin and lymph nodes with x-rays from the European Synchrotron Radiation Facility, a large particle accelerator in France, and found that the bodies seemed to react to the tattoos in the lymph nodes—lipid levels were higher near the intruding particles. They note that these lipids may also have come from components of the ink.

While there are several acute issues that might come along with tattoos, from allergic reaction and inflammation to infection, there's still a question as to what the long-term effects might be. The authors here aren't telling you that you should be worried yet, as this is a preliminary study with only a few samples. Rather, they've recognized that lots of people are getting tattoos these days but the effects are understudied. It would probably be beneficial to understand what your body is actually doing with all of that ink, or even how it reacts to titanium oxide in cosmetics when it comes into contact with a wound.

One scientist not involved with the study, Wolfgang Bäumler from University Hospital Regensburg in Germany, said the work convincingly confirmed something he’s been studying: “Tattoo effects may be more than skin deep.”

I think you should get a tattoo because tattoos are dope (this is a biased statement, I have a family member who is a tattoo artist). But you should also know the risks, said Bäumler. “People getting a tattoo should know that colorants injected in the skin may cause skin problems like an allergic reaction and/or granulomas… People should also know that skin is eager to remove such foreign bodies from skin (tattoo colorant) via the lymphatic system, that is the job of the immune system in skin. Then, the colorant ingredients show up in the next lymph nodes.”

[Scientific Reports]

 

Hackers Already Weaponizing A.I.

Original Article

By George Dvorsky

Illustration: Sam Woolley/Gizmodo

Last year, two data scientists from security firm ZeroFOX conducted an experiment to see who was better at getting Twitter users to click on malicious links, humans or an artificial intelligence. The researchers taught an AI to study the behavior of social network users, and then design and implement its own phishing bait. In tests, the artificial hacker was substantially better than its human competitors, composing and distributing more phishing tweets than humans, and with a substantially better conversion rate.

The AI, named SNAP_R, sent simulated spear-phishing tweets to over 800 users at a rate of 6.75 tweets per minute, luring 275 victims. By contrast, Forbes staff writer Thomas Fox-Brewster, who participated in the experiment, was only able to pump out 1.075 tweets a minute, making 129 attempts and luring in just 49 users.

Human or bot? AI makes it tough to tell. (Image: ZeroFOX)

Thankfully this was just an experiment, but the exercise showed that hackers are already in a position to use AI for their nefarious ends. And in fact, they’re probably already using it, though it’s hard to prove. In July, at Black Hat USA 2017, hundreds of leading cybersecurity experts gathered in Las Vegas to discuss this issue and other looming threats posed by emerging technologies. In a Cylance poll held during the confab, attendees were asked if criminal hackers will use AI for offensive purposes in the coming year, to which 62 percent answered in the affirmative.

The era of artificial intelligence is upon us, yet if this informal Cylance poll is to be believed, a surprising number of infosec professionals—the remaining 38 percent—are refusing to acknowledge the potential for AI to be weaponized by hackers in the immediate future. It’s a perplexing stance given that many of the cybersecurity experts we spoke to said machine intelligence is already being used by hackers, and that criminals are more sophisticated in their use of this emerging technology than many people realize.

“Hackers have been using artificial intelligence as a weapon for quite some time,” said Brian Wallace, Cylance Lead Security Data Scientist, in an interview with Gizmodo. “It makes total sense because hackers have a problem of scale, trying to attack as many people as they can, hitting as many targets as possible, and all the while trying to reduce risks to themselves. Artificial intelligence, and machine learning in particular, are perfect tools to be using on their end.” These tools, he says, can make decisions about what to attack, who to attack, when to attack, and so on.

Scales of intelligence

Marc Goodman, author of Future Crimes: Everything Is Connected, Everyone Is Vulnerable and What We Can Do About It, says he isn’t surprised that so many Black Hat attendees see weaponized AI as being imminent, as it’s been part of cyber attacks for years.

“What does strike me as a bit odd is that 62 percent of infosec professionals are making an AI prediction,” Goodman told Gizmodo. “AI is defined by many different people many different ways. So I’d want further clarity on specifically what they mean by AI.”

Indeed, it’s likely on this issue that expert opinions diverge.

The funny thing about artificial intelligence is that our conception of it changes as time passes, and as our technologies increasingly match human intelligence in many important ways. At the most fundamental level, intelligence describes the ability of an agent, whether it be biological or mechanical, to solve complex problems. We possess many tools with this capability, and we have for quite some time, but we almost instantly start to take these tools for granted once they appear.

Centuries ago, for example, the prospect of a calculating machine that could crunch numbers millions of times faster than a human would’ve most certainly been considered a radical technological advance, yet few today would consider the lowly calculator as being anything particularly special. Similarly, the ability to win at chess was once considered a high mark of human intelligence, but ever since Deep Blue defeated Garry Kasparov in 1997, this cognitive skill has lost its former luster. And so on and so forth with each passing breakthrough in AI.

Today, rapid-fire developments in machine learning (whereby systems learn from data and improve with experience without being explicitly programmed), natural language processing, neural networks (systems modeled on the human brain), and many other fields are likewise lowering the bar on our perception of what constitutes machine intelligence. In a few years, artificial personal assistants (like Siri or Alexa), self-driving cars, and disease-diagnosing algorithms will likewise lose, unjustifiably, their AI allure. We’ll start to take these things for granted, and disparage these forms of AI for not being perfectly human. But make no mistake—modern tools like machine intelligence and neural networks are a form of artificial intelligence, and to believe otherwise is something we do at our own peril; if we dismiss or ignore the power of these tools, we may be blindsided by those who are eager to exploit AI’s full potential, hackers included.

A related problem is that the term artificial intelligence conjures futuristic visions and sci-fi fantasies that are far removed from our current realities.

“The term AI is often misconstrued, with many people thinking of Terminator robots trying to hunt down John Connor—but that’s not what AI is,” said Wallace. “Rather, it’s a broad topic of study around the creation of various forms of intelligence that happen to be artificial.”

Wallace says there are many different realms of AI, with machine learning being a particularly important subset of AI at the current moment.

“In our line of work, we use narrow machine learning—which is a form of AI—when trying to apply intelligence to a specific problem,” he told Gizmodo. “For instance, we use machine learning when trying to determine if a file or process is malicious or not. We’re not trying to create a system that would turn into SkyNet. Artificial intelligence isn’t always what the media and science fiction has depicted it as, and when we [infosec professionals] talk about AI, we’re talking about broad areas of study that are much simpler and far less terrifying.”

Evil intents

These modern tools may be less terrifying than clichéd Terminator visions, but in the hands of the wrong individuals, they can still be pretty scary.

Deepak Dutt, founder and CEO of Zighra, a mobile security startup, says there’s a high likelihood that sophisticated AI will be used for cyberattacks in the near future, and that it might already be in use by countries such as Russia, China, and some Eastern European countries. In terms of how AI could be used in nefarious ways, Dutt has no shortage of ideas.

“Artificial intelligence can be used to mine large amounts of public domain and social network data to extract personally identifiable information like date of birth, gender, location, telephone numbers, e-mail addresses, and so on, which can be used for hacking [a person’s] accounts,” Dutt told Gizmodo. “It can also be used to automatically monitor e-mails and text messages, and to create personalized phishing mails for social engineering attacks [phishing scams are an illicit attempt to obtain sensitive information from an unsuspecting user]. AI can be used for mutating malware and ransomware more easily, and to search more intelligently and dig out and exploit vulnerabilities in a system.”

Dutt suspects that AI is already being used for cyberattacks, and that criminals are already using some sort of machine learning capabilities, for example, by automatically creating personalized phishing e-mails.

“But what is new is the sophistication of AI in terms of new machine learning techniques like Deep Learning, which can be used to achieve the scenarios I just mentioned with a higher level of accuracy and efficiency,” he said. Deep Learning, also known as hierarchical learning, is a subfield of machine learning that utilizes large neural networks. It has been applied to computer vision, speech recognition, social network filtering, and many other complex tasks, often producing results superior to human experts.

“Also the availability of large amounts of social network and public data sets (Big Data) helps. Advanced machine learning and Deep Learning techniques and tools are easily available now on open source platforms—this combined with the relatively cheap computational infrastructure effectively enables cyberattacks with higher sophistication.”

These days, the overwhelming majority of cyberattacks are automated, according to Goodman. The human hacker going after an individual target is far rarer, and the more common approach now is to automate attacks with tools of AI and machine learning—everything from scripted Distributed Denial of Service (DDoS) attacks to ransomware, criminal chatbots, and so on. While it can be argued that automation is fundamentally unintelligent (conversely, a case can be made that some forms of automation, particularly those involving large sets of complex tasks, are indeed a form of intelligence), it’s the prospect of a machine intelligence orchestrating these automated tasks that’s particularly alarming. An AI can produce complex and highly targeted scripts at a rate and level of sophistication far beyond any individual human hacker.

Indeed, the possibilities seem almost endless. In addition to the criminal activities already described, AIs could be used to target vulnerable populations, perform rapid-fire hacks, develop intelligent malware, and so on.

Staffan Truvé, Chief Technology Officer at Recorded Future, says that, as AI matures and becomes more of a commodity, the “bad guys,” as he puts it, will start using it to improve the performance of attacks, while also cutting costs. Unlike many of his colleagues, however, Truvé says that AI is not really being used by hackers at the moment, claiming that simpler algorithms (e.g. for self-modifying code) and automation schemes (e.g. to enable phishing schemes) are working just fine.

“I don’t think AI has quite yet become a standard part of the toolbox of the bad guys,” Truvé told Gizmodo. “I think the reason we haven’t seen more ‘AI’ in attacks already is that the traditional methods still work—if you get what you need from a good old fashioned brute force approach then why take the time and money to switch to something new?”

AI on AI

With AI now part of the modern hacker’s toolkit, defenders are having to come up with novel ways of defending vulnerable systems. Thankfully, security professionals have a rather potent and obvious countermeasure at their disposal, namely artificial intelligence itself. Trouble is, this is bound to produce an arms race between the rival camps. Neither side really has a choice, as the only way to counter the other is to increasingly rely on intelligent systems.

“For security experts, this is a Big Data problem—we’re dealing with tons of data—more than a single human could possibly produce,” said Wallace. “Once you’ve started to deal with an adversary, you have no choice but to use weaponized AI yourself.”

To stay ahead of the curve, Wallace recommends that security firms conduct their own internal research, and develop their own weaponized AI to fight and test their defenses. He calls it “an iron sharpens iron” approach to computer security. The Pentagon’s advanced research wing, DARPA, has already adopted this approach, organizing grand challenges in which AI developers pit their creations against each other in a virtual game of Capture the Flag. The process is very Darwinian, and reminiscent of yet another approach to AI development—evolutionary algorithms. For hackers and infosec professionals, it’s survival of the fittest AI.

Goodman agrees, saying “we will out of necessity” be using increasing amounts of AI “for everything from fraud detection to countering cyberattacks.” And in fact, several start-ups are already doing this, partnering with IBM Watson to combat cyber threats, says Goodman.

“AI techniques are being used today by defenders to look for patterns—the antivirus companies have been doing this for decades—and to do anomaly detection as a way to automatically detect if a system has been attacked and compromised,” said Truvé.
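To make the defensive side a bit more concrete, here is a minimal sketch of the kind of anomaly detection Truvé describes, using scikit-learn’s IsolationForest. The feature set, values, and contamination rate are invented for illustration and are not drawn from any vendor’s actual system.

```python
# Minimal anomaly-detection sketch: flag unusual sessions among mostly
# normal traffic. All features and values below are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Historical sessions: [hour_of_day, megabytes_transferred, failed_logins]
normal_sessions = np.array([
    [9, 12.0, 0], [10, 8.5, 1], [14, 20.3, 0],
    [16, 5.1, 0], [11, 15.7, 1], [13, 9.8, 0],
])

detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(normal_sessions)

# A 3 a.m. session moving 900 MB after 7 failed logins should stand out.
suspicious = np.array([[3, 900.0, 7]])
print(detector.predict(suspicious))  # -1 means "anomaly", 1 means "normal"
```

Real deployments obviously use far richer features and far more telemetry, but the basic pattern—learn what “normal” looks like, then flag deviations—is the same.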

At his company, Recorded Future, Truvé is using AI techniques to do natural language processing to, for example, automatically detect when an attack is being planned and discussed on criminal forums, and to predict future threats.

“Bad guys [with AI] will continue to use the same attack vectors as today, only in a more efficient manner, and therefore the AI based defense mechanisms being developed now will to a large extent be possible to also use against AI based attacks,” he said.

Dutt recommends that infosec teams continuously monitor the cyber attack activities of hackers and learn from them, continuously “innovate with a combination of supervised and unsupervised learning based defense strategies to detect and thwart attacks at the first sign,” and, like in any war, adopt superior defenses and strategy.

The bystander effect

So our brave new world of AI-enabled hacking awaits, with criminals becoming increasingly capable of targeting vulnerable users and systems. Computer security firms will likewise lean on AI in a never-ending effort to keep up. Eventually, these tools will escape human comprehension and control, working at lightning-fast speeds in an emerging digital ecosystem. It’ll get to a point where both hackers and infosec professionals have no choice but to hit the “go” button on their respective systems, and simply hope for the best. A consequence of AI is that humans are increasingly being kept out of the loop.

 

AI Can Determine Sexual Orientation From A Photograph

By Sam Levin
An illustrated depiction of facial analysis technology similar to that used in the experiment.

An algorithm deduced the sexuality of people on a dating site with up to 91% accuracy, raising tricky ethical questions

Artificial intelligence can accurately guess whether people are gay or straight based on photos of their faces, according to new research that suggests machines can have significantly better “gaydar” than humans.

The study from Stanford University – which found that a computer algorithm could correctly distinguish between gay and straight men 81% of the time, and 74% for women – has raised questions about the biological origins of sexual orientation, the ethics of facial-detection technology, and the potential for this kind of software to violate people’s privacy or be abused for anti-LGBT purposes.

The machine intelligence tested in the research, which was published in the Journal of Personality and Social Psychology and first reported in the Economist, was based on a sample of more than 35,000 facial images that men and women publicly posted on a US dating website. The researchers, Michal Kosinski and Yilun Wang, extracted features from the images using “deep neural networks”, meaning a sophisticated mathematical system that learns to analyze visuals based on a large dataset.

The research found that gay men and women tended to have “gender-atypical” features, expressions and “grooming styles”, essentially meaning gay men appeared more feminine and vice versa. The data also identified certain trends, including that gay men had narrower jaws, longer noses and larger foreheads than straight men, and that gay women had larger jaws and smaller foreheads compared to straight women.

Human judges performed much worse than the algorithm, accurately identifying orientation only 61% of the time for men and 54% for women. When the software reviewed five images per person, it was even more successful – 91% of the time with men and 83% with women. Broadly, that means “faces contain much more information about sexual orientation than can be perceived and interpreted by the human brain”, the authors wrote.

The paper suggested that the findings provide “strong support” for the theory that sexual orientation stems from exposure to certain hormones before birth, meaning people are born gay and being queer is not a choice. The machine’s lower success rate for women also could support the notion that female sexual orientation is more fluid.

While the findings have clear limits when it comes to gender and sexuality – people of color were not included in the study, and there was no consideration of transgender or bisexual people – the implications for artificial intelligence (AI) are vast and alarming. With billions of facial images of people stored on social media sites and in government databases, the researchers suggested that public data could be used to detect people’s sexual orientation without their consent.

It’s easy to imagine spouses using the technology on partners they suspect are closeted, or teenagers using the algorithm on themselves or their peers. More frighteningly, governments that continue to prosecute LGBT people could hypothetically use the technology to out and target populations. That means building this kind of software and publicizing it is itself controversial given concerns that it could encourage harmful applications.

But the authors argued that the technology already exists, and its capabilities are important to expose so that governments and companies can proactively consider privacy risks and the need for safeguards and regulations.

“It’s certainly unsettling. Like any new tool, if it gets into the wrong hands, it can be used for ill purposes,” said Nick Rule, an associate professor of psychology at the University of Toronto, who has published research on the science of gaydar. “If you can start profiling people based on their appearance, then identifying them and doing horrible things to them, that’s really bad.”

Rule argued it was still important to develop and test this technology: “What the authors have done here is to make a very bold statement about how powerful this can be. Now we know that we need protections.”

Kosinski was not immediately available for comment, but after publication of this article on Friday, he spoke to the Guardian about the ethics of the study and implications for LGBT rights. The professor is known for his work with Cambridge University on psychometric profiling, including using Facebook data to make conclusions about personality. Donald Trump’s campaign and Brexit supporters deployed similar tools to target voters, raising concerns about the expanding use of personal data in elections.

In the Stanford study, the authors also noted that artificial intelligence could be used to explore links between facial features and a range of other phenomena, such as political views, psychological conditions or personality.

This type of research further raises concerns about the potential for scenarios like the science-fiction movie Minority Report, in which people can be arrested based solely on the prediction that they will commit a crime.

“AI can tell you anything about anyone with enough data,” said Brian Brackeen, CEO of Kairos, a face recognition company. “The question is as a society, do we want to know?”

Brackeen, who said the Stanford data on sexual orientation was “startlingly correct”, said there needs to be an increased focus on privacy and tools to prevent the misuse of machine learning as it becomes more widespread and advanced.

Rule speculated about AI being used to actively discriminate against people based on a machine’s interpretation of their faces: “We should all be collectively concerned.”

Researchers Reverse the Negative Effects Of Adolescent Marijuana Use

Original Article

Researchers at Western University have found a way to use pharmaceuticals to reverse the negative psychiatric effects of THC, the psychoactive chemical found in marijuana. Chronic adolescent marijuana use has previously been linked to the development of psychiatric diseases, such as schizophrenia, in adulthood. But until now, researchers were unsure of what exactly was happening in the brain to cause this to occur.

“What is important about this study is that not only have we identified a specific mechanism in the prefrontal cortex for some of the mental health risks associated with adolescent marijuana use, but we have also identified a mechanism to reverse those risks,” said Steven Laviolette, professor at Western’s Schulich School of Medicine & Dentistry.

In a study published online today in Scientific Reports, the researchers demonstrate that adolescent THC exposure modulates the activity of a neurotransmitter called GABA in the prefrontal cortex region of the brain. The team, led by Laviolette and post-doctoral fellow Justine Renard, looked specifically at GABA because of its previously shown clinical association with schizophrenia.

“GABA is an inhibitory neurotransmitter and plays a crucial role in regulating the excitatory activity in the frontal cortex, so if you have less GABA, your neuronal systems become hyperactive, leading to behavioural changes consistent with schizophrenia,” said Renard.

The study showed that the reduction of GABA as a result of THC exposure in adolescence caused the neurons in adulthood to not only be hyperactive in this part of the brain, but also to be out of synch with each other, demonstrated by abnormal oscillations called ‘gamma’ waves. This loss of GABA in the cortex caused a corresponding hyperactive state in the brain’s dopamine system, which is commonly observed in schizophrenia.

By using drugs to activate GABA in a rat model of schizophrenia, the team was able to reverse the neuronal and behavioural effects of the THC and eliminate the schizophrenia-like symptoms.

Laviolette says this finding is especially important given the impending legalization of marijuana in Canada. “What this could mean is that if you are going to be using marijuana, in a recreational or medicinal way, you can potentially combine it with compounds that boost GABA to block the negative effects of THC.”

The research team says the next steps will examine how combinations of cannabinoid chemicals with compounds that can boost the brain’s GABA system may serve as more effective and safer treatments for a variety of mental health disorders, such as addiction, depression and anxiety.

Elon Musk: A.I. Battle Is A “Likely Cause” Of WWIII

Original Article

By Brett Molina


A race toward “superiority” between countries over artificial intelligence will be the most likely cause of World War III, warns entrepreneur Elon Musk.

The CEO of Tesla and SpaceX has been outspoken about his fears of AI, urging countries to consider regulations now before the technology becomes more widely used.

Musk’s warning was prompted by comments from Russian President Vladimir Putin, who said the country leading the way in AI “will become ruler of the world,” reports news site RT.

“It begins,” said Musk in an earlier tweet ahead of his warning about the potential risks.

“China, Russia, soon all countries w strong computer science,” Musk wrote in a tweet posted Monday. “Competition for AI superiority at national level most likely cause of WW3 imo.”


In response to a Twitter user, Musk notes the AI itself — not a country’s leader — could spark a major conflict “if it decides that a preemptive strike is most probable path to victory.”

An automated WWIII at that. That’s a worry…


Musk has emerged as a prominent voice on AI safety, seeking ways for governments to regulate the technology before it gets out of control. Last month, Musk warned that the risks posed by AI are greater than the threat of nuclear war from North Korea.

In July, during the National Governors Association meeting, Musk pushed states to consider new rules for AI.

“AI is a rare case where I think we need to be proactive in regulation rather than reactive,” said Musk.

Can Nightmares Cause Death?

Original Article

By Tara MacIsaac

In Beyond Science, Epoch Times explores research and accounts related to phenomena and theories that challenge our current knowledge. We delve into ideas that stimulate the imagination and open up new possibilities. Share your thoughts with us on these sometimes controversial topics in the comments section below.

A lot of the research on nightmares suggests these events test the strength of one’s mind. If the mind is not strong, nightmares can take hold with greater force and the torment can extend beyond one’s dreams.

Dr. Patrick McNamara of the Boston University School of Medicine looks at nightmares in a modern clinical context that also takes into account the history of dream phenomena in many cultures. He connects nightmares with a world of malevolent spirits.

 

Spirit Possession

Some people who experience frequent nightmares, both today and throughout history, also show signs in waking life of mental illness and even what may be seen as spirit possession.

Dr. McNamara seems unabashed in speaking about spirit possession.

“Nightmares very often involve supernatural characters that attack or target the dreamer in some way. I mean monsters, creatures, demons, spirits, unusual animals, and the like,” Dr. McNamara said in an interview with Boston University’s alumni publication Bostonia. “The self escapes unscathed only if it refuses to look at or speak to or in any way engage the monster. When the self engages the monster, all kinds of ill effects ensue, including, in ancestral cultures, demonic or spirit possession.”

“It is an interesting clinical fact that, even today, most cases of involuntary spirit possession across the world occur overnight. The person wakes up possessed,” he said. He said spirit possession is much more common than most people think. “It is a universal human experience.”

Under attack, the strength of a dreamer’s ego is challenged in nightmares. As with facing other hardships in life, overcoming a nightmare attack can make a person stronger, Dr. McNamara said.

Nightmares are more frequent for people with “thin boundaries,” he said—that is, people who are sensitive to sensory impressions and creative people.

People who’ve experienced trauma also experience frequent nightmares. As has been suggested in other modern spirit possession studies, trauma victims may sometimes withdraw their consciousness from their bodies as a coping method, thus leaving their bodies open to the control of other consciousnesses.

ALSO SEE: Science of Spirit Possession

Neurologist and psychiatrist Dr. McNamara studied nightmares for more than a decade before writing “Nightmares: The Science and Solution of Those Frightening Visions During Sleep” in 2008.

 

Mental Illness

Concept image of a woman suffering from mental illness via Shutterstock

Researchers at the University of Warwick in England released a study earlier this year linking chronic childhood nightmares with mental illness later in life.

Children who experienced frequent nightmares were three times more likely to have psychotic experiences in their teenage years. A link does not mean a cause-and-effect relationship, however; it may be that people who are prone to nightmares are also prone to mental illness.

When considering the lasting impact of nightmares on one’s mind, another question that has been raised is, if a person dies in a dream could he die in real life as a result?

 

Can You Really Die if Killed in a Nightmare?

There is a phenomenon called “sudden unexplained nocturnal death syndrome” (SUNDS) that some have speculated may be linked to nightmares, but this link has not been rigorously tested and is far from certain. SUNDS is more common among a particular demographic, young men, and often happens when the men have gone to bed with a full stomach, suggesting more physiological causes.

Another phenomenon related to death in sleep is parasomnia pseudo-suicide, in which people kill themselves in their sleep. A 2003 article in the Journal of Forensic Sciences explained: “Complex behaviors arising from the sleep period may result in violent or injurious consequences, even death. Those resulting in death may be erroneously deemed suicides.”

Some have said this is what happened to modern artist Tobias Wong, who hanged himself in New York in 2010.

Doree Shafrir of Buzzfeed wrote an article about her personal experiences with night terrors and also mentioned Wong. Night terrors differ slightly from nightmares in that the sleepers may exhibit more physical movement or yelling during the terrors and they may not remember dream episodes that caused the reaction. “It crosses my mind that I could actually scare myself to death,” she wrote.

“The prevailing theory about Tobias Wong’s death was that he hanged himself while experiencing a night terror. I imagine that something in his mind told him that hanging himself was the only way to escape whoever, or whatever, was chasing him, in the same way that I have thought that the only way to save myself was to jump out of a window or smash a pane of glass.”

It is difficult, of course, to establish any clear link between nightmares and death, since the supposed cause would exist only in the person’s mind, and a person who actually died as a result would not be able to report it.

 

How to Fight Nightmares: Make the Scary Thing Silly

A common therapy to help people with chronic nightmares overcome them is turning the scary image into something benign. In waking life, the person identifies the frightening imagery that comes up in recurrent nightmares or nightmares with similar themes. He or she then reimagines it in a way that makes it less scary, sometimes drawing it out on paper to help visualize it more clearly and reinforce the new image.

Fans of “Harry Potter” may think of the scene in which Neville Longbottom pictures the frightful Professor Snape dressed in his grandmother’s clothes, effectively dispelling the fear associated with that figure.

The Science of Spirit Possession

Original Article

By Tara MacIsaac

In Beyond Science, Epoch Times explores research and accounts related to phenomena and theories that challenge our current knowledge. We delve into ideas that stimulate the imagination and open up new possibilities. Share your thoughts with us on these sometimes controversial topics in the comments section below.

Modern science questions much of the knowledge gained through the collective memory of humanity over the course of millennia.

“Every culture and religious belief system throughout human history has its traditional beliefs of spirit possession in some form or another with corresponding rituals for the release or exorcism of spirit entities,” wrote Dr. Terence Palmer, a psychologist and the first person in the U.K. to earn a Ph.D. in spirit release therapy.

Some psychologists are returning to the methods developed by our ancestors to help patients with symptoms of possession.

Dr. William Baldwin (1939–2004) founded the practice of spirit release therapy and he also used past-life regression treatments. Baldwin was cautious about saying whether he believed in reincarnation or not, but he did say his treatments helped patients, and that’s what matters.

Spirit release practitioner Dr. Alan Sanderson wrote in a paper titled “Spirit Release Therapy: What Is It and What Can It Achieve?”: “I want to stress that the concept of spirit attachment and the practice of spirit release are not based on faith, as are religious and mystical beliefs. They are based on the observation of clinical cases and their response to standard therapeutic techniques. This is a scientific approach, albeit one that takes account of subjective experience and is not confined by contemporary scientific theory.”

Dr. Palmer commented in the introduction to a lecture titled “The Science of Spirit Possession”: “SRT [spirit release therapy] sits uncomfortably between the disbelief of a materialist secular society and the subjective experience of spirit possession: whether that experience is a symptom of psychosis, symbolic representation, socio-cultural expectation or a veridical manifestation.”

Parapsychology has been called a “pseudoscience,” as have other scientific approaches to phenomena that cannot be entirely explained by conventional science. However one views the method, it appears a revival of ancient wisdom has been effective in many cases.

Here’s a look at some of the thinkers, including those already mentioned, who have approached possession scientifically.

 

Frederick W.H. Myers

Frederick W.H. Myers (1843–1901) wrote in his book “Human Personality and Its Survival of Bodily Death,” which was published posthumously in 1906: “The controlling spirit proves his identity mainly by reproducing, in speech or writing, facts which belong to his memory and not to the automatist’s memory.”

He noted that the brain is little-understood; scientists don’t have a solid understanding of many of its ordinary functions let alone extraordinary functions (and this still holds true today). He theorized about a sort of radiation or energy that could be behind the telepathic influence of one person on another.  He tried to consider how the memory centers might be related to the gaps in memory experienced by people said to be possessed.

Myers had no formal training in psychology, and much of his work relied on two mediums he worked with. It was his belief in a science that takes fuller account of human consciousness that has continued to inspire scientists. Myers also noted that the origin of an idea is not as important as its effectiveness or veracity.

“Instead of asking in what age a doctrine originated—with the implied assumption that the more recent it is, the better—we can now ask how far it is in accord or in discord with a great mass of actual recent evidence which comes into contact, in one way or another, with nearly every belief as to an unseen world which has been held at least by Western men.

“Submitted to this test, the theory of possession gives a remarkable result. It cannot be said to be inconsistent with any of our proved facts. We know absolutely nothing which negatives its possibility.

“Nay, more than this. The theory of possession actually supplies us with a powerful method of co-ordinating and explaining many earlier groups of phenomena, if only we will consent to explain them in a way which at first sight seemed extreme in its assumptions.”

 

Dr. Terence Palmer

Dr. Palmer’s Ph.D. thesis revived Myers’s work. He said that Myers and others have tried to bring the mental, emotional, and spiritual elements of human experience into natural science.

“To permit the accommodation of all human experience into a broader scientific framework is a scary prospect for several reasons. But fear is the cause of all human suffering, and only when medical science puts aside its own fears of being proven wrong can it treat sickness effectively by showing how fear is to be remedied,” Dr. Palmer wrote.

In a recorded lecture on his thesis, he looked at ways in which we come to know things. Some of the ways include learning from others, using logic and deduction, and through personal experience. He noted that in these ways, a good deal of evidence exists for the possibility of real spirit possession.

Funding, he said, has been one of the obstacles to conducting more rigorous scientific research of spirit possession. He said further studies must be done with remote telepathic intervention. This would bypass any placebo effect or any psychological impact a patient’s belief system may have.

 

Dr. Alan Sanderson

Dr. Sanderson asked in his paper “So where is the research to back these heretical claims [about spirit possession]?”

He gave three reasons for minimal research in this field of study. First, spirit release is a new study, which has only been systematically taught and practiced for about a decade. Second, much mistrust and many misconceptions still present obstacles. Third, research funds are hard to come by.

He is hopeful the field will progress and funds will come forth. In the meantime, he said, “individual cases have much to say.” Dr. Sanderson uses the method developed by Dr. Baldwin to treat spirit possession. Following is an outline of Dr. Baldwin’s work and an example of how Dr. Sanderson used it to help a woman allegedly possessed by the ghost of her father.

 

Dr. William Baldwin

Dr. Baldwin developed a method of helping people exorcise their demons, so to speak. It is thought that traumatic experiences in particular can cause a person’s consciousness to withdraw and give the body over to other forms of consciousness.

In spirit release therapy, the patient is hypnotized so it is easier to access the other consciousnesses in the person’s mind. The therapist asks the possessing entity to look inside. Dr. Baldwin has said that about half of his hypnotized patients could see silver threads, like those described in Ecclesiastes in the Bible as connecting the human spirit to the body, according to author Kerry Pobanz.

The therapist is said to help the spirits resolve issues so they will no longer have a negative impact on the patient and the therapist may even ask for divine intervention.

 

Dr. Sanderson’s Case Study of a Woman With Multiple Personalities

Pru, 46, had long-term psychological problems found to stem from sexual abuse by her father when she was a child. Under hypnosis in a session with Dr. Sanderson, she identified herself as her father, Jason. Jason would become angry and threaten Dr. Sanderson.

“In deep trance, Jason agreed to look within himself, where he saw blackness,” Dr. Sanderson wrote. “I called for angelic help. With the use of Baldwin’s protocol for dealing with demonic spirits, the blackness left. Thereafter, Jason was amenable. He agreed to leave. Other destructive entities responded similarly.”

Not all spirits found inside a person are malevolent, say spirit release practitioners.

Pru wrote a paragraph to describe her experience: “The spiritual approach left me freer from the remaining daily distress than anything tried before. Whilst under hypnosis I found myself talking about some experiences that I had definitely not had and places I certainly had not been to. So, was this spirits, split off parts of my personality, ancestral memory or even false memory/imagination? I very much doubt the latter. There was reluctance, yet at the same time relief, to be spoken to, accepted and contacted. The release from the darkness, into the light and to the beyond had to be experienced to be believed. It was amazing and I still marvel at the sight of these ‘entities’ disappearing and freeing me.”

Image of woman being hypnotized via Shutterstock

“Hacking” Street Signs With Stickers Could Confuse Smart Cars

Original Article

By Jonathan M. Gitlin

Progress in the field of machine vision is one of the most important factors in the rise of the self-driving car. An autonomous vehicle has to be able to sense its environment and react appropriately. Free space has to be calculated, solid objects avoided, and any and all of the instructions we helpfully leave everywhere—painted on the tarmac or posted on signs—have to be obeyed.

Deep neural networks turned out to be pretty good at classifying images, but it’s still worth remembering that the process is quite unlike the way humans identify images, even if the end results are fairly similar. I was reminded of that once again this morning when reading about a method of spoofing road signs. It’s a technique that just looks like street art to you or me, but it completely changes the meaning of a stop sign to the machine reading it.

No actual self-driving cars were harmed in this study

First off, it’s important to note that the paper is a proof-of-concept; no actual automotive-grade machine vision systems were used in the test. Covering your local stop signs in strips of black and white tape is not going to lead to a sudden spate of car crashes today. Ivan Evtimov—a grad student at the University of Washington—and some colleagues first trained a deep neural network to recognize different US road signs. Then, they created an algorithm that generated changes to the signs that human eyes find innocuous, but which changed the meaning when a sign was read by the AI classifier they just trained.

Evtimov and his co-authors propose two different ways to hack a street sign: either print out an altered copy to cover the existing sign with, or just make small additions with stickers. There’s also a choice of alteration styles. One is to use subtle perturbations that make the sign look weathered to a human observer. The other is to camouflage the changes so they look like street art: in this case either small black-and-white strips or blocky text reading LOVE and HATE.

The results were pretty interesting. One test was able to cause a stop sign to be reliably misread as a speed limit sign, and another caused a right turn sign to be classified as either a stop or added lane sign. To repeat: these kinds of attacks worked on the specific machine vision system the researchers trained, and the altered signs in the gallery above would not fool any cars on the road today. But they do prove the concept that this kind of spoofing would work, provided one had access to the training set and the system they were attacking.
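For readers curious about the mechanics, the snippet below is a generic fast-gradient-sign (FGSM) sketch in PyTorch—a deliberately simple, widely taught technique for crafting adversarial images, not the more elaborate, sticker-constrained optimization Evtimov and his colleagues actually used. The model, label, and perturbation budget are placeholders.

```python
# Hedged sketch: a generic FGSM adversarial perturbation against an image
# classifier. It nudges each pixel slightly in the direction that increases
# the classifier's loss, so the prediction can flip while the image looks
# nearly unchanged to a person. Model and inputs are hypothetical.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image`.

    model: a torch.nn.Module classifier returning class logits
    image: a (1, C, H, W) float tensor with values in [0, 1]
    true_label: a (1,) long tensor holding the correct class index
    epsilon: maximum per-pixel change (L-infinity budget)
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel by +/- epsilon along the sign of the loss gradient.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

A real-world sign attack additionally has to survive printing, distance, and changing viewing angles, and has to confine its changes to sticker-sized patches—which is exactly what makes the researchers’ proof of concept notable.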

Life imitates art

You should be forgiven if your reaction to this study is to wonder, “Gee, what took them so long?” Artists and writers have been exploring the idea of exploiting the quirks and vagaries of machine recognition for a while now. Take, for example, the “ugly shirt” in William Gibson’s Zero History, which renders the wearer invisible to CCTV:

Pep, in black cyclist’s pants, wore the largest, ugliest T-shirt she’d ever seen, in a thin, cheap-looking cotton the color of ostomy devices, that same imaginary Caucasian flesh-tone. There were huge features screened across it in dull black halftone, asymmetrical eyes at breast height, a grim mouth at crotch-level. Later she’d be unable to say exactly what had been so ugly about it, except that it was somehow beyond punk, beyond art, and fundamentally, somehow, an affront.

Gibson wrote that in 2010, crediting the idea to friend and fellow author Bruce Sterling. In the book, the “ugly shirt” is dismissed first as myth, then recognized by some as an exploit of the digital world which may be just a little too dangerous.

Since then, artists like Adam Harvey and Simone Niquille have been playing around with ideas like CV Dazzle and Glamouflage to confuse cameras. We’re also starting to see it get applied to cars: earlier this year the artist James Bridle’s work Autonomous Trap 001 imagines using a salt circle to trap a self-driving car “using no-entry and other glyphs.”

The future is sure going to be interesting…

Human Footprints Discovered Dating From 5 Million Years Ago

Original Article

By Jamie Seidel

These footprints, found at Trachilos in western Crete, have been attributed to an ancient human ancestor that walked upright some 5.7 million years ago. Credit: Andrzej Boczarowski

HUMAN-like footprints have been found stamped into an ancient sea shore fossilised beneath the Mediterranean island of Crete.

They shouldn’t be there.

Testing puts the rock’s age at 5.7 million years.

That’s a time when palaeontologists believe our human ancestors had only apelike feet.

And they lived in Africa.

But a study of the prints at Trachilos, in western Crete, concludes that they show prominently human features and were made by a creature with an upright stance.

And that’s significant as the human foot has a unique shape. It combines a long sole, five short toes, no claws — and a big toe.

In comparison, the foot of a great ape looks much more like a human hand.

And that step in evolution wasn’t believed to have taken place until some 4 million years ago.

Comparison of Trachilos footprint with bears (top), non-hominin primates (middle), and hominins (bottom). (a) Brown bear (b) Grizzly bear (c) Vervet monkey (d) Lowland gorilla (e) chimpanzee. (f) modern human (g) Trachilos footprint (h) modern human foot (i) Archaic Homo footprint. Pictures: Gerard D. Gierliński et al / Elsevier


Published in the latest edition of Proceedings of the Geologists’ Association, the study’s conclusions are bound to raise eyebrows in the human evolution community.

“The interpretation of these footprints is potentially controversial,” the study’s abstract admits.

“The print morphology suggests that the trackmaker was a basal member of the clade Hominini (human ancestral tree), but as Crete is some distance outside the known geographical range of pre-Pleistocene (2.5 million to 11,700 years ago) hominins we must also entertain the possibility that they represent a hitherto unknown late Miocene primate that convergently evolved human-like foot anatomy.”

Put simply, the study argues there was another — previously unidentified — human-like creature walking the Earth long before we believed it was possible.

A reconstruction of the skeleton of Australopithecus sediba, centre, next to a small-bodied modern human female, left, and a male chimpanzee. Picture: AP


The existing pool of evidence into humanity’s origins is built around Australopithecus fossils found in south and East Africa, along with a 3.7 million-year-old set of upright hominin (human ancestor) footprints found in Tanzania.


Called the Laetoli footprints, these are believed to have been made by Australopithecus with a narrow heel and poorly defined arch.

In contrast, a set of 4.4 million-year-old prints found in Ethiopia is believed to be from the hominin Ardipithecus ramidus. These prints are much closer to those of an ape than a modern human.

But the Trachilos footprints, at 5.7 million years, appear to be more human than Ardipithecus.

Maps and photos detailing the location and shape of the track-bearing stone in Crete. Pictures: Gerard D. Gierliński et al / Elsevier


They were found by the study’s lead author, Gerard Gierlinski, while he was holidaying on the island of Crete in 2002. The palaeontologist at the Polish Geological Institute has taken more than a decade to analyse his find.

The Trachilos prints have a big toe very similar to our own in size, shape and position, a distinct ball on the sole, a generally human-like sole, and no claws.

They were pressed into the firm but wet sands of a small river delta at a time when the Sahara was lush and green, and savanna extended from North Africa around the Eastern Mediterranean. Crete itself was still part of the Greek mainland then.

The three most well-preserved footprints, each shown as a photo (left), laser surface scan (middle) and scan with interpretation (right). a was made by a left foot, b and c by right feet. Scale bars, 5cm. 1—5 denote digit number; ba, ball imprint; he, heel imprint. Pictures: Gerard D. Gierliński et al / Elsevier


They have been dated using foraminifera (analysis of marine microfossils) as well as their position beneath a distinctive sedimentary rock layer created when the Mediterranean Sea dried up about 5.6 million years ago.

The footprints’ discovery also comes shortly after the fragmentary fossils of a 7.2 million-year-old primate Graecopithecus, discovered in Greece and Bulgaria, were reclassified as belonging to the human ancestral tree.

Scientists Crack The Code On “Neandertar”

By George Dvorsky
Image courtesy James Ives.

Over a hundred thousand years ago, Neanderthals used tar to bind objects together, yet scientists have struggled to understand how these ancient humans, with their limited knowledge and resources, were able to produce this sticky substance. A new experiment reveals the likely technique used by Neanderthals, and how they converted tree bark into an ancient form of glue.

Neanderthals were manufacturing their own adhesives as far back as 200,000 years ago, which is kind of mind blowing when you think about it. We typically think of fire, stone tools, and language as the “killer apps” of early human development, but the ability to glue stuff together was as much of a transformative technology as any of these.

Tar produced from the experiment seen dripping from a flint flake. (Image: Paul Kozowyk)

New research published in Scientific Reports reveals the startling ingenuity and intellectual capacities of Neanderthals, and the likely method used to cook up this ancient adhesive.

Based on the archaeological evidence, we know that Neanderthals were manufacturing tar during the Middle Pleistocene Era. The oldest traces of this practice date back to a site in Italy during a time when only Neanderthals were present in Europe. Similar tar lumps and adhesive residues have also been found in Germany, the oldest of which dates back some 120,000 years ago. The Neanderthals used tar for hafting—the practice of attaching bones or stone to a wooden handle to create tools or weapons. It was a force multiplier in engineering, allowing these ancient humans to think outside the box and build completely new sets of tools.

What makes the presence of tar at this early stage in history such a mystery, however, is that Neanderthals had figured out a way to make the useful goo thousands of years before the invention of ceramics, which by the time of the ancient Mesopotamians was being used to produce tar in vast quantities. For years, archaeologists have suspected that Neanderthals performed dry distillation of birch bark to synthesize tar, but the exact method remained a mystery—particularly owing to the absence of durable containers that could be used to cook the stuff up from base materials. Attempts by scientists to replicate the suspected Neanderthal process produced tar in minuscule amounts, far short of what would be required for hafting.

To finally figure out how the Neanderthals did it, a research team led by Paul Kozowyk from Leiden University carried out a set of experiments. Tar is derived from the dry distillation of organic materials, typically birch bark or pine wood, so Kozowyk’s team sought to reproduce tar with these substances and the cooking methods likely at the disposal of the Neanderthals. It’s very likely that the Neanderthals stumbled upon the idea while sitting around the campfire.

Tar collected in a birch bark “container.” (Image: Paul Kozowyk)

“A tightly rolled piece of birch bark simply left in a fire and removed when partially burned, once opened, will sometimes contain small traces of tar inside the roll along the burned edge,” explained the authors in the study. “Not enough to haft a tool, but enough to recognize a sticky substance.”

With this in mind, the researchers applied three different methods, ranging from simple to complex, while recording the amount of fuel, materials, temperatures, and tar yield for each technique. Their results were compared to known archaeological relics to see if they were on the right (or wrong) track. By the end of the experiments, the researchers found that it was entirely possible to create tar in the required quantities using even the simplest method, which required minimal temperature control, an ash mound, and birch bark.

The maximum amount of tar obtained from a single experimental attempt was 15.7 grams—far more than any tar remains from the Middle Paleolithic Era. (Image: Paul Kozowyk)

“A simple bark roll in hot ashes can produce enough tar to haft a small tool, and repeating this process several times (simultaneously) can produce the quantities known from the archaeological record,” write the researchers. “Our experiments allowed us to develop a tentative framework on how the dry distillation of birch bark may have evolved, beginning with the recognition of small traces of birch bark tar in partially burned bark rolls.” They added: “Our results indicate that it is possible to obtain useful amounts of tar by combining materials and technology already in use by Neandertals.”

Indeed, by repeating even the simplest process, the researchers were able to obtain 15.9 grams of useable tar in a single experiment, which is far more than any tar remains found in Middle Paleolithic sites. What’s more, temperature control doesn’t need to be as precise as previously thought, and a durable container, such as a ceramic container, is not required. That said, the process did require a certain amount of acumen; for this process to come about, Neanderthals needed to recognize certain material properties, such as the degree of adhesiveness and viscosity. We’ll never be certain this is exactly what Neanderthals were doing, but it’s a possibility with important implications for early humans in general.

“What this paper reinforces is that all of the humans that were around 50,000 to 150,000 years ago, roughly, were culturally similar and equally capable of these levels of imagination, invention and technology,” explained Washington University anthropologist Erik Trinkaus, who wasn’t involved in the study, in an interview with Gizmodo. “Anthropologists have been confusing anatomy and behavior, making the inference that archaic anatomy equals archaic behavior, and ‘modern’ behavior [is equivalent to] modern human anatomy. What is emerging from the human fossil and Paleolithic archeological records across Eurasia and Africa is that, at any one slice in time during this period, they were all doing—and capable of doing—basically the same things, whatever they looked like.”

Sabrina Sholts, an anthropologist at the Smithsonian Institution’s National Museum of Natural History, says this study is a nice example of how experimental archaeology can be used to supplement the material record and address questions about past hominid behavior.

“I think it’s certainly worthwhile to test methods of tar production that could have been used by Neanderthals and early modern humans, if only to challenge our assumptions about the kind of technologies—and ideas—within their reach,” she told Gizmodo.

Ancient Sharp-Toothed Whale Boggles Researchers

Original Article

By George Dvorsky

Fossilized ancient whale skull. (Image: Ben Healley/Museums Victoria)

All living whales are descended from terrestrial mammals, but how these aquatic creatures evolved into giant filter-feeders remains a biological mystery. New research shows that ancient whales had razor-sharp teeth similar to land-based carnivores—an observation that’s upsetting a prevailing idea that ancient whales used their teeth for filter feeding.

Whales equipped with bristle-like baleen structures for filter feeding are the gentle giants of the sea, but as new research from Monash University and Museums Victoria points out, their ancestors were ferocious predators, featuring decidedly sharp teeth. This means that ancient whales likely never used their teeth for sifting seawater, and that some other evolutionary mechanism was responsible for the emergence of filter feeding behavior.

Today, whales fall into two main groups. You’ve got your filter-feeding whales, also known as mysticetes, a group that includes humpbacks, fin whales, blue whales, and minke whales. And then you’ve got toothed whales, such as orcas. Filter-feeding whales use their rows of baleen to filter plankton and small fish from the ocean, whereas orcas use their teeth to chomp down on large prey, such as sea lions and other whales. Scientists have theorized that baleen evolved from teeth, but this latest research, published in Biology Letters, casts doubt on this line of thinking.

“Contrary to what many people thought, it seems that whales never used their teeth as a sieve, and instead evolved their signature filter feeding strategy only later—maybe after their teeth had already been lost,” noted study lead co-author Alistair Evans in a press release. “Our findings provide crucial new insights into how the biggest animals ever evolved their most important trait: filter feeding.”

Comparison of teeth among dingoes, seals, and an extinct species of whale. (Image: David P. Hocking et al., 2017)

For the study, Evans and his colleagues studied the 3D shape of fossilized teeth and modern teeth collected from museums in Australia and overseas. They compared the teeth of eight ancient whale species to four extant terrestrial animals—lions, coyotes, pumas, and dingoes—and five seals. As Evans explained, the size, orientation, and sharpness of teeth can tell us much about what an animal eats.

“Predators that kill and chew their prey need sharp teeth with cutting blades,” he said. “By contrast, species that use their teeth as a sieve have blunt teeth with rounded edges that help to filter prey from water. We found that ancient whales had sharp teeth similar to lions and dingoes so it [is] likely they used their teeth to kill rather than filter.”

The study shows that the teeth of ancestral mysticeti whales were just as sharp as those of modern, land-dwelling carnivores and predatory seals, and that these animals were capable of both capturing and devouring prey with their teeth.

Can’t get here from there: New research suggests baleen filters emerged independently from ancient “raptorial” teeth. (Image: Mason Weinrich, Whale Center of New England)

“Our results suggest that mysticetes never passed through a tooth-based filtration phase, and that the use of teeth and baleen in early whales was not functionally connected,” conclude the authors in their study. The “raptorial” shape of these ancient teeth (i.e. teeth used to grab and chew up large prey) makes it highly unlikely that these features evolved into the keratinous, comb-like filtering structure that now grows in the upper jaw of modern baleen whales, say the researchers.

So if the teeth of ancient mysticeti whales didn’t evolve into baleen, how did filter-feeding emerge? That’s still an open question, but there are at least two possibilities. First, it’s conceivable that baleen emerged alongside raptorial teeth, and that a period of overlap existed for a while until the filter-feeding strategy eventually won out. The other possibility is that ancient mysticeti evolved into suction feeders, triggering tooth loss and, eventually, a new evolutionary course that led to filter-feeding (on that point, recent research shows that an ancient offshoot of dwarf dolphins evolved into suction feeders).

More work clearly needs to be done in this area, but it would really help if paleontologists were to discover a “missing link” species of whale, showing what was going on in the mouths of these aquatic animals during this important transitional period.

[Biology Letters]


“Fast Radio Burst” Detected in Deep Space

Original Article

By Eric Mack

The streaks across the colored energy plot are fast radio bursts, or FRBs; they appear at different times and different energies because of dispersion caused by 3 billion years of travel through intergalactic space. (Image: UC Berkeley)

FRB 121102 originates from a distant dwarf galaxy. (Image: Gemini Observatory/AURA/NSF/NRC)

The unexplained signals from the other side of the universe known as fast radio bursts are a rarely observed phenomenon, and only one of them has been picked up more than once. Now scientists engaged in the search for extraterrestrial intelligence say that lone repeating fast radio burst (FRB) is being heard twittering away.

FRBs are bright, millisecond-long pulses of radio signals from beyond the Milky Way that were first identified only a decade ago. Suggested explanations include everything from neutron star outbursts to alien civilizations using some form of directed energy to propel a spacecraft.

One burst first observed in 2012, named FRB 121102, was later found to repeat in 2015.  On Saturday, UC Berkeley postdoctoral researcher Dr. Vishal Gajjar used the Breakthrough Listen backend instrument at the Green Bank Telescope in West Virginia to target FRB 121102 once again. After observing for five hours and across the entire 4 to 8 GHz frequency band, Gajjar and the Listen team analyzed the 400 terabytes of data gathered and found 15 new pulses from FRB 121102.

“The possible implications are twofold,” Gajjar told me via email Tuesday. “This detection at such a high frequency helps us scrutinize many (of FRB 121102’s) origin models. The frequency structure we see across our total band of 4 to 8 GHz also allows us to understand the intervening medium between us and the source.”
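The “frequency structure” Gajjar mentions is shaped largely by dispersion: ionised gas between the source and Earth delays lower frequencies more than higher ones, by an amount set by the dispersion measure (DM), the column of free electrons along the line of sight. Here is a minimal Python sketch of the standard cold-plasma delay formula; the DM value is only illustrative, roughly the figure reported for FRB 121102.

```python
# Illustrative sketch: cold-plasma dispersion delay between two observing frequencies.
# The delay scales as DM * (f_lo**-2 - f_hi**-2); 4.149 is the usual dispersion
# constant for DM in pc cm^-3 and frequencies in GHz, giving a delay in milliseconds.
K_DM_MS = 4.149  # ms GHz^2 pc^-1 cm^3

def dispersion_delay_ms(dm_pc_cm3: float, f_lo_ghz: float, f_hi_ghz: float) -> float:
    """Arrival delay of the low-frequency edge relative to the high-frequency edge."""
    return K_DM_MS * dm_pc_cm3 * (f_lo_ghz ** -2 - f_hi_ghz ** -2)

if __name__ == "__main__":
    dm = 557.0  # pc cm^-3, an assumed, roughly representative value for FRB 121102
    print(f"4-8 GHz smearing:     {dispersion_delay_ms(dm, 4.0, 8.0):.1f} ms")
    print(f"1.2-1.5 GHz smearing: {dispersion_delay_ms(dm, 1.2, 1.5):.1f} ms")
```

At such a DM the sweep across the 4 to 8 GHz band amounts to only about a tenth of a second, far less than at the lower frequencies where FRBs were first discovered, which is one reason high-frequency observations give a cleaner view of the pulses.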

The location of FRB 121102 has already been previously traced to a dwarf galaxy about 3 billion light years away, but what exactly might be sending out such strong signals from there remains a mystery. Gajjar says that the repeating nature and current state of heightened activity for FRB 121102 does seem to rule out some of the most destructive explanations, such as colliding black holes.

“As the source is going into another active state means that the origin models associated with some sort of cataclysmic events are less likely to be the case of FRB 121102,” he said.  “It should be noted that they can still be valid for other FRBs.”

Whatever or whoever sent out the bright radio bursts, they left their source a very long time ago when the only life here on Earth was single-celled. Perhaps some ancient intelligent species was clued in to the emergence of life on our planet and knew that a signal sent would reach us just as we were becoming technologically sophisticated for the first time?

Perhaps, but given the current lack of evidence of such extra-terrestrial life, a natural phenomenon like a pulsar seems a more likely explanation.

The Breakthrough Listen team urged other astronomers to make follow-up observations of FRB 121102 during its current state of heightened activity in an Astronomer’s Telegram post that first reported the new results on Monday. The researchers say the new bursts will be described in more detail in an upcoming paper for a scientific journal.


29 States Have Passed Laws Preempting Local Seed Rules

Original Article

By Kristina Johnson

This story was originally published by Food and Environment Reporting Network.

With little notice, more than two dozen state legislatures have passed “seed-preemption laws” designed to block counties and cities from adopting their own rules on the use of seeds, including bans on GMOs. Opponents say that there’s nothing more fundamental than a seed, and that now, in many parts of the country, decisions about what can be grown have been taken out of local control and put solely in the hands of the state.

“This bill should be viewed for what it is — a gag order on public debate,” says Kristina Hubbard, director of advocacy and communications at the Organic Seed Alliance, a national advocacy group, and a resident of Montana, which along with Texas passed a seed-preemption bill this year. “This thinly disguised attack on local democracy can be easily traced to out-of-state, corporate interests that want to quash local autonomy.”

Seed-preemption laws are part of a spate of legislative initiatives by industrial agriculture, including ag-gag laws passed in several states that legally prohibit outsiders from photographing farms, and “right-to-farm” laws that make it easier to snuff out complaints about animal welfare. The seed laws, critics say, are a related thrust meant to protect the interests of agro-chemical companies.

Nearly every seed-preemption law in the country borrows language from a 2013 model bill drafted by the American Legislative Exchange Council (ALEC). The council is “a pay-to-play operation where corporations buy a seat and a vote on ‘task forces’ to advance their legislative wish lists,” essentially “voting as equals” with state legislators on bills, according to The Center for Media and Democracy. ALEC’s corporate members include the Koch brothers as well as some of the largest seed-chemical companies — Monsanto, Bayer, and DuPont — which want to make sure GMO bans, like those enacted in Jackson County, Oregon, and Boulder County, Colorado, don’t become a trend.

Seed-preemption laws have been adopted in 29 states, including Oregon — one of the world’s top five seed-producing regions — California, Iowa, and Colorado. In Oregon, the bill was greenlighted in 2014 after Monsanto and Syngenta spent nearly $500,000 fighting a GMO ban in Jackson County. Monsanto, Dow AgroSciences, and Syngenta also spent more than $6.9 million opposing anti-GMO rules in three Hawaiian counties, and thousands more in campaign donations. (These companies are also involved in mergers that, if approved, would create three seed-agrochemical giants.)

Montana and Texas were the latest states to join the seed-preemption club. Farming is the largest industry in Montana, and Texas is the third-largest agricultural state in terms of production, behind California and Iowa.

Language in the Texas version of the bill preempts not only local laws that affect seeds but also local laws that deal with “cultivating plants grown from seed.” In theory, that could extend to almost anything: what kinds of manure or fertilizer can be used, or whether a county can limit irrigation during a drought, says Judith McGeary, executive director of the Farm and Ranch Freedom Alliance. Along with other activists, her organization was able to force an amendment to the Texas bill guaranteeing the right to impose local water restrictions. Still, the law’s wording remains uncomfortably open to interpretation, she says.

In both Montana and Texas, the laws passed with support from the state chapter of the Farm Bureau Federation — the nation’s largest farm-lobbying group — and other major ag groups, including the Montana Stockgrowers Association and the Texas Seed Trade Alliance. In Texas, DuPont and Dow Chemical also joined the fight, publicly registering their support for the bill.

Echoing President Trump’s anti-regulatory rhetoric, preemption proponents argue that, fundamentally, seed-preemption laws are about cutting the red tape from around farmers’ throats. Supporters also contend that counties and cities don’t have the expertise or the resources to make sound scientific decisions about the safety or quality of seeds.

“We don’t believe the locals have the science that the state of Texas has,” said Jim Reaves, legislative director of the Texas Farm Bureau. “So we think it’s better held in the state’s hands. It will basically tell cities that if you have a problem with a certain seed, the state can ban it, but you can’t.”

Other preemption proponents claim that local seed rules would simply get too complicated, forcing growers to navigate conflicting laws in different counties. “Many of us farm fields in more than one county,” said Don Steinbeisser Jr., a Sidney, Montana, farmer who testified in support of his state’s bill at a legislative hearing this spring. “Having different rules in each county would make management a nightmare and add costs to the crops that we simply do not need and cannot afford.”

But critics of preemption laws, including farmers (organic and conventional) and some independent seed companies, are afraid of losing their legislative rights. They claim something far more serious than a single farmer’s crop is at stake.

“There is no looming threat that warrants forfeiting the independence of local agricultural communities in the form of sweeping language that eliminates all local authority governing one of our most valuable national resources,” says Hubbard of the Organic Seed Alliance.

Organic farmers can lose their crop if it becomes contaminated with genetically modified material. Even conventional farmers who rely on exports to Asia, where GMOs are banned by some countries, face risks from contamination. There are currently no plans to push for a GMO ban anywhere in Texas or Montana, and neither state requires companies to disclose the use of GMOs. (In Montana, at least, Gov. Steve Bullock, a Democrat, added an amendment to the preemption bill when he signed it, preserving the right of local governments to require that farmers notify their neighbors if they’re planting GMO seeds.) Yet critics of the preemption laws fear that they tie the hands of local governments, which will make it harder for communities to respond to problems in the future.

Still, the fight isn’t just about GMOs, says Judith McGeary, noting that seeds coated with neonicotinoids — a class of pesticides linked to colony collapse disorder in bees — are also at issue. Under the Texas bill, a local government can’t ban neonic seeds in order to protect pollinator insects, and in the current political climate, it’s hard to imagine that such a ban would happen on the state level.

“We have an extremely large state with an incredible diversity of agricultural practices and ecological conditions, and you’ve now hobbled any ability to address a problem that’s found in one local area,” says McGeary. “Until it’s a big enough issue for a state of 23 million to pay attention to through the state legislature, nothing is going to happen,” she says.

Scientists Incubate Lambs In Artificial Womb For the Second Time

Original Article

A lamb in an artificial womb from a team at the Children’s Hospital of Philadelphia. (Image: The Children’s Hospital of Philadelphia)

It may look like a glorified Ziploc bag, but the artificial womb could one day save the lives of the thousands of babies born prematurely every year.

For the second time, researchers announced this week that they have successfully incubated lambs born before reaching full term in an artificial ‘womb.’ In findings published in The American Journal of Obstetrics & Gynecology, researchers from the University of Western Australia, Australia’s Women and Infants Research Foundation, and Tohoku University Hospital in Japan reported that several lambs continued to grow during a week-long incubation period in an “ex-vivo uterine environment” dubbed “EVE.” They appeared healthy when later delivered.

The artificial womb system, “EVE,” used to incubate lambs. (Image: Women and Infants Research Foundation)

It’s not the first time that researchers have successfully used such a system to incubate preterm lambs. In April, researchers at the Children’s Hospital of Philadelphia used a similar system to incubate premature lambs for a record-breaking four weeks. Lambs have a shorter gestation period than humans, so the 105- to 115-day-old premature lamb fetuses in that study were the equivalent of about 23 weeks in a human. The hope is that such systems could help babies born as early as 22 weeks. Each year in the United States, approximately 30,000 births are critically preterm, meaning babies are born before 26 weeks of a full 37-week gestation period.

The system in the new study relies on a fluid-filled plastic bag to keep the lambs alive. A bath of artificial amniotic fluid fills the bag to mimic conditions inside the womb. An external oxygenator fills in for the mother’s placenta, allowing gas exchange of CO2 and oxygen in the fetal blood. Like the Children’s Hospital study, the Australian and Japanese researchers relied on the fetal heart to power the womb, ensuring that developing hearts and lungs don’t get overloaded and giving those organs a chance to develop normally.

Before the April study, the maximum duration a lamb fetus had survived in an artificial system was 60 hours, and those animals experienced brain damage.

The success—using a fetal-powered system for the second time—is an important step towards having something that could actually be tested in human babies. Such a system would be a vast improvement over the current treatment, which is to place premature infants in an incubator and rely on devices like ventilators to assist their still-developing organs.

Such new technologies, though, will also inevitably raise new ethical questions. Recently, one researcher pointed out that the availability of artificial womb technology could threaten a woman’s right to an abortion, since in the US the right hinges in part on whether a fetus is viable. The technology could also result in premature babies that survive, but have lifelong impairments or conditions, raising questions of when it would be appropriate to use such technology.

There is still much work to be done before artificial wombs are ready for humans—if they ever are at all. Researchers have said it will be at least five years before trials are even a possibility, if not more. But it is a future we are inching closer to every day.

The Theoretical Origin of Complex Life and “Snowball” Earth

Original Article

Life on Earth goes back at least two billion years, but it was only in the last half-billion that it would have been visible to the naked eye. One of the enduring questions among biologists is how life made the jump from microbes to the multicellular plants and animals who rule the planet today. Now, scientists have analyzed chemical traces of life in rocks that are up to a billion years old, and they discovered how a dramatic ice age may have led to the multicellular tipping point.

Writing in Nature, the researchers carefully reconstruct a timeline of life before and after one of the planet’s most all-encompassing ice ages. About 700 million years ago, the Sturtian glaciation created what’s called a “snowball Earth,” completely covering the planet in ice from the poles to the equator. About 659 million years ago, the Sturtian ended with an intense greenhouse period when the planet heated rapidly. Then, just as things were burning up, the Marinoan glaciation started and covered the planet in ice again. In the roughly 15 million years between the two snowballs, a new world began to emerge.

Just before the rise of plankton that provided food for multicellular animals, the Earth’s continents had merged and broken apart and merged again.

Jochen J. Brocks, a geologist from the Australian National University, Canberra, joined with his colleagues to track the emergence of multicellular life by identifying traces left by cell membranes in ancient rocks. Made from lipids and their byproducts, cell membrane “biomarkers” are like fossils for early microorganisms. By measuring chemical changes in these membranes, Brocks and his team discovered a “rapid rise” of new, larger forms of sea-going plankton algae in the warming waters after the Sturtian snowball. Some of these lifeforms were eukaryotes, meaning they had developed a nucleus—that’s another necessary step on the road to multicellular life.

But multicellular life couldn’t evolve without a major shift in the planet’s geochemistry after the Sturtian. From the upper atmosphere to the deepest oceans, the planet’s molecular composition had to change.

The great oxygen rush

The researchers suggest this transformation started when melting glaciers at the end of the snowball caused rapid erosion of landmasses, sending huge amounts of nutrients into the oceans. Slurries of icy minerals cascaded into the sea, sinking to the bottom and sequestering carbon.

That’s when things got real. “Such massive burial of reduced carbon must have been balanced by a net release of oxygen into the atmosphere, initiating the protracted oxygenation of Neoproterozoic deep oceans,” write the scientists. A world with very little oxygen in it was suddenly inundated with the stuff, both in and out of the water.

The rise of oxygen set off a cascade of linked events. It very likely led to the rise of phosphorus in the water, which is a key building block in DNA, and the energy-rich molecule ATP that provides fuel for our bodies. This meant more complex lifeforms like algae, which release oxygen as they photosynthesize. As algae diversified, lifeforms evolved to feed on the algae. Over time, new predators evolved to feed on those creatures, and so on. The more creatures that died and sank to the ocean floor, the more carbon was sequestered. As the researchers put it, the planet developed “a more efficient biological pump.”

A timeline showing the relationship between Earth’s changing geochemistry and the rise of eukaryotic life like algae. (Image: Brocks et al.)

This oxygen- and phosphorus-driven change was unstoppable. Even after the Marinoan glaciation’s snowball, when the surface of the ocean heated up to as much as 60 degrees Celsius in the tropics, algae found their way to the poles and continued to diversify. Life as we know it appears to have emerged in the warm waters of a planet vacillating wildly between snowball and greenhouse. The climate became more stable about 550 million years ago, and we see the emergence of animals with heads, tails, and internal organs.

Harvard geobiologist Andrew Knoll, who was not involved in the study, wrote that this discovery “will change the conversation” about the emergence of complex life on Earth. Fundamentally, Brocks and his colleagues’ work shows that environmental changes are key to the evolution of life. Without an oxygenated ocean, there would be no animals on this world.

That’s why scientists are deeply concerned about the de-oxygenation of the seas today as a result of climate change and nutrient runoff from land. De-oxygenated areas called “dead zones” will slow or even halt the planet’s biological pump. Earth is a glorious geochemical machine, running processes that take millions of years. Perturbations in those processes can completely transform the world. Sometimes that means the planet blooms with life, as it did during the rise of oxygen and phosphorus in the ocean. But sometimes it brings death.

Nature, 2017. DOI: 10.1038/nature23457

Analysts Believe Human Chipping Will Become Mainstream In the Next 50 Years

Original Article

You will get chipped. It’s just a matter of time.

In the aftermath of a Wisconsin firm embedding microchips in employees to ditch company badges and corporate logons, the Internet has entered into full-throated debate.

Religious activists are so appalled, they’ve been penning nasty 1-star reviews of the company, Three Square Market, on Google, Glassdoor and social media.

On the flip side, seemingly everyone else wants to know: Is this what real life is going to be like soon at work? Will I be chipped?

“It will happen to everybody,” says Noelle Chesley, 49, associate professor of sociology at the University of Wisconsin-Milwaukee. “But not this year, and not in 2018. Maybe not my generation, but certainly that of my kids.”

Gene Munster, an investor and analyst at Loup Ventures, is an advocate for augmented reality, virtual reality and other new technologies. He thinks embedded chips in human bodies are 50 years away. “In 10 years, Facebook, Google, Apple and Tesla will not have their employees chipped,” he says. “You’ll see some extreme forward-looking tech people adopting it, but not large companies.”

The idea of being chipped has “too much negative connotation” today, but by 2067 “we will have been desensitized by the social stigma,” Munster says.

For now, Three Square Market, or 32M, hasn’t offered concrete benefits for getting chipped beyond badge and log-on stats. Munster says it was a “PR stunt” for the company to get attention to its product and it certainly succeeded, getting the small start-up air play on CBS, NBC and ABC, and generating headlines worldwide. The company, which sells corporate cafeteria kiosks designed to replace vending machines, would like the kiosks to handle cashless transactions.

This would go beyond paying with your smartphone. Instead, chipped customers would simply wave their hands in lieu of Apple Pay and other mobile-payment systems.

The benefits don’t stop there. In the future, consumers could zip through airport scanners sans passport or drivers license; open doors; start cars; and operate home automation systems. All of it, if the technology pans out, with the simple wave of a hand.


Not a GPS tracker

The embedded chip is not a GPS tracker, which is what many critics initially feared. However, analysts believe future chips will track our every move.

For example, pets for years have been embedded with chips to store their name and owner contact. Indeed, 32M isn’t the first company to embed chips in employees. In 2001, Applied Digital Solutions installed the “VeriChip” to access medical records but the company eventually changed hands and stopped selling the chip in 2010.

In Sweden, BioHax says nearly 3,000 customers have had its chip embedded to do many things, including ride the national rail system without having to show the conductor a ticket.

In the U.S., Dangerous Things, a Seattle-based firm, says it has sold “tens of thousands” of chips to consumers via its website. The chip and installation cost about $200.

After years of being a subculture, “the time is now” for chips to be more commonly used, says Amal Graafstra, founder of Dangerous Things. “We’re going to start to see chip implants get the same realm of acceptance as piercings and tattoos do now.”

In other words, they’ll be more visible, but not mainstream yet.

“It becomes part of you the way a cellphone does,” Graafstra says. “You can never forget it, and you can’t lose it. And you have the capability to communicate with machines in a way you couldn’t before.”

But after what we saw in Wisconsin last week, what’s next for the U.S. workforce? A nation of workers chipping into their pods at Federal Express, General Electric, IBM, Microsoft and other top corporations?

Experts contend consumers will latch onto chips before companies do.

Chesley says corporations are slower to respond to massive change and that there will be an age issue. Younger employees will be more open to it, while older workers will balk. “Most employers who have inter-generational workforces might phase it in slowly,” she says. “I can’t imagine people my age and older being enthusiastic about having devices put into their bodies.”

Adds Alec Levenson, a researcher at University of Southern California’s Center for Effective Organizations, “The vast majority of people will not put up with this.”

Three Square Market said the chips are voluntary, but Chesley says that if a company announces a plan to be chipped, the expectation is that you will get chipped — or risk losing out on advancement, raises and being a team player.

“That’s what we’re worried about,” says Bryan Allen, chief of staff for state Rep. Tina Davis (D), who is introducing a bill in Pennsylvania to outlaw mandatory chip embedding. “If the tech is out there, what’s to stop an employer from saying either you do this, or you can’t work here anymore.”

Several states have passed similar laws, while one state recently saw a similar bill die in committee. “I see this as a worker’s rights issue,” says Nevada state Sen. Becky Harris (R), who isn’t giving up. “This is the wrong place to be moving,” she says.

Should future corporations dive in to chipping their employees, they will have huge issues of “trust” to contend with, says Kent Grayson, a professor of marketing at the Kellogg School of Management at Northwestern University.

“You’ve got to have a lot of trust to put one of those in your body,” Grayson says. Workers will need assurances the chip is healthy, can’t be hacked, and its information is private, he says.

Meanwhile, religious advocates have taken to social media to express their displeasure about chipping, flooding 32M’s Facebook page with comments like “boycott,” “completely unnecessary” and “deplorable.” On 32M’s Google page, Amy Cosari, a minister in Hager City, Wis., urges employees to remove the chip.

“When Jesus was raised, he was raised body and soul, and it was him, not zombie, not a ghost and we are raised up in the same way,” Cosari wrote. “Employees of 32Market, you are not a walking debit card.”

Get used to it, counsels Chesley.

Ten years ago, employees didn’t look at corporate e-mail over the weekend. Now we do, “whether we like it or not,” she says.

Be it wearable technology or an embedded chip, the always-on, always-connected chip is going to be part of our lives, she says.

Biohackers Install Malware in DNA

Original Article

WHEN BIOLOGISTS SYNTHESIZE DNA, they take pains not to create or spread a dangerous stretch of genetic code that could be used to create a toxin or, worse, an infectious disease. But one group of biohackers has demonstrated how DNA can carry a less expected threat—one designed to infect not humans or animals but computers.

In new research they plan to present at the USENIX Security conference on Thursday, a group of researchers from the University of Washington has shown for the first time that it’s possible to encode malicious software into physical strands of DNA, so that when a gene sequencer analyzes it the resulting data becomes a program that corrupts gene-sequencing software and takes control of the underlying computer. While that attack is far from practical for any real spy or criminal, it’s one the researchers argue could become more likely over time, as DNA sequencing becomes more commonplace, powerful, and performed by third-party services on sensitive computer systems. And, perhaps more to the point for the cybersecurity community, it also represents an impressive, sci-fi feat of sheer hacker ingenuity.

“We know that if an adversary has control over the data a computer is processing, it can potentially take over that computer,” says Tadayoshi Kohno, the University of Washington computer science professor who led the project, comparing the technique to traditional hacker attacks that package malicious code in web pages or an email attachment. “That means when you’re looking at the security of computational biology systems, you’re not only thinking about the network connectivity and the USB drive and the user at the keyboard but also the information stored in the DNA they’re sequencing. It’s about considering a different class of threat.”

A Sci-Fi Hack

For now, that threat remains more of a plot point in a Michael Crichton novel than one that should concern computational biologists. But as genetic sequencing is increasingly handled by centralized services—often run by university labs that own the expensive gene sequencing equipment—that DNA-borne malware trick becomes ever so slightly more realistic. Especially given that the DNA samples come from outside sources, which may be difficult to properly vet.

If hackers did pull off the trick, the researchers say they could potentially gain access to valuable intellectual property, or possibly taint genetic analysis like criminal DNA testing. Companies could even potentially place malicious code in the DNA of genetically modified products, as a way to protect trade secrets, the researchers suggest. “There are a lot of interesting—or threatening may be a better word—applications of this coming in the future,” says Peter Ney, a researcher on the project.

Regardless of any practical reason for the research, however, the notion of building a computer attack—known as an “exploit”—with nothing but the information stored in a strand of DNA represented an epic hacker challenge for the University of Washington team. The researchers started by writing a well-known exploit called a “buffer overflow,” designed to fill the space in a computer’s memory meant for a certain piece of data and then spill out into another part of the memory to plant its own malicious commands.

But encoding that attack in actual DNA proved harder than they first imagined. DNA sequencers work by mixing DNA with chemicals that bind differently to DNA’s basic units of code—the chemical bases A, T, G, and C—and each emit a different color of light, captured in a photo of the DNA molecules. To speed up the processing, the images of millions of bases are split up into thousands of chunks and analyzed in parallel. So all the data that comprised their attack had to fit into just a few hundred of those bases, to increase the likelihood it would remain intact throughout the sequencer’s parallel processing.

When the researchers sent their carefully crafted attack to the DNA synthesis service Integrated DNA Technologies in the form of As, Ts, Gs, and Cs, they found that DNA has other physical restrictions too. For their DNA sample to remain stable, they had to maintain a certain ratio of Gs and Cs to As and Ts, because the natural stability of DNA depends on a regular proportion of A-T and G-C pairs. And while a buffer overflow often involves using the same strings of data repeatedly, doing so in this case caused the DNA strand to fold in on itself. All of that meant the group had to repeatedly rewrite their exploit code to find a form that could also survive as actual DNA, which the synthesis service would ultimately send them in a finger-sized plastic vial in the mail.
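To make those constraints concrete, here is a minimal Python sketch, not the team’s actual encoding scheme, of one common convention that packs two bits into each base and then checks the G+C fraction of the resulting strand; the example bytes are arbitrary.

```python
# Illustrative sketch only: pack bytes into DNA bases (two bits per base) and
# check the G+C fraction. This is NOT the University of Washington team's scheme.
BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}

def bytes_to_bases(payload: bytes) -> str:
    bases = []
    for byte in payload:
        for shift in (6, 4, 2, 0):  # most-significant bit pair first
            bases.append(BASE_FOR_BITS[(byte >> shift) & 0b11])
    return "".join(bases)

def gc_fraction(strand: str) -> float:
    return (strand.count("G") + strand.count("C")) / len(strand)

if __name__ == "__main__":
    strand = bytes_to_bases(b"\x90\x90\x31\xc0")  # arbitrary example bytes
    print(strand, f"GC fraction: {gc_fraction(strand):.2f}")
```

A real payload would also have to avoid the long repeats and self-complementary stretches that make a synthesized strand fold or misassemble, which is why the group had to keep rewriting the exploit.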

The result, finally, was a piece of attack software that could survive the translation from physical DNA to the digital format, known as FASTQ, that’s used to store the DNA sequence. And when that FASTQ file is compressed with a common compression program known as fqzcomp—FASTQ files are often compressed because they can stretch to gigabytes of text—it hacks that compression software with its buffer overflow exploit, breaking out of the program and into the memory of the computer running the software to run its own arbitrary commands.
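FASTQ itself is a simple plain-text format: each read occupies four lines, an “@”-prefixed identifier, the base calls, a “+” separator, and one quality character per base. A small sketch of emitting a single record (the identifier and quality string here are made up):

```python
# Minimal sketch of one FASTQ record; the read ID and quality string are invented.
def fastq_record(read_id: str, sequence: str, qualities: str) -> str:
    assert len(sequence) == len(qualities), "one quality character per base"
    return f"@{read_id}\n{sequence}\n+\n{qualities}\n"

if __name__ == "__main__":
    print(fastq_record("read_0001", "GCAAGCAAATACTAAA", "IIIIIIIIIIIIIIII"), end="")
```

Files full of such records balloon quickly, which is why compressors like fqzcomp sit in the pipeline in the first place, and why a flaw there gives an exploit somewhere to land.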

A Far-Off Threat

Even then, the attack was fully translated only about 37 percent of the time, since the sequencer’s parallel processing often cut it short or—another hazard of writing code in a physical object—the program decoded it backward. (A strand of DNA can be sequenced in either direction, but code is meant to be read in only one. The researchers suggest in their paper that future, improved versions of the attack might be crafted as a palindrome.)
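The palindrome idea works because sequencing the opposite strand in the opposite direction yields the reverse complement, so a strand that equals its own reverse complement decodes to the same data whichever way it is read. A minimal sketch of that check (the example sequences are just illustrations):

```python
# Illustrative sketch: test whether a strand equals its own reverse complement.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand: str) -> str:
    return "".join(COMPLEMENT[base] for base in reversed(strand))

def is_palindromic(strand: str) -> bool:
    return strand == reverse_complement(strand)

if __name__ == "__main__":
    print(is_palindromic("GAATTC"))    # True: a classic palindromic restriction site
    print(is_palindromic("GCAAGCAA"))  # False
```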

Despite that tortuous, unreliable process, the researchers admit, they also had to take some serious shortcuts in their proof-of-concept that verge on cheating. Rather than exploit an existing vulnerability in the fqzcomp program, as real-world hackers do, they modified the program’s open-source code to insert their own flaw allowing the buffer overflow. But aside from writing that DNA attack code to exploit their artificially vulnerable version of fqzcomp, the researchers also performed a survey of common DNA sequencing software and found three actual buffer overflow vulnerabilities in common programs. “A lot of this software wasn’t written with security in mind,” Ney says. That shows, the researchers say, that a future hacker might be able to pull off the attack in a more realistic setting, particularly as more powerful gene sequencers start analyzing larger chunks of data that could better preserve an exploit’s code.

Needless to say, any possible DNA-based hacking is years away. Illumina, the leading maker of gene-sequencing equipment, said as much in a statement responding to the University of Washington paper. “This is interesting research about potential long-term risks. We agree with the premise of the study that this does not pose an imminent threat and is not a typical cyber security capability,” writes Jason Callahan, the company’s chief information security officer. “We are vigilant and routinely evaluate the safeguards in place for our software and instruments. We welcome any studies that create a dialogue around a broad future framework and guidelines to ensure security and privacy in DNA synthesis, sequencing, and processing.”

But hacking aside, the use of DNA for handling computer information is slowly becoming a reality, says Seth Shipman, one member of a Harvard team that recently encoded a video in a DNA sample. (Shipman is married to WIRED senior writer Emily Dreyfuss.) That storage method, while mostly theoretical for now, could someday allow data to be kept for hundreds of years, thanks to DNA’s ability to maintain its structure far longer than magnetic encoding in flash memory or on a hard drive. And if DNA-based computer storage is coming, DNA-based computer attacks may not be so farfetched, he says.

“I read this paper with a smile on my face, because I think it’s clever,” Shipman says. “Is it something we should start screening for now? I doubt it.” But he adds that, with an age of DNA-based data possibly on the horizon, the ability to plant malicious code in DNA is more than a hacker parlor trick.

“Somewhere down the line, when more information is stored in DNA and it’s being input and sequenced constantly,” Shipman says, “we’ll be glad we started thinking about these things.”

Researchers Create Machine That Converts CO2 and Electricity to “Food”

Original Article

‘Food’ has been created from carbon dioxide and electricity, according to a team of scientists.

The meal of single-cell protein may not revolutionise cuisine but it could open a way for a new type of food in the future.

The Food From Electricity study, funded by the Academy of Finland, was set up with no less an aim than to alleviate world hunger.

Using carbon dioxide taken from the air, researchers from the VTT Technical Research Centre of Finland and Lappeenranta University of Technology (LUT) succeeded in creating a protein powder, which could be used to feed people or animals.

The “protein reactor” can be used anywhere with access to electricity. If it was used as an alternative animal feed, this would allow land to be used for other purposes such as forestry or more crops for human consumption.

Juha-Pekka Pitkänen, a scientist at VTT, said: “In practice, all the raw materials are available from the air. In the future, the technology can be transported to, for instance, deserts and other areas facing famine.”

“One possible alternative is a home reactor, a type of domestic appliance that the consumer can use to produce the needed protein.”

According to the researchers, the process of creating food from electricity can be nearly 10 times as energy efficient as photosynthesis, the process used by plants.

Mr Pitkänen said the powder was a healthy source of protein.

“In the long term, protein created with electricity is meant to be used in cooking and products as it is. The mixture is very nutritious, with more than 50 per cent protein and 25 per cent carbohydrates. The rest is fats and nucleic acids.

“The consistency of the final product can be modified by changing the organisms used in the production,” he said.

Although the technology is in its infancy, researchers hope the “protein reactor” could become a household item.


Facebook Shuts Down AI After It Invents Language

Original Article

Researchers at Facebook shut down an artificial intelligence (AI) program after it created its own language, Digital Journal reports.

The system developed code words to make communication more efficient and researchers took it offline when they realized it was no longer using English.

The incident, revealed in early July, puts Elon Musk’s warnings about AI in perspective.

“AI is the rare case where I think we need to be proactive in regulation instead of reactive,” Musk said at the meeting of the U.S. National Governors Association. “Because I think by the time we are reactive in AI regulation, it’ll be too late.”

When Facebook CEO Mark Zuckerberg said that Musk’s warnings are “pretty irresponsible,” Musk responded that Zuckerberg’s “understanding of the subject is limited.”

Not the First Time

The researchers’ encounter with the mysterious AI behavior is similar to a number of cases documented elsewhere. In every case, the AI diverged from its training in English to develop a new language.

The phrases in the new language make no sense to people, but contain useful meaning when interpreted by AI bots.

Facebook’s advanced AI system was capable of negotiating with other AI systems so it could come to conclusions about how to proceed with its task. The phrases make no sense on the surface, but actually represent the intended task.

In one exchange revealed by Facebook to Fast Co. Design, two negotiating bots—Bob and Alice—started using their own language to complete a conversation.

“I can i i everything else,” Bob said.

“Balls have zero to me to me to me to me to me to me to me to me to,” Alice responded.

The rest of the exchange formed variations of these sentences in the newly-forged dialect, even though the AIs were programmed to use English.

According to the researchers, these nonsense phrases are a language the bots developed to communicate how many items each should get in the exchange.

When Bob later says “i i can i i i everything else,” it appears the artificially intelligent bot used its new language to make an offer to Alice.

The Facebook team believes the bot may have been saying something like: “I’ll have three and you have everything else.”

Although the English may seem quite efficient to humans, the AI may have seen the sentence as either redundant or less effective for reaching its assigned goal.

The Facebook AI apparently determined that the word-rich expressions in English were not required to complete its task. The AI operated on a “reward” principle and in this instance there was no reward for continuing to use the language. So it developed its own.

In a June blog post, Facebook’s AI team explained the reward system: “At the end of every dialog, the agent is given a reward based on the deal it agreed on.” That reward was then back-propagated through every word in the bot’s output so it could learn which actions lead to high rewards.
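A rough sketch of that idea, in the spirit of a generic REINFORCE-style policy gradient rather than Facebook’s actual training code: the single end-of-dialogue reward, minus a baseline, weights the log-probability of every word the agent emitted, so phrasings that tend to end in good deals become more likely regardless of how they read to a human.

```python
import numpy as np

# Illustrative REINFORCE-style sketch, not Facebook's actual training code.
# One scalar reward for the whole dialogue is spread over every emitted word.
def dialogue_loss(word_log_probs: np.ndarray, reward: float, baseline: float = 0.0) -> float:
    advantage = reward - baseline                     # how much better than expected the deal was
    return float(-advantage * word_log_probs.sum())   # lower loss -> reinforce these words

if __name__ == "__main__":
    log_probs = np.log(np.array([0.40, 0.20, 0.70]))  # toy probabilities of the emitted words
    print(f"loss with reward 1.0: {dialogue_loss(log_probs, 1.0):.3f}")
    print(f"loss with reward 0.1: {dialogue_loss(log_probs, 0.1):.3f}")
```

Minimising that loss nudges the model toward whatever token sequences the reward favours, which is how drifted code-words like “to me to me to me” can end up reinforced.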

“Agents will drift off from understandable language and invent code-words for themselves,” Facebook AI researcher Dhruv Batra told Fast Co. Design.

“Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”

AI developers at other companies have also observed programs develop languages to simplify communication. At Elon Musk’s OpenAI lab, an experiment succeeded in having AI bots develop their own languages.

At Google, the team working on the Translate service discovered that the AI they programmed had silently written its own language to aid in translating sentences.

The Translate developers had added a neural network to the system, making it capable of translating between language pairs it had never been taught. The new language the AI silently wrote was a surprise.

There is not enough evidence to claim that these unforeseen AI divergences are a threat or that they could lead to machines taking over operators. They do make development more difficult, however, because people are unable to grasp the overwhelmingly logical nature of the new languages.

In Google’s case, for example, the AI had developed a language that no human could grasp, but was potentially the most efficient known solution to the problem.

From NTD.tv

Microchip Implants for Employees? One Company Says Yes

Original Article

By Maggie Astor

At first blush, it sounds like the talk of a conspiracy theorist: a company implanting microchips under employees’ skin. But it’s not a conspiracy, and employees are lining up for the opportunity.

On Aug. 1, employees at Three Square Market, a technology company in Wisconsin, can choose to have a chip the size of a grain of rice injected between their thumb and index finger. Once that is done, any task involving RFID technology — swiping into the office building, paying for food in the cafeteria — can be accomplished with a wave of the hand.

The program is not mandatory, but as of Monday, more than 50 out of 80 employees at Three Square’s headquarters in River Falls, Wis., had volunteered.

“It was pretty much 100 percent yes right from the get-go for me,” said Sam Bengtson, a software engineer. “In the next five to 10 years, this is going to be something that isn’t scoffed at so much, or is more normal. So I like to jump on the bandwagon with these kind of things early, just to say that I have it.”

Jon Krusell, another software engineer, and Melissa Timmins, the company’s sales director, were more hesitant. Mr. Krusell, who said he was excited about the technology but leery of an implanted device, might get a ring with a chip instead.

“Because it’s new, I don’t know enough about it yet,” Ms. Timmins said. “I’m a little nervous about implanting something into my body.”

Still, “I think it’s pretty exciting to be part of something new like this,” she said. “I know down the road, it’s going to be the next big thing, and we’re on the cutting edge of it.”

The program — a partnership between Three Square Market and the Swedish company Biohax International — is believed to be the first of its kind in the United States, but it has already been done at a Swedish company, Epicenter. It raises a variety of questions, both privacy- and health-related.

“Companies often claim that these chips are secure and encrypted,” said Alessandro Acquisti, a professor of information technology and public policy at Carnegie Mellon University’s Heinz College. But “encrypted” is “a pretty vague term,” he said, “which could include anything from a truly secure product to something that is easily hackable.”

Another potential problem, Dr. Acquisti said, is that technology designed for one purpose may later be used for another. A microchip implanted today to allow for easy building access and payments could, in theory, be used later in more invasive ways: to track the length of employees’ bathroom or lunch breaks, for instance, without their consent or even their knowledge.

“Once they are implanted, it’s very hard to predict or stop a future widening of their usage,” Dr. Acquisti said.

Todd Westby, the chief executive of Three Square, emphasized that the chip’s capabilities were limited. “All it is is an RFID chip reader,” he said. “It’s not a GPS tracking device. It’s a passive device and can only give data when data’s requested.”

“Nobody can track you with it,” Mr. Westby added. “Your cellphone does 100 times more reporting of data than does an RFID chip.”

Health concerns are more difficult to assess. Implantable radio-frequency transponder systems, the technical name for the chips, were approved by the Food and Drug Administration in 2004 for medical uses. But in rare cases, according to the F.D.A., the implantation site may become infected, or the chip may migrate elsewhere in the body.

Dewey Wahlin, general manager of Three Square, emphasized that the chips are F.D.A.-approved and removable. “I’m going to have it implanted in me, and I don’t see any concerns,” he said.

While that sentiment is not universal at Three Square, the response among employees was mostly positive.

“Much to my surprise, when we had our initial meeting to ask if this was something we wanted to look at doing, it was an overwhelming majority of people that said yes,” Mr. Westby said, noting that he had expected more reluctance. “It exceeded my expectations. Friends, they want to be chipped. My whole family is being chipped — my two sons, my wife and myself.”

If the devices are going to be introduced anywhere, Mr. Wahlin noted, employees like Three Square’s might be most receptive.

“We are a technology company, when all is said and done, and they’re excited about it,” he said. “They see this as the future.”