These People Never Existed. They Were Made by an AI.


By Dom Galeon

Better Than Doodles

Back in June, an image generator that could turn even the crudest doodle of a face into a more realistic-looking image made the rounds online. That system used a fairly new type of algorithm called a generative adversarial network (GAN) for its AI-created faces, and now, chipmaker NVIDIA has developed a system that employs a GAN to create far more realistic-looking images of people.

Artificial neural networks are systems developed to mimic the activity of neurons in the human brain. In a GAN, two neural networks are essentially pitted against one another. One network generates candidate images, while the other plays an adversarial role, challenging the first's results by judging whether each image is real or machine-made.
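
For readers who want to see what that adversarial setup looks like in code, here is a minimal sketch of a GAN training step in PyTorch. The tiny fully connected networks, the 28x28 image size, and the hyperparameters are illustrative assumptions only, and bear no relation to NVIDIA's actual architecture.

```python
# Minimal GAN training-step sketch (PyTorch). Illustrative only.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28  # assumed sizes for illustration

# Generator: maps random noise to a fake image.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())
# Discriminator: scores an image as real (high) or generated (low).
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    fake_images = G(torch.randn(batch, latent_dim))

    # Discriminator step: learn to tell real images from generated ones.
    d_loss = (bce(D(real_images), torch.ones(batch, 1)) +
              bce(D(fake_images.detach()), torch.zeros(batch, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: learn to produce images the discriminator calls "real".
    g_loss = bce(D(fake_images), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# One step on a batch of random pixel data standing in for real photographs.
train_step(torch.rand(32, img_dim) * 2 - 1)
```

Each step pushes the discriminator to separate real photos from fakes, and pushes the generator to produce fakes the discriminator can no longer separate.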

As part of its expanding work on artificial intelligence, NVIDIA created a GAN that used the CelebA-HQ database of celebrity photos to generate images of people who don't actually exist. The idea was that the AI-created faces would look more realistic if two networks worked against each other to produce them.

First, the generative network would create an image at a lower resolution. Then, the discriminator network would assess the work. As the system progressed, the programmers added new layers dealing with higher-resolution details until the GAN finally generated images of “unprecedented quality,” according to the NVIDIA team’s paper.

Human or Machine?

NVIDIA released a video of their GAN in action, and the AI-created faces are both absolutely remarkable and incredibly eerie. If the average person didn’t know the faces were machine-generated, they could easily believe they belonged to living people.

Indeed, this blurring of the line between the human and the machine-generated is a topic of much discussion within the realm of AI, and NVIDIA’s GAN isn’t the first artificial system to convincingly mimic something human.

A number of AIs use deep learning techniques to produce human-sounding speech. Google’s DeepMind has WaveNet, which can now copy human speech almost perfectly. Meanwhile, startup Lyrebird’s algorithm is able to synthesize a human’s voice using just a minute of audio.

Even more disturbing or fascinating — depending on your perspective on the AI debate — are AI robots that can supposedly understand and express human emotion. Examples of those include Hanson Robotics’ Sophia and SoftBank’s Pepper.

Clearly, an age of smarter machines is upon us, and as AI's ability to perform tasks that previously only human beings could do improves, the line between human and machine will continue to blur. Now, the only question is whether it will eventually disappear altogether.

The Invention of A.I. ‘Gaydar’ Could Be the Start of Something Much Worse


By James Vincent

Two weeks ago, a pair of researchers from Stanford University made a startling claim. Using hundreds of thousands of images taken from a dating website, they said they had trained a facial recognition system that could identify whether someone was straight or gay just by looking at them. The work was first covered by The Economist, and other publications soon followed suit, with headlines like “New AI can guess whether you’re gay or straight from a photograph” and “AI Can Tell If You’re Gay From a Photo, and It’s Terrifying.”

As you might have guessed, it’s not as straightforward as that. (And to be clear, based on this work alone, AI can’t tell whether someone is gay or straight from a photo.) But the research captures common fears about artificial intelligence: that it will open up new avenues for surveillance and control, and could be particularly harmful for marginalized people. One of the paper’s authors, Dr Michal Kosinski, says his intent is to sound the alarm about the dangers of AI, and warns that facial recognition will soon be able to identify not only someone’s sexual orientation, but their political views, criminality, and even their IQ.

With statements like these, some worry we’re reviving an old belief with a bad history: that you can intuit character from appearance. This pseudoscience, physiognomy, was fuel for the scientific racism of the 19th and 20th centuries, and gave moral cover to some of humanity’s worst impulses: to demonize, condemn, and exterminate fellow humans. Critics of Kosinski’s work accuse him of replacing the calipers of the 19th century with the neural networks of the 21st, while the professor himself says he is horrified by his findings, and happy to be proved wrong. “It’s a controversial and upsetting subject, and it’s also upsetting to us,” he tells The Verge.

But is it possible that pseudoscience is sneaking back into the world, disguised in new garb thanks to AI? Some people say machines are simply able to read more about us than we can ourselves, but what if we’re training them to carry out our prejudices, and, in doing so, giving new life to old ideas we rightly dismissed? How are we going to know the difference?

CAN AI REALLY SPOT SEXUAL ORIENTATION?

First, we need to look at the study at the heart of the recent debate, written by Kosinski and his co-author Yilun Wang. Its results have been poorly reported, with a lot of the hype coming from misrepresentations of the system’s accuracy. The paper states: “Given a single facial image, [the software] could correctly distinguish between gay and heterosexual men in 81 percent of cases, and in 71 percent of cases for women.” These rates increase when the system is given five pictures of an individual: up to 91 percent for men, and 83 percent for women.

On the face of it, this sounds like “AI can tell if a man is gay or straight 81 percent of the time by looking at his photo.” (Thus the headlines.) But that’s not what the figures mean. The AI wasn’t 81 percent correct when being shown random photos: it was tested on a pair of photos, one of a gay person and one of a straight person, and then asked which individual was more likely to be gay. It guessed right 81 percent of the time for men and 71 percent of the time for women, but the structure of the test means it started with a baseline of 50 percent — that’s what it’d get guessing at random. And although it was significantly better than that, the results aren’t the same as saying it can identify anyone’s sexual orientation 81 percent of the time.
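
A small simulation makes the distinction concrete. The score distributions below are invented for illustration and have nothing to do with the paper's data; the point is only that a pairwise test starts from a 50 percent baseline and can reach roughly 80 percent without any individual label being "80 percent certain."

```python
# Pairwise accuracy vs. the 50% chance baseline. Scores are invented.
import random
random.seed(0)

def pairwise_accuracy(scores_a, scores_b):
    """Fraction of (a, b) pairs in which group A gets the higher score."""
    pairs = list(zip(scores_a, scores_b))
    return sum(a > b for a, b in pairs) / len(pairs)

n = 10_000
# A model with no signal: both groups get random scores -> ~50% (the baseline).
no_signal = pairwise_accuracy([random.random() for _ in range(n)],
                              [random.random() for _ in range(n)])

# A model with a modest signal: group A's scores are shifted slightly higher.
weak_signal = pairwise_accuracy([random.gauss(0.62, 0.15) for _ in range(n)],
                                [random.gauss(0.45, 0.15) for _ in range(n)])

print(f"no signal:   {no_signal:.0%}")    # about 50%
print(f"weak signal: {weak_signal:.0%}")  # roughly 80%, yet no single person's
                                          # label is "80% certain"
```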

As Philip Cohen, a sociologist at the University of Maryland who wrote a blog post critiquing the paper, told The Verge: “People are scared of a situation where you have a private life and your sexual orientation isn’t known, and you go to an airport or a sporting event and a computer scans the crowd and identifies whether you’re gay or straight. But there’s just not much evidence this technology can do that.”

Kosinski and Wang make this clear themselves toward the end of the paper when they test their system against 1,000 photographs instead of two. They ask the AI to pick out who is most likely to be gay in a dataset in which 7 percent of the photo subjects are gay, roughly reflecting the proportion of straight and gay men in the US population. When asked to select the 100 individuals most likely to be gay, the system gets only 47 out of 70 possible hits. The remaining 53 have been incorrectly identified. And when asked to identify a top 10, nine are right.
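
Put in standard terms, those figures describe the precision and recall of the system's top picks; the quick calculation below spells them out using only the numbers quoted above.

```python
# The arithmetic behind the 1,000-photo test described above, using the
# figures quoted in the text.
population = 1000
gay_in_population = 70          # ~7% base rate
flagged = 100                   # photos the system ranked most likely to be gay
true_hits = 47                  # of those 100, how many were actually gay

precision = true_hits / flagged          # how trustworthy each flag is
recall = true_hits / gay_in_population   # how many gay individuals were found
print(f"precision: {precision:.0%}")     # 47%: most flags are roughly a coin flip
print(f"recall:    {recall:.0%}")        # 67%: about a third are missed entirely
```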

If you were a bad actor trying to use this system to identify gay people, you couldn’t know for sure you were getting correct answers. Although, if you used it against a large enough dataset, you might get mostly correct guesses. Is this dangerous? If the system is being used to target gay people, then yes, of course. But the rest of the study suggests the program has even further limitations.

WHAT CAN COMPUTERS REALLY SEE THAT HUMANS CAN’T?

It’s also not clear what factors the facial recognition system is using to make its judgements. Kosinski and Wang’s hypothesis is that it’s primarily identifying structural differences: feminine features in the faces of gay men and masculine features in the faces of gay women. But it’s possible that the AI is being confused by other stimuli — like facial expressions in the photos.

This is particularly relevant because the images used in the study were taken from a dating website. As Greggor Mattson, a professor of sociology at Oberlin College, pointed out in a blog post, this means that the images themselves are biased, as they were selected specifically to attract someone of a certain sexual orientation. They almost certainly play up to our cultural expectations of how gay and straight people should look, and, to further narrow their applicability, all the subjects were white, with no inclusion of bisexual or self-identified trans individuals. If a straight male chooses the most stereotypically “manly” picture of himself for a dating site, it says more about what he thinks society wants from him than a link between the shape of his jaw and his sexual orientation.

To try and ensure their system was looking at facial structure only, Kosinski and Wang used software called VGG-Face, which encodes faces as strings of numbers and has been used for tasks like spotting celebrity lookalikes in paintings. This program, they write, allows them to “minimize the role [of] transient features” like lighting, pose, and facial expression.
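
In practice, "encoding faces as strings of numbers" means mapping each photo to a fixed-length vector, an embedding, and comparing faces by the distance between their vectors. The sketch below illustrates the idea with a small stand-in network; the study used the pretrained VGG-Face model, whose architecture and weights are not reproduced here.

```python
# Face-embedding sketch. The tiny random CNN is a stand-in, not VGG-Face.
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in embedder: 3-channel 224x224 image -> 128-dimensional vector.
embedder = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 128),
)

def embed(image: torch.Tensor) -> np.ndarray:
    """Return the face embedding as a plain vector of numbers."""
    with torch.no_grad():
        return embedder(image.unsqueeze(0)).squeeze(0).numpy()

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Values near 1.0 indicate similar faces under this encoding."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two random tensors stand in for face photos.
face_a, face_b = torch.rand(3, 224, 224), torch.rand(3, 224, 224)
print(cosine_similarity(embed(face_a), embed(face_b)))
```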

But researcher Tom White, who works on AI facial systems, says VGG-Face is actually very good at picking up on these elements. White pointed this out on Twitter, and explained to The Verge over email how he’d tested the software and used it to successfully distinguish between faces with expressions like “neutral” and “happy,” as well as poses and background color.

A figure from the paper showing the average faces of the participants, and the difference in facial structures that they identified between the two sets. 
Image: Kosinski and Wang

Speaking to The Verge, Kosinski says he and Wang have been explicit that things like facial hair and makeup could be a factor in the AI’s decision-making, but he maintains that facial structure is the most important. “If you look at the overall properties of VGG-Face, it tends to put very little weight on transient facial features,” Kosinski says. “We also provide evidence that non-transient facial features seem to be predictive of sexual orientation.”

The problem is, we can’t know for sure. Kosinski and Wang haven’t released the program they created or the pictures they used to train it. They do test their AI on other picture sources, to see if it’s identifying some factor common to all gay and straight people, but these tests were limited and also drew from a biased dataset — Facebook profile pictures from men who liked pages such as “I love being Gay,” and “Gay and Fabulous.”

Do men in these groups serve as reasonable proxies for all gay men? Probably not, and Kosinski says it’s possible his work is wrong. “Many more studies will need to be conducted to verify [this],” he says. But it’s tricky to say how one could completely eliminate selection bias to perform a conclusive test. Kosinski tells The Verge, “You don’t need to understand how the model works to test whether it’s correct or not.” However, it’s the acceptance of the opacity of algorithms that makes this sort of research so fraught.

IF AI CAN’T SHOW ITS WORKING, CAN WE TRUST IT?

AI researchers can’t fully explain why their machines do the things they do. It’s a challenge that runs through the entire field, and is sometimes referred to as the “black box” problem. Because of the methods used to train AI, these programs can’t show their work in the same way normal software does, although researchers are working to amend this.

In the meantime, it leads to all sorts of problems. A common one is that sexist and racist biases are captured from humans in the training data and reproduced by the AI. In the case of Kosinski and Wang’s work, the “black box” allows them to make a particular scientific leap of faith. Because they’re confident their system is primarily analyzing facial structures, they say their research shows that facial structures predict sexual orientation. (“Study 1a showed that facial features extracted by a [neural network] can be used to accurately identify the sexual orientation of both men and women.”)

Experts say this is a misleading claim that isn’t supported by the latest science. There may be a common cause for face shape and sexual orientation — the most probable cause is the balance of hormones in the womb — but that doesn’t mean face shape reliably predicts sexual orientation, says Qazi Rahman, an academic at King’s College London who studies the biology of sexual orientation. “Biology’s a little bit more nuanced than we often give it credit for,” he tells The Verge. “The issue here is the strength of the association.”

The idea that sexual orientation comes primarily from biology is itself controversial. Rahman, who believes that sexual orientation is mostly biological, praises Kosinski and Wang’s work. “It’s not junk science,” he says. “More like science someone doesn’t like.” But when it comes to predicting sexual orientation, he says there’s a whole package of “atypical gender behavior” that needs to be considered. “The issue for me is more that [the study] misses the point, and that’s behavior.”

Is there a gay gene? Or is sexuality equally shaped by society and culture?

Reducing the question of sexual orientation to a single, measurable factor in the body has a long and often inglorious history. As Mattson writes in his blog post, approaches have ranged from “19th century measurements of lesbians’ clitorises and homosexual men’s hips, to late 20th century claims to have discovered ‘gay genes,’ ‘gay brains,’ ‘gay ring fingers,’ ‘lesbian ears,’ and ‘gay scalp hair.’” The impact of this work is mixed, but at its worst it’s a tool of oppression: it gives people who want to dehumanize and persecute sexual minorities a “scientific” pretext.

Jenny Davis, a lecturer in sociology at the Australian National University, describes it as a form of biological essentialism. This is the belief that things like sexual orientation are rooted in the body. This approach, she says, is double-edged. On the one hand, it “does a useful political thing: detaching blame from same-sex desire. But on the other hand, it reinforces the devalued position of that kind of desire,” setting up heterosexuality as the norm and framing homosexuality as “less valuable … a sort of illness.”

And it’s when we consider Kosinski and Wang’s research in this context that AI-powered facial recognition takes on an even darker aspect — namely, say some critics, as part of a trend toward the return of physiognomy, powered by AI.

YOUR CHARACTER, AS PLAIN AS THE NOSE ON YOUR FACE

For centuries, people have believed that the face holds the key to character. The notion has its roots in ancient Greece, but was particularly influential in the 19th century. Proponents of physiognomy suggested that by measuring things like the angle of someone’s forehead or the shape of their nose, they could determine if a person was honest or a criminal. Last year in China, AI researchers claimed they could do the same thing using facial recognition.

Their research, published as “Automated Inference on Criminality Using Face Images,” caused a minor uproar in the AI community. Scientists pointed out flaws in the study, and concluded that the work was replicating human prejudices about what constitutes a “mean” or a “nice” face. In a widely shared rebuttal titled “Physiognomy’s New Clothes,” Google researcher Blaise Agüera y Arcas and two co-authors wrote that we should expect “more research in the coming years that has similar … false claims to scientific objectivity in order to ‘launder’ human prejudice and discrimination.” (Google declined to make Agüera y Arcas available to comment on this report.)

An illustration of physiognomy from Giambattista della Porta’s De humana physiognomonia

Kosinski and Wang’s paper clearly acknowledges the dangers of physiognomy, noting that the practice “is now universally, and rightly, rejected as a mix of superstition and racism disguised as science.” But, they continue, just because a subject is “taboo,” doesn’t mean it has no basis in truth. They say that because humans are able to read characteristics like personality in other people’s faces with “low accuracy,” machines should be able to do the same but more accurately.

Kosinski says his research isn’t physiognomy because it’s using rigorous scientific methods, and his paper cites a number of studies showing that we can deduce (with varying accuracy) traits about people by looking at them. “I was educated and made to believe that it’s absolutely impossible that the face contains any information about your intimate traits, because physiognomy and phrenology were just pseudosciences,” he says. “But the fact that they were claiming things without any basis in fact, that they were making stuff up, doesn’t mean that this stuff is not real.” He agrees that physiognomy is not science, but says there may be truth in its basic concepts that computers can reveal.

For Davis, this sort of attitude comes from a widespread and mistaken belief in the neutrality and objectivity of AI. “Artificial intelligence is not in fact artificial,” she tells The Verge. “Machines learn like humans learn. We’re taught through culture and absorb the norms of social structure, and so does artificial intelligence. So it will re-create, amplify, and continue on the trajectories we’ve taught it, which are always going to reflect existing cultural norms.”

We’ve already created sexist and racist algorithms, and these sorts of cultural biases and physiognomy are really just two sides of the same coin: both rely on bad evidence to judge others. The work by the Chinese researchers is an extreme example, but it’s certainly not the only one. There’s at least one startup already active that claims it can spot terrorists and pedophiles using face recognition, and there are many others offering to analyze “emotional intelligence” and conduct AI-powered surveillance.

FACING UP TO WHAT’S COMING

But to return to the questions implied by those alarming headlines about Kosinski and Wang’s paper: is AI going to be used to persecute sexual minorities?

This system? No. A different one? Maybe.

Kosinski and Wang’s work is not invalid, but its results need serious qualifications and further testing. Without that, all we know about their system is that it can spot with some reliability the difference between self-identified gay and straight white people on one particular dating site. We don’t know that it’s spotted a biological difference common to all gay and straight people; we don’t know if it would work with a wider set of photos; and the work doesn’t show that sexual orientation can be deduced with nothing more than, say, a measurement of the jaw. It hasn’t decoded human sexuality any more than AI chatbots have decoded the art of good conversation. (Nor do its authors make such a claim.)

Startup Faception claims it can identify how likely people are to be terrorists just by looking at their face. 
Image: Faception

The research was published to warn people, says Kosinski, but he admits it’s an “unavoidable paradox” that to do so you have to explain how you did what you did. All the tools used in the paper are available for anyone to find and put together themselves. Writing at the deep learning education site Fast.ai, researcher Jeremy Howard concludes: “It is probably reasonably [sic] to assume that many organizations have already completed similar projects, but without publishing them in the academic literature.”

We’ve already mentioned startups working on this tech, and it’s not hard to find government regimes that would use it. In countries like Iran and Saudi Arabia homosexuality is still punishable by death; in many other countries, being gay means being hounded, imprisoned, and tortured by the state. Recent reports have spoken of the opening of concentration camps for gay men in the Chechen Republic, so what if someone there decides to make their own AI gaydar, and scan profile pictures from Russian social media?

Here, it becomes clear that the accuracy of systems like Kosinski and Wang’s isn’t really the point. If people believe AI can be used to determine sexual preference, they will use it. With that in mind, it’s more important than ever that we understand the limitations of artificial intelligence, to try and neutralize dangers before they start impacting people. Before we teach machines our prejudices, we need to first teach ourselves.

Face Reading A.I. Able To Detect IQ and Views on Politics


By Sam Levin

Your photo could soon reveal your political views, says a Stanford professor.

 

Professor whose study suggested technology can detect whether a person is gay or straight says programs will soon reveal traits such as criminal predisposition

Voters have a right to keep their political beliefs private. But according to some researchers, it won’t be long before a computer program can accurately guess whether people are liberal or conservative in an instant. All that will be needed are photos of their faces.

Michal Kosinski – the Stanford University professor who went viral last week for research suggesting that artificial intelligence (AI) can detect whether people are gay or straight based on photos – said sexual orientation was just one of many characteristics that algorithms would be able to predict through facial recognition.

Using photos, AI will be able to identify people’s political views, whether they have high IQs, whether they are predisposed to criminal behavior, whether they have specific personality traits and many other private, personal details that could carry huge social consequences, he said.

Kosinski outlined the extraordinary and sometimes disturbing applications of facial detection technology that he expects to see in the near future, raising complex ethical questions about the erosion of privacy and the possible misuse of AI to target vulnerable people.

“The face is an observable proxy for a wide range of factors, like your life history, your development factors, whether you’re healthy,” he said.

Faces contain a significant amount of information, and using large datasets of photos, sophisticated computer programs can uncover trends and learn how to distinguish key traits with a high rate of accuracy. With Kosinski’s “gaydar” AI, an algorithm used online dating photos to create a program that could correctly identify sexual orientation 91% of the time with men and 83% with women, just by reviewing a handful of photos.

Kosinski’s research is highly controversial, and faced a huge backlash from LGBT rights groups, which argued that the AI was flawed and that anti-LGBT governments could use this type of software to out gay people and persecute them. Kosinski and other researchers, however, have argued that powerful governments and corporations already possess these technological capabilities and that it is vital to expose possible dangers in an effort to push for privacy protections and regulatory safeguards, which have not kept pace with AI.

Kosinski, an assistant professor of organizational behavior, said he was studying links between facial features and political preferences, with preliminary results showing that AI is effective at guessing people’s ideologies based on their faces.

This is probably because political views appear to be heritable, as research has shown, he said. That means political leanings are possibly linked to genetics or developmental factors, which could result in detectable facial differences.

Kosinski said previous studies have found that conservative politicians tend to be more attractive than liberals, possibly because good-looking people have more advantages and an easier time getting ahead in life.

Michal Kosinski. Photograph: Lauren Bamford

Kosinski said the AI would perform best for people who are far to the right or left and would be less effective for the large population of voters in the middle. “A high conservative score … would be a very reliable prediction that this guy is conservative.”

Kosinski is also known for his controversial work on psychometric profiling, including using Facebook data to draw inferences about personality. The data firm Cambridge Analytica has used similar tools to target voters in support of Donald Trump’s campaign, sparking debate about the use of personal voter information in campaigns.

Facial recognition may also be used to make inferences about IQ, said Kosinski, suggesting a future in which schools could use the results of facial scans when considering prospective students. This application raises a host of ethical questions, particularly if the AI is purporting to reveal whether certain children are genetically more intelligent, he said: “We should be thinking about what to do to make sure we don’t end up in a world where better genes means a better life.”

Some of Kosinski’s suggestions conjure up the 2002 science-fiction film Minority Report, in which police arrest people before they have committed crimes based on predictions of future murders. The professor argued that certain areas of society already function in a similar way.

He cited school counselors intervening when they observe children who appear to exhibit aggressive behavior. If algorithms could be used to accurately predict which students need help and early support, that could be beneficial, he said. “The technologies sound very dangerous and scary on the surface, but if used properly or ethically, they can really improve our existence.”

There are, however, growing concerns that AI and facial recognition technologies are actually relying on biased data and algorithms and could cause great harm. It is particularly alarming in the context of criminal justice, where machines could make decisions about people’s lives – such as the length of a prison sentence or whether to release someone on bail – based on biased data from a court and policing system that is racially prejudiced at every step.

Kosinski predicted that with a large volume of facial images of an individual, an algorithm could easily detect if that person is a psychopath or has high criminal tendencies. He said this was particularly concerning given that a propensity for crime does not translate to criminal actions: “Even people highly disposed to committing a crime are very unlikely to commit a crime.”

He also cited an example referenced in the Economist – which first reported the sexual orientation study – that nightclubs and sport stadiums could face pressure to scan people’s faces before they enter to detect possible threats of violence.

Kosinski noted that in some ways, this wasn’t much different from human security guards making subjective decisions about people they deem too dangerous-looking to enter.

The law generally considers people’s faces to be “public information”, said Thomas Keenan, professor of environmental design and computer science at the University of Calgary, noting that regulations have not caught up with technology: no law establishes when the use of someone’s face to produce new information rises to the level of privacy invasion.

Keenan said it might take a tragedy to spark reforms, such as a gay youth being beaten to death because bullies used an algorithm to out him: “Now, you’re putting people’s lives at risk.”

Even with AI that makes highly accurate predictions, some percentage of its predictions will still be incorrect.

“You’re going down a very slippery slope,” said Keenan, “if one in 20 or one in a hundred times … you’re going to be dead wrong.”

Hackers Already Weaponizing A.I.


By George Dvorsky

Illustration: Sam Woolley/Gizmodo

Last year, two data scientists from security firm ZeroFOX conducted an experiment to see who was better at getting Twitter users to click on malicious links, humans or an artificial intelligence. The researchers taught an AI to study the behavior of social network users, and then design and implement its own phishing bait. In tests, the artificial hacker was substantially better than its human competitors, composing and distributing more phishing tweets than humans, and with a substantially better conversion rate.

The AI, named SNAP_R, sent simulated spear-phishing tweets to over 800 users at a rate of 6.75 tweets per minute, luring 275 victims. By contrast, Forbes staff writer Thomas Fox-Brewster, who participated in the experiment, was only able to pump out 1.075 tweets a minute, making just 129 attempts and luring in just 49 users.

Human or bot? AI makes it tough to tell. (Image: ZeroFOX)

Thankfully this was just an experiment, but the exercise showed that hackers are already in a position to use AI for their nefarious ends. And in fact, they’re probably already using it, though it’s hard to prove. In July, at Black Hat USA 2017, hundreds of leading cybersecurity experts gathered in Las Vegas to discuss this issue and other looming threats posed by emerging technologies. In a Cylance poll held during the confab, attendees were asked if criminal hackers will use AI for offensive purposes in the coming year, to which 62 percent answered in the affirmative.

The era of artificial intelligence is upon us, yet if this informal Cylance poll is to be believed, a surprising number of infosec professionals are refusing to acknowledge the potential for AI to be weaponized by hackers in the immediate future. It’s a perplexing stance given that many of the cybersecurity experts we spoke to said machine intelligence is already being used by hackers, and that criminals are more sophisticated in their use of this emerging technology than many people realize.

“Hackers have been using artificial intelligence as a weapon for quite some time,” said Brian Wallace, Cylance Lead Security Data Scientist, in an interview with Gizmodo. “It makes total sense because hackers have a problem of scale, trying to attack as many people as they can, hitting as many targets as possible, and all the while trying to reduce risks to themselves. Artificial intelligence, and machine learning in particular, are perfect tools to be using on their end.” These tools, he says, can make decisions about what to attack, who to attack, when to attack, and so on.

Scales of intelligence

Marc Goodman, author of Future Crimes: Everything Is Connected, Everyone Is Vulnerable and What We Can Do About It, says he isn’t surprised that so many Black Hat attendees see weaponized AI as being imminent, as it’s been part of cyber attacks for years.

“What does strike me as a bit odd is that 62 percent of infosec professionals are making an AI prediction,” Goodman told Gizmodo. “AI is defined by many different people many different ways. So I’d want further clarity on specifically what they mean by AI.”

Indeed, it’s likely on this issue where the expert opinions diverge.

The funny thing about artificial intelligence is that our conception of it changes as time passes, and as our technologies increasingly match human intelligence in many important ways. At the most fundamental level, intelligence describes the ability of an agent, whether it be biological or mechanical, to solve complex problems. We possess many tools with this capability, and we have for quite some time, but we almost instantly start to take these tools for granted once they appear.

Centuries ago, for example, the prospect of a calculating machine that could crunch numbers millions of times faster than a human would’ve most certainly been considered a radical technological advance, yet few today would consider the lowly calculator as being anything particularly special. Similarly, the ability to win at chess was once considered a high mark of human intelligence, but ever since Deep Blue defeated Garry Kasparov in 1997, this cognitive skill has lost its former luster. And so on and so forth with each passing breakthrough in AI.

Today, rapid-fire developments in machine learning (whereby systems learn from data and improve with experience without being explicitly programmed), natural language processing, neural networks (systems modeled on the human brain), and many other fields are likewise lowering the bar on our perception of what constitutes machine intelligence. In a few years, artificial personal assistants (like Siri or Alexa), self-driving cars, and disease-diagnosing algorithms will likewise lose, unjustifiably, their AI allure. We’ll start to take these things for granted, and disparage these forms of AI for not being perfectly human. But make no mistake—modern tools like machine intelligence and neural networks are a form of artificial intelligence, and to believe otherwise is something we do at our own peril; if we dismiss or ignore the power of these tools, we may be blindsided by those who are eager to exploit AI’s full potential, hackers included.

A related problem is that the term artificial intelligence conjures futuristic visions and sci-fi fantasies that are far removed from our current realities.

“The term AI is often misconstrued, with many people thinking of Terminator robots trying to hunt down John Connor—but that’s not what AI is,” said Wallace. “Rather, it’s a broad topic of study around the creation of various forms of intelligence that happen to be artificial.”

Wallace says there are many different realms of AI, with machine learning being a particularly important subset of AI at the current moment.

“In our line of work, we use narrow machine learning—which is a form of AI—when trying to apply intelligence to a specific problem,” he told Gizmodo. “For instance, we use machine learning when trying to determine if a file or process is malicious or not. We’re not trying to create a system that would turn into SkyNet. Artificial intelligence isn’t always what the media and science fiction has depicted it as, and when we [infosec professionals] talk about AI, we’re talking about broad areas of study that are much simpler and far less terrifying.”

Evil intents

These modern tools may be less terrifying than clichéd Terminator visions, but in the hands of the wrong individuals, they can still be pretty scary.

Deepak Dutt, founder and CEO of Zighra, a mobile security startup, says there’s a high likelihood that sophisticated AI will be used for cyberattacks in the near future, and that it might already be in use by countries such as Russia, China, and some Eastern European countries. In terms of how AI could be used in nefarious ways, Dutt has no shortage of ideas.

“Artificial intelligence can be used to mine large amounts of public domain and social network data to extract personally identifiable information like date of birth, gender, location, telephone numbers, e-mail addresses, and so on, which can be used for hacking [a person’s] accounts,” Dutt told Gizmodo. “It can also be used to automatically monitor e-mails and text messages, and to create personalized phishing mails for social engineering attacks [phishing scams are an illicit attempt to obtain sensitive information from an unsuspecting user]. AI can be used for mutating malware and ransomware more easily, and to search more intelligently and dig out and exploit vulnerabilities in a system.”

Dutt suspects that AI is already being used for cyberattacks, and that criminals are already using some sort of machine learning capabilities, for example, by automatically creating personalized phishing e-mails.

“But what is new is the sophistication of AI in terms of new machine learning techniques like Deep Learning, which can be used to achieve the scenarios I just mentioned with a higher level of accuracy and efficiency,” he said. Deep Learning, also known as hierarchical learning, is a subfield of machine learning that utilizes large neural networks. It has been applied to computer vision, speech recognition, social network filtering, and many other complex tasks, often producing results superior to human experts.

“Also the availability of large amounts of social network and public data sets (Big Data) helps. Advanced machine learning and Deep Learning techniques and tools are easily available now on open source platforms—this combined with the relatively cheap computational infrastructure effectively enables cyberattacks with higher sophistication.”

These days, the overwhelming majority of cyber attacks are automated, according to Goodman. The human hacker going after an individual target is far rarer, and the more common approach now is to automate attacks with tools of AI and machine learning—everything from scripted Distributed Denial of Service (DDoS) attacks to ransomware, criminal chatbots, and so on. While it can be argued that automation is fundamentally unintelligent (conversely, a case can be made that some forms of automation, particularly those involving large sets of complex tasks, are indeed a form of intelligence), it’s the prospect of a machine intelligence orchestrating these automated tasks that’s particularly alarming. An AI can produce complex and highly targeted scripts at a rate and level of sophistication far beyond any individual human hacker.

Indeed, the possibilities seem almost endless. In addition to the criminal activities already described, AIs could be used to target vulnerable populations, perform rapid-fire hacks, develop intelligent malware, and so on.

Staffan Truvé, Chief Technology Officer at Recorded Future, says that, as AI matures and becomes more of a commodity, the “bad guys,” as he puts it, will start using it to improve the performance of attacks, while also cutting costs. Unlike many of his colleagues, however, Truvé says that AI is not really being used by hackers at the moment, claiming that simpler algorithms (e.g. for self-modifying code) and automation schemes (e.g. to enable phishing schemes) are working just fine.

“I don’t think AI has quite yet become a standard part of the toolbox of the bad guys,” Truvé told Gizmodo. “I think the reason we haven’t seen more ‘AI’ in attacks already is that the traditional methods still work—if you get what you need from a good old fashioned brute force approach then why take the time and money to switch to something new?”

AI on AI

With AI now part of the modern hacker’s toolkit, defenders are having to come up with novel ways of defending vulnerable systems. Thankfully, security professionals have a rather potent and obvious countermeasure at their disposal, namely artificial intelligence itself. Trouble is, this is bound to produce an arms race between the rival camps. Neither side really has a choice, as the only way to counter the other is to increasingly rely on intelligent systems.

“For security experts, this is a Big Data problem—we’re dealing with tons of data—more than a single human could possibly produce,” said Wallace. “Once you’ve started to deal with an adversary, you have no choice but to use weaponized AI yourself.”

To stay ahead of the curve, Wallace recommends that security firms conduct their own internal research, and develop their own weaponized AI to fight and test their defenses. He calls it “an iron sharpens iron” approach to computer security. The Pentagon’s advanced research wing, DARPA, has already adopted this approach, organizing grand challenges in which AI developers pit their creations against each other in a virtual game of Capture the Flag. The process is very Darwinian, and reminiscent of yet another approach to AI development—evolutionary algorithms. For hackers and infosec professionals, it’s survival of the fittest AI.

Goodman agrees, saying “we will out of necessity” be using increasing amounts of AI “for everything from fraud detection to countering cyberattacks.” And in fact, several start-ups are already doing this, partnering with IBM Watson to combat cyber threats, says Goodman.

“AI techniques are being used today by defenders to look for patterns—the antivirus companies have been doing this for decades—and to do anomaly detection as a way to automatically detect if a system has been attacked and compromised,” said Truvé.
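
A toy version of that anomaly-detection idea is sketched below: learn what "normal" looks like from historical data, then flag anything that falls far outside it. The z-score rule and the failed-login counts are illustrative assumptions, not any vendor's actual method.

```python
# Minimal anomaly-detection sketch: flag observations far from the baseline.
from statistics import mean, stdev

# Hourly failed-login counts observed during normal operation (invented data).
baseline = [3, 5, 2, 4, 6, 3, 4, 5, 2, 3, 4, 5]
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(observation: float, threshold: float = 3.0) -> bool:
    """Flag observations more than `threshold` standard deviations from normal."""
    return abs(observation - mu) / sigma > threshold

for count in [4, 6, 48]:
    print(count, "anomalous" if is_anomalous(count) else "normal")
# 4 and 6 look normal; 48 failed logins in an hour gets flagged for review.
```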

At his company, Recorded Future, Truvé is using AI techniques to do natural language processing to, for example, automatically detect when an attack is being planned and discussed on criminal forums, and to predict future threats.

“Bad guys [with AI] will continue to use the same attack vectors as today, only in a more efficient manner, and therefore the AI based defense mechanisms being developed now will to a large extent be possible to also use against AI based attacks,” he said.

Dutt recommends that infosec teams continuously monitor the cyber attack activities of hackers and learn from them, continuously “innovate with a combination of supervised and unsupervised learning based defense strategies to detect and thwart attacks at the first sign,” and, like in any war, adopt superior defenses and strategy.

The bystander effect

So our brave new world of AI-enabled hacking awaits, with criminals becoming increasingly capable of targeting vulnerable users and systems. Computer security firms will likewise lean on AI in a never-ending effort to keep up. Eventually, these tools will escape human comprehension and control, working at lightning-fast speeds in an emerging digital ecosystem. It’ll get to a point where both hackers and infosec professionals have no choice but to hit the “go” button on their respective systems, and simply hope for the best. A consequence of AI is that humans are increasingly being kept out of the loop.

 

AI Can Determine Sexual Orientation From A Photograph

By Sam Levin
An illustrated depiction of facial analysis technology similar to that used in the experiment.

An algorithm deduced the sexuality of people on a dating site with up to 91% accuracy, raising tricky ethical questions

Artificial intelligence can accurately guess whether people are gay or straight based on photos of their faces, according to new research that suggests machines can have significantly better “gaydar” than humans.

The study from Stanford University – which found that a computer algorithm could correctly distinguish between gay and straight men 81% of the time, and 74% for women – has raised questions about the biological origins of sexual orientation, the ethics of facial-detection technology, and the potential for this kind of software to violate people’s privacy or be abused for anti-LGBT purposes.

The machine intelligence tested in the research, which was published in the Journal of Personality and Social Psychology and first reported in the Economist, was based on a sample of more than 35,000 facial images that men and women publicly posted on a US dating website. The researchers, Michal Kosinski and Yilun Wang, extracted features from the images using “deep neural networks”, meaning a sophisticated mathematical system that learns to analyze visuals based on a large dataset.

The research found that gay men and women tended to have “gender-atypical” features, expressions and “grooming styles”, essentially meaning gay men appeared more feminine and vice versa. The data also identified certain trends, including that gay men had narrower jaws, longer noses and larger foreheads than straight men, and that gay women had larger jaws and smaller foreheads compared to straight women.

Human judges performed much worse than the algorithm, accurately identifying orientation only 61% of the time for men and 54% for women. When the software reviewed five images per person, it was even more successful – 91% of the time with men and 83% with women. Broadly, that means “faces contain much more information about sexual orientation than can be perceived and interpreted by the human brain”, the authors wrote.
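
One plausible reason several photos help is simple noise reduction: averaging a noisy per-photo score over five pictures gives a steadier estimate, so pairwise comparisons succeed more often. The simulation below illustrates the effect; the averaging rule and the score distributions are assumptions made for illustration, not the paper's actual method.

```python
# Why more photos per person can raise pairwise accuracy. Illustrative only.
import random
random.seed(0)

def person_score(true_shift: float, n_photos: int) -> float:
    """Average a noisy per-photo score over n_photos pictures of one person."""
    return sum(random.gauss(true_shift, 0.2) for _ in range(n_photos)) / n_photos

def pairwise_accuracy(n_photos: int, trials: int = 20_000) -> float:
    """How often the person with the slightly higher true score wins the pair."""
    wins = sum(person_score(0.1, n_photos) > person_score(0.0, n_photos)
               for _ in range(trials))
    return wins / trials

print(f"1 photo:  {pairwise_accuracy(1):.0%}")   # around 64% with these numbers
print(f"5 photos: {pairwise_accuracy(5):.0%}")   # noticeably higher
```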

The paper suggested that the findings provide “strong support” for the theory that sexual orientation stems from exposure to certain hormones before birth, meaning people are born gay and being queer is not a choice. The machine’s lower success rate for women also could support the notion that female sexual orientation is more fluid.

While the findings have clear limits when it comes to gender and sexuality – people of color were not included in the study, and there was no consideration of transgender or bisexual people – the implications for artificial intelligence (AI) are vast and alarming. With billions of facial images of people stored on social media sites and in government databases, the researchers suggested that public data could be used to detect people’s sexual orientation without their consent.

It’s easy to imagine spouses using the technology on partners they suspect are closeted, or teenagers using the algorithm on themselves or their peers. More frighteningly, governments that continue to prosecute LGBT people could hypothetically use the technology to out and target populations. That means building this kind of software and publicizing it is itself controversial given concerns that it could encourage harmful applications.

But the authors argued that the technology already exists, and its capabilities are important to expose so that governments and companies can proactively consider privacy risks and the need for safeguards and regulations.

“It’s certainly unsettling. Like any new tool, if it gets into the wrong hands, it can be used for ill purposes,” said Nick Rule, an associate professor of psychology at the University of Toronto, who has published research on the science of gaydar. “If you can start profiling people based on their appearance, then identifying them and doing horrible things to them, that’s really bad.”

Rule argued it was still important to develop and test this technology: “What the authors have done here is to make a very bold statement about how powerful this can be. Now we know that we need protections.”

Kosinski was not immediately available for comment, but after publication of this article on Friday, he spoke to the Guardian about the ethics of the study and implications for LGBT rights. The professor is known for his work with Cambridge University on psychometric profiling, including using Facebook data to make conclusions about personality. Donald Trump’s campaign and Brexit supporters deployed similar tools to target voters, raising concerns about the expanding use of personal data in elections.

In the Stanford study, the authors also noted that artificial intelligence could be used to explore links between facial features and a range of other phenomena, such as political views, psychological conditions or personality.

This type of research further raises concerns about the potential for scenarios like the science-fiction movie Minority Report, in which people can be arrested based solely on the prediction that they will commit a crime.

“AI can tell you anything about anyone with enough data,” said Brian Brackeen, CEO of Kairos, a face recognition company. “The question is as a society, do we want to know?”

Brackeen, who said the Stanford data on sexual orientation was “startlingly correct”, said there needs to be an increased focus on privacy and tools to prevent the misuse of machine learning as it becomes more widespread and advanced.

Rule speculated about AI being used to actively discriminate against people based on a machine’s interpretation of their faces: “We should all be collectively concerned.”

Elon Musk: A.I. Battle Is A “Likely Cause” Of WWIII


By Brett Molina


A race toward “superiority” between countries over artificial intelligence will be the most likely cause of World War III, warns entrepreneur Elon Musk.

The CEO of Tesla and SpaceX has been outspoken about his fears of AI, urging countries to consider regulations now before the technology becomes more widely used.

Musk’s remarks were sparked by comments from Russian President Vladimir Putin, who said the country leading the way in AI “will become ruler of the world,” reports news site RT.

“It begins,” said Musk in an earlier tweet ahead of his warning about the potential risks.

“China, Russia, soon all countries w strong computer science,” Musk wrote in a tweet posted Monday. “Competition for AI superiority at national level most likely cause of WW3 imo.”


In response to a Twitter user, Musk notes the AI itself — not a country’s leader — could spark a major conflict “if it decides that a preemptive strike is most probable path to victory.”


Musk has emerged as a prominent voice on AI safety, seeking ways for governments to regulate the technology before it gets out of control. Last month, Musk warned that the risks posed by AI are greater than the threat of nuclear war from North Korea.

In July, during the National Governors Association meeting, Musk pushed states to consider new rules for AI.

“AI is a rare case where I think we need to be proactive in regulation instead of reactive,” said Musk.