The Invention of A.I. ‘Gaydar’ Could Be the Start of Something Much Worse

Original Article

By James Vincent

Two weeks ago, a pair of researchers from Stanford University made a startling claim. Using hundreds of thousands of images taken from a dating website, they said they had trained a facial recognition system that could identify whether someone was straight or gay just by looking at them. The work was first covered by The Economist, and other publications soon followed suit, with headlines like “New AI can guess whether you’re gay or straight from a photograph” and “AI Can Tell If You’re Gay From a Photo, and It’s Terrifying.”

As you might have guessed, it’s not as straightforward as that. (And to be clear, based on this work alone, AI can’t tell whether someone is gay or straight from a photo.) But the research captures common fears about artificial intelligence: that it will open up new avenues for surveillance and control, and could be particularly harmful for marginalized people. One of the paper’s authors, Dr Michal Kosinski, says his intent is to sound the alarm about the dangers of AI, and warns that facial recognition will soon be able to identify not only someone’s sexual orientation, but their political views, criminality, and even their IQ.

With statements like these, some worry we’re reviving an old belief with a bad history: that you can intuit character from appearance. This pseudoscience, physiognomy, was fuel for the scientific racism of the 19th and 20th centuries, and gave moral cover to some of humanity’s worst impulses: to demonize, condemn, and exterminate fellow humans. Critics of Kosinski’s work accuse him of replacing the calipers of the 19th century with the neural networks of the 21st, while the professor himself says he is horrified by his findings, and happy to be proved wrong. “It’s a controversial and upsetting subject, and it’s also upsetting to us,” he tells The Verge.

But is it possible that pseudoscience is sneaking back into the world, disguised in new garb thanks to AI? Some people say machines are simply able to read more about us than we can ourselves, but what if we’re training them to carry out our prejudices, and, in doing so, giving new life to old ideas we rightly dismissed? How are we going to know the difference?

CAN AI REALLY SPOT SEXUAL ORIENTATION?

First, we need to look at the study at the heart of the recent debate, written by Kosinski and his co-author Yilun Wang. Its results have been poorly reported, with a lot of the hype coming from misrepresentations of the system’s accuracy. The paper states: “Given a single facial image, [the software] could correctly distinguish between gay and heterosexual men in 81 percent of cases, and in 71 percent of cases for women.” These rates increase when the system is given five pictures of an individual: up to 91 percent for men, and 83 percent for women.

On the face of it, this sounds like “AI can tell if a man is gay or straight 81 percent of the time by looking at his photo.” (Thus the headlines.) But that’s not what the figures mean. The AI wasn’t 81 percent correct when being shown random photos: it was tested on a pair of photos, one of a gay person and one of a straight person, and then asked which individual was more likely to be gay. It guessed right 81 percent of the time for men and 71 percent of the time for women, but the structure of the test means it started with a baseline of 50 percent — that’s what it’d get guessing at random. And although it was significantly better than that, the results aren’t the same as saying it can identify anyone’s sexual orientation 81 percent of the time.
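 
The gap between those two readings is easy to see with a toy simulation. The sketch below, in Python, is not Kosinski and Wang's model: it invents Gaussian "scores" whose separation is tuned only so that the pairwise test lands near 81 percent, then shows that labeling single photos one at a time, even in an evenly split sample, does noticeably worse.

    # A toy simulation, not the authors' model: the score distributions are
    # invented, tuned only so the pairwise test comes out near 81 percent.
    import random

    random.seed(0)
    N = 100_000
    gay_scores      = [random.gauss(1.24, 1.0) for _ in range(N)]
    straight_scores = [random.gauss(0.00, 1.0) for _ in range(N)]

    # The test the paper reports: shown one photo from each group, pick the
    # one with the higher score. Guessing at random would score 50 percent.
    pairwise = sum(g > s for g, s in zip(gay_scores, straight_scores)) / N
    print(f"pairwise accuracy: {pairwise:.2f}")        # roughly 0.81

    # The headline reading: label each photo on its own. Even on an evenly
    # split sample, with the best midpoint threshold, accuracy falls to ~73%.
    threshold = 0.62
    correct = sum(g > threshold for g in gay_scores) \
            + sum(s <= threshold for s in straight_scores)
    print(f"single-photo accuracy: {correct / (2 * N):.2f}")   # roughly 0.73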

As Philip Cohen, a sociologist at the University of Maryland who wrote a blog post critiquing the paper, told The Verge: “People are scared of a situation where you have a private life and your sexual orientation isn’t known, and you go to an airport or a sporting event and a computer scans the crowd and identifies whether you’re gay or straight. But there’s just not much evidence this technology can do that.”

Kosinski and Wang make this clear themselves toward the end of the paper when they test their system against 1,000 photographs instead of two. They ask the AI to pick out who is most likely to be gay in a dataset in which 7 percent of the photo subjects are gay, roughly reflecting the proportion of straight and gay men in the US population. When asked to select the 100 individuals most likely to be gay, the system gets only 47 out of 70 possible hits. The remaining 53 have been incorrectly identified. And when asked to identify a top 10, nine are right.
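 
Put in terms of precision and recall, the paper's own numbers look far less dramatic than the headlines. The short calculation below uses nothing but the figures reported above.

    # Using only the figures reported in the paper: a 1,000-photo set in which
    # 70 subjects are gay, and the system's 100 most confident picks include
    # 47 genuine hits.
    true_hits, picks, actually_gay = 47, 100, 70
    precision = true_hits / picks          # 0.47: fewer than half the picks are right
    recall    = true_hits / actually_gay   # ~0.67: a third of the gay subjects are missed
    print(f"precision: {precision:.0%}, recall: {recall:.0%}")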

If you were a bad actor trying to use this system to identify gay people, you couldn’t know for sure you were getting correct answers. Although, if you used it against a large enough dataset, you might get mostly correct guesses. Is this dangerous? If the system is being used to target gay people, then yes, of course. But the rest of the study suggests the program has even further limitations.

WHAT CAN COMPUTERS REALLY SEE THAT HUMANS CAN’T?

It’s also not clear what factors the facial recognition system is using to make its judgements. Kosinski and Wang’s hypothesis is that it’s primarily identifying structural differences: feminine features in the faces of gay men and masculine features in the faces of gay women. But it’s possible that the AI is being confused by other stimuli — like facial expressions in the photos.

This is particularly relevant because the images used in the study were taken from a dating website. As Greggor Mattson, a professor of sociology at Oberlin College, pointed out in a blog post, this means that the images themselves are biased, as they were selected specifically to attract someone of a certain sexual orientation. They almost certainly play up to our cultural expectations of how gay and straight people should look, and, to further narrow their applicability, all the subjects were white, with no inclusion of bisexual or self-identified trans individuals. If a straight male chooses the most stereotypically “manly” picture of himself for a dating site, it says more about what he thinks society wants from him than a link between the shape of his jaw and his sexual orientation.

To try and ensure their system was looking at facial structure only, Kosinski and Wang used software called VGG-Face, which encodes faces as strings of numbers and has been used for tasks like spotting celebrity lookalikes in paintings. This program, they write, allows them to “minimize the role [of] transient features” like lighting, pose, and facial expression.
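 
To make the embedding idea concrete, here is a rough sketch of how such a system is typically used, with a hypothetical embed() function standing in for a network like VGG-Face. It is an illustration of the general approach under those assumptions, not the authors' pipeline.

    # Illustrative only: `embed` is a hypothetical stand-in for a deep network
    # such as VGG-Face, which maps a face image to a fixed-length vector of
    # numbers; faces are then compared by the angle between those vectors.
    import numpy as np

    def embed(image: np.ndarray) -> np.ndarray:
        """Hypothetical stand-in for a real model; it returns a vector that
        depends only on the image so the example runs on its own."""
        rng = np.random.default_rng(int(image.sum()))
        return rng.standard_normal(4096)   # VGG-Face commonly yields 4,096-number descriptors

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Downstream classifiers see only these vectors, so whatever the vectors
    # happen to encode (pose, expression, lighting, makeup) can leak into the
    # final prediction.
    face_a = np.zeros((224, 224, 3), dtype=np.uint8)
    face_b = np.full((224, 224, 3), 255, dtype=np.uint8)
    print(cosine_similarity(embed(face_a), embed(face_b)))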

But researcher Tom White, who works on AI facial systems, says VGG-Face is actually very good at picking up on these elements. White pointed this out on Twitter, and explained to The Verge over email how he’d tested the software and used it to successfully distinguish between faces with expressions like “neutral” and “happy,” as well as poses and background color.

A figure from the paper showing the average faces of the participants, and the difference in facial structures that they identified between the two sets. 
Image: Kosinski and Wang

Speaking to The Verge, Kosinski says he and Wang have been explicit that things like facial hair and makeup could be a factor in the AI’s decision-making, but he maintains that facial structure is the most important. “If you look at the overall properties of VGG-Face, it tends to put very little weight on transient facial features,” Kosinski says. “We also provide evidence that non-transient facial features seem to be predictive of sexual orientation.”

The problem is, we can’t know for sure. Kosinski and Wang haven’t released the program they created or the pictures they used to train it. They do test their AI on other picture sources, to see if it’s identifying some factor common to all gay and straight people, but these tests were limited and also drew from a biased dataset — Facebook profile pictures from men who liked pages such as “I love being Gay,” and “Gay and Fabulous.”

Do men in these groups serve as reasonable proxies for all gay men? Probably not, and Kosinski says it’s possible his work is wrong. “Many more studies will need to be conducted to verify [this],” he says. But it’s tricky to say how one could completely eliminate selection bias to perform a conclusive test. Kosinski tells The Verge, “You don’t need to understand how the model works to test whether it’s correct or not.” However, it’s the acceptance of the opacity of algorithms that makes this sort of research so fraught.

IF AI CAN’T SHOW ITS WORKING, CAN WE TRUST IT?

AI researchers can’t fully explain why their machines do the things they do. It’s a challenge that runs through the entire field, and is sometimes referred to as the “black box” problem. Because of the methods used to train AI, these programs can’t show their work in the same way normal software does, although researchers are working to amend this.

In the meantime, it leads to all sorts of problems. A common one is that sexist and racist biases are captured from humans in the training data and reproduced by the AI. In the case of Kosinski and Wang’s work, the “black box” allows them to make a particular scientific leap of faith. Because they’re confident their system is primarily analyzing facial structures, they say their research shows that facial structures predict sexual orientation. (“Study 1a showed that facial features extracted by a [neural network] can be used to accurately identify the sexual orientation of both men and women.”)

Experts say this is a misleading claim that isn’t supported by the latest science. There may be a common cause for face shape and sexual orientation — the most probable cause is the balance of hormones in the womb — but that doesn’t mean face shape reliably predicts sexual orientation, says Qazi Rahman, an academic at King’s College London who studies the biology of sexual orientation. “Biology’s a little bit more nuanced than we often give it credit for,” he tells The Verge. “The issue here is the strength of the association.”

The idea that sexual orientation comes primarily from biology is itself controversial. Rahman, who believes that sexual orientation is mostly biological, praises Kosinski and Wang’s work. “It’s not junk science,” he says. “More like science someone doesn’t like.” But when it comes to predicting sexual orientation, he says there’s a whole package of “atypical gender behavior” that needs to be considered. “The issue for me is more that [the study] misses the point, and that’s behavior.”

Is there a gay gene? Or is sexuality equally shaped by society and culture?

Reducing the question of sexual orientation to a single, measurable factor in the body has a long and often inglorious history. As Mattson writes in his blog post, approaches have ranged from “19th century measurements of lesbians’ clitorises and homosexual men’s hips, to late 20th century claims to have discovered ‘gay genes,’ ‘gay brains,’ ‘gay ring fingers,’ ‘lesbian ears,’ and ‘gay scalp hair.’” The impact of this work is mixed, but at its worst it’s a tool of oppression: it gives people who want to dehumanize and persecute sexual minorities a “scientific” pretext.

Jenny Davis, a lecturer in sociology at the Australian National University, describes it as a form of biological essentialism. This is the belief that things like sexual orientation are rooted in the body. This approach, she says, is double-edged. On the one hand, it “does a useful political thing: detaching blame from same-sex desire. But on the other hand, it reinforces the devalued position of that kind of desire,” setting up heterosexuality as the norm and framing homosexuality as “less valuable … a sort of illness.”

And it’s when we consider Kosinski and Wang’s research in this context that AI-powered facial recognition takes on an even darker aspect — namely, say some critics, as part of a broader return of physiognomy, powered by AI.

YOUR CHARACTER, AS PLAIN AS THE NOSE ON YOUR FACE

For centuries, people have believed that the face holds the key to character. The notion has its roots in ancient Greece, but was particularly influential in the 19th century. Proponents of physiognomy suggested that by measuring things like the angle of someone’s forehead or the shape of their nose, they could determine if a person was honest or a criminal. Last year in China, AI researchers claimed they could do the same thing using facial recognition.

Their research, published as “Automated Inference on Criminality Using Face Images,” caused a minor uproar in the AI community. Scientists pointed out flaws in the study, and concluded that the work was replicating human prejudices about what constitutes a “mean” or a “nice” face. In a widely shared rebuttal titled “Physiognomy’s New Clothes,” Google researcher Blaise Agüera y Arcas and two co-authors wrote that we should expect “more research in the coming years that has similar … false claims to scientific objectivity in order to ‘launder’ human prejudice and discrimination.” (Google declined to make Agüera y Arcas available to comment on this report.)

An illustration of physiognomy from Giambattista della Porta’s De humana physiognomonia

Kosinski and Wang’s paper clearly acknowledges the dangers of physiognomy, noting that the practice “is now universally, and rightly, rejected as a mix of superstition and racism disguised as science.” But, they continue, just because a subject is “taboo,” doesn’t mean it has no basis in truth. They say that because humans are able to read characteristics like personality in other people’s faces with “low accuracy,” machines should be able to do the same but more accurately.

Kosinski says his research isn’t physiognomy because it’s using rigorous scientific methods, and his paper cites a number of studies showing that we can deduce (with varying accuracy) traits about people by looking at them. “I was educated and made to believe that it’s absolutely impossible that the face contains any information about your intimate traits, because physiognomy and phrenology were just pseudosciences,” he says. “But the fact that they were claiming things without any basis in fact, that they were making stuff up, doesn’t mean that this stuff is not real.” He agrees that physiognomy is not science, but says there may be truth in its basic concepts that computers can reveal.

For Davis, this sort of attitude comes from a widespread and mistaken belief in the neutrality and objectivity of AI. “Artificial intelligence is not in fact artificial,” she tells The Verge. “Machines learn like humans learn. We’re taught through culture and absorb the norms of social structure, and so does artificial intelligence. So it will re-create, amplify, and continue on the trajectories we’ve taught it, which are always going to reflect existing cultural norms.”

We’ve already created sexist and racist algorithms, and these sorts of cultural biases and physiognomy are really just two sides of the same coin: both rely on bad evidence to judge others. The work by the Chinese researchers is an extreme example, but it’s certainly not the only one. There’s at least one startup already active that claims it can spot terrorists and pedophiles using face recognition, and there are many others offering to analyze “emotional intelligence” and conduct AI-powered surveillance.

FACING UP TO WHAT’S COMING

But to return to the questions implied by those alarming headlines about Kosinski and Wang’s paper: is AI going to be used to persecute sexual minorities?

This system? No. A different one? Maybe.

Kosinski and Wang’s work is not invalid, but its results need serious qualifications and further testing. Without that, all we know about their system is that it can spot with some reliability the difference between self-identified gay and straight white people on one particular dating site. We don’t know that it’s spotted a biological difference common to all gay and straight people; we don’t know if it would work with a wider set of photos; and the work doesn’t show that sexual orientation can be deduced with nothing more than, say, a measurement of the jaw. It hasn’t decoded human sexuality any more than AI chatbots have decoded the art of a good conversation. (Nor do its authors make such a claim.)

Startup Faception claims it can identify how likely people are to be terrorists just by looking at their face. 
Image: Faception

The research was published to warn people, says Kosinski, but he admits it’s an “unavoidable paradox” that to do so you have to explain how you did what you did. All the tools used in the paper are available for anyone to find and put together themselves. Writing at the deep learning education site Fast.ai, researcher Jeremy Howard concludes: “It is probably reasonably [sic] to assume that many organizations have already completed similar projects, but without publishing them in the academic literature.”

We’ve already mentioned startups working on this tech, and it’s not hard to find regimes that would use it. In countries like Iran and Saudi Arabia, homosexuality is still punishable by death; in many other countries, being gay means being hounded, imprisoned, and tortured by the state. Recent reports have spoken of the opening of concentration camps for gay men in the Chechen Republic, so what if someone there decides to make their own AI gaydar, and scan profile pictures from Russian social media?

Here, it becomes clear that the accuracy of systems like Kosinski and Wang’s isn’t really the point. If people believe AI can be used to determine sexual preference, they will use it. With that in mind, it’s more important than ever that we understand the limitations of artificial intelligence, to try and neutralize dangers before they start impacting people. Before we teach machines our prejudices, we need to first teach ourselves.

Big Business Wins the Fight for DRM Standards for Video Streaming

Original Article

By Kate Conger

Photo: Getty

A fight over the future of video streaming has been brewing for years—and it finally came to a head today, with a major electronic privacy organization bowing out of the consortium that sets standards for the web.

The Electronic Frontier Foundation (EFF) resigned from the World Wide Web Consortium (W3C) today over the W3C’s freshly released recommendations on protecting copyright in streaming video. W3C, which is directed by the inventor of the World Wide Web, Tim Berners-Lee, should be a natural ally of the EFF—but the fight over protecting security researchers who uncover vulnerabilities in video streaming has driven a wedge between the two organizations.

“The whole problem that we have here is this is a super technical, relatively boring, unbelievably important issue. That’s such a horrific toxic cocktail,” Cory Doctorow, the EFF’s advisory committee representative to W3C, told Gizmodo. “The W3C is using its patent pool and moral authority to create a system that’s not about empowering users but controlling users.”

The dispute focuses on Digital Rights Management (DRM), which enables media companies to surveil their consumers and make sure they’re just binge-watching episodes of Game of Thrones, not binge-pirating. (Although DRM is most commonly found in video streaming platforms, it also makes appearances in everything from coffee machines to tractors.) DRM gets legal backing from the Digital Millennium Copyright Act (DMCA), which makes it a felony for security pros to find and disclose vulnerabilities in DRM.

DRM is usually managed by plugins like Adobe Flash or Microsoft Silverlight, but W3C’s recommendations make it possible for DRM to be managed by browsers. The EFF and other organizations wanted browsers that adopt the standard to agree to protect security researchers and not pursue them under the DMCA, but W3C didn’t make that part of the standard—pissing off a bunch of security professionals and open web advocates. It feels cynical and hypocritical for an organization founded on principles of openness to cave to the constraints of DRM and not stick up for researchers and users.

W3C normally makes decisions based on consensus, but switched to a majority-vote system because DRM was so divisive among its members, Doctorow said. CEO Jeff Jaffe called the dispute “one of the most divisive debates in the history of the W3C Community.”

“I know from my conversations that many people are not satisfied with the result,” Jaffe wrote of the recommendations. “And there is reason to respect those who want a better result. But my personal reflection is that we took the appropriate time to have a respectful debate about a complex set of issues and provide a result that will improve the web for its users.”

Doctorow told Gizmodo that he proposed a compromise to protect security researchers from prosecution, but that W3C rejected it. “We will stand down on our views on DRM but you have to promise that you’ll only use DRM law like the DMCA when there is some other cause of action like a copyright infringement,” he explained. That way, if researchers broke DRM only to expose a security flaw, they would be protected. But W3C members like Netflix weren’t interested in discussing a compromise, he said.

“The irony here is that Netflix only exists because they did and continue to do something that outraged the entertainment industry,” Doctorow explained. “The web should have the same standard that you guys had when you were starting. It should be legal to do things that are legal, and if that upsets you you should make a better product or convince Congress to stop it.”

Because of the changes to W3C rules, the EFF lost faith in the process. “We don’t think that there’s any use in throwing our donors’ money, our energy and our limited time at a process where we don’t think the other side carried themselves in good faith,” Doctorow said.

In an open letter explaining EFF’s decision to walk away from W3C, Doctorow wrote: “The business values of those outside the web got important enough, and the values of technologists who built it got disposable enough, that even the wise elders who make our standards voted for something they know to be a fool’s errand.”

In addition to the lack of protections for security research, EFF says the W3C recommendations harm efforts to automate making video accessible to people with disabilities and to archive the internet.

For their part, W3C members Netflix, Microsoft, Comcast, the Motion Picture Association of America, and the Recording Industry Association of America all praised the decision.

“Integration of DRM into web browsers delivers improved performance, battery life, reliability, security and privacy to users watching their favorite TV shows and movies on Netflix and other video services,” wrote Netflix in a statement. “We can finally say goodbye to third-party plugins, making for a safer and more reliable web.”

Hackers Already Weaponizing A.I.

Original Article

By George Dvorsky

Illustration: Sam Woolley/Gizmodo

Last year, two data scientists from security firm ZeroFOX conducted an experiment to see who was better at getting Twitter users to click on malicious links, humans or an artificial intelligence. The researchers taught an AI to study the behavior of social network users, and then design and implement its own phishing bait. In tests, the artificial hacker was substantially better than its human competitors, composing and distributing more phishing tweets than humans, and with a markedly better conversion rate.

The AI, named SNAP_R, sent simulated spear-phishing tweets to over 800 users at a rate of 6.75 tweets per minute, luring 275 victims. By contrast, Forbes staff writer Thomas Fox-Brewster, who participated in the experiment, was only able to pump out 1.075 tweets a minute, making just 129 attempts and luring in just 49 users.
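 
The scale advantage is easy to put in ratio terms using nothing but the figures above; a quick back-of-envelope comparison:

    # Back-of-envelope ratios from the reported figures (not ZeroFOX's code):
    ai_rate, human_rate       = 6.75, 1.075   # phishing tweets per minute
    ai_victims, human_victims = 275, 49
    print(f"tweet throughput: {ai_rate / human_rate:.1f}x the human rate")    # ~6.3x
    print(f"victims lured:    {ai_victims / human_victims:.1f}x as many")     # ~5.6x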

Human or bot? AI makes it tough to tell. (Image: ZeroFOX)

Thankfully this was just an experiment, but the exercise showed that hackers are already in a position to use AI for their nefarious ends. And in fact, they’re probably already using it, though it’s hard to prove. In July, at Black Hat USA 2017, hundreds of leading cybersecurity experts gathered in Las Vegas to discuss this issue and other looming threats posed by emerging technologies. In a Cylance poll held during the confab, attendees were asked if criminal hackers will use AI for offensive purposes in the coming year, to which 62 percent answered in the affirmative.

The era of artificial intelligence is upon us, yet if this informal Cylance poll is to be believed, a surprising number of infosec professionals are refusing to acknowledge the potential for AI to be weaponized by hackers in the immediate future. It’s a perplexing stance given that many of the cybersecurity experts we spoke to said machine intelligence is already being used by hackers, and that criminals are more sophisticated in their use of this emerging technology than many people realize.

“Hackers have been using artificial intelligence as a weapon for quite some time,” said Brian Wallace, Cylance Lead Security Data Scientist, in an interview with Gizmodo. “It makes total sense because hackers have a problem of scale, trying to attack as many people as they can, hitting as many targets as possible, and all the while trying to reduce risks to themselves. Artificial intelligence, and machine learning in particular, are perfect tools to be using on their end.” These tools, he says, can make decisions about what to attack, who to attack, when to attack, and so on.

Scales of intelligence

Marc Goodman, author of Future Crimes: Everything Is Connected, Everyone Is Vulnerable and What We Can Do About It, says he isn’t surprised that so many Black Hat attendees see weaponized AI as being imminent, as it’s been part of cyber attacks for years.

“What does strike me as a bit odd is that 62 percent of infosec professionals are making an AI prediction,” Goodman told Gizmodo. “AI is defined by many different people many different ways. So I’d want further clarity on specifically what they mean by AI.”

Indeed, this is likely where expert opinions diverge.

The funny thing about artificial intelligence is that our conception of it changes as time passes, and as our technologies increasingly match human intelligence in many important ways. At the most fundamental level, intelligence describes the ability of an agent, whether it be biological or mechanical, to solve complex problems. We possess many tools with this capability, and we have for quite some time, but we almost instantly start to take these tools for granted once they appear.

Centuries ago, for example, the prospect of a calculating machine that could crunch numbers millions of times faster than a human would’ve most certainly been considered a radical technological advance, yet few today would consider the lowly calculator anything particularly special. Similarly, the ability to win at chess was once considered a high mark of human intelligence, but ever since Deep Blue defeated Garry Kasparov in 1997, this cognitive skill has lost its former luster. And so on and so forth with each passing breakthrough in AI.

Today, rapid-fire developments in machine learning (whereby systems learn from data and improve with experience without being explicitly programmed), natural language processing, neural networks (systems modeled on the human brain), and many other fields are likewise lowering the bar on our perception of what constitutes machine intelligence. In a few years, artificial personal assistants (like Siri or Alexa), self-driving cars, and disease-diagnosing algorithms will likewise lose, unjustifiably, their AI allure. We’ll start to take these things for granted, and disparage these forms of AI for not being perfectly human. But make no mistake—modern tools like machine intelligence and neural networks are a form of artificial intelligence, and to believe otherwise is something we do at our own peril; if we dismiss or ignore the power of these tools, we may be blindsided by those who are eager to exploit AI’s full potential, hackers included.

A related problem is that the term artificial intelligence conjures futuristic visions and sci-fi fantasies that are far removed from our current realities.

“The term AI is often misconstrued, with many people thinking of Terminator robots trying to hunt down John Connor—but that’s not what AI is,” said Wallace. “Rather, it’s a broad topic of study around the creation of various forms of intelligence that happen to be artificial.”

Wallace says there are many different realms of AI, with machine learning being a particularly important subset of AI at the current moment.

“In our line of work, we use narrow machine learning—which is a form of AI—when trying to apply intelligence to a specific problem,” he told Gizmodo. “For instance, we use machine learning when trying to determine if a file or process is malicious or not. We’re not trying to create a system that would turn into SkyNet. Artificial intelligence isn’t always what the media and science fiction has depicted it as, and when we [infosec professionals] talk about AI, we’re talking about broad areas of study that are much simpler and far less terrifying.”

Evil intents

These modern tools may be less terrifying than clichéd Terminator visions, but in the hands of the wrong individuals, they can still be pretty scary.

Deepak Dutt, founder and CEO of Zighra, a mobile security startup, says there’s a high likelihood that sophisticated AI will be used for cyberattacks in the near future, and that it might already be in use by countries such as Russia, China, and some Eastern European countries. In terms of how AI could be used in nefarious ways, Dutt has no shortage of ideas.

“Artificial intelligence can be used to mine large amounts of public domain and social network data to extract personally identifiable information like date of birth, gender, location, telephone numbers, e-mail addresses, and so on, which can be used for hacking [a person’s] accounts,” Dutt told Gizmodo. “It can also be used to automatically monitor e-mails and text messages, and to create personalized phishing mails for social engineering attacks [phishing scams are an illicit attempt to obtain sensitive information from an unsuspecting user]. AI can be used for mutating malware and ransomware more easily, and to search more intelligently and dig out and exploit vulnerabilities in a system.”

Dutt suspects that AI is already being used for cyberattacks, and that criminals are already using some sort of machine learning capabilities, for example, by automatically creating personalized phishing e-mails.

“But what is new is the sophistication of AI in terms of new machine learning techniques like Deep Learning, which can be used to achieve the scenarios I just mentioned with a higher level of accuracy and efficiency,” he said. Deep Learning, also known as hierarchical learning, is a subfield of machine learning that utilizes large neural networks. It has been applied to computer vision, speech recognition, social network filtering, and many other complex tasks, often producing results superior to human experts.

“Also the availability of large amounts of social network and public data sets (Big Data) helps. Advanced machine learning and Deep Learning techniques and tools are easily available now on open source platforms—this combined with the relatively cheap computational infrastructure effectively enables cyberattacks with higher sophistication.”

These days, the overwhelming majority of cyberattacks are automated, according to Goodman. The human hacker going after an individual target is far rarer, and the more common approach now is to automate attacks with AI and machine learning tools—everything from scripted Distributed Denial of Service (DDoS) attacks to ransomware, criminal chatbots, and so on. While it can be argued that automation is fundamentally unintelligent (conversely, a case can be made that some forms of automation, particularly those involving large sets of complex tasks, are indeed a form of intelligence), it’s the prospect of a machine intelligence orchestrating these automated tasks that’s particularly alarming. An AI can produce complex and highly targeted scripts at a rate and level of sophistication far beyond any individual human hacker.

Indeed, the possibilities seem almost endless. In addition to the criminal activities already described, AIs could be used to target vulnerable populations, perform rapid-fire hacks, develop intelligent malware, and so on.

Staffan Truvé, Chief Technology Officer at Recorded Future, says that, as AI matures and becomes more of a commodity, the “bad guys,” as he puts it, will start using it to improve the performance of attacks, while also cutting costs. Unlike many of his colleagues, however, Truvé says that AI is not really being used by hackers at the moment, claiming that simpler algorithms (e.g. for self-modifying code) and automation schemes (e.g. to enable phishing schemes) are working just fine.

“I don’t think AI has quite yet become a standard part of the toolbox of the bad guys,” Truvé told Gizmodo. “I think the reason we haven’t seen more ‘AI’ in attacks already is that the traditional methods still work—if you get what you need from a good old fashioned brute force approach then why take the time and money to switch to something new?”

AI on AI

With AI now part of the modern hacker’s toolkit, defenders are having to come up with novel ways of defending vulnerable systems. Thankfully, security professionals have a rather potent and obvious countermeasure at their disposal, namely artificial intelligence itself. Trouble is, this is bound to produce an arms race between the rival camps. Neither side really has a choice, as the only way to counter the other is to increasingly rely on intelligent systems.

“For security experts, this is a Big Data problem—we’re dealing with tons of data—more than a single human could possibly produce,” said Wallace. “Once you’ve started to deal with an adversary, you have no choice but to use weaponized AI yourself.”

To stay ahead of the curve, Wallace recommends that security firms conduct their own internal research, and develop their own weaponized AI to fight and test their defenses. He calls it “an iron sharpens iron” approach to computer security. The Pentagon’s advanced research wing, DARPA, has already adopted this approach, organizing grand challenges in which AI developers pit their creations against each other in a virtual game of Capture the Flag. The process is very Darwinian, and reminiscent of yet another approach to AI development—evolutionary algorithms. For hackers and infosec professionals, it’s survival of the fittest AI.

Goodman agrees, saying “we will out of necessity” be using increasing amounts of AI “for everything from fraud detection to countering cyberattacks.” And in fact, several start-ups are already doing this, partnering with IBM Watson to combat cyber threats, says Goodman.

“AI techniques are being used today by defenders to look for patterns—the antivirus companies have been doing this for decades—and to do anomaly detection as a way to automatically detect if a system has been attacked and compromised,” said Truvé.

At his company, Recorded Future, Truvé is using AI techniques to do natural language processing to, for example, automatically detect when an attack is being planned and discussed on criminal forums, and to predict future threats.

“Bad guys [with AI] will continue to use the same attack vectors as today, only in a more efficient manner, and therefore the AI based defense mechanisms being developed now will to a large extent be possible to also use against AI based attacks,” he said.

Dutt recommends that infosec teams continuously monitor the cyber attack activities of hackers and learn from them, continuously “innovate with a combination of supervised and unsupervised learning based defense strategies to detect and thwart attacks at the first sign,” and, like in any war, adopt superior defenses and strategy.

The bystander effect

So our brave new world of AI-enabled hacking awaits, with criminals becoming increasingly capable of targeting vulnerable users and systems. Computer security firms will likewise lean on AI in a never-ending effort to keep up. Eventually, these tools will escape human comprehension and control, working at lightning-fast speeds in an emerging digital ecosystem. It’ll get to a point where both hackers and infosec professionals have no choice but to hit the “go” button on their respective systems, and simply hope for the best. A consequence of AI is that humans are increasingly being kept out of the loop.


143 Million People Could Be Affected In Giant Equifax Data Breach

Original Article

By Sara Ashley O’Brien

NEW YORK (CNNMoney) – Equifax says a giant cybersecurity breach compromised the personal information of as many as 143 million Americans — almost half the country.

Cyber criminals have accessed sensitive information — including names, social security numbers, birth dates, addresses, and the numbers of some driver’s licenses.

Additionally, Equifax said that credit card numbers for about 209,000 U.S. customers were exposed, as was “personal identifying information” on roughly 182,000 U.S. customers involved in credit report disputes. Residents in the U.K. and Canada were also impacted.

The breach occurred between mid-May and July, Equifax said. The company said it discovered the hack on July 29.

The data breach is one of the worst ever, by its reach and by the kind of information exposed to the public.

“This is clearly a disappointing event for our company, and one that strikes at the heart of who we are and what we do,” said Equifax chairman and CEO Richard F. Smith.

Equifax is one of three nationwide credit-reporting companies that track and rate the financial history of U.S. consumers. The companies are supplied with data about loans, loan payments and credit cards, as well as information on everything from child support payments, credit limits, missed rent and utilities payments, addresses and employer history, which all factor into credit scores.

Unlike other data breaches, not all of the people affected by the Equifax breach may be aware that they’re customers of the company. Equifax gets its data from credit card companies, banks, retailers, and lenders who report on the credit activity of individuals to credit reporting agencies, as well as by purchasing public records.

Equifax is mailing notices to people whose credit cards or dispute documents were affected.

It also says that consumers can check to see if they’ve potentially been impacted by submitting their name and the last six digits of their social security number. Users are given a date when they will be enrolled in free identity theft protection and credit file monitoring services. Equifax did not immediately reply to CNN Tech’s request for more information about the process.

“This is reason Number 10,000 to check your online bank statements and credit card statements on a regular basis, ideally weekly,” said Matt Schulz, senior industry analyst at CreditCards.com. “Bad guys can be very patient, so it’s important to keep an eye out long after this story fades from the headlines.”