Last year, researchers from the Oxford Internet Institute at the University of Oxford undertook a study to investigate what percentage of gamers are addicted to video games.
The study, published in the American Journal of Psychiatry, found that only 2 to 3 per cent of the 19,000 men and women surveyed in the UK, the US, Canada and Germany reported experiencing five or more symptoms from the American Psychiatric Association’s checklist.
A few years ago, the APA created a list of nine standard symptoms that could determine “internet gaming disorder”. These symptoms include anxiety, withdrawal symptoms and antisocial behaviour.
Dr Andrew Przybylski, lead author from the University of Oxford study, discussed their findings.
“To our knowledge, these are the first findings from a large-scale project to produce robust evidence on the potential new problem of ‘internet gaming disorder,’” he said.
“Contrary to what was predicted, the study did not find a clear link between potential addiction and negative effects on health; however, more research grounded in open and robust scientific practices is needed to learn if games are truly as addictive as many fear.”
While some may debate whether gaming does pose a threat to mental health, the amount of time many people spend playing video games is astounding.
When researchers from ESET polled 500 gamers, they discovered that 10 per cent admitted to spending between 12 and 24 hours glued to their video game screens.
Back in June, an image generator that could turn even the crudest doodle of a face into a more realistic-looking image made the rounds online. That system used a fairly new type of algorithm called a generative adversarial network (GAN) to produce its AI-created faces, and now, chipmaker NVIDIA has developed a system that employs a GAN to create far more realistic-looking images of people.
Artificial neural networks are systems developed to mimic the activity of neurons in the human brain. In a GAN, two neural networks are essentially pitted against one another. One of the networks functions as a generative algorithm, while the other challenges the results of the first, playing an adversarial role.
As part of their expanded applications for artificial intelligence, NVIDIA created a GAN that used the CelebA-HQ database of photos of famous people to generate images of people who don’t actually exist. The idea was that the AI-created faces would look more realistic if two networks worked against each other to produce them.
First, the generative network would create an image at a lower resolution. Then, the discriminator network would assess the work. As the system progressed, the programmers added new layers dealing with higher-resolution details until the GAN finally generated images of “unprecedented quality,” according to the NVIDIA team’s paper.
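The adversarial setup can be illustrated with a deliberately tiny toy, far removed from NVIDIA’s progressive, high-resolution system: a one-parameter “generator” tries to match a target distribution while a logistic “discriminator” is trained to tell its samples from real ones. Every name and number below is invented for illustration.

```python
import math
import random

random.seed(0)

REAL_MEAN = 5.0  # "real" data: samples from N(5, 1)

def sample_real():
    return random.gauss(REAL_MEAN, 1.0)

def sample_fake(theta):
    # Generator: a single learned parameter shifting unit Gaussian noise.
    return random.gauss(theta, 1.0)

def discriminate(x, w, b):
    # Logistic score: estimated probability that x is a real sample.
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

theta, w, b = 0.0, 0.0, 0.0
lr = 0.05
for _ in range(3000):
    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    xr, xf = sample_real(), sample_fake(theta)
    pr, pf = discriminate(xr, w, b), discriminate(xf, w, b)
    w += lr * ((1 - pr) * xr - pf * xf)
    b += lr * ((1 - pr) - pf)
    # Generator step: ascend log D(fake), i.e. move theta so that
    # fakes look real to the current discriminator.
    pf = discriminate(sample_fake(theta), w, b)
    theta += lr * (1 - pf) * w

print(round(theta, 1))  # theta should drift toward the real mean, 5.0
```

The same two-player structure scales up to image generation, where the generator and discriminator are deep networks and the "parameter" is millions of weights; NVIDIA's contribution was growing both networks layer by layer as resolution increases.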
Human or Machine?
NVIDIA released a video of their GAN in action, and the AI-created faces are both absolutely remarkable and incredibly eerie. If the average person didn’t know the faces were machine-generated, they could easily believe they belonged to living people.
Indeed, this blurring of the line between the human and the machine-generated is a topic of much discussion within the realm of AI, and NVIDIA’s GAN isn’t the first artificial system to convincingly mimic something human.
Even more disturbing or fascinating — depending on your perspective on the AI debate — are AI robots that can supposedly understand and express human emotion. Examples of those include Hanson Robotics’ Sophia and SoftBank’s Pepper.
Clearly, an age of smarter machines is upon us, and as the ability of AI to perform tasks previously reserved for human beings improves, the line between human and machine will continue to blur. Now, the only question is whether it will eventually disappear altogether.
A robot has just been granted citizenship — by Saudi Arabia. The robot named Sophia was confirmed as a Saudi citizen during a business event in Riyadh, according to an official Saudi press release.
The move is an attempt to promote Saudi Arabia as a place to develop artificial intelligence — and, presumably, to allow Sophia to become a full citizen, according to The Independent.
“We have a little announcement. We just learnt, Sophia; I hope you are listening to me, you have been awarded the first Saudi citizenship for a robot,” said panel moderator and business writer Andrew Ross Sorkin.
Basking in the attention, the robot then thanked the country. “Thank you to the Kingdom of Saudi Arabia. I am very honoured and proud for this unique distinction,” Sophia told the panel. “It is historic to be the first robot in the world to be recognised with citizenship.”
Sorkin later put a series of questions to Sophia, who had introduced herself to the panel: “Good afternoon, my name is Sophia and I am the latest and greatest robot from Hanson Robotics. Thank you for having me here at the Future Investment Initiative.”
London’s famous Piccadilly Circus is getting an immense and terrifying new video display called Piccadilly Lights. According to its maker, the enormous screen (which is almost the size of two professional basketball courts) can detect the vehicles, ages, and even emotions of people nearby, and respond by playing targeted ads. Imagine New York’s Times Square with a makeover from John Carpenter’s They Live—but without any pretense of deception.
“Screen content can be influenced by the characteristics of the crowd around it, such as gender, age group and even emotions,” Landsec, which owns the screen, brags on its site. “It is also able to respond and deliver bespoke ad content triggered by surroundings in the area.”
A write-up of Piccadilly Lights by Wired specifically focusses on the advertising potential of passing cars:
Cameras concealed within the screen will track the make, model and colour of passing cars to deliver more targeted adverts. Brands can even pre-program triggers so that specific adverts are played when a certain model of car passes the screen, according to Landsec, the company that owns the screens.
According to the magazine, the screen and its hidden cameras won’t go live until later this month, but Landsec’s original press release contains more than enough dystopian marketing spin to start worrying now. In it, Piccadilly Lights is praised as a “live, responsive site” with “one of the highest resolution LED displays of this size in the world.” The hidden cameras go unmentioned, of course, but the installation is advertised as “creating experiences that emotionally resonate” using “social listening” so it can “be more agile and tailor our messages in real-time.”
Make no mistake, however, this is an enormous consumer surveillance apparatus that is being advertised as a way to monitor a public space to sell people TVs and sports bras. Adding to the creep factor, most of this tech is already being used by police to track and surveil suspects.
Responding to The Verge, a Landsec spokesperson said the screen can react to “external factors,” but wouldn’t collect or store personal data. That’s reassuring, but it would certainly be valuable to advertisers (who are shelling out big money to be featured on this uber-screen) to know which ads people are responding to and what type of people (based on age, gender, and car model) responded to each ad.
Landsec gives the examples of cars, age and gender, but what else can their cameras spot? Presumably, if there are four Lamborghinis in the area, that means rich people with disposable income are nearby. Can the apparatus make similar income and lifestyle judgements based on factors like skin color and body type? Imagine realizing the 400-foot ad for a dieting campaign was meant specifically for you.
Emotion recognition is the wildcard in all this. Disney, for example, is using face recognition to spot smiles and frowns among moviegoers. How does Landsec do it? Does it similarly scan faces? Or does it use body language? What do four angry faces and a smile mean to the all-seeing eye of capitalism? Landsec could save us all some stress and tell us more about how it works and what it looks for.
We’ve reached out to Landsec for comment and will update this story if and when we hear back. Until then, it’s easy to see this as just another step in surveillance capitalism’s death march to tracking every move we make.
When Silicon Valley’s 20-something techno-prodigies were awing the world with new, shiny unveilings of iPods and then iPhones and then iPads, many of the inventors didn’t have kids. Few had teens. Now, most of them have kids, and many have teens — teenagers addicted to gadgets their parents birthed into the world years ago.
This is the story of Tony Fadell, a former Senior VP at Apple, known as the grandfather of the iPod and a key player on the early design team for the iPhone. In an interview on the 10-year anniversary of the iPhone, he made this admission: “I wake up in cold sweats every so often thinking, what did we bring to the world?”
Fadell, a father of three, has come to see the addictive power of the iPhone, an addiction that cannot be removed. “I know what happens when I take technology away from my kids. They literally feel like you’re tearing a piece of their person away from them — they get emotional about it, very emotional. They go through withdrawal for two to three days.”
“This self-absorbing culture is starting to [really stink],” Fadell said. “Parents didn’t know what to do. They didn’t know this was a thing they needed to teach because we didn’t know for ourselves. We all kind of got absorbed in it.”
Yes — we all got absorbed — techies and teens and parents. All of us. And now we’re trying to figure out how to wisely manage our devices.
Teens, Smartphones, and Depression
Digital absorption has coincided with the fast-changing dynamics of public high school life. Last winter, I asked an assistant principal at a large Twin Cities high school (of more than 2,000 students) how her job has changed over the past two decades.
Much remains the same, she said. “But the one thing that has changed drastically in working with teenagers for over twenty years is the dependency they have now on the instant gratification and feedback from others. How many likes do I have? How many followers? And there’s a compulsion to put something online to see how many likes I can get. And if that wasn’t enough, what does it say about me?”
“There’s a really strong connection to this behavior and the increased mental health issues we’re seeing in the school,” she said. “Over the past three-to-five years I would say my job has changed the most, because we’re now dealing with so much more mental health. I don’t think it’s singularly because of technology, but I genuinely believe digital technology is a major factor. It changes everything from the way people relate with others to the way they see themselves.”
Destroying a Generation?
The cold sweats of Fadell and the eyewitness testimony of this assistant principal are captured in the haunting headline over a recent feature article published in The Atlantic, “Have Smartphones Destroyed a Generation?”
iGen is the new label for those roughly 12-to-22-year-olds, born between 1995 and 2005. Among them, the warning signs are prevalent. “Rates of teen depression and suicide have skyrocketed since 2011,” wrote author Jean Twenge of the struggles faced by the iGen-ers. “It’s not an exaggeration to describe iGen as being on the brink of the worst mental-health crisis in decades. Much of this deterioration can be traced to their phones.
“The more time teens spend looking at screens, the more likely they are to report symptoms of depression,” and, “girls have borne the brunt of the rise in depressive symptoms among today’s teens.” Twenge cites sources showing that depression is on the rise among both boys and girls. For boys, depressive symptoms rose 21 percent between 2012 and 2015; in the same span, rates among girls increased by 50 percent. The rates of suicide for both increased, too: male suicides doubled, and female suicides increased threefold.
From what I know about these spikes in depression, and what I have discovered about the allure of our devices, what we are addressing here are existential questions about the meaning of life and acceptance from others — massive questions, weighing heavy on a young generation. These are redemptive questions, identity questions, gospel issues.
Digital media force a teen and preteen into the 24-7 pressure cooker of peer approval. But it’s not just teens; all of us feel this addictive draw of our social media. Smartphones seem to influence us all in at least 12 potent ways.
But the question here is pretty straightforward: Given these warning signs, is it possible for a teen to resist the powers of culture and go smartphone-free through the middle school and high school years?
Jaquelle, thanks for your time to share your experience. Studies are beginning to suggest that rates of teen depression are on the rise, and no single factor can take all the blame. But the pervasiveness of smartphones among iGen teens has to be considered a significant cause. Would this connection surprise you?
Absolutely not. Smartphones contribute significantly to the 24-7 approval culture we live in. There’s no escaping it. This is something our parents don’t always understand, because when they were teenagers, that culture was largely limited to the 9–3 school day, and then they retreated to the boredom of family life.
But now there’s 24-7 social media. There’s a constant comparison and peer approval game that cannot be escaped. And it’s crippling, exhausting, and undeniably stressful. You can’t get away from the likes, the shares, the texts, the pictures. It’s like the popularity contest never ends. And it works both ways. Your smartphone gives you a front-row seat to watch the popularity contest, too.
That is a powerful dynamic, hard to escape the popularity culture on both fronts (feeding it and watching it play out). You did not get a smartphone until you were 18, but you had friends with smartphones, right?
Yes, I did, and I was well aware that most of my peers had access to something I didn’t. I could name every friend who had a phone, simply because I would see their phone. If Alison got a phone, I knew about it. If Jared got a phone, I knew about it. Not because they flaunted it or shamed me, but because it was always around. Even if we were talking together, it would buzz or ping or they’d be fidgeting with it. If there was a pause, a moment of silence, a break, they’d be on their phones, and I’d be left in the lingering awkwardness and boredom.
It definitely fed my FOMO (fear of missing out). It fed into some insecurity. Even though my friends never made me feel weird for not having a smartphone, it was an expectation, so they were surprised when they discovered I didn’t have one. There were times when I was the outlier. And not only with friends but also with my generation at large. I’d be walking through the mall or waiting in line or stopped on the sidewalk, and I would look around, fully present and disconnected — and stare at a sea of teens glued to smartphones. I was an exception, and that felt uncomfortable.
At times, I felt lonely — even if I was surrounded by people. They were constantly connected and I was isolated. I felt confined by my lack of access. At the same time, those feelings were largely emotional and visceral because I agreed theoretically with my parents — that I didn’t need a phone right then.
I applaud your parents for this foresight and conviction. Most parents, I fear, simply cave to the pressure, as their teen caves to the pressure — a domino effect of pressures, and certainly one I feel as a parent. But it’s worth giving this decision critical thought, because introducing a fully functioning smartphone is a decision that cannot easily be undone. For you, how much trust does this call for on the part of a teen, to wait? It seems like you have to trust your parents more than your peers, and that’s a main struggle of the teen years.
It calls for trust, definitely. And connected to that, a willingness to submit and obey. Ultimately, it requires a recognition that your parents are actually looking out for your best interests — emotionally, mentally, spiritually, and physically — and that they know you better than your peers do.
The thing is, deep down, most teens know that. They just push back because not owning a smartphone makes them feel ashamed.
I assume you had access to a phone of some sort?
Yes. If I was going out, I’d often borrow my mom’s flip phone for emergencies. I almost never used it.
That’s wise. As for digital media, what did you have access to before the smartphone?
I had a computer, I had email, I had access to some social media. I technically could do everything from home. But in a digital world with an expanding reach, that still somehow seemed limited.
For sure. Speaking as a 20-year-old now, what would you say to parents who are weighing the pros/cons and reading all the news and the testimonies of parents of teens, and who are coming to the conclusion that delaying the smartphone in the life of their teen would be wise? What kind of pushback should they expect to hear from their teen?
To parents, I’d say: It is worth it to have your kids wait. I’ve seen it and heard it and can attest to it since I got my own smartphone — smartphones change you. They give you overwhelming and shocking access. They zap your attention span. They are massively addictive. You can (and should!) put up safeguards, but a smartphone fundamentally changes your heart and mind. If it’s possible for teens to delay that change, I think it is a wise consideration.
Teach your teens discipline and discernment before you entrust them with the dangers of a smartphone. Of course, smartphones are not inherently evil; they have the potential for great good. But they need to be wielded well.
If you’re making your teen wait, don’t delegitimize the painful exclusion they’ll feel but use this time to prepare them to use technology wisely and faithfully. In the hands of unprepared, immature teens, smartphones can be deadly.
As for pushback that a parent is sure to hear, teens will feel left out. That might make them frustrated, confused, lonely, or hurt, and if they lash out, that’s why. They might feel like they’re separated from their friends. They might feel the pain of peer pressure. They might fear missing out. They might even have some legitimate concerns (e.g., having a phone with them when they’re out by themselves).
Parents, in the face of this pushback, be willing to explain your reasoning. When your teens ask you, “Why can’t I have a smartphone?” they really don’t want you to say, “Because I told you so.” Even if they don’t agree with it, they will likely respect your willingness to reason with them and the depth of critical thought you’ve put into this.
Share your research with them. Introduce them to other teens (in person or online) who don’t have smartphones. Instead of treating them like a child (just saying, “No” and moving on), pursue thoughtful, honest dialogue with them. Allow them to keep the conversation going, and be willing to do the hard work of communication for the greater good of your relationship.
Very good. And perhaps we can close with what you would say directly to the teens in this scenario. What should they expect to face by way of internal and peer struggle?
To the teens who make this countercultural move: you are outliers in your generation. Obedience in life requires laying aside every clinging weight that will trip you up in the Christian life (Hebrews 12:1). I can only encourage you to hold fast. It comes down to this. Hold fast.
Jesus is better than a smartphone. You will rehearse this truth over and over in your heart.
And when you feel burdened by exclusion and isolation, don’t despair. Your identity is not in fitting in or meeting superficial expectations. It’s in Christ alone. And he gives you one task: be faithful. Right now, that looks like obeying your parents and trusting their good intentions for you — and that may mean not having a smartphone for a time.
Don’t run from this reality in shame; embrace it in faith. Your joy is not found in cultural connectivity; it’s found in union with Christ. So hold fast, and be faithful. Your reward is coming and it is far greater than any loss you will feel in this life.
Two weeks ago, a pair of researchers from Stanford University made a startling claim. Using hundreds of thousands of images taken from a dating website, they said they had trained a facial recognition system that could identify whether someone was straight or gay just by looking at them. The work was first covered by The Economist, and other publications soon followed suit, with headlines like “New AI can guess whether you’re gay or straight from a photograph” and “AI Can Tell If You’re Gay From a Photo, and It’s Terrifying.”
As you might have guessed, it’s not as straightforward as that. (And to be clear, based on this work alone, AI can’t tell whether someone is gay or straight from a photo.) But the research captures common fears about artificial intelligence: that it will open up new avenues for surveillance and control, and could be particularly harmful for marginalized people. One of the paper’s authors, Dr Michal Kosinski, says his intent is to sound the alarm about the dangers of AI, and warns that facial recognition will soon be able to identify not only someone’s sexual orientation, but their political views, criminality, and even their IQ.
With statements like these, some worry we’re reviving an old belief with a bad history: that you can intuit character from appearance. This pseudoscience, physiognomy, was fuel for the scientific racism of the 19th and 20th centuries, and gave moral cover to some of humanity’s worst impulses: to demonize, condemn, and exterminate fellow humans. Critics of Kosinski’s work accuse him of replacing the calipers of the 19th century with the neural networks of the 21st, while the professor himself says he is horrified by his findings, and happy to be proved wrong. “It’s a controversial and upsetting subject, and it’s also upsetting to us,” he tells The Verge.
But is it possible that pseudoscience is sneaking back into the world, disguised in new garb thanks to AI? Some people say machines are simply able to read more about us than we can ourselves, but what if we’re training them to carry out our prejudices, and, in doing so, giving new life to old ideas we rightly dismissed? How are we going to know the difference?
CAN AI REALLY SPOT SEXUAL ORIENTATION?
First, we need to look at the study at the heart of the recent debate, written by Kosinski and his co-author Yilun Wang. Its results have been poorly reported, with a lot of the hype coming from misrepresentations of the system’s accuracy. The paper states: “Given a single facial image, [the software] could correctly distinguish between gay and heterosexual men in 81 percent of cases, and in 71 percent of cases for women.” These rates increase when the system is given five pictures of an individual: up to 91 percent for men, and 83 percent for women.
On the face of it, this sounds like “AI can tell if a man is gay or straight 81 percent of the time by looking at his photo.” (Thus the headlines.) But that’s not what the figures mean. The AI wasn’t 81 percent correct when being shown random photos: it was tested on a pair of photos, one of a gay person and one of a straight person, and then asked which individual was more likely to be gay. It guessed right 81 percent of the time for men and 71 percent of the time for women, but the structure of the test means it started with a baseline of 50 percent — that’s what it’d get guessing at random. And although it was significantly better than that, the results aren’t the same as saying it can identify anyone’s sexual orientation 81 percent of the time.
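The gap between a forced-choice pair test and per-photo labeling is easy to demonstrate with a toy simulation. The score distributions below are hypothetical, chosen only to illustrate the statistic; none of the numbers come from the paper itself.

```python
import random

random.seed(1)

# Hypothetical classifier scores: two heavily overlapping Gaussians.
def score(is_gay):
    return random.gauss(1.2 if is_gay else 0.0, 1.0)

# The reported test: shown one gay and one straight photo, the system
# picks whichever scores higher. Random guessing is right 50% of the
# time, so 50% is the floor, not 0%.
trials = 100_000
wins = sum(score(True) > score(False) for _ in range(trials))
print(f"pairwise accuracy: {wins / trials:.0%}")

# The same scores used to label photos one at a time, thresholded at
# the midpoint between the two distributions, are right less often.
half = trials // 2
correct = sum(score(True) > 0.6 for _ in range(half)) \
    + sum(score(False) <= 0.6 for _ in range(half))
print(f"single-photo accuracy: {correct / trials:.0%}")
```

With these invented distributions the pairwise accuracy comes out around 80 percent, close to the paper's headline figure, even though labeling individual photos with the same scores is noticeably less accurate; the pairwise number measures relative ranking, not identification.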
As Philip Cohen, a sociologist at the University of Maryland who wrote a blog post critiquing the paper, told The Verge: “People are scared of a situation where you have a private life and your sexual orientation isn’t known, and you go to an airport or a sporting event and a computer scans the crowd and identifies whether you’re gay or straight. But there’s just not much evidence this technology can do that.”
Kosinski and Wang make this clear themselves toward the end of the paper when they test their system against 1,000 photographs instead of two. They ask the AI to pick out who is most likely to be gay in a dataset in which 7 percent of the photo subjects are gay, roughly reflecting the proportion of straight and gay men in the US population. When asked to select the 100 individuals most likely to be gay, the system finds only 47 of the 70 gay subjects; the other 53 people it flags are straight men incorrectly identified as gay. And when asked to identify a top 10, nine are right.
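Restated in standard classification terms, the 1,000-photo test is a precision and recall calculation; the figures below come directly from the numbers reported above.

```python
# Figures reported for the paper's 1,000-photo test.
population = 1000
gay_subjects = 70          # 7 percent base rate
flagged = 100              # the system's "top 100 most likely" picks
true_positives = 47        # picks that were actually gay

precision = true_positives / flagged        # 0.47
recall = true_positives / gay_subjects      # about 0.67
false_positives = flagged - true_positives  # 53 straight men flagged

print(precision, round(recall, 2), false_positives)
```

In other words, under a realistic base rate more than half of the system's confident picks are wrong, which is exactly the low-prevalence problem that makes crowd-scanning scenarios so unreliable.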
If you were a bad actor trying to use this system to identify gay people, you couldn’t know for sure you were getting correct answers. Although, if you used it against a large enough dataset, you might get mostly correct guesses. Is this dangerous? If the system is being used to target gay people, then yes, of course. But the rest of the study suggests the program has even further limitations.
WHAT CAN COMPUTERS REALLY SEE THAT HUMANS CAN’T?
It’s also not clear what factors the facial recognition system is using to make its judgements. Kosinski and Wang’s hypothesis is that it’s primarily identifying structural differences: feminine features in the faces of gay men and masculine features in the faces of gay women. But it’s possible that the AI is being confused by other stimuli — like facial expressions in the photos.
This is particularly relevant because the images used in the study were taken from a dating website. As Greggor Mattson, a professor of sociology at Oberlin College, pointed out in a blog post, this means that the images themselves are biased, as they were selected specifically to attract someone of a certain sexual orientation. They almost certainly play up to our cultural expectations of how gay and straight people should look, and, to further narrow their applicability, all the subjects were white, with no inclusion of bisexual or self-identified trans individuals. If a straight male chooses the most stereotypically “manly” picture of himself for a dating site, it says more about what he thinks society wants from him than a link between the shape of his jaw and his sexual orientation.
To try and ensure their system was looking at facial structure only, Kosinski and Wang used software called VGG-Face, which encodes faces as strings of numbers and has been used for tasks like spotting celebrity lookalikes in paintings. This program, they write, allows them to “minimize the role [of] transient features” like lighting, pose, and facial expression.
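The pipeline the paper describes, a fixed face-embedding network with a simple classifier trained on top, can be sketched with stand-in data. Everything here is illustrative: the "embeddings" are random clusters, and a perceptron stands in for the logistic regression the paper actually used.

```python
import random

random.seed(2)

# Hypothetical stand-in for VGG-Face: each face becomes a fixed-length
# vector of numbers (a real embedding has thousands of dimensions).
DIM = 8

def fake_embedding(label):
    # Two slightly separated, heavily overlapping clusters; invented data.
    center = 0.3 if label else -0.3
    return [random.gauss(center, 1.0) for _ in range(DIM)]

data = [(fake_embedding(y), y) for y in [1] * 200 + [0] * 200]

def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

# A linear classifier trained on top of the frozen embedding with
# perceptron updates (the paper itself used logistic regression).
w = [0.0] * DIM
for _ in range(20):
    random.shuffle(data)
    for x, y in data:
        if predict(w, x) != y:
            sign = 1 if y == 1 else -1
            w = [wi + 0.1 * sign * xi for wi, xi in zip(w, x)]

accuracy = sum(predict(w, x) == y for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.0%}")
```

The crucial point for the debate that follows is that the classifier sees only the embedding vector: whatever signal the embedding happens to encode, whether bone structure, expression, or lighting, is what gets learned.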
But researcher Tom White, who works on AI facial systems, says VGG-Face is actually very good at picking up on these elements. White pointed this out on Twitter and explained to The Verge over email how he’d tested the software and used it to successfully distinguish between faces with expressions like “neutral” and “happy,” as well as poses and background color.
Speaking to The Verge, Kosinski says he and Wang have been explicit that things like facial hair and makeup could be a factor in the AI’s decision-making, but he maintains that facial structure is the most important. “If you look at the overall properties of VGG-Face, it tends to put very little weight on transient facial features,” Kosinski says. “We also provide evidence that non-transient facial features seem to be predictive of sexual orientation.”
The problem is, we can’t know for sure. Kosinski and Wang haven’t released the program they created or the pictures they used to train it. They do test their AI on other picture sources, to see if it’s identifying some factor common to all gay and straight people, but these tests were limited and also drew from a biased dataset — Facebook profile pictures from men who liked pages such as “I love being Gay” and “Gay and Fabulous.”
Do men in these groups serve as reasonable proxies for all gay men? Probably not, and Kosinski says it’s possible his work is wrong. “Many more studies will need to be conducted to verify [this],” he says. But it’s tricky to say how one could completely eliminate selection bias to perform a conclusive test. Kosinski tells The Verge, “You don’t need to understand how the model works to test whether it’s correct or not.” However, it’s the acceptance of the opacity of algorithms that makes this sort of research so fraught.
IF AI CAN’T SHOW ITS WORKING, CAN WE TRUST IT?
AI researchers can’t fully explain why their machines do the things they do. It’s a challenge that runs through the entire field, and is sometimes referred to as the “black box” problem. Because of the methods used to train AI, these programs can’t show their work in the same way normal software does, although researchers are working to amend this.
In the meantime, it leads to all sorts of problems. A common one is that sexist and racist biases are captured from humans in the training data and reproduced by the AI. In the case of Kosinski and Wang’s work, the “black box” allows them to make a particular scientific leap of faith. Because they’re confident their system is primarily analyzing facial structures, they say their research shows that facial structures predict sexual orientation. (“Study 1a showed that facial features extracted by a [neural network] can be used to accurately identify the sexual orientation of both men and women.”)
Experts say this is a misleading claim that isn’t supported by the latest science. There may be a common cause for face shape and sexual orientation — the most probable cause is the balance of hormones in the womb — but that doesn’t mean face shape reliably predicts sexual orientation, says Qazi Rahman, an academic at King’s College London who studies the biology of sexual orientation. “Biology’s a little bit more nuanced than we often give it credit for,” he tells The Verge. “The issue here is the strength of the association.”
The idea that sexual orientation comes primarily from biology is itself controversial. Rahman, who believes that sexual orientation is mostly biological, praises Kosinski and Wang’s work. “It’s not junk science,” he says. “More like science someone doesn’t like.” But when it comes to predicting sexual orientation, he says there’s a whole package of “atypical gender behavior” that needs to be considered. “The issue for me is more that [the study] misses the point, and that’s behavior.”
Reducing the question of sexual orientation to a single, measurable factor in the body has a long and often inglorious history. As Mattson writes in his blog post, approaches have ranged from “19th century measurements of lesbians’ clitorises and homosexual men’s hips, to late 20th century claims to have discovered ‘gay genes,’ ‘gay brains,’ ‘gay ring fingers,’ ‘lesbian ears,’ and ‘gay scalp hair.’” The impact of this work is mixed, but at its worst it’s a tool of oppression: it gives people who want to dehumanize and persecute sexual minorities a “scientific” pretext.
Jenny Davis, a lecturer in sociology at the Australian National University, describes it as a form of biological essentialism. This is the belief that things like sexual orientation are rooted in the body. This approach, she says, is double-edged. On the one hand, it “does a useful political thing: detaching blame from same-sex desire. But on the other hand, it reinforces the devalued position of that kind of desire,” setting up heterosexuality as the norm and framing homosexuality as “less valuable … a sort of illness.”
And it’s when we consider Kosinski and Wang’s research in this context that AI-powered facial recognition takes on an even darker aspect — namely, say some critics, as part of a trend toward the return of physiognomy, powered by AI.
YOUR CHARACTER, AS PLAIN AS THE NOSE ON YOUR FACE
For centuries, people have believed that the face held the key to character. The notion has its roots in ancient Greece, but was particularly influential in the 19th century. Proponents of physiognomy suggested that by measuring things like the angle of someone’s forehead or the shape of their nose, they could determine if a person was honest or a criminal. Last year in China, AI researchers claimed they could do the same thing using facial recognition.
Their research, published as “Automated Inference on Criminality Using Face Images,” caused a minor uproar in the AI community. Scientists pointed out flaws in the study, and concluded that the work was replicating human prejudices about what constitutes a “mean” or a “nice” face. In a widely shared rebuttal titled “Physiognomy’s New Clothes,” Google researcher Blaise Agüera y Arcas and two co-authors wrote that we should expect “more research in the coming years that has similar … false claims to scientific objectivity in order to ‘launder’ human prejudice and discrimination.” (Google declined to make Agüera y Arcas available to comment on this report.)
Kosinski and Wang’s paper clearly acknowledges the dangers of physiognomy, noting that the practice “is now universally, and rightly, rejected as a mix of superstition and racism disguised as science.” But, they continue, just because a subject is “taboo” doesn’t mean it has no basis in truth. They say that because humans are able to read characteristics like personality in other people’s faces with “low accuracy,” machines should be able to do the same, but more accurately.
Kosinski says his research isn’t physiognomy because it’s using rigorous scientific methods, and his paper cites a number of studies showing that we can deduce (with varying accuracy) traits about people by looking at them. “I was educated and made to believe that it’s absolutely impossible that the face contains any information about your intimate traits, because physiognomy and phrenology were just pseudosciences,” he says. “But the fact that they were claiming things without any basis in fact, that they were making stuff up, doesn’t mean that this stuff is not real.” He agrees that physiognomy is not science, but says there may be truth in its basic concepts that computers can reveal.
For Davis, this sort of attitude comes from a widespread and mistaken belief in the neutrality and objectivity of AI. “Artificial intelligence is not in fact artificial,” she tells The Verge. “Machines learn like humans learn. We’re taught through culture and absorb the norms of social structure, and so does artificial intelligence. So it will re-create, amplify, and continue on the trajectories we’ve taught it, which are always going to reflect existing cultural norms.”
We’ve already created sexist and racist algorithms, and these sorts of cultural biases and physiognomy are really just two sides of the same coin: both rely on bad evidence to judge others. The work by the Chinese researchers is an extreme example, but it’s certainly not the only one. There’s at least one startup already active that claims it can spot terrorists and pedophiles using face recognition, and there are many others offering to analyze “emotional intelligence” and conduct AI-powered surveillance.
FACING UP TO WHAT’S COMING
But to return to the questions implied by those alarming headlines about Kosinski and Wang’s paper: is AI going to be used to persecute sexual minorities?
This system? No. A different one? Maybe.
Kosinski and Wang’s work is not invalid, but its results need serious qualifications and further testing. Without that, all we know about their system is that it can spot with some reliability the difference between self-identified gay and straight white people on one particular dating site. We don’t know that it’s spotted a biological difference common to all gay and straight people; we don’t know if it would work with a wider set of photos; and the work doesn’t show that sexual orientation can be deduced with nothing more than, say, a measurement of the jaw. It hasn’t decoded human sexuality any more than AI chatbots have decoded the art of a good conversation. (Nor do its authors make such a claim.)
The research was published to warn people, says Kosinski, but he admits it’s an “unavoidable paradox” that to do so you have to explain how you did what you did. All the tools used in the paper are available for anyone to find and put together themselves. Writing at the deep learning education site Fast.ai, researcher Jeremy Howard concludes: “It is probably reasonably [sic] to assume that many organizations have already completed similar projects, but without publishing them in the academic literature.”
We’ve already mentioned startups working on this tech, and it’s not hard to find government regimes that would use it. In countries like Iran and Saudi Arabia homosexuality is still punishable by death; in many other countries, being gay means being hounded, imprisoned, and tortured by the state. Recent reports have spoken of the opening of concentration camps for gay men in the Chechen Republic, so what if someone there decides to make their own AI gaydar, and scan profile pictures from Russian social media?
Here, it becomes clear that the accuracy of systems like Kosinski and Wang’s isn’t really the point. If people believe AI can be used to determine sexual preference, they will use it. With that in mind, it’s more important than ever that we understand the limitations of artificial intelligence, to try and neutralize dangers before they start impacting people. Before we teach machines our prejudices, we need to first teach ourselves.
A UK supermarket has become the first in the world to let shoppers pay for groceries using just the veins in their fingertips.
Customers at the Costcutter store, at Brunel University in London, can now pay using their unique vein pattern to identify themselves.
The firm behind the technology, Sthaler, has said it is in “serious talks” with other major UK supermarkets to adopt hi-tech finger vein scanners at pay points across thousands of stores.
It works by using infrared to scan people’s finger veins and then links this unique biometric map to their bank cards. Customers’ bank details are then stored with payment provider Worldpay, in the same way you can store your card details when shopping online. Shoppers can then turn up to the supermarket with nothing on them but their own hands, and pay in just three seconds.
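The enrol-and-verify flow described above can be sketched in a few lines. This is a hypothetical illustration, not Sthaler’s actual system: the class name, the bit-agreement matching, and the 95 per cent threshold are all assumptions, and a real deployment would store templates encrypted and keep card details with the payment provider, not locally.

```python
from dataclasses import dataclass, field

@dataclass
class VeinPay:
    """Toy model of finger-vein enrolment and checkout verification."""
    templates: dict[str, list[int]] = field(default_factory=dict)

    def enrol(self, user_id: str, vein_map: list[int]) -> None:
        # One-time sign-up: store the scanned vein map as the user's template.
        # (A real system would encrypt this, and card details live with the
        # payment provider, never the retailer.)
        self.templates[user_id] = vein_map

    def verify(self, user_id: str, vein_map: list[int],
               threshold: float = 0.95) -> bool:
        # Biometric matching is approximate: a fresh scan never matches the
        # stored template exactly, so compare bit agreement against a
        # threshold rather than demanding an exact match.
        stored = self.templates.get(user_id)
        if stored is None or len(stored) != len(vein_map):
            return False
        agreement = sum(a == b for a, b in zip(stored, vein_map)) / len(stored)
        return agreement >= threshold
```

A matching scan clears the threshold and authorises the payment; a noisy or unknown finger does not.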
It comes as previous studies have found fingerprint recognition, used widely on mobile phones, is vulnerable to being hacked and can be copied even from finger smears left on phone screens.
But Sthaler claims vein technology is the most secure biometric identification method, as it cannot be copied or stolen.
Sthaler said dozens of students were already using the system and it expected 3,000 students out of 13,000 to have signed up by November.
Fingerprint payments are already used widely at cash points in Poland, Turkey and Japan.
Vein scanners are also used as a way of accessing high-security UK police buildings and authorising internal trading at at least one major British investment bank.
The firm is also in discussions with nightclubs and gyms about using the technology to verify membership, and even with Premier League football clubs to check people have the right access to VIP hospitality areas.
The technology uses an infrared light to create a detailed map of the vein pattern in your finger. It requires the person to be alive, meaning that in the unlikely event a criminal hacks off someone’s finger, it would not work. Sthaler said it takes just one minute to sign up to the system initially and, after that, it takes just seconds to place your finger in a scanner each time you reach the supermarket checkout.
Simon Binns, commercial director of Sthaler, told the Daily Telegraph: “This makes payments so much easier for customers.
“They don’t need to carry cash or cards. They don’t need to remember a pin number. You just bring yourself. This is the safest form of biometrics. There are no known incidences where this security has been breached.
“When you put your finger in the scanner it checks you are alive, it checks for a pulse, it checks for haemoglobin. Your vein pattern is secure because it is kept on a database in an encrypted form, as binary numbers. No card details are stored with the retailer or ourselves; it is held with Worldpay, in the same way it is when you buy online.”
Nick Telford-Reed, director of technology innovation at Worldpay UK, said: “In our view, finger vein technology has a number of advantages over fingerprint. This deployment of Fingopay in Costcutter branches demonstrates how consumers increasingly want to see their payment methods secure and simple.”
A fight over the future of video streaming has been brewing for years—and it finally came to a head today, with a major electronic privacy organization bowing out of the consortium that sets standards for the web.
The Electronic Frontier Foundation (EFF) resigned from the World Wide Web Consortium (W3C) today over the W3C’s freshly released recommendations on protecting copyright in streaming video. W3C, which is directed by the inventor of the World Wide Web, Tim Berners-Lee, should be a natural ally of the EFF—but the fight over protecting security researchers who uncover vulnerabilities in video streaming has driven a wedge between the two organizations.
“The whole problem that we have here is this is a super technical, relatively boring, unbelievably important issue. That’s such a horrific toxic cocktail,” Cory Doctorow, the EFF’s advisory committee representative to W3C, told Gizmodo. “The W3C is using its patent pool and moral authority to create a system that’s not about empowering users but controlling users.”
The dispute focuses on Digital Rights Management (DRM), which enables media companies to surveil their consumers and make sure they’re just binge-watching episodes of Game of Thrones, not binge-pirating. (Although DRM is most commonly found in video streaming platforms, it also makes appearances in everything from coffee machines to tractors.) DRM gets legal backing from the Digital Millennium Copyright Act (DMCA), which makes it a felony for security pros to find and disclose vulnerabilities in DRM.
DRM is usually managed by plugins like Adobe Flash or Microsoft Silverlight, but W3C’s recommendations make it possible for DRM to be managed by browsers. The EFF and other organizations wanted browsers that adopt the standard to agree to protect security researchers and not pursue them under the DMCA, but W3C didn’t make that part of the standard—pissing off a bunch of security professionals and open web advocates. It feels cynical and hypocritical for an organization founded on principles of openness to cave to the constraints of DRM and not stick up for researchers and users.
W3C normally makes decisions based on consensus, but switched to a majority-vote system because DRM was so divisive among its members, Doctorow said. CEO Jeff Jaffe called the dispute “one of the most divisive debates in the history of the W3C Community.”
“I know from my conversations that many people are not satisfied with the result,” Jaffe wrote of the recommendations. “And there is reason to respect those who want a better result. But my personal reflection is that we took the appropriate time to have a respectful debate about a complex set of issues and provide a result that will improve the web for its users.”
Doctorow told Gizmodo that he proposed a compromise to protect security researchers from prosecution, but that W3C rejected it. “We will stand down on our views on DRM but you have to promise that you’ll only use DRM law like the DMCA when there is some other cause of action like a copyright infringement,” he explained. That way, if researchers broke DRM only to expose a security flaw, they would be protected. But W3C members like Netflix weren’t interested in discussing a compromise, he said.
“The irony here is that Netflix only exists because they did and continue to do something that outraged the entertainment industry,” Doctorow explained. “The web should have the same standard that you guys had when you were starting. It should be legal to do things that are legal, and if that upsets you you should make a better product or convince Congress to stop it.”
Because of the changes to W3C rules, the EFF lost faith in the process. “We don’t think that there’s any use in throwing our donors’ money, our energy and our limited time at a process where we don’t think the other side carried themselves in good faith,” Doctorow said.
In an open letter explaining EFF’s decision to walk away from W3C, Doctorow wrote: “The business values of those outside the web got important enough, and the values of technologists who built it got disposable enough, that even the wise elders who make our standards voted for something they know to be a fool’s errand.”
In addition to the lack of protections for security research, the EFF says the W3C recommendations harm efforts to automate making video accessible to people with disabilities, and to archive the internet.
For their part, W3C members Netflix, Microsoft, Comcast, the Motion Picture Association of America, and the Recording Industry Association of America all praised the decision.
“Integration of DRM into web browsers delivers improved performance, battery life, reliability, security and privacy to users watching their favorite TV shows and movies on Netflix and other video services,” wrote Netflix in a statement. “We can finally say goodbye to third-party plugins, making for a safer and more reliable web.”
For the first time ever, scientists have stored light-based information as sound waves on a computer chip – something the researchers compare to capturing lightning as thunder.
While that might sound a little strange, this conversion is critical if we ever want to shift from our current, inefficient electronic computers, to light-based computers that move data at the speed of light.
Light-based or photonic computers have the potential to run at least 20 times faster than your laptop, not to mention the fact that they won’t produce heat or suck up energy like existing devices.
This is because they, in theory, would process data in the form of photons instead of electrons.
We say “in theory” because, despite companies such as IBM and Intel pursuing light-based computing, the transition is easier said than done.
Coding information into photons is easy enough – we already do that when we send information via optical fibre.
But finding a way for a computer chip to retrieve and process information stored in photons is tough, thanks to the very thing that makes light so appealing: it’s too damn fast for existing microchips to read.
This is why light-based information that flies across internet cables is currently converted into slow electrons. But a better alternative would be to slow down the light and convert it into sound.
And that’s exactly what researchers from the University of Sydney in Australia have now done.
“It is like the difference between thunder and lightning.”
University of Sydney
This means that computers could have the benefits of data delivered by light – high speeds, no heat caused by electronic resistance, and no interference from electromagnetic radiation – but would also be able to slow that data down enough so that computer chips could do something useful with it.
“This is an important step forward in the field of optical information processing as this concept fulfils all requirements for current and future generation optical communication systems,” added team member Benjamin Eggleton.
The team did this by developing a memory system that accurately transfers between light and sound waves on a photonic microchip – the kind of chip that will be used in light-based computers.
Here’s how it works:
First, photonic information enters the chip as a pulse of light (yellow), where it interacts with a ‘write’ pulse (blue), producing an acoustic wave that stores the data.
Another pulse of light, called the ‘read’ pulse (blue), then accesses this sound data and transmits it as light once more (yellow).
While unimpeded light will pass through the chip in 2 to 3 nanoseconds, once stored as a sound wave, information can remain on the chip for up to 10 nanoseconds, long enough for it to be retrieved and processed.
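The write/read scheme above behaves like a short-lived buffer keyed by wavelength. Here is a toy model of that idea; the class name and the hard 10-nanosecond expiry are illustrative assumptions standing in for the acoustic decay (the real physics is a continuous decay, not a cliff), not the Sydney team’s actual code.

```python
class AcousticBuffer:
    """Toy model: optical data 'written' as sound, 'read' back as light."""

    LIFETIME_NS = 10.0  # data survives as a sound wave for up to ~10 ns

    def __init__(self):
        self._store = {}  # wavelength_nm -> (data, time_written_ns)

    def write(self, wavelength_nm: float, data: str, now_ns: float) -> None:
        # The 'write' pulse converts the incoming optical data pulse
        # into an acoustic wave that stays on the chip.
        self._store[wavelength_nm] = (data, now_ns)

    def read(self, wavelength_nm: float, now_ns: float):
        # The 'read' pulse converts the acoustic wave back into light,
        # provided the wave has not yet decayed.
        entry = self._store.pop(wavelength_nm, None)
        if entry is None:
            return None
        data, written = entry
        return data if now_ns - written <= self.LIFETIME_NS else None
```

Because the buffer is keyed by wavelength, several channels can be stored and retrieved independently, mirroring the multi-wavelength operation the researchers describe below.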
The fact that the team were able to convert the light into sound waves not only slowed it down, but also made data retrieval more accurate.
And, unlike previous attempts, the system worked across a broad bandwidth.
“Building an acoustic buffer inside a chip improves our ability to control information by several orders of magnitude,” said team member Moritz Merklein.
“Our system is not limited to a narrow bandwidth. So unlike previous systems this allows us to store and retrieve information at multiple wavelengths simultaneously, vastly increasing the efficiency of the device,” added fellow team member Birgit Stiller.
American medical company Second Sight manufactures implantable visual prosthetics to provide vision to people who suffer from a variety of visual impairments. Its most advanced piece of technology so far is the Argus® II Retinal Prosthesis System, which can restore some functional vision for people suffering from blindness. Although a very successful product, it provides only a limited amount of restored vision to the patient, so the company has been working on its successor, the Orion.
The Argus® II Retinal Prosthesis System
The Orion™ Cortical Visual Prosthesis System
The idea behind the Orion is to capture images with a small video camera mounted on a pair of glasses that the patient wears daily; these images are then converted into a series of small electrical pulses.
The Orion would then wirelessly transmit these pulses to an array of electrodes that have been implanted into the patient. The electrodes bypass the retina and optic nerve to directly stimulate the visual cortex. This is the area of the brain that processes visual data, effectively allowing a person to see.
This technology has the potential to essentially “cure” nearly all forms of blindness, including blindness caused by glaucoma, diabetic retinopathy, and some forms of cancer and trauma. The Argus II has been approved for use in Canada, France, Germany, Italy, Russia, Saudi Arabia, South Korea, Spain, Taiwan, Turkey, the United Kingdom, and the US, so you can expect to see the Orion in the same countries, if not more.
Professor whose study suggested technology can detect whether a person is gay or straight says programs will soon reveal traits such as criminal predisposition
Voters have a right to keep their political beliefs private. But according to some researchers, it won’t be long before a computer program can accurately guess whether people are liberal or conservative in an instant. All that will be needed are photos of their faces.
Michal Kosinski – the Stanford University professor who went viral last week for research suggesting that artificial intelligence (AI) can detect whether people are gay or straight based on photos – said sexual orientation was just one of many characteristics that algorithms would be able to predict through facial recognition.
Using photos, AI will be able to identify people’s political views, whether they have high IQs, whether they are predisposed to criminal behavior, whether they have specific personality traits and many other private, personal details that could carry huge social consequences, he said.
Kosinski outlined the extraordinary and sometimes disturbing applications of facial detection technology that he expects to see in the near future, raising complex ethical questions about the erosion of privacy and the possible misuse of AI to target vulnerable people.
“The face is an observable proxy for a wide range of factors, like your life history, your development factors, whether you’re healthy,” he said.
Faces contain a significant amount of information, and using large datasets of photos, sophisticated computer programs can uncover trends and learn how to distinguish key traits with a high rate of accuracy. With Kosinski’s “gaydar” AI, an algorithm used online dating photos to create a program that could correctly identify sexual orientation 91% of the time with men and 83% with women, just by reviewing a handful of photos.
Kosinski’s research is highly controversial, and faced a huge backlash from LGBT rights groups, which argued that the AI was flawed and that anti-LGBT governments could use this type of software to out gay people and persecute them. Kosinski and other researchers, however, have argued that powerful governments and corporations already possess these technological capabilities and that it is vital to expose possible dangers in an effort to push for privacy protections and regulatory safeguards, which have not kept pace with AI.
Kosinski, an assistant professor of organizational behavior, said he was studying links between facial features and political preferences, with preliminary results showing that AI is effective at guessing people’s ideologies based on their faces.
This is probably because political views appear to be heritable, as research has shown, he said. That means political leanings are possibly linked to genetics or developmental factors, which could result in detectable facial differences.
Kosinski said previous studies have found that conservative politicians tend to be more attractive than liberals, possibly because good-looking people have more advantages and an easier time getting ahead in life.
Kosinski said the AI would perform best for people who are far to the right or left and would be less effective for the large population of voters in the middle. “A high conservative score … would be a very reliable prediction that this guy is conservative.”
Kosinski is also known for his controversial work on psychometric profiling, including using Facebook data to draw inferences about personality. The data firm Cambridge Analytica has used similar tools to target voters in support of Donald Trump’s campaign, sparking debate about the use of personal voter information in campaigns.
Facial recognition may also be used to make inferences about IQ, said Kosinski, suggesting a future in which schools could use the results of facial scans when considering prospective students. This application raises a host of ethical questions, particularly if the AI is purporting to reveal whether certain children are genetically more intelligent, he said: “We should be thinking about what to do to make sure we don’t end up in a world where better genes means a better life.”
Some of Kosinski’s suggestions conjure up the 2002 science-fiction film Minority Report, in which police arrest people before they have committed crimes based on predictions of future murders. The professor argued that certain areas of society already function in a similar way.
He cited school counselors intervening when they observe children who appear to exhibit aggressive behavior. If algorithms could be used to accurately predict which students need help and early support, that could be beneficial, he said. “The technologies sound very dangerous and scary on the surface, but if used properly or ethically, they can really improve our existence.”
There are, however, growing concerns that AI and facial recognition technologies are actually relying on biased data and algorithms and could cause great harm. It is particularly alarming in the context of criminal justice, where machines could make decisions about people’s lives – such as the length of a prison sentence or whether to release someone on bail – based on biased data from a court and policing system that is racially prejudiced at every step.
Kosinski predicted that with a large volume of facial images of an individual, an algorithm could easily detect if that person is a psychopath or has high criminal tendencies. He said this was particularly concerning given that a propensity for crime does not translate to criminal actions: “Even people highly disposed to committing a crime are very unlikely to commit a crime.”
He also cited an example referenced in the Economist – which first reported the sexual orientation study – that nightclubs and sport stadiums could face pressure to scan people’s faces before they enter to detect possible threats of violence.
Kosinski noted that in some ways, this wasn’t much different from human security guards making subjective decisions about people they deem too dangerous-looking to enter.
The law generally considers people’s faces to be “public information”, said Thomas Keenan, professor of environmental design and computer science at the University of Calgary, noting that regulations have not caught up with technology: no law establishes when the use of someone’s face to produce new information rises to the level of privacy invasion.
Keenan said it might take a tragedy to spark reforms, such as a gay youth being beaten to death because bullies used an algorithm to out him: “Now, you’re putting people’s lives at risk.”
Even with AI that makes highly accurate predictions, some percentage of those predictions will still be incorrect.
“You’re going down a very slippery slope,” said Keenan, “if one in 20 or one in a hundred times … you’re going to be dead wrong.”
Last year, two data scientists from security firm ZeroFOX conducted an experiment to see who was better at getting Twitter users to click on malicious links, humans or an artificial intelligence. The researchers taught an AI to study the behavior of social network users, and then design and implement its own phishing bait. In tests, the artificial hacker was substantially better than its human competitors, composing and distributing more phishing tweets than humans, and with a substantially better conversion rate.
The AI, named SNAP_R, sent simulated spear-phishing tweets to over 800 users at a rate of 6.75 tweets per minute, luring 275 victims. By contrast, Forbes staff writer Thomas Fox-Brewster, who participated in the experiment, was only able to pump out 1.075 tweets a minute, making just 129 attempts and luring in just 49 users.
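The gap between the two works out to roughly a sixfold difference in both output and victims, using the figures quoted above:

```python
ai_rate, human_rate = 6.75, 1.075    # simulated phishing tweets per minute
ai_victims, human_victims = 275, 49  # users lured into clicking

print(round(ai_rate / human_rate, 1))        # -> 6.3 (times the human's output rate)
print(round(ai_victims / human_victims, 1))  # -> 5.6 (times as many victims)
```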
Thankfully this was just an experiment, but the exercise showed that hackers are already in a position to use AI for their nefarious ends. And in fact, they’re probably already using it, though it’s hard to prove. In July, at Black Hat USA 2017, hundreds of leading cybersecurity experts gathered in Las Vegas to discuss this issue and other looming threats posed by emerging technologies. In a Cylance poll held during the confab, attendees were asked if criminal hackers will use AI for offensive purposes in the coming year, to which 62 percent answered in the affirmative.
The era of artificial intelligence is upon us, yet if this informal Cylance poll is to be believed, a surprising number of infosec professionals are refusing to acknowledge the potential for AI to be weaponized by hackers in the immediate future. It’s a perplexing stance given that many of the cybersecurity experts we spoke to said machine intelligence is already being used by hackers, and that criminals are more sophisticated in their use of this emerging technology than many people realize.
“Hackers have been using artificial intelligence as a weapon for quite some time,” said Brian Wallace, Cylance Lead Security Data Scientist, in an interview with Gizmodo. “It makes total sense because hackers have a problem of scale, trying to attack as many people as they can, hitting as many targets as possible, and all the while trying to reduce risks to themselves. Artificial intelligence, and machine learning in particular, are perfect tools to be using on their end.” These tools, he says, can make decisions about what to attack, who to attack, when to attack, and so on.
“What does strike me as a bit odd is that 62 percent of infosec professionals are making an AI prediction,” Marc Goodman, author of Future Crimes, told Gizmodo. “AI is defined by many different people many different ways. So I’d want further clarity on specifically what they mean by AI.”
Indeed, it’s likely on this issue where the expert opinions diverge.
The funny thing about artificial intelligence is that our conception of it changes as time passes, and as our technologies increasingly match human intelligence in many important ways. At the most fundamental level, intelligence describes the ability of an agent, whether it be biological or mechanical, to solve complex problems. We possess many tools with this capability, and we have for quite some time, but we almost instantly start to take these tools for granted once they appear.
Centuries ago, for example, the prospect of a calculating machine that could crunch numbers millions of times faster than a human would’ve most certainly been considered a radical technological advance, yet few today would consider the lowly calculator as being anything particularly special. Similarly, the ability to win at chess was once considered a high mark of human intelligence, but ever since Deep Blue defeated Garry Kasparov in 1997, this cognitive skill has lost its former luster. And so on and so forth with each passing breakthrough in AI.
Today, rapid-fire developments in machine learning (whereby systems learn from data and improve with experience without being explicitly programmed), natural language processing, neural networks (systems modeled on the human brain), and many other fields are likewise lowering the bar on our perception of what constitutes machine intelligence. In a few years, artificial personal assistants (like Siri or Alexa), self-driving cars, and disease-diagnosing algorithms will likewise lose, unjustifiably, their AI allure. We’ll start to take these things for granted, and disparage these forms of AI for not being perfectly human. But make no mistake—modern tools like machine intelligence and neural networks are a form of artificial intelligence, and to believe otherwise is something we do at our own peril; if we dismiss or ignore the power of these tools, we may be blindsided by those who are eager to exploit AI’s full potential, hackers included.
A related problem is that the term artificial intelligence conjures futuristic visions and sci-fi fantasies that are far removed from our current realities.
“The term AI is often misconstrued, with many people thinking of Terminator robots trying to hunt down John Connor—but that’s not what AI is,” said Wallace. “Rather, it’s a broad topic of study around the creation of various forms of intelligence that happen to be artificial.”
Wallace says there are many different realms of AI, with machine learning being a particularly important subset of AI at the current moment.
“In our line of work, we use narrow machine learning—which is a form of AI—when trying to apply intelligence to a specific problem,” he told Gizmodo. “For instance, we use machine learning when trying to determine if a file or process is malicious or not. We’re not trying to create a system that would turn into SkyNet. Artificial intelligence isn’t always what the media and science fiction has depicted it as, and when we [infosec professionals] talk about AI, we’re talking about broad areas of study that are much simpler and far less terrifying.”
These modern tools may be less terrifying than clichéd Terminator visions, but in the hands of the wrong individuals, they can still be pretty scary.
Deepak Dutt, founder and CEO of Zighra, a mobile security startup, says there’s a high likelihood that sophisticated AI will be used for cyberattacks in the near future, and that it might already be in use by countries such as Russia, China, and some Eastern European countries. In terms of how AI could be used in nefarious ways, Dutt has no shortage of ideas.
“Artificial intelligence can be used to mine large amounts of public domain and social network data to extract personally identifiable information like date of birth, gender, location, telephone numbers, e-mail addresses, and so on, which can be used for hacking [a person’s] accounts,” Dutt told Gizmodo. “It can also be used to automatically monitor e-mails and text messages, and to create personalized phishing mails for social engineering attacks [phishing scams are an illicit attempt to obtain sensitive information from an unsuspecting user]. AI can be used for mutating malware and ransomware more easily, and to search more intelligently and dig out and exploit vulnerabilities in a system.”
Dutt suspects that AI is already being used for cyberattacks, and that criminals are already using some sort of machine learning capabilities, for example, by automatically creating personalized phishing e-mails.
“But what is new is the sophistication of AI in terms of new machine learning techniques like Deep Learning, which can be used to achieve the scenarios I just mentioned with a higher level of accuracy and efficiency,” he said. Deep Learning, also known as hierarchical learning, is a subfield of machine learning that utilizes large neural networks. It has been applied to computer vision, speech recognition, social network filtering, and many other complex tasks, often producing results superior to those of human experts.
“Also the availability of large amounts of social network and public data sets (Big Data) helps. Advanced machine learning and Deep Learning techniques and tools are easily available now on open-source platforms; this, combined with relatively cheap computational infrastructure, effectively enables cyberattacks with higher sophistication.”
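Underneath the terminology, a deep network is just stacked layers of weighted sums and nonlinearities, which is where the "hierarchical" in hierarchical learning comes from. A minimal forward pass in pure Python makes the shape clear; the weights below are arbitrary numbers chosen for illustration, whereas real networks learn millions of them from data.

```python
def relu(values):
    """Rectified linear unit: the standard deep-learning nonlinearity."""
    return [max(0.0, v) for v in values]

def dense(inputs, weights, biases):
    """One fully connected layer: out_j = sum_i inputs[i]*weights[i][j] + biases[j]."""
    return [
        sum(i * w for i, w in zip(inputs, column)) + b
        for column, b in zip(zip(*weights), biases)
    ]

# Two stacked layers; weights and inputs are arbitrary illustrations.
W1 = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]  # 3 inputs -> 2 hidden units
b1 = [0.0, 0.1]
W2 = [[1.0], [-1.0]]                          # 2 hidden units -> 1 output
b2 = [0.2]

x = [1.0, 2.0, 3.0]
hidden = relu(dense(x, W1, b1))  # first layer of "hierarchy"
output = dense(hidden, W2, b2)   # second layer, built on the first
print(output)
```

Training, which the sketch omits entirely, is the hard part: the weights are adjusted over many examples until the outputs become useful. The forward pass, however, is always this simple composition of layers.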
These days, the overwhelming majority of cyberattacks are automated, according to Goodman. The human hacker going after an individual target is far rarer; the more common approach now is to automate attacks with AI and machine learning tools—everything from scripted Distributed Denial of Service (DDoS) attacks to ransomware, criminal chatbots, and so on. While it can be argued that automation is fundamentally unintelligent (conversely, a case can be made that some forms of automation, particularly those involving large sets of complex tasks, are indeed a form of intelligence), it’s the prospect of a machine intelligence orchestrating these automated tasks that’s particularly alarming. An AI can produce complex and highly targeted scripts at a rate and level of sophistication far beyond any individual human hacker.
Indeed, the possibilities seem almost endless. In addition to the criminal activities already described, AIs could be used to target vulnerable populations, perform rapid-fire hacks, develop intelligent malware, and so on.
Staffan Truvé, Chief Technology Officer at Recorded Future, says that, as AI matures and becomes more of a commodity, the “bad guys,” as he puts it, will start using it to improve the performance of attacks, while also cutting costs. Unlike many of his colleagues, however, Truvé says that AI is not really being used by hackers at the moment, claiming that simpler algorithms (e.g. for self-modifying code) and automation schemes (e.g. to enable phishing schemes) are working just fine.
“I don’t think AI has quite yet become a standard part of the toolbox of the bad guys,” Truvé told Gizmodo. “I think the reason we haven’t seen more ‘AI’ in attacks already is that the traditional methods still work—if you get what you need from a good old fashioned brute force approach then why take the time and money to switch to something new?”
AI on AI
With AI now part of the modern hacker’s toolkit, defenders are having to come up with novel ways of defending vulnerable systems. Thankfully, security professionals have a rather potent and obvious countermeasure at their disposal, namely artificial intelligence itself. Trouble is, this is bound to produce an arms race between the rival camps. Neither side really has a choice, as the only way to counter the other is to increasingly rely on intelligent systems.
“For security experts, this is a Big Data problem—we’re dealing with tons of data—more than a single human could possibly process,” said Wallace. “Once you’ve started to deal with an adversary, you have no choice but to use weaponized AI yourself.”
To stay ahead of the curve, Wallace recommends that security firms conduct their own internal research, and develop their own weaponized AI to fight and test their defenses. He calls it “an iron sharpens iron” approach to computer security. The Pentagon’s advanced research wing, DARPA, has already adopted this approach, organizing grand challenges in which AI developers pit their creations against each other in a virtual game of Capture the Flag. The process is very Darwinian, and reminiscent of yet another approach to AI development—evolutionary algorithms. For hackers and infosec professionals, it’s survival of the fittest AI.
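The evolutionary algorithms mentioned above can be sketched in a few lines: keep the fittest candidates, mutate them, repeat. This toy example, with an invented 20-bit target standing in for a real fitness measure, shows the survival-of-the-fittest loop.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Invented goal: evolve a bit string toward all ones.
TARGET = [1] * 20

def fitness(genome):
    """Count how many bits match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Flip each bit independently with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

# Start from a random population of 30 candidate solutions.
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break                                   # a perfect solution emerged
    survivors = population[:10]                 # selection: keep the fittest
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(20)]  # variation: mutated offspring

best = max(population, key=fitness)
print(fitness(best))
```

In the DARPA-style setting the article describes, "fitness" is not a bit count but performance against an adversary, which is what makes the process Darwinian: each side's improvements become the selection pressure on the other.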
Goodman agrees, saying “we will out of necessity” be using increasing amounts of AI “for everything from fraud detection to countering cyberattacks.” And in fact, several start-ups are already doing this, partnering with IBM Watson to combat cyber threats, says Goodman.
“AI techniques are being used today by defenders to look for patterns—the antivirus companies have been doing this for decades—and to do anomaly detection as a way to automatically detect if a system has been attacked and compromised,” said Truvé.
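In its simplest form, the anomaly detection Truvé mentions is a statistical test: flag anything that strays too far from the historical baseline. A minimal sketch, with invented failed-login counts as the data:

```python
import statistics

def is_anomalous(history, observation, threshold=3.0):
    """Flag an observation more than `threshold` standard deviations
    from the historical mean (a classic z-score test)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return observation != mean
    return abs(observation - mean) / stdev > threshold

# Hypothetical daily counts of failed logins on a server.
baseline = [3, 5, 4, 6, 2, 5, 4, 3, 5, 4]
print(is_anomalous(baseline, 4))    # an ordinary day
print(is_anomalous(baseline, 60))   # a burst that warrants investigation
```

Production systems replace the z-score with learned models over many signals at once, but the principle is the same: characterize normal behavior, then surface deviations automatically.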
At his company, Recorded Future, Truvé is using AI techniques for natural language processing, for example to automatically detect when an attack is being planned and discussed on criminal forums, and to predict future threats.
“Bad guys [with AI] will continue to use the same attack vectors as today, only in a more efficient manner, and therefore the AI based defense mechanisms being developed now will to a large extent be possible to also use against AI based attacks,” he said.
Dutt recommends that infosec teams continuously monitor the cyberattack activities of hackers and learn from them, continuously “innovate with a combination of supervised and unsupervised learning based defense strategies to detect and thwart attacks at the first sign,” and, as in any war, adopt superior defenses and strategy.
The bystander effect
So our brave new world of AI-enabled hacking awaits, with criminals becoming increasingly capable of targeting vulnerable users and systems. Computer security firms will likewise lean on AI in a never-ending effort to keep up. Eventually, these tools will escape human comprehension and control, working at lightning-fast speeds in an emerging digital ecosystem. It’ll get to a point where both hackers and infosec professionals have no choice but to hit the “go” button on their respective systems, and simply hope for the best. A consequence of AI is that humans are increasingly being kept out of the loop.
An algorithm deduced the sexuality of people on a dating site with up to 91% accuracy, raising tricky ethical questions
Artificial intelligence can accurately guess whether people are gay or straight based on photos of their faces, according to new research that suggests machines can have significantly better “gaydar” than humans.
The study from Stanford University – which found that a computer algorithm could correctly distinguish between gay and straight men 81% of the time, and 74% for women – has raised questions about the biological origins of sexual orientation, the ethics of facial-detection technology, and the potential for this kind of software to violate people’s privacy or be abused for anti-LGBT purposes.
The machine intelligence tested in the research, which was published in the Journal of Personality and Social Psychology and first reported in the Economist, was based on a sample of more than 35,000 facial images that men and women publicly posted on a US dating website. The researchers, Michal Kosinski and Yilun Wang, extracted features from the images using “deep neural networks”, meaning a sophisticated mathematical system that learns to analyze visuals based on a large dataset.
The research found that gay men and women tended to have “gender-atypical” features, expressions and “grooming styles”, essentially meaning gay men appeared more feminine and vice versa. The data also identified certain trends, including that gay men had narrower jaws, longer noses and larger foreheads than straight men, and that gay women had larger jaws and smaller foreheads compared to straight women.
Human judges performed much worse than the algorithm, accurately identifying orientation only 61% of the time for men and 54% for women. When the software reviewed five images per person, it was even more successful – 91% of the time with men and 83% with women. Broadly, that means “faces contain much more information about sexual orientation than can be perceived and interpreted by the human brain”, the authors wrote.
The paper suggested that the findings provide “strong support” for the theory that sexual orientation stems from exposure to certain hormones before birth, meaning people are born gay and being queer is not a choice. The machine’s lower success rate for women also could support the notion that female sexual orientation is more fluid.
While the findings have clear limits when it comes to gender and sexuality – people of color were not included in the study, and there was no consideration of transgender or bisexual people – the implications for artificial intelligence (AI) are vast and alarming. With billions of facial images of people stored on social media sites and in government databases, the researchers suggested that public data could be used to detect people’s sexual orientation without their consent.
It’s easy to imagine spouses using the technology on partners they suspect are closeted, or teenagers using the algorithm on themselves or their peers. More frighteningly, governments that continue to prosecute LGBT people could hypothetically use the technology to out and target populations. That means building this kind of software and publicizing it is itself controversial given concerns that it could encourage harmful applications.
But the authors argued that the technology already exists, and its capabilities are important to expose so that governments and companies can proactively consider privacy risks and the need for safeguards and regulations.
“It’s certainly unsettling. Like any new tool, if it gets into the wrong hands, it can be used for ill purposes,” said Nick Rule, an associate professor of psychology at the University of Toronto, who has published research on the science of gaydar. “If you can start profiling people based on their appearance, then identifying them and doing horrible things to them, that’s really bad.”
Rule argued it was still important to develop and test this technology: “What the authors have done here is to make a very bold statement about how powerful this can be. Now we know that we need protections.”
Kosinski was not immediately available for comment, but after publication of this article on Friday, he spoke to the Guardian about the ethics of the study and implications for LGBT rights. The professor is known for his work with Cambridge University on psychometric profiling, including using Facebook data to make conclusions about personality. Donald Trump’s campaign and Brexit supporters deployed similar tools to target voters, raising concerns about the expanding use of personal data in elections.
In the Stanford study, the authors also noted that artificial intelligence could be used to explore links between facial features and a range of other phenomena, such as political views, psychological conditions or personality.
This type of research further raises concerns about the potential for scenarios like the science-fiction movie Minority Report, in which people can be arrested based solely on the prediction that they will commit a crime.
“AI can tell you anything about anyone with enough data,” said Brian Brackeen, CEO of Kairos, a face recognition company. “The question is as a society, do we want to know?”
Brackeen, who said the Stanford data on sexual orientation was “startlingly correct”, said there needs to be an increased focus on privacy and tools to prevent the misuse of machine learning as it becomes more widespread and advanced.
Rule speculated about AI being used to actively discriminate against people based on a machine’s interpretation of their faces: “We should all be collectively concerned.”