This Guy Keeps Track Of Creepy Uses Of AI

David Dao, a PhD student at ETH Zurich, wears many hats. In addition to working on his doctorate, he is the founder of GainForest (a decentralized fund using artificial intelligence to measure and reward the sustainable stewardship of nature), and he also maintains a rather interesting GitHub page that tracks some of the most concerning uses of artificial intelligence (AI).

While Dao seems busy with his other projects and hasn’t updated the page in a few months, it remains a fascinating collection. We keep hearing about the achievements and potential of artificial intelligence, but AI also comes with many risks and threats. This isn’t the robot revolution or some distant-future scenario we’re talking about; these are real issues happening right now.

Here is an example: a “gaydar”. That’s right, there’s an AI that is claimed to identify whether someone is gay with more than 90% accuracy just by looking at five photos of their face – far better than human accuracy.

While this raises some interesting questions about the nature of homosexuality, it raises even more concerns about the potential for discrimination. If this type of AI were to become widespread, it would be like having a badge on your chest revealing your sexuality – something authoritarian regimes and discriminatory groups could effectively force people to wear.

Some key facial features targeted by the AI. Credit: Stanford University.

Dao has also documented several hiring AIs intended to analyze CVs and applications to find the best potential employees – but which ended up discriminating against women. Amazon used this type of AI and scrapped it after finding systematic bias against women.

“In the case of Amazon, the algorithm quickly learned to favor male applicants over female applicants, penalizing resumes containing the word ‘women’s’, as in ‘women’s chess club captain’. It also reportedly downgraded graduates of two women’s colleges,” reads Dao’s page. Others, like HireVue or PredictiveHire, ended up doing the same.

The potential for discrimination abounds. Among other things, Dao mentions a project that tries to assess whether you are a criminal by looking at your face, an algorithm used in the UK to predict student grades based on historical data that ended up biased against students from disadvantaged backgrounds, and a chatbot that began spouting anti-Semitic messages after a day of “learning” from Twitter.

It is natural for AIs to be flawed or imperfect at first. The problems arise when their creators are unaware that the data they feed the AI is biased, and when these algorithms are deployed in the real world with those underlying problems unresolved.
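To illustrate the mechanism, here is a minimal, hypothetical sketch (Python with scikit-learn, entirely made-up data, and not Amazon’s actual system): a simple resume screener trained on historical decisions that were already skewed against one group simply learns to reproduce that skew.

# Minimal sketch of bias propagation: hypothetical data, not any real system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Synthetic "historical" resumes and past decisions (1 = hired). The past
# decisions are skewed: resumes mentioning "women's" were rarely hires.
resumes = [
    "software engineer python leadership",                 # hired
    "software engineer java chess club captain",           # hired
    "data scientist python women's chess club captain",    # not hired
    "software engineer python women's coding society",     # not hired
    "data scientist java leadership",                      # hired
    "software engineer java women's hackathon winner",     # not hired
]
hired = [1, 1, 0, 0, 1, 0]

# Train a basic bag-of-words classifier on the biased historical labels.
vectorizer = CountVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(resumes), hired)

# Two new resumes, identical except for one word.
candidates = [
    "software engineer python chess club captain",
    "software engineer python women's chess club captain",
]
scores = model.predict_proba(vectorizer.transform(candidates))[:, 1]
for text, score in zip(candidates, scores):
    print(f"{score:.2f}  {text}")
# The second resume scores lower purely because it mentions "women's":
# the model has faithfully learned the bias baked into its training data.

Note that nothing in the sketch is explicitly written to discriminate; the unfairness comes entirely from the historical labels the model is trained on, which is exactly how these real-world systems tend to fail.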

These biases are often racial. For example, an image recognition program from Google tagged several Black people as gorillas. Meanwhile, Amazon’s Rekognition tagged darker-skinned women as men 31% of the time, while only doing the same for lighter-skinned women 7% of the time. Amazon is pushing this algorithm despite these issues, which the company calls “misleading”. Most identification algorithms seem to have trouble with darker faces.

Surveillance

Another scary application where AI could be (and is being) used is surveillance. Clearview.ai, a little-known start-up, could end privacy as we know it. The company created a facial recognition database that matches photos of people with their names and images from social media. The app is already used by some law enforcement agencies to obtain the names and addresses of potential suspects, and has even become a toy for the rich, letting them spy on business contacts and dates.

Another facial recognition system (AnyVision) is used by the Israeli military to surveil Palestinians living in the West Bank. The system is also used at Israeli army checkpoints in the occupied territories. AIs can also (sometimes) identify people by their gait, and in China that type of technology is already widely deployed. Some companies even want to use gait analytics to track when and where their employees move around.

If that still doesn’t scare you, AIs can also be used for censorship – and at least in China, they’re already being deployed on a large scale. WeChat, a messaging app used by almost a billion people (mainly in China), automatically scans all text and images sent on the platform in real time. Text and images deemed harmful (including images of Winnie the Pooh) are removed on the spot, and users are potentially banned or investigated. The system becomes more and more powerful with each image sent via the platform, and censorship is already unavoidable in China.

Perhaps even more worrying is that all of this surveillance could one day feed into a social credit system – where an algorithm determines how worthy you are of, say, getting a loan, or how severe your punishment should be for committing a crime. Again, China is at the forefront and already has such a system in place; among other things, the system bans millions of people from traveling (because their credit score is too low) and penalizes people who share “harmful” images on social networks. It also judges people based on what type of purchases they make and on their “interpersonal relationships”.

Military AIs

Naturally, this wouldn’t be a complete list without some creepy AI military applications.

Perhaps the most notable incident involving an AI-operated weapon was the ‘AI machine gun’ used to kill an Iranian scientist who was a key figure in the country’s nuclear program. Essentially, a machine gun was mounted on the back of a regular truck and fitted with a satellite-linked AI system. The system identified the scientist, and the ‘satellite-controlled’ automatic gun struck him thirteen times from a distance of 150 meters (490 ft), leaving his wife (who was sitting just 25 cm away) physically unharmed. The entire setup then self-destructed, leaving no significant traces behind. The incident made waves around the world, showing that autonomous or semi-autonomous AI weapons are already good enough to be used.

Armed autonomous drones capable of operating in war were also used in the Armenian-Azerbaijani conflict, helping decide the course of battles by flying autonomously, destroying key targets, and killing people. Autonomous tanks like the Russian Uran-9 have also been tested in the Syrian Civil War.

There’s no shortage of scary uses for AI.

From surveillance and discrimination to outright military offensives, the technology seems mature enough to cause real, substantial, and potentially lasting damage to society. Authoritarian governments seem eager to use it, as do various interest groups. It’s important to keep in mind that while AI has enormous potential to produce positive effects in the world, the potential for harmful uses is just as great.
