Deep Tech

DeepTech: AI Learning to Interpret Human Emotion, DeepFakes, Cryptographic Voting Systems, and More

It Only Sounds Like Science Fiction

Researchers Are Teaching AI to Recognize & Interpret Human Emotion: What Could Possibly Go Wrong?

AI systems are getting good at displaying and reading human emotion. Earlier this week, a Stockholm startup unveiled Furhat, a “social robot” that generates and projects a lifelike face onto a 3D mannequin head. Furhat can communicate and respond to a wide range of facial expressions: shout at it, and it shouts back. It could make voice assistants like Amazon’s Alexa easier to talk to and more personable, but it would also effectively allow Amazon to read and interpret emotions. Researchers in Hong Kong have compiled large datasets of celebrity faces to train systems to detect faces, recognize facial attributes, and map facial landmarks. Experts caution that, given the known biases inherent in AI systems, there is a large margin of error when using them to read emotions, which could be disastrous in high-stakes decisions such as border crossings or self-driving cars. The recently developed emotional deep alignment network (DAN) aims to enable self-driving cars to count passengers, estimate their age and gender, and detect and classify facial expressions and emotions (a minimal sketch of this kind of pipeline follows below). Yet researchers have used self-driving car dilemmas to reveal that moral choices are not universal. It would be profoundly troubling if an AI system could interpret facial expressions while it manipulates video footage, changing the footage as viewers react.
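
To make that pipeline concrete, here is a minimal sketch of the detect-then-classify approach, using OpenCV’s stock face detector. The model file “fer_model.onnx”, its 64x64 grayscale input shape, and the 7-label set are illustrative assumptions, not details of the systems mentioned above.

```python
# Minimal sketch: detect a face, crop it, feed it to a pretrained
# expression classifier. Assumes OpenCV is installed; "fer_model.onnx"
# is a hypothetical 7-class facial-expression model.
import cv2
import numpy as np

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
net = cv2.dnn.readNetFromONNX("fer_model.onnx")  # hypothetical pretrained model

frame = cv2.imread("face.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    # Normalize the face crop to the (assumed) model input size and range.
    face = cv2.resize(gray[y:y+h, x:x+w], (64, 64)).astype(np.float32) / 255.0
    net.setInput(face.reshape(1, 1, 64, 64))   # NCHW, single grayscale channel
    scores = net.forward().flatten()
    print(EMOTIONS[int(scores.argmax())])      # highest-scoring expression label
```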

DeepFakes Get Real

It’s getting harder for humans and machines to spot deepfake videos, which is especially troubling since AI systems are learning to recognize and respond to human emotions. Analysts caution that deepfakes could erode voter confidence, worsen the spread of misinformation, and undermine national security. Deepfake technology is like Photoshop for videos; it’s the same class of technology used in Rogue One: A Star Wars Story to recreate Princess Leia. Anyone can use it to create a video of someone doing or saying something they never did or said. Deepfakes used to be easy to detect, and humans can still spot them somewhat better than machines, which must be trained. Researchers have used color component disparities and eye-blinking patterns to train machines to sniff out fakes (a sketch of the blink cue appears below), and the results can be surprisingly accurate: researchers in Korea were able to detect GAN-created images and human-created fake images with 94% and 74.9% accuracy, respectively. We have deepfake technology; we just don’t yet have technology that reliably detects fakes, or the means to punish their creators. Skeptical of real-world stakes? Deepfake technology has already sparked a viral debate over the White House’s video of Jim Acosta.
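
The eye-blink cue lends itself to a compact illustration. Below is a minimal sketch of the widely used eye aspect ratio (EAR) heuristic: early deepfakes blinked abnormally, so tracking how often the ratio dips can flag suspicious footage. The 6-points-per-eye layout follows the common dlib 68-landmark convention, and the 0.2 threshold is an illustrative assumption, not the Korean team’s method.

```python
# Eye aspect ratio (EAR): the ratio of vertical to horizontal eye-landmark
# distances collapses toward 0 when the eye closes, so dips mark blinks.
from scipy.spatial import distance

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmark points in dlib order."""
    a = distance.euclidean(eye[1], eye[5])  # vertical distance 1
    b = distance.euclidean(eye[2], eye[4])  # vertical distance 2
    c = distance.euclidean(eye[0], eye[3])  # horizontal distance
    return (a + b) / (2.0 * c)

def blink_count(ear_series, closed_thresh=0.2):
    """Count blinks as dips below the threshold. A talking face that never
    blinks over many seconds of video is a deepfake warning sign."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < closed_thresh and not closed:
            blinks, closed = blinks + 1, True
        elif ear >= closed_thresh:
            closed = False
    return blinks
```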

Forget Your Password, Your Browser History Might Be Your Biggest Security Threat

Cybersecurity researchers have discovered 4 new methods that expose browsing histories to history sniffing attacks. History sniffing is a relatively old technique that works a lot like phishing: both seek to harvest private information through covert channels. Firefox logged a history sniffing bug report 18 years ago, and the US Federal Trade Commission has issued warnings since 2012. While the 4 new methods fit into older cybersecurity categories, they can profile users’ online activity in seconds: one attack exfiltrated user history at a rate of 3,000 URLs per second, a new record. Chrome, Firefox, Edge, Safari, and various other browsers were all vulnerable; the only browser immune to all 4 attacks was the Tor Browser, which doesn’t record history. History sniffing is often used to enhance phishing attacks. A phisher hides attack code in a normal-looking ad; a victim navigates to the page containing the attack code; and, without the victim even clicking on the ad, the code automatically ‘sniffs’ the browser history for bank websites. If it finds one, the phisher redirects the victim to a fake login page (the sketch below walks through this targeting logic). While cybercrime already costs up to $600B a year, it would be truly disastrous if, say, health insurance companies used browsing history in coverage decisions. And don’t count on ‘incognito mode’ or ad blockers to keep you safe: accumulated caches can reveal private data even in private sessions, and an ad-blocking app sold in Apple’s Mac App Store was caught sending users’ browsing histories to China.
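
To show that targeting logic (not the browser side channel itself, which varies by attack), here is a toy Python simulation. `visited_probe` stands in for whatever covert channel the attacker actually uses, and all URLs and function names are hypothetical.

```python
# Toy simulation of the sniff-then-redirect flow described above. In a real
# attack the probe runs in the victim's browser and infers visit status via
# a side channel (e.g., CSS :visited rendering or cache timing); here a set
# fakes it so the attacker's decision logic is visible.
BANK_SITES = ["https://chase.example", "https://wellsfargo.example",
              "https://hsbc.example"]

def visited_probe(victim_history):
    """Stand-in for the covert channel: a 'was this URL visited?' oracle."""
    history = set(victim_history)
    return lambda url: url in history

def pick_phishing_target(probe, candidates=BANK_SITES):
    """Sniff the candidate list; return the first bank the victim uses."""
    for url in candidates:
        if probe(url):
            return url  # attacker now redirects to a fake login for this bank
    return None

probe = visited_probe(["https://news.example", "https://wellsfargo.example"])
print(pick_phishing_target(probe))  # -> https://wellsfargo.example
```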

Cryptographic Voting Systems: The Solution to Election Hacking?

America’s voting machines are a lot like the New York City subway system: expensive, old, vulnerable to attack, difficult to fix, and prone to human error. During the midterm elections, voting machines malfunctioned in Texas, and in Wisconsin and Kentucky computer servers exposed voter data to hackers. Part of the problem is that the entire voting machine industry consists of only 3 corporations. Experts warn that technical malfunctions erode public confidence in democratic institutions, especially amid federal investigations of foreign-state election interference. Cryptographic voting systems could reverse that trend. These systems aim to restore voter confidence by assuring voters of 2 things: one, that their vote was counted correctly; and two, that all eligible votes were correctly and equally counted. Scratch & Vote is a simple, paper-based system “designed to minimize cost and complexity”. It achieves verifiability and ballot secrecy (a combination not previously possible) using paper ballots, open-source software, Chromebooks, and iPads. The idea is relatively simple: “votes are encrypted and posted on a public bulletin board, along with the voter’s name (or voter identification number) in plaintext. Everyone can see that Alice has voted, though, of course, not what she voted for. The encrypted votes are then anonymized and tallied using publicly verifiable techniques.” (The toy sketch below shows how encrypted ballots can be tallied without decrypting any individual vote.) But these cryptographic protocols don’t operate in a vacuum; they are only part of a larger system that includes voting machines, software implementations, and election procedures. Unfortunately, these last two components are especially vulnerable to human error.
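
As a toy illustration of the bulletin-board idea, here is an exponential-ElGamal sketch with deliberately tiny parameters: anyone can multiply the posted ciphertexts together, and the product decrypts to the vote total without any single ballot ever being opened. This is an assumption-laden demo of the general homomorphic-tallying technique, not Scratch & Vote’s actual protocol (which also involves scratch surfaces, mixnets, and proofs that each ballot is well-formed).

```python
# Toy additively homomorphic tally ("exponential ElGamal") with tiny
# parameters. Illustrative only: real systems use large groups, threshold
# keys, and zero-knowledge proofs.
import random

p = 467                           # small prime (demo only)
g = 2                             # generator of Z_p*
x = random.randrange(1, p - 1)    # election secret key (would be shared)
h = pow(g, x, p)                  # election public key

def encrypt(vote_bit):
    """Encrypt a 0/1 vote as (g^r, g^vote * h^r) mod p."""
    r = random.randrange(1, p - 1)
    return (pow(g, r, p), pow(g, vote_bit, p) * pow(h, r, p) % p)

def tally(ciphertexts):
    """Multiply ciphertexts componentwise: the product encrypts the SUM."""
    c1 = c2 = 1
    for a, b in ciphertexts:
        c1, c2 = c1 * a % p, c2 * b % p
    gm = c2 * pow(c1, p - 1 - x, p) % p   # decrypt to g^sum
    for m in range(len(ciphertexts) + 1): # brute-force the small exponent
        if pow(g, m, p) == gm:
            return m

votes = [1, 0, 1, 1, 0]
board = [encrypt(v) for v in votes]   # the "public bulletin board"
assert tally(board) == sum(votes)     # only keyholders can decrypt the total
```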

How Could Paperless Money Possibly Worsen Global Warming?

A new study has found that it takes 3x more energy to mine a dollar’s worth of bitcoin than a dollar’s worth of gold. Unlike paper money, which must be printed, most cryptocurrencies are mined, and the process is so energy intensive that scientists have warned Bitcoin production alone could nullify climate change efforts. Financial analysts expect the finding to further damage Bitcoin’s value, which has already fallen nearly 70% since peaking last year. The problem stems from the energy-intensive mining process: to prevent someone from making more bitcoins by simply copy-pasting, every blockchain transaction is permanently time-stamped in a continuing chain, and new bitcoins are ‘issued’ whenever a new block is added to the chain. The catch is that adding a block means solving a math problem so complex that mining a single bitcoin consumes about $26K worth of electricity, at least in South Korea; given global variations in consumer electricity pricing, it costs considerably less in Venezuela ($531). (The sketch below shows the brute-force search that burns all that energy.) Cryptocurrency may worsen global warming, but scientists have discovered a way to harvest renewable energy from the sun and outer space at the same time, in the same device. Tl;dr? Watch this video instead.
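
The energy cost comes straight from proof-of-work: miners race to find a nonce whose hash clears a difficulty target, and every failed guess is electricity spent. Here is a minimal sketch; the difficulty and block contents are made up, and Bitcoin’s real target is astronomically harder.

```python
# Minimal proof-of-work sketch: find a nonce whose SHA-256 hash starts
# with a required number of zero hex digits.
import hashlib
import time

def mine(block_data, difficulty):
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1  # every failed guess is wasted computation, i.e. energy

start = time.time()
nonce, digest = mine("block 42: Alice pays Bob 1 BTC", difficulty=5)
print(f"nonce={nonce} hash={digest[:16]}... in {time.time() - start:.1f}s")
# Each extra leading zero multiplies the expected work by 16; Bitcoin's
# network difficulty is vastly higher, hence the electricity bill.
```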
