Fighting off Mirai with Nematodes
A more recent IoT botnet attack, Linux/IRCTelnet, continues to drive the conversation about the troubling state of security on the Internet. Earlier this week, Leo Linsky, a software engineer at PacketSled, released, and later took down, code developed to fight off Mirai-based DDoS attacks. His code builds an anti-worm “nematode” that detects insecure IoT devices and changes the default Telnet credentials that made them vulnerable to Mirai. But there is a reason the code is no longer publicly available: nematodes are essentially “good” computer viruses, the cybersecurity equivalent of vigilante justice.
- Linsky’s nematodes are based on the concept of a beneficial anti-worm, detailed by Dave Aitel, that infiltrates insecure IoT products and changes their default credentials. While this shrinks the attack surface that Mirai exploits (default and weak passwords), it can also lock out device owners and legitimate administrators.
- Beyond technical glitches that might revoke administrators’ access to their own devices, publishing source code online carries its own risk. Mirai’s security flaws were exposed by Scott Tenaglia of Invincea Labs using its leaked code; by the same logic, hackers could exploit vulnerabilities in Linsky’s code to build a more powerful version of Mirai.
- Lastly, there is an ethical concern regarding “do-gooder” viruses: nematodes modify devices without their owners’ consent, violating personal privacy and undermining device owners’ ability to actively manage their own security.
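Mirai’s leaked source famously carried a hard-coded dictionary of factory-default Telnet credentials, and a nematode’s first step is essentially the same scan in reverse. A minimal sketch of that check, assuming only the widely reported subset of Mirai’s dictionary shown below (the function names are illustrative, not from Linsky’s code):

```python
# A few of the factory-default Telnet credential pairs from Mirai's
# leaked source; the real dictionary holds roughly 60 entries.
MIRAI_DEFAULTS = {
    ("root", "xc3511"),
    ("root", "vizxv"),
    ("root", "admin"),
    ("root", "888888"),
    ("root", "default"),
    ("admin", "admin"),
    ("support", "support"),
}

def is_mirai_default(username: str, password: str) -> bool:
    """Return True if this credential pair appears in Mirai's dictionary."""
    return (username, password) in MIRAI_DEFAULTS

def audit_device(username: str, password: str) -> str:
    """Classify a device's Telnet login the way a defensive scanner would."""
    if is_mirai_default(username, password):
        return "VULNERABLE: rotate credentials"
    return "ok"
```

The controversial part of a nematode is not this check but what follows it: logging in over Telnet and rewriting the credentials on hardware you do not own, which is where the vigilante-justice comparison comes from.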
Takeaway: Mirai’s attack on Dyn certainly brought attention to IoT security. Vulnerability scanners and nematodes seem like a good way to combat malware that exploits insecure devices, but nematodes are not without problems. They could end up like the Welchia worm, which caused major network disruptions in 2003 while trying to counter the effects of the Blaster worm. Until the industry solidifies its security strategy, how should we protect ourselves against malware? And are do-gooder viruses beneficial, or even effective, given their ethical and technical concerns?
Exposing the Neural Network’s Black Box
One of the major criticisms of artificial neural networks and deep learning is that no one understands how they work. Two months ago, we reported that researchers at MIT were starting to make mathematical sense of why deep neural networks perform so well. This week, Tao Lei of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) presented a method for extracting the rationales behind a neural network’s decisions. This work could expedite the adoption of deep learning in fields such as healthcare, where doctors currently reject machine-learning insights that come without justification.
- So far, the machine learning community has focused on making predictions and improving their accuracy, without much consideration of how those decisions are made.
- The CSAIL paper details an approach that jointly trains two modules, a generator and an encoder, on text inputs. The generator picks out short text fragments as rationales, which are passed to the encoder to make the prediction. Training balances two goals: keeping the extracted fragments short and coherent while preserving the accuracy of the prediction.
- For example, in evaluating beer reviews, the researchers split the reviews and ratings into attributes — aroma, palate, and appearance — that were picked up by the generator. The encoder then correlated these phrases with the correct ratings, thereby identifying what rationale was used for each rating (e.g. correlating “signature Guinness smells” with a beer rating).
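The balance described above is usually expressed as a single cost: the encoder’s prediction loss plus regularizers that keep the generator’s selection short and contiguous. A toy sketch of that combined cost, assuming a binary selection mask over tokens and a precomputed prediction loss (names and weights here are illustrative, not taken from the paper’s code):

```python
def rationale_cost(mask, prediction_loss,
                   sparsity_weight=0.1, coherence_weight=0.05):
    """Combined cost for a candidate rationale.

    mask            -- binary list, 1 where a token is selected as rationale
    prediction_loss -- encoder's loss on the selected text (precomputed)

    The sparsity term penalizes long selections; the coherence term
    penalizes scattered, non-contiguous selections by counting the
    number of on/off transitions in the mask.
    """
    sparsity = sum(mask)  # number of selected tokens
    coherence = sum(abs(mask[t] - mask[t - 1]) for t in range(1, len(mask)))
    return (prediction_loss
            + sparsity_weight * sparsity
            + coherence_weight * coherence)

# Two selections of equal length: a contiguous phrase is cheaper
# than the same number of scattered words.
contiguous = [0, 1, 1, 1, 0, 0]
scattered  = [1, 0, 1, 0, 1, 0]
```

Minimizing a cost of this shape is what pushes the generator toward picking up coherent phrases like “signature Guinness smells” rather than isolated words.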
Takeaway: The approach described in this research is rather simple, yet the implications are significant. First and foremost, it can help convince those who currently don’t trust machine-learning methods by providing rationales for the algorithm’s decisions. In a broader sense too, Tommi Jaakkola from MIT explains that you may “want to exert some influence in terms of the types of predictions that [machines] should make. How does a layperson communicate with a complex model that’s trained with algorithms that they know nothing about? They might be able to tell you about the rationale for a particular prediction. In that sense it opens up a different way of communicating with the model.”
Quote of the Week
“Maybe some might be skeptical that [self-driving trucks are] happening next week, but if you think 50 years from now there’s still going to be people calling trucks to find out where the truck is when GPS was invented 80 years ago, we know that that’s not possible. The future can’t be that way, and we know we’re marching down the right path.”
Self-driving trucks are coming sooner than you think. When Uber acquired Otto in July, we knew the days of self-driving trucks were near. Just four months later, Uber has already soft-launched Uber Freight, signaling a new revolution in the trucking business. Uber Freight eliminates the brokerage firms that connect trucking companies to their customers. Most importantly, this may be the move that accelerates the transition into the age of autonomous vehicles.
We saw Cargomatic falter in its attempt to be the “Uber for trucks,” and Convoy and TugForce are still competing to establish themselves in this market. Truck driving is the most common job in a majority of U.S. states, and so far the trucking industry has stood largely immune to globalization and automation. But if anyone can disrupt legacy systems and transform transportation, it is Uber. This puts trucking companies in an interesting bind: they can potentially cut costs by partnering with Uber Freight, but in doing so they also accelerate Uber’s push to replace the entire industry with autonomous trucks, since every load hauled supplies Uber with massive amounts of data to improve its self-driving algorithms.
- Tidy text mining — GitHub
- Step-by-step tutorial to build a modern JS stack from scratch — GitHub
- LPWAN vs. LoRaWAN — IoTAgenda
- Practical advice for big data analysis — Unofficial Google Data Science Blog
- Competitive Landscape for ML — HBR
- What’s Wrong with Big Data — New Humanist