Democratizing Machine Learning
Machine learning is all the rage right now. A data engineer at Facebook, Amazon, or Google can earn more than $500k per year, and according to ex-Googler Sebastian Thrun, a self-driving car engineer is worth $10 million. Clearly, demand far outstrips the supply of data engineers with PhDs from Stanford, MIT, or CMU. This week Steven Levy offered his view on Bonsai and other companies working to offset this talent shortage by “democratizing” machine learning.
- Bonsai’s backend trains the neural network so that any programmer can build an AI application, even without machine-learning expertise.
- Bonsai’s goal is to build a layer of abstraction that enables widespread adoption and use of machine learning. If Google’s TensorFlow is the assembly language of machine learning, then Bonsai aspires to be its Python.
- As with any layer of abstraction, Bonsai’s cloud-based intelligence engine sacrifices performance and effectiveness. Bonsai’s CEO, Mark Hammond, sees it as a necessary tradeoff to democratize the power of AI.
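The article doesn’t show Bonsai’s actual API, but the abstraction-layer idea can be illustrated with a hypothetical sketch: a low-level training loop (the “assembly” level, spelled out step by step) wrapped behind a one-call interface (the “Python” level). The names `SimpleModel` and `low_level_fit` are illustrative, not Bonsai’s real API.

```python
# Hypothetical sketch of an abstraction layer over model training.
# Low level: an explicit gradient-descent loop, every step visible.
# High level: a one-call API that hides those steps from the programmer.

def low_level_fit(xs, ys, lr=0.01, epochs=5000):
    """Low-level view: manual gradient descent for y = w*x + b."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

class SimpleModel:
    """High-level view: fit() and predict(), details hidden."""

    def fit(self, xs, ys):
        self.w, self.b = low_level_fit(xs, ys)
        return self

    def predict(self, x):
        return self.w * x + self.b

# A programmer using the high-level API never sees the training loop.
model = SimpleModel().fit([1, 2, 3, 4], [2, 4, 6, 8])  # learns roughly y = 2x
print(model.predict(5))  # close to 10
```

The tradeoff Hammond describes shows up even here: the one-call API is easier to use, but a caller who needs to tune the learning rate or inspect gradients has to break through the abstraction.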
Takeaway: The most interesting part of this article was not what Steven Levy had to say, but what one of the commenters, Bram van Es, wrote: “this is good, how? … Is this not the path to idiocy, not educating ourselves, relying on technology we choose not to understand, and be HAPPY about it?” I disagree. Programmers make use of APIs without knowing what happens behind the scenes. Likewise, compilers let programmers focus on higher-level logic instead of on translation to machine language. Widespread use of ML and AI will lead to the creation of more useful things — and there will still be people working on the algorithms themselves.
+ TechCrunch: Amazon, Google, IBM, Facebook, and Microsoft form a partnership on AI
Bots, Bots, Bots
Six months ago, Microsoft CEO Satya Nadella boldly claimed that bots would be as big as mobile apps. While there is an overabundance of APIs, platforms, and engines for bots, reality hasn’t caught up to the hype just yet. It’s still too early to tell what role bots will play in the future. But here is a summary of what we know and have learned so far:
- Ted Livingston, the CEO of Kik, remains bullish on bots despite the current limitations of their natural language processing capabilities. He believes that bots can be useful without the conversational element, which will come as NLP advances. Bots are lightweight (no new app to download), easier to discover, and consolidate many actions into a single interaction. (Medium)
- Varun Singh, co-founder of a stealth startup, also points to the need for maturing NLP, voice recognition, and messaging platforms to help bots succeed. Mr. Singh believes that the “sweet spot” for bots is in daily use cases that are transactional. Bots will augment apps, not necessarily replace them. (Medium)
- The team at Conversate, a new AI startup, lists several issues with bot APIs that derail user experience: lack of context, failure management, dialogue optimization, expert knowledge, and accuracy. Of these five, failure management is where current bots struggle most. NLP algorithms are not very accurate, and if bots mishandle failed interactions, bot usage will eventually decline. (Medium)
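The failure-management point above can be made concrete with a minimal sketch: when the NLP layer’s confidence in an intent is low, the bot should admit uncertainty and offer options rather than execute a bad guess. The function names, the stubbed classifier, and the 0.6 threshold are all illustrative assumptions, not any real bot platform’s API.

```python
# Hypothetical sketch of failure management in a chatbot:
# fall back gracefully when intent confidence is low.

CONFIDENCE_THRESHOLD = 0.6  # illustrative cutoff, not a standard value

def classify_intent(message):
    """Stand-in for a real NLP intent classifier.

    Returns an (intent, confidence) pair; unknown inputs get low confidence.
    """
    known = {
        "order pizza": ("order_food", 0.9),
        "track my order": ("track_order", 0.8),
    }
    return known.get(message.lower(), ("unknown", 0.2))

def handle(message):
    intent, confidence = classify_intent(message)
    if confidence < CONFIDENCE_THRESHOLD:
        # Failure management: admit uncertainty and steer the user,
        # instead of acting on a low-confidence guess.
        return "Sorry, I didn't catch that. I can help you order food or track an order."
    return f"Handling intent: {intent}"

print(handle("order pizza"))  # handled normally
print(handle("ughhh wat"))    # triggers the fallback
```

The design choice here is the one Conversate’s list implies: a bot that recovers from misunderstandings by narrowing the conversation keeps users longer than one that silently does the wrong thing.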
Takeaway: As Ted Livingston points out, chat bots can learn from the success of WeChat. It’s the “low-friction access to apps, common interface, and messaging as the front door to digital experience” that can help bots find their place amongst users.
+ Summary of Chatbots from Leade.rs team (Medium)
Quote of the Week
The main assumption we need to reexamine is that the web browser is the de facto standard user-facing client of the web. It’s simply not the case anymore. The browser is now a platform through which applications are downloaded, compiled and consumed.
– Keith Horwood
Keith Horwood makes the case for reframing the development target as a JS app, not the browser. This is perhaps best illustrated by the diagrams in his post comparing the old model and the suggested new model.
This decoupled model lets developers keep the limitations and concerns of each component separate. The problem is that our current dev tools are not well equipped to deal with these multiple application environments. If you are interested in building new tools to alleviate this issue, you can read Keith Horwood’s post here.
- Follow-up to last week’s post on security with regard to IoT — Industrial Internet Consortium
- Best thing on the internet right now: combat spammers with a spamming bot of your own — mLooper
- How do Convolutional Neural Networks (so often talked about in the image processing field) actually work? — Broher
- Google’s newest neural network for machine translation — Google Blog
- Keep up to date with the election using DataBot — ProPublica
- Great summary on the evolution of intelligent products — Medium