Thursday, September 21, 2017

Defining "Decentralization"

Classmate Alan Yan shares the following article, "The Meaning of Decentralization," which was just posted on Medium. It's extensive but contains a lot of helpful sections; for example, here's its breakdown of the three axes along which software systems can be (de)centralized, which appears a few paragraphs in.
  • Architectural (de)centralization — how many physical computers is a system made up of? How many of those computers can it tolerate breaking down at any single time?
  • Political (de)centralization — how many individuals or organizations ultimately control the computers that the system is made up of?
  • Logical (de)centralization — does the interface and data structures that the system presents and maintains look more like a single monolithic object, or an amorphous swarm? One simple heuristic is: if you cut the system in half, including both providers and users, will both halves continue to fully operate as independent units?

How to Give Debit Cards To People Without Papers

Classmate Sumit Aggarwal has just sent this interesting look at Finland and how it's using blockchain technology to give refugees a modicum of financial independence. The article, "Finland has a novel way of giving refugees money," is published by the World Economic Forum.

This excerpt gives you the thesis (but read the full piece to get the entire picture):
The cards mean that the refugees can also receive money – including salaries when they get jobs – and pay bills, without the need to open a bank account. 
The blockchain technology used by MONI doesn’t require a financial intermediary, such as a bank, to process transactions. Instead, transactions are instantaneous between users, and a unique digital record is kept of each one. It’s a cheaper payment system that is highly transparent.
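The article doesn't detail MONI's actual system, but the core idea it describes -- a unique, verifiable record of each transaction without a bank in the middle -- can be sketched as a toy hash-chained ledger in Python. All names and amounts below are invented for illustration:

```python
import hashlib
import json

def record_transaction(ledger, sender, recipient, amount):
    """Append a transaction whose hash chains it to the previous record."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {"sender": sender, "recipient": recipient,
             "amount": amount, "prev_hash": prev_hash}
    # The entry's hash covers its contents plus the previous record's hash,
    # so tampering with any earlier record invalidates the whole chain.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    ledger.append(entry)
    return entry

ledger = []
record_transaction(ledger, "employer", "cardholder", 150.0)
record_transaction(ledger, "cardholder", "utility", 45.0)
# Each record is unique and independently verifiable -- no bank required.
```

This is the "unique digital record" property in miniature: anyone holding the ledger can recompute the hashes and detect tampering, which is what makes the system transparent without a financial intermediary.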

Bitcoin, Eight Years On

If you're still a little confused following our short lecture on blockchain, classmate Sean Lemke has sent along a helpful, brief video from the WSJ -- "Bitcoin After Eight Years: More Virtual Than Real?" Sean provides the following additional explanation:
... The interview discusses the SEC’s blocking of Bitcoin being used as the basis of a new ETF offering because of the lack of transparency associated with transactions... The interview also raises some good questions regarding the cryptocurrency’s viability as an investment given its volatility.

Keeping up with Blockchain and Bitcoin

Classmate Jerry Woytash has sent over this recommended source for keeping up with all things Bitcoin and blockchain. Matt Levine, columnist for Bloomberg, publishes a running commentary on news related to both -- as well as a host of other issues -- and is worth bookmarking. You can start with his article from last week "Bitcoin Doubts and Buffett Criticisms" and see his aggregated work here.

Artificial intelligence pioneer says we need to start over

To supplement Professor Brynjolfsson's previous post, classmate Ahmed Bilal has just sent this quick look at Geoff Hinton, pioneer of AI development, who now expresses hesitations about his brainchild. Here's a key paragraph from the piece, "Artificial intelligence pioneer says we need to start over":
Other scientists at the conference said back-propagation still has a core role in AI's future. But Hinton said that, to push materially ahead, entirely new methods will probably have to be invented. "Max Planck said, 'Science progresses one funeral at a time.' The future depends on some graduate student who is deeply suspicious of everything I have said."

Saturday, September 16, 2017

Does AI research need to "throw it all away and start again"?

Geoffrey Hinton has made a number of breakthroughs in neural networks and also had a lot of students and advisees, like Yann LeCun, who made further advances.

Although he's been working in the field for decades, Geoff and his research have gained a lot of well-deserved attention recently as deep learning systems outperformed other approaches on a wide range of tasks, from image recognition to speech.

Much of this success is due to the power of a workhorse algorithm called backpropagation that he helped pioneer.

Many scientists with an amazing contribution like that to their credit would do everything to push it and extend its influence. But Geoff is a modest guy. He now thinks that AI researchers need to move beyond backpropagation and "start again." The key issue is that, while backpropagation works well on supervised learning problems, where we have a lot of labeled training data, it doesn't work for unsupervised learning, which seems to be how we humans learn most things about the world. When I saw him in Toronto last week, Geoff said it was time for a new wave of researchers with different approaches. He repeated (with a wry smile) the old adage "science advances one funeral at a time."
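To make the labeled-data dependence concrete, here is a deliberately tiny illustration: a single weight fit to labeled examples by gradient descent. It's a one-parameter toy, not a real neural network, but the forward-pass / backward-pass / update loop is the same pattern backpropagation uses at scale -- and notice that every step needs a label y:

```python
# Fit a single weight w so that w * x approximates labeled targets y = 2x.
def train(xs, ys, lr=0.1, epochs=50):
    w = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            pred = w * x               # forward pass: make a prediction
            grad = 2 * (pred - y) * x  # backward pass: d(loss)/dw for squared error
            w -= lr * grad             # update: step against the gradient
    return w

w = train([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
# w converges toward 2.0 -- but only because every x came with a label y.
```

Strip out the labels and the gradient has nothing to measure error against, which is the crux of Geoff's point: most human learning happens without anyone handing us the right answers.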

Fortunately, more and more smart young people are flocking into the field, developing and testing new approaches. The biggest conference in the field, NIPS, keeps getting bigger and selling out faster and faster.

I'm optimistic that we will not only find new applications for existing approaches, but also invent a lot of new ones.

Friday, September 15, 2017

The Dark Secret at the Heart of AI

Classmate Subhashree Ringharajan has submitted an article from the MIT Technology Review with this title. Key quote:

As the [AI] technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable? These questions took me on a journey to the bleeding edge of research on AI algorithms, from Google to Apple and many places in between, including a meeting with one of the great philosophers of our time. 

Meet Michelangelo

Classmate Jerry Woytash has just sent in an article about Michelangelo, Uber's machine learning platform. There's a lot in here, and the write-up has a lot of technical details, but some key background information is addressed early in the piece:

Michelangelo consists of a mix of open source systems and components built in-house. The primary open sourced components used are HDFS, Spark, Samza, Cassandra, MLlib, XGBoost, and TensorFlow. We generally prefer to use mature open source options where possible, and will fork, customize, and contribute back as needed, though we sometimes build systems ourselves when open source solutions are not ideal for our use case.
Michelangelo is built on top of Uber’s data and compute infrastructure, providing a data lake that stores all of Uber’s transactional and logged data, Kafka brokers that aggregate logged messages from all Uber’s services, a Samza streaming compute engine, managed Cassandra clusters, and Uber’s in-house service provisioning and deployment tools. 

Who Will Build the Health-Care Blockchain?

Classmate Kenza Sbai has just provided a fascinating look at the question, through a piece published today by the MIT Technology Review ("Who Will Build the Health-Care Blockchain?").

The theme of the article comes in the third paragraph:
Technologists and health-care professionals across the globe see blockchain technology as a way to streamline the sharing of medical records in a secure way, protect sensitive data from hackers, and give patients more control over their information. But before an industry-wide revolution in medical records is possible, a new technical infrastructure—a custom-built “health-care blockchain”—must be constructed.

Monday, September 11, 2017

Carnegie Mellon Artificial Intelligence Beats Top Poker Pros

[Photo caption: Tuomas Sandholm (center) and Ph.D. student Noam Brown developed Libratus.]

Classmate Weixiang Wang has just given you a good reason to fold. As Carnegie Mellon's website details, Libratus, an AI system designed at CMU, has just surpassed human competitors at the poker table. Here's a key insight into how the system was able to self-correct and improve:

“After play ended each day, a meta-algorithm analyzed what holes the pros had identified and exploited in Libratus’ strategy,” Sandholm said. “It then prioritized the holes and algorithmically patched the top three using the supercomputer each night. This is very different than how learning has been used in the past in poker. Typically researchers develop algorithms that try to exploit the opponent’s weaknesses. In contrast, here the daily improvement is about algorithmically fixing holes in our own strategy.”
Sandholm also said that Libratus’ end-game strategy, which was computed live with the Bridges computer for each hand, was a major advance.
“The end-game solver has a perfect analysis of the cards,” he said.
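The nightly loop Sandholm describes -- rank the holes the pros exploited, patch the top three -- can be sketched in Python. Everything below (the situation names, the chip-loss scoring, the idea of "recomputing" a strategy entry) is invented for illustration; it only mirrors the prioritize-and-patch process in the quote, not Libratus' actual algorithms:

```python
def nightly_patch(strategy, exploited_holes, top_k=3):
    """Prioritize the holes opponents exploited and patch the worst ones."""
    # Rank holes by how much the opponents gained from each.
    ranked = sorted(exploited_holes, key=lambda h: h["chips_lost"], reverse=True)
    for hole in ranked[:top_k]:
        # "Patching" here stands in for recomputing the strategy at that spot.
        strategy[hole["situation"]] = "recomputed"
    return strategy

strategy = {"river_overbet": "old", "flop_check": "old",
            "turn_bluff": "old", "preflop_3bet": "old"}
holes = [
    {"situation": "river_overbet", "chips_lost": 900},
    {"situation": "flop_check", "chips_lost": 300},
    {"situation": "turn_bluff", "chips_lost": 650},
    {"situation": "preflop_3bet", "chips_lost": 120},
]
nightly_patch(strategy, holes)
# The three costliest holes get patched; the smallest stays as-is until a later night.
```

The contrast Sandholm draws is visible even in this toy: the loop repairs Libratus' own weaknesses rather than modeling the opponent's.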

Has disruption from e-commerce run its course?

Classmate Collin Lee has just brought this article to our attention; it speaks to some interesting aspects of the "atoms vs. bits" debate, and the importance of bits ruling atoms in the O2O market. Key quote:

Judging by recent developments, the pure e-commerce model may not be long for the U.S. either. That Amazon and eBay are the only true online shopping sites in the U.S. with any real influence is one piece of evidence. So is the fact that small online retailers are struggling to gain a foothold, while legacy retailers such as Walmart are snapping up online merchants. And yet the most vivid illustration of O2O in action is the escalating turf-war between Amazon and Walmart.
A further short observation, relevant to acquisitions made this summer, rounds out this analysis:

“Just as Walmart is using Bonobos to get access to higher-end consumers and a more technologically savvy way of selling clothes, Amazon is using Whole Foods to get the expertise and physical presence it takes to sell fresh foods,” The New York Times reported. 

Thursday, September 7, 2017

Mark Sagar Made a Baby in His Lab. Now It Plays the Piano.

If the title confuses or shocks you, that's probably because it's supposed to. Published this morning by Bloomberg, "Mark Sagar Made a Baby in His Lab. Now It Plays the Piano." details the work of Sagar in his lab at the University of Auckland. Sagar's mission is to "humanize AI," to make it more intuitively "alive" and less machine- or robot-like. The applications are many, and more basic than you may think. A relevant section of the piece:

[Humanizing AI] has the potential to yield a more symbiotic relationship between humans and machines. While [Sagar] wasn’t the first to this idea, his approach is unique, a synthesis of his early years as a computer scientist and later ones in the world of Hollywood special effects. The face, he’s concluded, is the key to barreling through the uncanny valley and making virtual beings feel truly lifelike... Soul Machines wants to produce the first wave of likable, believable virtual assistants that work as customer service agents and breathe life into hunks of plastic such as Amazon.com Inc.’s Echo and Google Inc.’s Home.

Friday, September 1, 2017

Do we need a speedometer for artificial intelligence?

Tom Simonite asks this provocative question in Wired this week. He opens with an anecdote about Microsoft's speech transcription software -- how it is now surpassing humans given the same challenges -- before expanding into a survey of broader trends:
[The rapid improvement in speech recognition software] is the latest in a string of recent findings that some view as proof that advances in artificial intelligence are accelerating, threatening to upend the economy. Some software has proved itself better than people at recognizing objects such as cars or cats in images, and Google’s AlphaGo software has overpowered multiple Go champions—a feat that until recently was considered a decade or more away. 
So, in sum, the rate of technological progress isn't merely continuing at breakneck speed -- it seems to be accelerating.

Simonite cites a number of AI monitoring programs, including Stanford's One Hundred Year Study on Artificial Intelligence. The article notes that AI monitors are not merely keeping an eye out for how advances in technology may upend the economy. They're also looking at public awareness and perceptions of AI: how many people know these changes are occurring, and what they think of them.

(Also note: the article includes a few graphs showing Moore's Law manifest in a variety of technologies, including the improvement in Google's image recognition software.)