Failed Algorithms

Isaac Asimov’s Three Laws of Robotics were designed to prevent robots from harming humans:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

How does this work when the robot is a financial adviser? The Securities and Exchange Commission brought cases against two robo-advisers.

Wealthfront Advisers is an online robo-adviser that provides software-based portfolio management, including a tax-loss harvesting program for clients’ taxable accounts. The SEC alleged that Wealthfront falsely told clients the robot would monitor their accounts to avoid transactions that might trigger a wash sale, the sale of a security at a loss followed by a repurchase of the same security within 30 days, which disallows the tax deduction. In fact, the SEC alleged, Wealthfront conducted no such monitoring, making its representations misleading.
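
To make the allegation concrete, here is a minimal sketch of the kind of wash-sale check a tax-loss harvesting program is supposed to run. The 30-day window is the standard IRS rule, but the data layout, tickers, and function name are invented for illustration and say nothing about Wealthfront’s actual code.

    from datetime import date

    def is_wash_sale(loss_sale_date: date, ticker: str, trades: list[dict]) -> bool:
        """Flag a loss sale if the same security was bought within 30 days
        before or after the sale (the IRS wash-sale window)."""
        return any(
            t["ticker"] == ticker
            and t["side"] == "buy"
            and abs((t["date"] - loss_sale_date).days) <= 30
            for t in trades
        )

    # A repurchase ten days after the loss sale triggers the rule.
    trades = [
        {"ticker": "VTI", "side": "sell", "date": date(2024, 3, 1)},   # sold at a loss
        {"ticker": "VTI", "side": "buy",  "date": date(2024, 3, 11)},  # repurchase
    ]
    print(is_wash_sale(date(2024, 3, 1), "VTI", trades))  # True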

In a separate case, the SEC alleged that Hedgeable Inc., a robo-adviser, misleadingly compared its results to the performance of other robo-advisers. According to the SEC, Hedgeable calculated its own returns from only a small subset of client accounts, and it miscalculated its competitors’ returns using approximations drawn from the competitors’ websites.
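
The numbers below are invented, but they illustrate one way a small subset can mislead: the figure computed from a favorable slice of accounts can be several times the return of the full client base.

    # Invented numbers: averaging a favorable subset of accounts
    # inflates the reported return relative to the full client base.
    all_returns = [0.12, -0.03, 0.05, -0.08, 0.15, 0.01, -0.02, 0.09]
    subset = sorted(all_returns, reverse=True)[:3]   # best three accounts only

    print(sum(all_returns) / len(all_returns))  # ~0.036 -> full-base average
    print(sum(subset) / len(subset))            # ~0.12  -> reported "performance"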

While the headlines sound groundbreaking because they involve robo-advisers, both actions came down to human misconduct, not malfunctioning algorithms. The algorithms themselves were fairly basic.

Samathur Li Kin-kan is suing a robo-adviser for not being as sophisticated as promised. Tyndaris Investments’ K1 supercomputer was supposed to comb through online sources like real-time news and social media to gauge investor sentiment and make predictions on U.S. stock futures. It would then send instructions to a broker to execute trades, adjusting its strategy over time based on what it had learned.
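
Tyndaris has not published K1’s internals, so the sketch below is only a generic illustration of the sentiment-to-signal loop described above; the scoring scale, threshold, and function name are all assumptions.

    # Generic sentiment-to-signal loop; nothing here reflects K1's actual design.
    def signal_from_sentiment(scores: list[float], threshold: float = 0.2) -> str:
        """Map an average sentiment score from recent news and social posts
        (-1 = bearish, +1 = bullish) to a futures trade signal."""
        avg = sum(scores) / len(scores)
        if avg > threshold:
            return "buy"    # bullish sentiment -> go long
        if avg < -threshold:
            return "sell"   # bearish sentiment -> go short
        return "hold"

    print(signal_from_sentiment([0.6, 0.3, 0.1]))    # buy
    print(signal_from_sentiment([-0.5, -0.4, 0.0]))  # sell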

Li is suing Tyndaris for about $23 million, claiming it exaggerated what the supercomputer could do. K1 managed to lose $20 million in a single day. The loss was traced to a stop-loss order; Li’s lawyers argue the order would never have been triggered had K1 been as sophisticated as Tyndaris led him to believe.
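
For readers unfamiliar with the mechanism: a stop-loss order closes a position once the price falls past a preset threshold. The sketch below shows a plain percentage stop; the 5% level is an arbitrary assumption, and K1’s actual logic is unknown.

    def stop_loss_triggered(entry_price: float, current_price: float,
                            stop_pct: float = 0.05) -> bool:
        """Return True once the position has fallen past the stop threshold
        and should be closed, locking in the loss."""
        return current_price <= entry_price * (1 - stop_pct)

    # With a 5% stop, a position entered at 100 is closed at or below 95.
    print(stop_loss_triggered(100.0, 94.0))  # True  -> exit, loss realized
    print(stop_loss_triggered(100.0, 97.0))  # False -> hold

On a sharp intraday swing, such an order can fire and lock in a large loss even if prices later recover, which is the kind of outcome Li’s lawyers say a more sophisticated system would have avoided.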

For now, it’s the humans being blamed for the robots’ shortcomings.