Thursday, March 16, 2017

From Dodgy Data to Dodgy Policy - Mrs May's Immigration Targets

The TotalData™ value chain is about the flow from raw data to business decisions (including evidence-based policy decisions).

In this post, I want to talk about an interesting example of a flawed data-driven policy. The UK Prime Minister, Theresa May, is determined to reduce the number of international students coming to the UK. This conflicts with the advice she is getting from nearly everyone, including her own ministers.

As @Skapinker explains in the Financial Times, there are a number of mis-steps in this case.
  • Distorted data collection. Mrs May's policy is supported by raw data indicating the number of students who return to their country of origin. These are estimated measurements, based on daytime and evening surveys taken at UK airports, so students travelling on late-night flights to such countries as China, Nigeria, Hong Kong, Saudi Arabia and Singapore are systematically excluded from the data. (The sketch below shows how this kind of exclusion skews an estimate.)
  • Disputed data definition. Most British people do not regard international students as immigrants. But as she stubbornly repeated to a parliamentary committee in December 2016, Mrs May insists on using an international definition of migration, which counts any student who stays for more than 12 months.
  • Conflating measurement with target. Mrs May told the committee that "the target figures are calculated from the overall migration figures, and students are in the overall migration figures because it is an international definition of migration". But as Yvette Cooper pointed out "The figures are different from the target. ... You choose what to target."
  • Refusal to correct baseline. Sometimes the easiest way to achieve a goal is to move the goalposts. Some people are quick to use this tactic, while others instinctively resist change. Mrs May is in the latter camp, and appears to regard any adjustment of the baseline as backsliding and morally suspect.
If you work with enterprise data, you may recognize these anti-patterns.
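Here's a minimal sketch of the first anti-pattern, with entirely invented numbers: if one segment of travellers is systematically excluded from the survey, the estimate is biased no matter how large the sample.

```python
import random

random.seed(42)

# Entirely hypothetical numbers: 100,000 departing students.
# 30% travel on late-night flights, and these travellers are assumed to be
# slightly more likely to be leaving for good (95%) than the rest (85%).
population = []
for _ in range(100_000):
    late_night = random.random() < 0.30
    leaving = random.random() < (0.95 if late_night else 0.85)
    population.append({"late_night": late_night, "leaving": leaving})

true_rate = sum(p["leaving"] for p in population) / len(population)

# The survey only covers daytime and evening departures,
# so late-night travellers are systematically excluded.
surveyed = [p for p in population if not p["late_night"]]
survey_rate = sum(p["leaving"] for p in surveyed) / len(surveyed)

print(f"true departure rate:   {true_rate:.1%}")    # roughly 88%
print(f"survey-based estimate: {survey_rate:.1%}")  # roughly 85%
```

The bias is systematic, not random: a bigger sample of the same daytime flights simply gives a more precise version of the same wrong answer.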




David Runciman, Do your homework (London Review of Books Vol. 39 No. 6, 16 March 2017)

Michael Skapinker, Theresa May’s clampdown on international students is a mystery (Financial Times, 15 March 2017)

International students and the net migration target: Should students be taken out? (Migration Observatory, 25 Jun 2015)

Oral evidence: The Prime Minister (House of Commons HC 833, 20 December 2016) 


TotalData™ is a trademark of Reply Ltd. All rights reserved

Thursday, March 09, 2017

Inspector Sands to Platform Nine and Three Quarters

Last week was not a good one for the platform business. Uber continues to receive bad publicity on multiple fronts, as noted in my post on Uber's Defeat Device and Denial of Service (March 2017). And on Tuesday, a fat-fingered system admin at AWS managed to take out a significant chunk of the largest platform on the planet, seriously degrading online retail services hosted in the Northern Virginia (US-EAST-1) region. According to one estimate, performance at over half of the top internet retailers was hit by 20 percent or more, and some websites were completely down.

What have we learned from this? Yahoo Finance tells us not to worry.
"The good news: Amazon has addressed the issue, and is working to ensure nothing similar happens again. ... Let’s just hope ... that Amazon doesn’t experience any further issues in the near future."

Other commentators are not so optimistic. For Computer Weekly, this incident
"highlights the risk of running critical systems in the public cloud. Even the most sophisticated cloud IT infrastructure is not infallible."

So perhaps one lesson is not to trust platforms. Or at least not to practice wilful blindness when your chosen platform or cloud provider represents a single point of failure.

One of the myths of cloud, according to Aidan Finn,
"is that you get disaster recovery by default from your cloud vendor (such as Microsoft and Amazon). Everything in the cloud is a utility, and every utility has a price. If you want it, you need to pay for it and deploy it, and this includes a scenario in which a data center burns down and you need to recover. If you didn’t design in and deploy a disaster recovery solution, you’re as cooked as the servers in the smoky data center."

Interestingly, Amazon itself was relatively unaffected by Tuesday's problem. This may have been because they split their deployment across multiple geographical zones. However, as Brian Guy points out, there are significant costs involved in multi-region deployment, as well as data protection issues. He also notes that this question is not (yet) addressed by Amazon's architectural guidelines for AWS users, known as the Well-Architected Framework.
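To give a feel for what multi-region resilience means at the application level, here is a minimal sketch of a client that falls back to a secondary region when the primary is unavailable. The endpoint names are hypothetical and this is not a real AWS API; the point is simply that failover has to be designed, deployed and paid for.

```python
import urllib.request
import urllib.error

# Hypothetical regional endpoints for the same service. In a real
# deployment each region needs its own replicated stack and data store.
ENDPOINTS = [
    "https://api.us-east-1.example.com/status",
    "https://api.eu-west-1.example.com/status",
]

def fetch_with_failover(endpoints, timeout=2):
    """Try each regional endpoint in turn, returning the first response."""
    last_error = None
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return response.read()
        except (urllib.error.URLError, OSError) as error:
            last_error = error   # this region unavailable, try the next one
    raise RuntimeError("all regions unavailable") from last_error

if __name__ == "__main__":
    print(fetch_with_failover(ENDPOINTS))
```

Even this toy version hints at the costs Guy describes: duplicated infrastructure, cross-region data transfer, and failover logic that itself needs to be tested.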

Amazon recently added another pillar to the Well-Architected Framework, namely operational excellence. This includes such practices as performing operations with code: in other words, automating operations as much as possible. Did someone say Fat Finger?
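The connection with fat fingers is not accidental. Performing operations with code means that the commands which remove capacity are themselves programs, which can be reviewed, dry-run and bounded. A minimal sketch of such a guard rail (hypothetical, and certainly not the actual AWS tooling):

```python
def decommission(servers, requested, dry_run=True, max_fraction=0.1):
    """Remove servers from a pool, refusing obviously dangerous requests.

    A fat-fingered '100' instead of '10' is caught by the max_fraction
    check rather than by an operator noticing too late.
    """
    if requested > len(servers) * max_fraction:
        raise ValueError(
            f"refusing to remove {requested} of {len(servers)} servers; "
            f"limit is {max_fraction:.0%} per operation")
    victims = servers[:requested]
    if dry_run:
        print(f"DRY RUN: would remove {victims}")
        return servers
    return servers[requested:]

pool = [f"server-{i:03d}" for i in range(100)]
decommission(pool, 5)                     # fine: prints the dry-run plan
# decommission(pool, 50, dry_run=False)   # raises: exceeds the 10% limit
```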




Abel Avram, The AWS Well-Architected Framework Adds Operational Excellence (InfoQ, 25 Nov 2016)

Julie Bort, The massive AWS outage hurt 54 of the top 100 internet retailers — but not Amazon (Business Insider, 1 March 2017)

Aidan Finn, How to Avoid an AWS-Style Outage in Azure (Petri, 6 March 2017)

Brian Guy, Analysis: Rethinking cloud architecture after the outage of Amazon Web Services (GeekWire, 5 March 2017)

Daniel Howley, Why you should still trust Amazon Web Services even though it took down the internet (Yahoo Finance, 6 March 2017)

Chris Mellor, Tuesday's AWS S3-izure exposes Amazon-sized internet bottleneck (The Register, 1 March 2017)

Shaun Nichols, Amazon S3-izure cause: Half the web vanished because an AWS bod fat-fingered a command (The Register, 2 March 2017)

Cliff Saran, AWS outage shows vulnerability of cloud disaster recovery (Computer Weekly, 6 March 2017)

Sunday, March 05, 2017

Uber's Defeat Device and Denial of Service

Perhaps you already know about Distributed Denial of Service (DDOS). In this post, I'm going to talk about something quite different, which we might call Centralized Denial of Service.

This week we learned that Uber had developed a defeat device called Greyball - a fake version of the Uber app whose purpose was to frustrate investigations by regulators and law enforcement, deployed especially in cities where regulators were suspicious of the Uber model.

In 2014, Erich England, a code enforcement inspector in Portland, Oregon, tried to hail an Uber car downtown in a sting operation against the company. However, Uber recognized that Mr England was a regulator, and cancelled his booking. 

It turns out that Uber had developed algorithms to be suspicious of such people. According to the New York Times, grounds for suspicion included trips to and from law enforcement offices, or credit cards associated with selected public agencies. (Presumably there were a number of false positives generated by excessive suspicion or Überverdacht.)
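To make the false-positive point concrete, here is a purely illustrative sketch of how rule-based flagging along the lines reported by the New York Times might work. The signals come from the reporting; the code, thresholds and values are invented, not Uber's.

```python
# Illustrative only: reported signals, invented thresholds and data.
ENFORCEMENT_LOCATIONS = {("45.5152", "-122.6784")}   # e.g. near a city permit office
AGENCY_CARD_PREFIXES = {"448537", "448538"}          # hypothetical card BIN ranges

def suspicion_score(rider):
    score = 0
    if rider.get("frequent_pickup") in ENFORCEMENT_LOCATIONS:
        score += 2      # trips to and from law enforcement offices
    if rider.get("card_prefix") in AGENCY_CARD_PREFIXES:
        score += 2      # credit card associated with a public agency
    return score

def greyballed(rider, threshold=2):
    """Riders over the threshold see ghost cars and cancelled bookings."""
    return suspicion_score(rider) >= threshold

# The obvious problem: an ordinary city employee whose commute starts near
# the permit office trips the same rule - a false positive.
print(greyballed({"frequent_pickup": ("45.5152", "-122.6784")}))   # True
```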

But as Adrienne Lafrance points out, if a digital service provider can deny service to regulators (or people it suspects to be regulators), it can also deny service on other grounds. She talks to Ethan Zuckerman, the director of the Center for Civic Media at MIT, who observes that
"Greyballing police may primarily raise the concern that Uber is obstructing justice, but Greyballing for other reasons—a bias against Muslims, for instance—would be illegal and discriminatory, and it would be very difficult to make the case it was going on."
One might also imagine Uber trying to discriminate against people with extreme political opinions, and defending this in terms of the safety of their drivers. Or discriminating against people with special needs, such as wheelchair users.

Typically, people who are subject to discrimination have less choice of service providers, and a degraded service overall. But if there is a de facto monopoly, which is of course where Uber wishes to end up in as many cities as possible, then its denial of service is centralized and more extreme. Once you have been banned by Uber, and once Uber has driven all the other forms of public transport out of existence, you have no choice but to walk.




Mike Isaac, How Uber Deceives the Authorities Worldwide (New York Times, 3 March 2017)

Adrienne LaFrance, Uber’s Secret Program Raises Questions About Discrimination (The Atlantic, 3 March 2017)

Saturday, February 04, 2017

Personalized emails (not)

Here's a sample from my email inbox, which arrived yesterday.

Dear Richard
I know how important your organization's big data strategy is. That's why I want to personally invite you to attend our webinar. 

How does he know? Is he basing his knowledge on big data or extremely small data? I'm curious to know which.

And what is his idea of a personal invitation? Does he think that personalization is achieved by having his email software insert my first name into the first line? Gosh, how very customer-centric!

But at least the email arrived at a civilized time. Unlike the one that arrived as I was getting into bed the other night, from an eCRM system whose idea of personalization didn't extend to checking what time zone I was in. I guess one must be grateful for these small mercies.

Sunday, January 01, 2017

The Unexpected Happens

When Complex Event Processing (CEP) emerged around ten years ago, one of the early applications was real-time risk management. In the financial sector, there was growing recognition of the need for real-time visibility - continuous calibration of positions - in order to keep pace with the growing importance of algorithmic trading. This is now relatively well established in the banking and trading sectors; Chemitiganti argues that the insurance industry now faces similar requirements.
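For readers who haven't met CEP, the core idea can be sketched in a few lines: consume a stream of trade events and recalculate exposure continuously, raising an alert the moment a limit is breached, rather than waiting for an end-of-day batch. This is a toy illustration, not a real CEP engine such as Apama.

```python
from collections import defaultdict

def monitor(trade_stream, limit=1_000_000):
    """Continuously recalculate net exposure per instrument and
    emit an alert event the moment a risk limit is breached."""
    positions = defaultdict(float)
    for trade in trade_stream:                    # events arrive one at a time
        positions[trade["symbol"]] += trade["qty"] * trade["price"]
        exposure = positions[trade["symbol"]]
        if abs(exposure) > limit:
            yield {"alert": "limit breached",
                   "symbol": trade["symbol"],
                   "exposure": exposure}

trades = [
    {"symbol": "XYZ", "qty": 6_000, "price": 100.0},
    {"symbol": "XYZ", "qty": 5_000, "price": 100.0},   # pushes exposure past the limit
]
for alert in monitor(iter(trades)):
    print(alert)
```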

In 2008, Chris Martins, then Marketing Director for CEP firm Apama, suggested considering CEP as a prospective "dog whisperer" that can help manage the risk of the technology "dog" biting its master.

But "dog bites master" works in both directions. In the case of Eliot Spitzer, the dog that bit its master was the anti money-laundering software that he had used against others.

And in the case of algorithmic trading, it seems we can no longer be sure who is master - whether black swan events are the inevitable and emergent result of excessive complexity, or whether hostile agents are engaged in a black swan breeding programme.  One of the first CEP insiders to raise this concern was John Bates, first as CTO at Apama and subsequently with Software AG. (He now works for a subsidiary of SAP.)

[Quotation from Dark Pools by Scott Patterson]

And in 2015, Bates wrote that "high-speed trading algorithms are an alluring target for cyber thieves".

So if technology is capable of both generating unexpected events and amplifying hostile attacks, are we being naive to imagine we can use the same technology to protect ourselves?

Perhaps, but I believe there are some productive lines of development, as I've discussed previously on this blog and elsewhere.


1. Organizational intelligence - not relying on either human intelligence or artificial intelligence alone, but looking to establish sociotechnical systems that allow people and algorithms to collaborate effectively.

2. Algorithmic biodiversity - maintaining multiple algorithms, developed by different teams using different datasets, in order to detect additional weak signals and generate "second opinions".





John Bates, Algorithmic Terrorism (Apama, 4 August 2010). To Catch an Algo Thief (Huffington Post, 26 Feb 2015)

John Borland, The Technology That Toppled Eliot Spitzer (MIT Technology Review, 19 March 2008) via Adam Shostack, Algorithms for the War on the Unexpected (19 March 2008)

Vamsi Chemitiganti, Why the Insurance Industry Needs to Learn from Banking’s Risk Management Nightmares.. (10 September 2016)

Theo Hildyard, Pillar #6 of Market Surveillance 2.0: Known and unknown threats (Trading Mesh, 2 April 2015)

Neil Johnson et al, Financial black swans driven by ultrafast machine ecology (arXiv:1202.1448 [physics.soc-ph], 7 Feb 2012)

Chris Martins, CEP and Real-Time Risk – “The Dog Whisperer” (Apama, 21 March 2008)

Scott Patterson, Dark Pools - The Rise of A. I. Trading Machines and the Looming Threat to Wall Street (Random House, 2013). See review by David Leinweber, Are Algorithmic Monsters Threatening The Global Financial System? (Forbes, 11 July 2012)

Richard Veryard, Building Organizational Intelligence (LeanPub, 2012)

Related Posts

The Shelf-Life of Algorithms (October 2016)

Thursday, December 29, 2016

Uber Mathematics 3

Where are Uber's real competitors? The obvious answer would be the traditional taxi operators in large cities. Taxi services are usually controlled by city authorities or other regulators, to ensure that the prices are fair, and that the drivers and the vehicles are safe. Taxi drivers in various cities have protested against Uber, arguing that it cheats regulation by using unlicensed drivers to undercut prices. However, regulators (such as the UK CMA) have sometimes decided that consumer interests are best promoted by allowing Uber to compete with established providers.

Uber is therefore selling itself three ways - not only to passengers and drivers but also to regulators. In a sense, this makes it a three-sided platform.

However, as discussed in my earlier posts, some commentators are dubious that Uber can ever be profitable in this competitive space, even with substantial deregulation in its favour. What Uber really wants (they argue) is to persuade city authorities to stop investing in public transport, to stop subsidizing buses and subsidize Uber transport instead. If other competing modes of transport are decommissioned, the Uber business model starts to look quite different - just another privatized yet publicly subsidized monopoly, supposedly independent but effectively underwritten by the government.



All you need to know about Uber (BBC News, 9 July 2015). Uber says TfL cab proposals 'against public interest' (BBC News, 2 October 2015)

Does Uber have an ally in the CMA? (Maclay Murray & Spens, 12 October 2016)

Anne-Sylvaine Chassany, Uber: a route out of the French banlieues (FT, 3 March 2016)

Dave Lee, Is Uber getting too vital to fail? (BBC News, 10 December 2016)


Related Posts
Uber Mathematics (Nov 2016) Uber Mathematics 2 (Dec 2016)

Saturday, December 03, 2016

Uber Mathematics 2

Aside from the discussion of Uber as a two-sided platform, addressed in my post on Uber Mathematics (Nov 2016), there is also the question of Uber's overall growth strategy and profitability. @izakaminska has been writing a series of critical articles on FT Alphaville.

There are a few different issues that need to be teased apart here. Firstly, there is the fact that Uber is continually launching its service in more cities and countries. Nobody should expect the service in a new city to be instantly profitable. The total figures that Kaminska has obtained raise further questions - whether some cities are more profitable for Uber than others, and whether there is a repeating pattern of investment returns as a city service moves from loss-making into profit. Like many companies in a rapid growth phase, Uber has managed to convince its investors that they are funding growth into something that has good prospects of becoming profitable.

Profitability in Silicon Valley seems to be predicated on monopoly, as argued by Peter Thiel, leveraging network effects to establish barriers to entry. This is related to the concept of a retail destination - establishing the illusion that there is only one place to go. Kaminska quotes an opinion by Piccioni and Kantorovich, to the effect that it wouldn't take much to set up a rival to Uber, but this opinion needs to be weighed against the fact that Uber has already seen off a number of competitors, including Sidecar. Sidecar was funded by Richard Branson, who asserted that he was not putting his money into a "winner-takes-all market". It now looks as if he was mistaken, as Om Malik (writing in the New Yorker) respectfully points out.

But is Uber economically sustainable even as a monopoly? Kaminska has raised a number of questions about the underlying business model, including the increasing need for capital investment, which could erode margins further. Meanwhile, Uber will almost certainly leverage its cheapness and popularity with passengers to push for further deregulation. So the survival of this model may depend not only on a continual supply of innocent investors and innocent drivers, but also on innocent politicians who fall for the deregulation agenda.



Philip Boxer, Managing over the Whole Governance Cycle (April 2006)

Izabella Kaminska, Why Uber’s capital costs will creep ever higher (FT Alphaville, 3 June 2016). Myth-busting Uber's valuation (FT Alphaville, 1 December 2016). The taxi unicorn’s new clothes (FT Alphaville, 13 September 2016) FREE - REGISTRATION REQUIRED

Om Malik, In Silicon Valley Now, It’s Almost Always Winner Takes All (New Yorker, 30 December 2015)

Brian Piccioni and Paul Kantorovich, On Unicorns, Disruption, And Cheap Rides (BCA, 30 August 2016) BCA CLIENTS ONLY

Peter Sims, Why Peter Thiel is Dead Wrong About Monopolies (Medium, 16 September 2014)

Peter Thiel, Competition Is for Losers (Wall Street Journal, 12 September 2014)



Related Posts
Uber Mathematics (Nov 2016) Uber Mathematics 3 (Dec 2016)

Thursday, November 10, 2016

Steering The Enterprise of Brexit

Two contrasting approaches to Brexit from architectural thought leaders.

Dan Onions offers an eleven-step decision plan based on his DASH method, showing the interrelated decisions to be taken on Brexit as a DASH output map.

A decision plan for Brexit (Dan Onions)


A stakeholder map for Brexit (Dan Onions)


Let me now contrast Dan's approach with Simon Wardley's. Simon had been making a general point about strategy and execution on Twitter.
Knowing Simon's views on Brexit, I asked whether he would apply the same principle to the UK Government's project to exit the European Union.







Simon's diagram revolves around purpose. OODA is a single loop, and the purpose is typically unproblematic. This reflects the UK government's perspective on Brexit, in which the purpose is assumed to be simply realising the Will of the People. The Prime Minister regards all interpretation, choice, decision and direction as falling under her control as leader. And according to the Prime Minister's doctrine, attempts by other stakeholders (such as Parliament or the Judiciary) to exert any governance over the process are tantamount to frustrating the Will of the People.

Dan's notion, by contrast, is explicitly pluralist - trying to negotiate something acceptable to a broad range of stakeholders with different concerns. He characterizes the challenge as complex and nebulous. Even this characterization would be regarded as subversive by orthodox Brexiteers. It is depressing to compare Dan's careful planning with the Government's insouciance.

Elsewhere, Simon has acknowledged that "acting upon your strategic choices (the why of movement) can also ultimately change your goal (the why of purpose)". Many years ago, I wrote something on what I called Third-Order Requirements Engineering, which suggested that changing the requirements goal led to a change in identity - if your beliefs and desires have changed, then in a sense you also have changed. This is a subtlety that is lost on most conventional stakeholder management approaches. It will be fascinating to see how the Brexit constituency (or for that matter the Trump constituency) evolves over time, especially as they discover the truth of George Bernard Shaw's remark.
"There are two tragedies in life. One is to lose your heart's desire. The other is to gain it."


Dan Onions, An 11 step Decision Plan for Brexit (6 November 2016)

Richard Veryard, Third Order Requirements Engineering (SlideShare)

Based on R.A. Veryard and J.E. Dobson, 'Third Order Requirements Engineering: Vision and Identity', in Proceedings of REFSQ 95, Second International Workshop on Requirements Engineering, (Jyvaskyla, Finland: June 12-13, 1995)

Simon Wardley, On Being Lost (August 2016)

Related Posts: VPEC-T and Pluralism (June 2010)

Tuesday, November 01, 2016

Uber Mathematics

UK Court News. Uber has lost a test case in the UK courts, in which it argued that its drivers were self-employed and therefore not entitled to the minimum wage or any benefits. Why is this ruling not quite as straightforward as it seems? To answer this question, we have to look at the mathematics of two-sided or multi-sided platforms.

Platforms exist in two states - growth and steady-state. A mature steady-state platform maintains a stable and sustainable balance between supply and demand. But to create a platform, you have to build both supply and demand at the same time. Innovative platforms such as Uber are oriented towards expansion and growth - recruiting new passengers and new drivers, and launching in new cities.

New Passengers "Every week in London, 30,000 people download Uber to their phones and order a car for the first time. The technology company, which is worth $60bn, calls this moment “conversion”. It sets great store on the first time you use its service ... With Uber, the feeling should be of plenty, and of assurance: there will always be a driver when you need one." (Knight)
New Drivers "They make it sound so simple: Sign up to drive with Uber and soon you’ll be earning an excellent supplementary income! That’s the central message in Uber’s ongoing multi-platform marketing campaign to recruit new drivers." (McDermott)
New Cities "Uber has deployed its ride-hailing platform in 400 cities around the world since its launch in San Francisco on 31 May 2010, which means that it enters a new market every five days and eight hours. ... To take over a city, Uber flies in a small team, known as “launchers” and hires its first local employee, whose job it is to find drivers and recruit riders." (Knight)

But here's the problem. In order to encourage passengers to rely on the service, Uber needs a surfeit of drivers. If passengers want instant availability of drivers (plenty, assurance, there will always be a driver when you need one), then Uber has to maintain a pool of under-utilized drivers. (Knowles)

Simple mathematics tells us that if Uber takes on far more drivers than it really needs, some of them won't earn very much. Furthermore, people with little experience of this kind of work may underestimate the true costs involved, and may have an unrealistic idea of the amounts they can earn: Uber has no obvious incentive to disillusion them. (This is an example of Asymmetric Information.) Even if the average earnings of Uber drivers are well above the minimum wage, as Uber claims, it is not the average that matters here but the distribution.
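A toy calculation makes the point. All the numbers below are invented (apart from £7.20, roughly the UK hourly minimum then in force for over-25s), but they show how an average comfortably above the minimum wage can coexist with a large share of drivers earning below it.

```python
import random
import statistics

random.seed(1)
MINIMUM_WAGE = 7.20   # roughly the UK hourly minimum for over-25s at the time

# Hypothetical pool of 1,000 drivers. Utilisation (the share of logged-in
# hours spent with a paying passenger) varies widely, because the platform
# keeps a surplus of drivers available to guarantee instant pickups.
utilisation = [random.betavariate(2, 2) for _ in range(1_000)]   # between 0 and 1

FARE_PER_UTILISED_HOUR = 25.0   # gross fare income per hour actually driving
COST_PER_LOGGED_HOUR = 3.0      # fuel, insurance, vehicle wear per hour online

hourly_net = [u * FARE_PER_UTILISED_HOUR - COST_PER_LOGGED_HOUR
              for u in utilisation]

average = statistics.mean(hourly_net)
share_below = sum(h < MINIMUM_WAGE for h in hourly_net) / len(hourly_net)

print(f"average net hourly earnings: £{average:.2f}")              # around £9.50
print(f"share of drivers below minimum wage: {share_below:.0%}")   # around a third
```

The headline average can be perfectly true while a third of the pool earns less than the legal floor; the shape of the distribution, driven by under-utilisation, is what gets lost in the headline figure.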

The myth is that these are drivers who can choose whether to provide a service or not, so they are free agents. Libertarians wax lyrical about the "gig economy" and the benefits to passengers. However, the UK courts have judged that Uber drivers work under a series of constraints, and are therefore to be classified as "workers" for the purposes of various regulations, including minimum wage and other benefits.

Uber has announced its intention to appeal the UK judgement. But if the judgement stands, what are the implications for Uber? Firstly, Uber's overall costs are likely to increase, and Uber will undoubtedly find a way either to pass these costs onto the passengers or to pass them back to the drivers in some other form. But more interestingly, Uber now has a financial incentive to balance supply and demand more fairly, and to avoid taking on too many drivers.

Uber sometimes argues it is merely a technology company, and is not in the transportation business. Dismissing this argument, the UK courts quoted a previous judgement from the US District Court for the Northern District of California:
"Uber does not simply sell software; it sells rides. Uber is no more a 'technology company' than Yellow Cab is a 'technology company' because it uses CB radios to dispatch taxi cabs."
However, Uber's undoubted technological know-how should enable it to develop (and monetize) appropriate technologies and algorithms to manage a two-sided platform in a more balanced way.



Update: similar concerns have been raised about Amazon delivery drivers. I have previously praised Amazon on this blog for its pioneering understanding of platforms, so let's hope that both Amazon and Uber can create platforms that are fair to drivers as well as to their customers.


Mr Y Aslam, Mr J Farrar and Others -V- Uber (Courts and Tribunals Judiciary, 28 October 2016)

Sarah Butler, Uber driver tells MPs: I work 90 hours but still need to claim benefits (Guardian, 6 February 2017)

Tom Espiner and Daniel Thomas, What does Uber employment ruling mean? (BBC News, 28 October 2016)

David S. Evans, The Antitrust Economics of Multi-Sided Platform Markets (Yale Journal on Regulation, Vol 20 Issue 2, 2003). Multisided Platforms, Dynamic Competition and the Assessment of Market Power for Internet-Based Firms (CPI Antitrust Chronicle, May 2016)

Sam Knight, How Uber Conquered London (Guardian, 27 April 2016)

Kitty Knowles, 10 of the biggest complaints about Uber – from Uber drivers (The Memo, 5 November 2015)

Barry Levine, Uber opens up its API – and creates a new platform (VentureBeat, 20 August 2014)

John McDermott, I've done the (real) math: No way an Uber driver makes minimum wage (We Are Mel, 17 May 2016)

Hilary Osborne, Uber loses right to classify UK drivers as self-employed (Guardian, 28 October 2016)

Aaron Smith, Gig Work, Online Selling and Home Sharing (Pew Research Center, 17 November 2016)

Ciro Spedaliere, How to start a multi-sided platform (30 June 2015)

Amazon drivers 'work illegal hours' (BBC News, 11 November 2016)

See further discussion with @wimrampen and others on Storify: Uber Mathematics - A Discussion


Related Posts
Uber Mathematics 2 (Dec 2016) Uber Mathematics 3 (Dec 2016)




Updated 6 February 2017

Wednesday, October 26, 2016

The Shelf-Life of Algorithms

@mrkwpalmer (TIBCO) invites us to take what he calls a Hyper-Darwinian approach to analytics. He observes that "many algorithms, once discovered, have a remarkably short shelf-life" and argues that one must be as good at "killing off weak or vanquished algorithms" as creating new ones.

As I've pointed out elsewhere (Arguments from Nature, December 2010), the non-survival of the unfit (as implied by his phrase) is not logically equivalent to the survival of the fittest, and Darwinian analogies always need to be taken with a pinch of salt. However, Mark raises an important point about the limitations of algorithms, and the need for constant review and adaptation, to maintain what he calls algorithmic efficacy.

His examples fall into three types. Firstly, there are algorithms designed to anticipate and outwit human and social processes, from financial trading to fraud. Clearly these need to be constantly modified, otherwise the humans will learn to outwit the algorithms. Secondly, there are algorithms designed to compete with other algorithms. In both cases, these algorithms need to keep ahead of the competition and to avoid becoming predictable themselves. Following an evolutionary analogy, the mutual adaptation of fraud and anti-fraud tactics resembles the co-evolution of predator and prey.

Mark also mentions a third type of algorithm, where the element of competition and the need for constant change is less obvious. His main example of this type is in the area of predictive maintenance, where the algorithm is trying to predict the behaviour of devices and networks that may fail in surprising and often inconvenient ways. It is a common human tendency to imagine that these devices are inhabited by demons -- as if a printer or photocopier deliberately jams or runs out of toner because it somehow knows when one is in a real hurry -- but most of us don't take this idea too seriously.

Where does surprise come from? Bateson suggests that it comes from an interaction between two contrary variables: probability and stability --
"There would be no surprises in a universe governed either by probability alone or by stability alone."
--  and points out that because adaptations in Nature are always based on a finite range of circumstances (data points), Nature can always present new circumstances (data) which undermine these adaptations. He calls this the caprice of Nature.
"This is, in a sense, most unfair. ... But in another sense, or looked at in a wider perspective, this unfairness is the recurrent condition for evolutionary creativity."

The problem with adaptation based solely on past experience also arises with machine learning, which generally uses a large but finite dataset to perform inductive reasoning, in a way that is non-transparent to the human. This probably works okay for predictive maintenance on relatively simple and isolated devices, but as devices and their interconnections get more complex, we shouldn't be too surprised if algorithms, whether based on human mathematics or machine learning, sometimes get caught out by the caprice of Nature. Or by so-called Black Swans.
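Here's a tiny illustration of the finite-range problem, with invented data: a trend learned on one operating regime is confidently wrong when Nature presents a new one.

```python
import numpy as np

# Hypothetical device data: within the loads seen in training (10-60),
# the failure rate grows slowly and roughly linearly with load.
rng = np.random.default_rng(0)
train_load = np.linspace(10, 60, 50)
train_failures = 0.02 * train_load + rng.normal(0, 0.1, 50)

coeffs = np.polyfit(train_load, train_failures, 1)   # learn the linear trend

# Nature presents a new circumstance: a load of 90, where (in this toy
# scenario) an extra failure mode kicks in that the training data never saw.
actual_at_90 = 0.02 * 90 + 2.5
predicted_at_90 = np.polyval(coeffs, 90)

print(f"predicted failure rate at load 90: {predicted_at_90:.2f}")   # about 1.8
print(f"actual failure rate at load 90:    {actual_at_90:.2f}")      # 4.30
```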

This potential unreliability is particularly problematic in two cases. Firstly, when the algorithms are used to make critical decisions affecting human lives - as in justice or recruitment systems. (See for example Zeynep Tufekci's recent TED talk.) And secondly, when predictive maintenance has safety implications - from aero-engineering to medical implants.

One way of mitigating this risk might be to maintain multiple algorithms, developed by different teams using different datasets, in order to detect additional weak signals and generate "second opinions". And get human experts to look at the cases where the algorithms strongly disagree.

This would suggest that we shouldn't be too hasty to kill off algorithms with poor efficacy, but should sometimes keep them in the interests of algorithmic biodiversity. (There - now I'm using the evolutionary metaphor.)
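Here's a minimal sketch of what such second opinions might look like in practice. The three scorers are toy rules standing in for algorithms developed by different teams on different datasets; the point is the triage logic, which routes strong disagreement to a human rather than averaging it away.

```python
import statistics

# Three independently developed fraud scorers (toy rules standing in for
# models built by different teams on different datasets). Each returns a
# suspicion score between 0 and 1.
def scorer_amount(txn):   return min(txn["amount"] / 10_000, 1.0)
def scorer_velocity(txn): return min(txn["txns_last_hour"] / 20, 1.0)
def scorer_geo(txn):      return 1.0 if txn["country"] != txn["home_country"] else 0.1

SCORERS = [scorer_amount, scorer_velocity, scorer_geo]
DISAGREEMENT_THRESHOLD = 0.35

def triage(txn):
    scores = [score(txn) for score in SCORERS]
    if statistics.stdev(scores) > DISAGREEMENT_THRESHOLD:
        return "refer to human analyst", scores   # the algorithms disagree strongly
    return ("block" if statistics.mean(scores) > 0.5 else "allow"), scores

# A large domestic purchase: one scorer shouts, the other two shrug,
# so the case goes to a person instead of being silently blocked or allowed.
print(triage({"amount": 9_500, "txns_last_hour": 1,
              "country": "GB", "home_country": "GB"}))
```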



Gregory Bateson, "The New Conceptual Frames for Behavioural Research". Proceedings of the Sixth Annual Psychiatric Institute (Princeton NJ: New Jersey Neuro-Psychiatric Institute, September 17, 1958). Reprinted in G. Bateson, A Sacred Unity: Further Steps to an Ecology of Mind (edited R.E. Donaldson, New York: Harper Collins, 1991) pp 93-110

Mark Palmer, The emerging Darwinian approach to analytics and augmented intelligence (TechCrunch, 4 September 2016)

Zeynep Tufekci, Machine intelligence makes human morals more important (TED Talks, Filmed June 2016)


Related Posts
The Transparency of Algorithms (October 2016)