Should Data Scientists Adhere to a Hippocratic Oath?


    The tech industry is having a moment of reflection. Even Mark Zuckerberg and Tim Cook are talking openly about the downsides of software and algorithms mediating people’s lives. And while calls for regulation have been met with increased lobbying to block or shape any rules, some people around the industry are entertaining forms of self-regulation. One idea swirling around: Should the programmers and data scientists massaging our data sign a kind of digital Hippocratic oath?

    Microsoft published a 151-page book last month on the effects of artificial intelligence on society that argued “it could make sense” to bind coders to a pledge like that taken by physicians to “first do no harm.” In San Francisco Tuesday, dozens of data scientists from tech companies, governments, and nonprofits gathered to start drafting an ethics code for their profession.

    The general feeling at the meeting was that it’s about time that the people whose feats of statistical analysis target ads, advise on criminal sentencing, and accidentally enable Russian disinformation campaigns woke up to their power, and used it for the greater good.

    “We have to empower the people working on technology to say, ‘Hold on, this isn’t right,’” DJ Patil, chief data scientist for the United States under President Obama, told WIRED. (His former White House post is currently vacant.) Patil kicked off the event, called Data For Good Exchange. The attendee list included employees of Microsoft, Pinterest, and Google.


    Patil envisions data scientists armed with an ethics code throwing themselves against corporate and institutional gears to prevent things like the deployment of biased algorithms in criminal justice.

    It’s a vision that appeals to some who analyze data for a living. “We’re in our infancy as a discipline and it falls to us, more than anyone, to shepherd society through the opportunities and challenges of the petabyte world of AI,” Dave Goodsmith, from an enterprise software startup, wrote in the busy Slack group for Tuesday’s effort.

    Others are less sure. Schaun Wheeler, a senior data scientist at marketing company Valassis, followed Tuesday’s deliberations via Slack and a live video stream. He arrived skeptical, and left more so. The draft code looks like a list of general principles no one would disagree with, he says, and is being launched into a field that lacks authorities or legislation to enforce rules of practice anyway. Although the number of formal training programs for data scientists is growing, many at work today, including Wheeler, are self-taught.

    Tuesday’s discussions yielded a list of 20 principles that will be reviewed and released for wider feedback in coming weeks. They include “Bias will exist. Measure it. Plan for it,” “Respecting human dignity,” and “Exercising ethical imagination.” The project’s organizers hope to see 100,000 people sign the final version of the pledge.

    “The tech industry has been criticized recently and I think rightfully so for its naive belief that it can fix the world,” says Wheeler. “The idea you can fix a whole complex problem like data breaches through some kind of ethical code is to engage in that same kind of hubris.”

    One topic of debate Tuesday was whether a non-binding, voluntary code would really protect data scientists who dared to raise ethical concerns in the workplace. Another was whether it would have much effect.

    Rishiraj Pravahan, a data scientist at AT&T, said he is supportive of the effort to draft an ethics pledge. He described how after he and a colleague declined to work on a project involving another company they didn’t think was ethical, their wishes were respected. But other workers were swapped in and the project went ahead anyway.

    Available evidence suggests that tech companies typically take ethical questions to heart only when they sense a direct threat to their balance sheet. Zuckerberg may be expressing contrition about his company’s control over the distribution of information, but it came only after political pressure over Facebook’s role in Russian interference in the 2016 US election.

    Tech companies that make money by providing platforms for others can have additional reason not to be too prescriptive about ethics. Anything that could scare off customers from building on your platform is risky.

    Microsoft’s manifesto on AI and society discussed a Hippocratic Oath for coders, and an ethical review process for new uses of AI. But Microsoft President Brad Smith suggests that the company wouldn’t expect customers building AI systems using Microsoft’s cloud services to necessarily meet the same standards. “That’s a tremendously important question and one we have not yet answered ourselves,” he says. “We create Microsoft Word and know people can use it to write good things or horrible things.”

    Privacy activist Aral Balkan argues that an ethics code like the one drafted this week could actually worsen societal harms caused by technology. He fears it will be used by corporations as a signal of virtue, while they continue business as usual. “What we should be exploring is how we can stop this mass farming of human data for profit,” he says. He points to the European Union’s General Data Protection Regulation, coming into force this year, as a better model for preventing algorithmic harms.

    Patil was once chief scientist at LinkedIn, but somewhat like Balkan is skeptical of tech companies’ ability to think carefully about the effects of their own personal-data-fueled products. “I don’t think we as a society can rely on that right now because of what we’ve seen around social platforms and the actions of tech companies motivated only by profits,” he says.

    Longer term, Patil says one of his hopes for the draft ethics code hashed out Tuesday is that it helps motivate policy makers to set firmer, but well-considered, limits. “I would like to see what happens here start to define what policy looks like,” he says.

    Ethical Boundaries

    Keeping machine-learning systems within ethical bounds has become a hot topic in artificial-intelligence research.

    Silicon Valley’s most addictive products are built on psychology tricks from one Stanford professor.

    Facebook is trying to fix its fake news problem by asking users which news outlets they find trustworthy.