I’ve just come across a very interesting (if lengthy) paper: Law and Regulation of Artificial Intelligence and Robots – Conceptual Framework and Normative Implications (Nicolas Petit, Professor of Law, University of Liege) – that has prompted me to pause in my learning of how to implement machine learning and spend some time thinking about the privacy and legal implications of doing so.
I’ve written this blog post to put my thoughts to “paper”, to serve as a link farm to resources on this topic and hopefully to spark a discussion in the comments if any readers have any good resources to add.
Elon Musk said way back in 2014 that he fears that AI could destroy the human race. He’s not alone in fearing AI. Joe Public keeps hearing the term “AI” being bandied about, along with “it’s going to steal all of our jobs”, but for the most part relegates the destruction of the human race by AI to the movies.
It’s widely agreed that the fear that AI will steal all our jobs is largely misplaced. A quote from a March 2017 Irish Times article states that “Automation by AI will relatively spare high skill, non-routine occupations (such as architects and senior managers) on the one hand and unskilled workers (such as cleaners and burger-tossers) on the other hand.”
OK, so most of our jobs are safe and those displaced by AI will be able to re-skill and get jobs in new/expanding fields created as AI gains footholds in all industries.
What about the legal implications of AI though?
If an AI is making a decision about whether I can get credit or not based on a black-box algorithm, how do I know that the decision is fair? If an AI decides, based on my shopping history and income, that I would be willing to pay X for a product, but my friend was quoted a much lower price Y for the same product based on her shopping history and income, have I been discriminated against?
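To make that pricing scenario concrete, here is a toy sketch of how such personalized pricing could work. Everything in it – the function name, the features used, and the numbers – is hypothetical and invented purely for illustration; it is not taken from any real pricing system:

```python
# Toy illustration of personalized (algorithmic) pricing.
# The model, the features, and all constants are hypothetical.

def quote_price(base_price, income, avg_past_spend):
    """Opaque pricing rule: nudge the price up for customers
    the model predicts are willing to pay more."""
    # Crude "willingness to pay" score from income and past spending.
    willingness = 0.5 * (income / 50_000) + 0.5 * (avg_past_spend / 100)
    # Apply a markup proportional to the score, capped at 30%.
    markup = min(0.30, 0.15 * willingness)
    return round(base_price * (1 + markup), 2)

# Two customers, same product, different quotes:
me = quote_price(100, income=80_000, avg_past_spend=150)      # higher quote
friend = quote_price(100, income=40_000, avg_past_spend=40)   # lower quote
print(me, friend)
```

From the outside, neither customer can see why their quotes differ, which is exactly the transparency problem the question above raises.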
Regulations already exist for some sectors, so a financial institution must provide an explanation for exactly why my credit application was refused, but what about newer uses of AI like algorithmic pricing? Who decides when privacy should take precedence over pricing?
When AI is applied to robots, it brings up a whole slew of new questions.
At what point does a robot become an entity with rights, and what kind of rights would apply to such an entity?
When your robot butler has rights, you’d better not try knocking him over with a stick (even to get your view count on YouTube up) for fear of a workplace harassment lawsuit.
What if your robot butler scalds your guest when serving them tea – are you liable for the injury, or is the manufacturer of the hardware, or the coder of the AI controlling the hardware? What if it goes out to do the shopping and causes a car crash by crossing the road in the wrong place? Do you need robot insurance to protect yourself from claims? The Petit article states: “The 2017 European Parliament resolution on Civil law rules on robotics…seems to suggest that it is inappropriate to impute liability on humans for acts of autonomous robots, but at the same time calls for compulsory insurance on users”.
I don’t know if the answers to all of these questions already exist but I’m excited to try to find out what the state of play is now, and to keep an eye on what is coming down the line.