Why the tech world needs philosophers

The realisation that we can’t perfectly codify ethical rules dates at least to Aristotle, but we’re familiar with it in our everyday moral experience, too.


OurtimeBD.com
05.08.2018

Ryan Jenkins, assistant professor of philosophy and a senior fellow at California Polytechnic State University / Gulf News

When it comes to AI and weapons, there needs to be a set of ethical guidelines and policies that are sensitive to various ambiguities.

Silicon Valley continues to wrestle with the moral implications of its inventions, and is often blindsided by the public reaction to them. Google was recently criticised for its work on ‘Project Maven’, a Pentagon effort to develop artificial intelligence (AI) for use in military drones that can distinguish between different objects captured in drone surveillance footage.
The company could have foreseen that a potential end use of this technology would be fully autonomous weapons — so-called “killer robots” — which various scholars, AI pioneers and many of its own employees vocally oppose. Under pressure — including an admonition that the project runs afoul of its former corporate motto, “Don’t Be Evil” — Google said it wouldn’t renew the Project Maven contract when it expires next year.
To quell the controversy surrounding the issue, Google last week announced a set of ethical guidelines meant to steer its development of AI. Among its principles: The company won’t “design or deploy AI” for “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”. That’s a reassuring pledge.
What’s harder is figuring out, going forward, where to draw the line: to determine what, exactly, “cause” and “directly facilitate” mean, and how those limitations apply to Google projects. To find the answers, Google, and the rest of the tech industry, should look to philosophers, who have grappled with these questions for millennia. Philosophers’ conclusions, derived over time, will help Silicon Valley identify possible loopholes in its thinking about ethics.

The realisation that we can’t perfectly codify ethical rules dates at least to Aristotle, but we’re familiar with it in our everyday moral experience, too. We know we ought not to lie, but what if it’s done to protect someone’s feelings? We know killing is wrong, but what if it’s done in self-defence? Our language and concepts seem hopelessly Procrustean when applied to our multifarious moral experience. The same goes for the way we evaluate the uses of technology.

In the case of Project Maven, or weapons technology in general, how can we tell whether artificial intelligence facilitates injury or prevents it?
The Pentagon’s aim in contracting with Google was to develop AI to classify objects in drone video footage. In theory, at least, the technology could be used to reduce civilian casualties that result from drone strikes. But it’s not clear whether this falls afoul of Google’s guidelines.
New ethical guidelines
On one hand, the enhanced ability of the drone operator to visually identify humans and, potentially, refrain from targeting them, could mean the AI’s function is to prevent harm, and it would, therefore, fit within the company’s new ethical guidelines. On the other hand, the fact that the AI is a component of an overall weapons system that’s used to attack targets, including humans, could mean the technology is ultimately employed to facilitate harm, and therefore its development runs afoul of Google’s guidelines.
Sorting out causal chains such as this is challenging for philosophers and can lead us to jump through esoteric metaphysical hoops. But the exercise is important, because it forces the language we use to be precise and, in cases like this, to determine whether someone, or something, is rightly described as the cause, direct or indirect, of harm. Google appears to understand this, and its focus on causation is appropriate, but its gloss on the topic is incomplete.

