Ethics at the fore of AI conversations

As the end of 2018 approached, many artificial intelligence technologies, such as visual testing, chatbots and language recognition, had matured to the point of ubiquity. Two years after SD Times’ “Year of AI,” the conversations around AI and machine learning have shifted further and further away from potential applications and surprising new uses of the technology. Now the topic on the minds of everyone, from developers and analysts to the layperson unsure how far to trust self-driving cars, is the ethics of AI: where it is ethical to apply AI, whether the biases of its creators can spoil its decision-making capabilities, and whether the people displaced from their careers by automation will have an alternative.

In June, after withdrawing from a Pentagon-commissioned military AI contract in response to internal protest, Google laid out specific principles for what it considers ethical AI development, and others have followed suit.

In October, MIT pledged $1 billion towards advancing AI by bringing computing and AI to all fields of study, hiring personnel, and creating educational and research opportunities in the field — in addition to educating students on the responsible and ethical applications of AI.

In the announcement, Stephen A. Schwarzman, CEO of Blackstone and one of the backers of MIT’s initiative, said, “We face fundamental questions about how to ensure that technological advancements benefit all — especially those most vulnerable to the radical changes AI will inevitably bring to the nature of the workforce.”

“Autonomous things” sits at number one on Gartner’s list of its top 10 tech predictions for 2019, also released in October. With the capabilities of AI only growing (DARPA announced in October that it is working to improve the “common sense” of machine learning technology), many consider it high time that the workers who might lose their positions to advancing AI become a concern of the companies producing the technology, not just a philosophical question.

Google is addressing this with $50 million in funding, launched in July through its Google.org branch, for nonprofits preparing for that scenario. This includes training, education and networking to help people gain the skills that Google says will be required in a future workforce, and which it says aren’t as common as they need to be, as well as support for workers in low-paying roles that might be made obsolete.

When DARPA announced in August that it would be investing in exploring the ‘third wave’ of artificial intelligence research, the agency said the focus would be on making AI better able to contextualize details and make inferences from far fewer data points by recognizing how its own learning model is structured. The example DARPA gave was image recognition of a cat: instead of relying on thousands upon thousands of cat images to pick out another cat, as in the training-data and example-focused “second wave” of AI, a third-wave AI would be able to pick out the cat by noting that the image had fur, whiskers, claws and so on.
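The distinction is easiest to see in miniature. Below is a purely illustrative Python sketch, not DARPA’s method or any published system: a hypothetical looks_like_cat function infers “cat” from a handful of detected attributes, so the inference step needs no corpus of labeled cat images (only the upstream attribute detectors, not shown, would be learned from data).

```python
# Illustrative sketch only: infer "cat" from detected attributes rather
# than from thousands of labeled cat images. All names here
# (CAT_ATTRIBUTES, looks_like_cat) are hypothetical, not DARPA's.

CAT_ATTRIBUTES = {"fur", "whiskers", "claws", "pointed ears"}

def looks_like_cat(detected_attributes, threshold=3):
    """Return True when enough characteristic cat attributes appear.

    The decision rests on a small set of shared features, not on
    volumes of per-class training examples.
    """
    return len(set(detected_attributes) & CAT_ATTRIBUTES) >= threshold

# Attributes a perception module might report for two images:
print(looks_like_cat({"fur", "whiskers", "claws"}))  # True
print(looks_like_cat({"fur", "wings"}))              # False
```

The point of the sketch is only the shape of the reasoning: the class decision follows from a few shared features rather than from per-class example volume.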

The MIT-IBM Watson AI Lab announced similar projects in development back in April, focused on training AI to recognize dynamic action in video. While this still relied on one million 3-second clips, which the researchers said proved extremely difficult to curate when trying to account for bias, the end goal was to train an AI to build analogies and interpret actions and dynamics.

Gartner places a focus on digital ethics and privacy at number nine on its list of predictions for 2019, and as the industry moves toward this new wave, the ethical ramifications of emerging technologies are predicted to be considered earlier and earlier in development. “Shifting from privacy to ethics moves the conversation beyond ‘are we compliant’ toward ‘are we doing the right thing,’” Gartner wrote.
