AI In The Real World: What Could Possibly Go Wrong? By Jennifer Lynn
AI relies on algorithms that enterprises and consultants build, as well as prepackaged algorithms from artificial-intelligence-as-a-service (AIaaS) vendors.
AI is growing smarter day by day.
Risk and governance issues
At the same time, however, questions about risk and governance are emerging. The AI revolution has pushed enterprise executives to factor privacy concerns into their products, and in the current climate it is crucial to assess the risk of AI in each context where it is used. Despite the challenges inherent in the technologies fueling AI (above all machine learning, its most widely adopted form), the enterprise IT market remains as committed as ever to both AI and ML, actively investing time and resources to accelerate development.
For business leaders, the machine-learning algorithms used on the enterprise side raise many privacy issues. Such concerns are all too familiar at technology companies such as Amazon, Facebook, Netflix, and dozens of other organizations that build algorithms using facial recognition. Enterprise IT leaders are aware of the power of facial recognition technology and have been implementing it in novel ways; the software is already being used in hiring, lending, and law enforcement.
Privacy concerns and resistance
However, as facial recognition plays an increasing role in security, law enforcement, and beyond, privacy concerns are heightening. Earlier this year, the American Civil Liberties Union and nearly two dozen other organizations asked technology giant Amazon to stop selling its Rekognition software to law enforcement. The software, currently offered to police departments, has sparked protests from activists and from employees urging the company to halt sales.
Studies also indicate that facial recognition may not be accurate. A Georgetown University report raised serious questions about privacy and violations of civil liberties, stating that “half of U.S. adults – more than 117 million people – are in a law enforcement face-recognition network.” The report found that in the United States, one in four law enforcement agencies can access face recognition, and that its use is almost completely unregulated.
Tesla founder and billionaire Elon Musk expressed a need for tighter AI regulations. In August 2017, Musk tweeted, “If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea.”
Yet Dave Kenny, IBM’s senior vice president for Watson and cloud, wrote last year in a letter to Congress: “This technology does not support the fear-mongering commonly associated with the AI debate today. The real disaster would be abandoning or inhibiting cognitive technology before its full potential can be realized. We pay a significant price every day for not knowing what can be known: not knowing what’s wrong with a patient, not knowing where to find critical natural resources, or not knowing where the risks lie in our global economy. It’s time to move beyond fear tactics and refocus the AI dialogue on three priorities I believe are core to this discussion: Intent, skills, and data.”
The ongoing debate over accountability is fueling a movement among regulators, vendors, lawmakers, and independent organizations to work out how algorithms can be regulated without hurting innovation. The United States Congress even recently created an Artificial Intelligence Caucus to further understanding of AI issues.
Social and cognitive biases introduced, accidentally or intentionally, by the human engineers who code the algorithms can be detrimental to the ethics of AI. For example, there have been job-screening systems that use AI to suppress female candidates for certain jobs, simply based on historical hiring data. Clearly, the algorithms that encode how human decisions should be made are still not immune from gender bias, whether that bias originates in the workplace or in the developers’ own values.
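The mechanism is worth making concrete. The sketch below uses entirely made-up toy data (the feature names and records are illustrative, not from any real system) to show how a naive screening rule trained only on past hires reproduces historical bias, even though the rule never looks at the gender field: a correlated feature acts as a proxy.

```python
# Toy illustration with hypothetical data: a screening rule that scores
# candidates by similarity to past *hired* applicants inherits whatever
# skew those past decisions contained, without ever consulting gender.

# Historical records: (years_experience, attended_bootcamp, gender, hired)
# In this invented data, past hiring skewed male, and "attended_bootcamp"
# happens to correlate with gender, so it becomes a proxy feature.
history = [
    (5, True,  "M", True),  (4, True,  "M", True),  (6, True,  "M", True),
    (5, False, "F", False), (6, False, "F", False), (4, False, "F", False),
]

def screen(years: int, bootcamp: bool) -> bool:
    """Pass a candidate if their profile resembles past hired applicants."""
    hired = [r for r in history if r[3]]                      # only past hires
    score = sum(1 for r in hired if r[1] == bootcamp) / len(hired)
    return score >= 0.5

# Two equally experienced candidates: the rule never sees gender, yet it
# favors the profile that matches the historically hired group.
print(screen(5, True))   # True  (matches past hires)
print(screen(5, False))  # False (matches past rejections)
```

The fix is not simply deleting the sensitive column; as the example shows, correlated proxy features carry the bias forward, which is why auditing outcomes, not just inputs, matters.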
Potential consequences of AI as a service
How AI automates enterprise IT jobs could be shaped by issues such as a male-dominated workforce designing the algorithms, a lack of female data scientists, or an absence of intersectional thinking during development. AIaaS, in which a third party offers AI capabilities, can also carry economic and social consequences: there would be real implications if a true AIaaS offering were available to just anyone. In an AI economy, social tensions and questions about human work and productivity can run high.
The next generation of AIaaS companies is poised to raise many different challenges for economic policy. AI has certainly made life easier for humans, but when an algorithm goes wrong, the results can range from lost revenue to racial discrimination to fatalities.
Yet there is still no comprehensive way to curb every risk and privacy issue. Today, more and more robots and machines can solve problems involving highly complex data, and can learn, perform, and perfect specific tasks. As AI’s growth and innovation spread globally, and as its initial successes keep generating interest, business leaders continue the hard work of aggressively rolling out AI deployments organization-wide.
Security concerns and employee misgivings
AI used in applications also carries unanticipated security risks, and in some cases the engineers themselves do not endorse the work. Earlier this year, Bloomberg reported that a group of influential software engineers in Google’s cloud division refused to build a security feature called an “air gap” for the military. An air gap is a network security measure that physically isolates a secure computer network from unsecured networks. According to Bloomberg, “the technology would assist Google in winning over sensitive military contracts.”
Rebellion among employees has grown tremendously at Silicon Valley technology companies. Google employees expressed their concerns about the project’s “black box” methods in a company letter. “This plan will irreparably damage Google’s brand and its ability to compete for talent. Amid growing fears of biased and weaponized AI, Google is already struggling to keep the public’s trust,” the employees stated. “This contract puts Google’s reputation at risk and stands in direct opposition to our core values. Building this technology to assist the U.S. Government in military surveillance – and potentially lethal outcomes – is not acceptable.” Bloomberg last week reported that Google has dropped out of competition for the contract.
Bear in mind that explanations intelligible to all the different stakeholders are critical when an enterprise works with advanced AI algorithms and applications; even to developers, automated decision-making can be altogether inscrutable. Within the DevOps environment, many enterprise organizations have moved from a back-room IT maintenance model to a development cycle of customer-facing apps. Serious security risks can be replicated by automated tasks and, as a result, spread by robots and machines.
Differing points of view: threat or opportunity?
Enterprise organizations are both underprepared for and unaware of these kinds of vulnerabilities arising from DevOps. The CyberArk Threat Landscape report suggests that organizations risk having their new apps blocked unless they address security at the code level from the get-go. When purchasing any app that incorporates AI elements, security should be a top priority, and safety and security should be built into every layer of the cloud. Artificial intelligence, machine learning, and deep learning are fundamentally different ways to program computers, and it is only human nature to distrust what one cannot understand; many of the AI and ML models underlying applications remain opaque to the public. Trust remains key.
If business leaders and technology experts have their way, artificial intelligence will likely transform our world. Yet even these same individuals cannot say where that transformation will lead, and choosing sides has proven to be a complicated process. Governance in AI creates many challenges, with regulators fearing a world controlled by robots.
Theoretical physicist Stephen Hawking warned that there may even be a “robot apocalypse.” He feared the consequences of creating something that can match or surpass humans, believing there is no real difference between what a biological brain and a computer can achieve, and fear of AI superiority is widespread. “It would take off on its own and redesign itself at an ever-increasing rate,” he told the BBC in 2014. “Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.” At the Centre for the Future of Intelligence AI event, Hawking said, “In short, the rise of powerful AI will be either the best or the worst thing ever to happen to humanity. We do not yet know which.”
Realizing value with awareness of risks
Enterprises worldwide are just beginning to realize the real value of artificial intelligence. According to Microsoft Azure CTO Mark Russinovich, artificial intelligence remains one of the most promising higher-level machine-learning services. Speaking at GeekWire’s Cloud Tech Summit, Russinovich said, “Companies are taking advantage of AI and ML to automate processes and get insights into operations that they didn’t have before.”
As applications evolve in an AI environment, the growing adoption of machine learning and AI techniques will deliver greater opportunities. AI algorithms in enterprise IT have the power to create as well as the power to destroy. The potential uses are limitless, but so are the unintended consequences, and mistakes in this space can land any enterprise IT business in the headlines.