AI & RPA
Ever since the discipline of Artificial Intelligence, and indeed the term itself, was created at the 1956 Dartmouth Workshop, AI seemed a perennial non-starter: promising more than it delivered, a concept that found more use in fiction than in reality.
That’s not the case anymore. With the incredible advances in computing power made over the past three decades, AI has come into its own and is now part of organizational decision making, creating a new set of challenges – business, developmental, and ethical.
One of the driving questions is: how much do organizations trust their AIs?
It’s a question that evokes a variety of responses, depending on the industry. A Harvard Business Review study examined four models, with varying levels of autonomy given to AIs.
The challenge, as always, is understanding the possibilities and limitations of AI and deep learning. It does not naturally follow that throwing a lot of data at a set of AI algorithms will automatically yield insights unavailable to human expertise.
Watch Sonata CEO Srikar Reddy chairing a session on the ethics of AI
Here at Sonata, we look at the ways in which AI can be trusted both to drive decision making and to support it.