Big Tech has been on the defensive lately, and for good reason. What was once perceived as a way to foster democracy has given way to algorithmic dystopia. But Facebook's algorithmic dangers are tied to an ad-server-based model we must dismantle. Rant time.
Opacity in AI used to be an academic problem - now it's everyone's problem. In this piece, I define the issues at stake and how they tie into the ongoing discussion on AI ethics.
I've addressed whether fairness can be measured - but can it be automated? These are central questions as we contend with the real-world consequences of algorithmic bias.
By now, most AI practitioners acknowledge that bias is pervasive - in data, and in AI modeling. But what about fairness? Can fairness be measured via quantifiable metrics? Some say no - but this is where the debate gets interesting.
A review of the "Global Initiative on Data Security" led me to an exchange with a company doing business in China. With new 5G security issues on the horizon, it's a good time to reflect on the implications of "back doors," ethical AI, and where the responsibility lies.
IBM Watson's health initiative underscored the limitations of applied cognitive computing. But can a fresh wave of cognitive, conversational AI solutions like Amelia from IPsoft succeed with a different type of offering?
No, you can't program your AI for empathy or ethics. But you can certainly confront the problem of bias. In part two of this AI ethics revisit, we examine how bias, data management, and privacy should be addressed.
AI ethics is having a hard time keeping up with AI. Academic debates may be interesting, but organizations need a practical AI ethics framework. Where do we go from here?
Big data gets all the hype. Small data is perceived as inadequate for today's in-vogue algorithms. But by overlooking small data, are enterprises missing a superior source of insight?
Enterprises crave personalized data, but protecting privacy is non-negotiable. Anonymizing the data brings limitations. Can differential privacy help?
AI evangelists pay lip service to solving AI bias - perhaps through better algorithms or other computational means. But is this viable? Is bias in AI inevitable?
Explainability has moved from an academic debate to a significant barrier to AI adoption. A slew of new tools and approaches are intended to address this problem - but will they close the explainability gap?
We've established that unethical AI hurts protected classes - but it doesn't stop there. Across industries and regions, unethical AI can impact the entire population. Here are some questions to consider.