

Neil Raden
Friday rant - Facebook's disinformation spreading, ad-server economy must go
Big Tech has been on the defensive lately, and for good reason. What was once perceived as a way to foster democracy has given way to algorithmic dystopia. But Facebook's algorithmic dangers are tied to an ad-server-based model we must dismantle. Rant time.
The problem of algorithmic opacity, or "What the heck is the algorithm doing?"
Opacity in AI used to be an academic problem - now it's everyone's problem. In this piece, I define the issues at stake, and how they tie into the ongoing discussion on AI ethics.
Can fairness be automated with AI? A deeper look at an essential debate
I've addressed whether fairness can be measured - but can it be automated? These are central questions as we contend with the real-world consequences of algorithmic bias.
Can we measure fairness? A fresh look at a critical AI debate
By now, most AI practitioners acknowledge the universal prevalence of bias, and the problem of bias in AI modeling. But what about fairness? Can fairness be measured via quantifiable metrics? Some say no - but this is where the debate gets interesting.
Musings on China's 'Global Initiative on Data Security' and the problem of security "back doors"
A review of 'Global Initiative on Data Security' led me to an exchange with a company doing business in China. With new 5G security issues on the horizon, it's a good time to reflect on the implications of "back doors," ethical AI, and where the responsibility lies.
The ups and downs of cognitive computing, from Watson to Amelia
IBM Watson's health initiative underscored the limitations of applied cognitive computing. But can a fresh wave of cognitive, conversational AI solutions like Amelia from IPsoft succeed with a different type of offering?
Revisiting ethical AI, part two - on data management, privacy, and the misunderstood topic of bias
No, you can't program your AI for empathy or ethics. But you can certainly confront the problem of bias. In part two of revisiting AI ethics, we examine how bias, data management, and privacy should be addressed.
Revisiting ethical AI - where do organizations need to go next?
AI ethics is having a hard time keeping up with AI. Academic debates may be interesting, but organizations need a practical AI ethics framework. Where do we go from here?
Does small data provide sharper insights than big data? Keeping an eye open, and an open mind
Big data gets all the hype. Small data is perceived as inadequate for today's in-vogue algorithms. But by overlooking small data, are enterprises missing a superior source of insight?
The fragility of privacy - can differential privacy help with a probabilistic approach?
Enterprises crave personalized data, but protecting privacy is non-negotiable. Anonymizing the data brings limitations. Can differential privacy help?
AI inevitability - can we separate bias from AI innovation?
AI evangelists pay lip service to solving AI bias - perhaps through better algorithms or other computational means. But is this viable? Is bias in AI inevitable?
The explainability problem - can new approaches pry open the AI black box?
Explainability has moved from an academic debate to a significant barrier to AI adoption. A slew of new tools and approaches are intended to address this problem - but will they close the explainability gap?
Unethical AI unfairly impacts protected classes - and everybody else as well
We've established that unethical AI hurts protected classes - but it doesn't stop there. Across industries and regions, unethical AI can impact the entire population. Here are some questions to consider.