Discussion about this post

Sifu Dai:

For the uninitiated ;)

"When I last gave talks about AI ethics, around 2018, my sense was that AI development was taking place alongside the abandonment of responsibility in two dimensions. Firstly, and following on from what was already happening in ‘big data’, the world stopped caring about where AI got its data — fitting in nicely with ‘surveillance capitalism’. And secondly, contrary to what professional organisations like BCS and ACM had been preaching for years, the outcomes of AI algorithms were no longer viewed as the responsibility of their designers — or anybody, really.

‘Explainable AI’ and some ideas about mitigating bias in AI were developed in response to this, and for a while this looked promising — but unfortunately, the data responsibility issue has not gone away, and the major developments in AI since then have made responsible engineering only more difficult."

https://www.bcs.org/articles-opinion-and-research/does-current-ai-represent-a-dead-end/

Bryan Alexander:

"This problem of ‘intelligence’ is linked to the political implications of AI, as we shall see; suffice it to say here that the paradox emerging is that as humans become role-playing machines, the machines are becoming generalised problem-solvers, displacing what humans had evolved to become." Nicely said.

Reminds me a bit of Phil Dick's meditations on humans acting robotic and robots being more human.

