"This problem of ‘intelligence’ is linked to the political implications of AI as we shall see; suffice it to say here that the emerging paradox is that as humans become role-playing machines, the machines are becoming generalised problem-solvers, displacing what humans had evolved to become." Nicely said.
Reminds me a bit of Phil Dick's meditations on humans acting robotic and robots being more human.
Your diagnosis of how liberal elites can absorb populism sounds right, from backing away from climate action (you know my thoughts on this) on down. I'm not sure they'll relearn persuasion without having their parties torn down and rebuilt. (Here in the US we're still too wedded to the moronic celebrity cult.)
I'm curious what you think of big firms' desire to not only own AI, but to use it to maintain their market strength etc.
I am wondering what there is to think about here. Big capital acts rationally in this respect as all institutional structures do. If a moral compass appears to emerge it is only ever because either it has no cost attached to it or a simulacrum of it provides cover for some competitive advantage. Anything bigger than the person has no soul, no being for itself. The interesting question is whether AI behaves like a thing-in-itself or a thing-for-itself. If the former, we just have another thing to deal with in the total system. If the latter, we have a thing with a soul. The Vatican probably has teams of philosopher-theologians working on that one, hoping not to make the mistakes they were said to have made with Galileo Galilei.
Tim, a few points
1. As usual highly thought provoking: why not develop these ideas into a book? If Huntington & Fukuyama can do it, why not you?
2. Pending your book, are there any serious books looking at some of these issues in addition to Mearsheimer that you can recommend? For surely social media is not a proper forum to develop serious ideas…
The reasons I cannot do a book are several-fold: a) I have no time currently; b) only other intellectuals read books now and spare time is better used to support action in the world (like, for example, the WPB or the health and welfare of my family); c) I have no formal status within the system I am critiquing, and media and publishing structures are locked into systems that require formal status to maintain 'credibility' even when those with formal status are idiots; and d) no significant publisher would be interested in a system-critique that fundamentally criticises the classes to which they belong and to which they sell. I have no recommendations. There are many people who are thinking intelligently about such things but their thinking is a process that cannot be distilled into a text. The world now moves so fluidly and unexpectedly that anything written at Point A will be redundant at Point B, and the moral compass that underpins such processual analysis would amount to little more than statements of the obvious. Social media at least permits the first heads of new processual thinking to appear and reveals the moral compass, or lack of it, of the thinker in real time.
For the uninitiated ;)
"When I last gave talks about AI ethics, around 2018, my sense was that AI development was taking place alongside the abandonment of responsibility in two dimensions. Firstly, and following on from what was already happening in ‘big data’, the world stopped caring about where AI got its data — fitting in nicely with ‘surveillance capitalism’. And secondly, contrary to what professional organisations like BCS and ACM had been preaching for years, the outcomes of AI algorithms were no longer viewed as the responsibility of their designers — or anybody, really.
‘Explainable AI’ and some ideas about mitigating bias in AI were developed in response to this, and for a while this looked promising — but unfortunately, the data responsibility issue has not gone away, and the major developments in AI since then have made responsible engineering only more difficult."
https://www.bcs.org/articles-opinion-and-research/does-current-ai-represent-a-dead-end/
At the moment I simply do not believe that there can be 'responsible engineering' in the sense most activists claim. Responsible engineering generally means that the bridge does not fall down or the plane does not fall out of the sky. In other words, the thing does what it is designed to do. Responsible engineering went out of the window with AI a long time ago because AI is closer to pure science that becomes applied in order to become technology than to simple technology that is the application of knowledge of the material world. Pure science has no moral compass. Technology only has some sense of moral compass (or its absence) attached to it when someone decides on its use.

AI is in the unusual position of being amorally developed to the point where deciding on its use after the fact does not take place under conditions where the user honestly has much control over what has been created. Responsible science would perhaps have drawn attention to the implications of the technology before the applications, but science cannot be responsible here any more than in the drive to nuclear fission or genetic manipulation. Still, power in society can largely dictate how we use nuclear fission or genetic manipulation, and we can make moral judgments about that. AI is different. Power in society is already losing control of it.

We are where we are, and all the earnest and worried liberal intellectuals in the world cannot change what we are now on the cusp of - a free-floating technology whose drive towards becoming a 'thing-for-itself' and whose democratic availability to huge numbers of separate actors without any chance of control (at least outside China) evade all reasonable definitions of responsible engineering. We are going to have to leap ahead now and consider responsible responses to irresponsible usage of things created by engineering.
Am in agreement, with the caveat that Singularity is not the concern, but rather Chaos.
Interesting post! But I would disagree that societies are more complex than they were prior to the abandonment of representative democracy and the installation of economically and politically centralized technocratic dictatorships in the 1970s. In fact, the automation of so many cognitive tasks, plus computers and telecommunications, has actually made it much easier to perform many of the requisite "management" tasks, so in some big, relevant ways society is actually far less complex than it was.
We will have to agree to disagree on that one, but my reasoning arises not from looking at particular processes (where you are correct) but at the total system. What is happening is that the 'average' human being today (taken in global aggregate, but more strongly to the degree that they are closer to the heartland of Western society) is faced with a) more sensory input and b) more choices than at any time in history. This should theoretically be 'better' if you have a model of the individual as a perfectly rational actor with infinite capacity (as an AI may eventually be expected to be), but the human being has evolved on a different time frame from a machine and becomes overwhelmed by inputs and choices unless they are very self-disciplined, exceptionally intelligent or cocooned, or some combination of these.

This is not the complexity of which I speak but the fact that every individual decision now has effects that create complexity increasingly beyond the capacity of any administrative system to manage. The law of unintended consequences becomes exercised normally rather than exceptionally too. The complexities of migration, health provision, the judicial system, and major projects such as transport or IT become impossible to manage, and scandals appear because of lack of attention to details arising out of ... complexity.

A rules-based service (such as pensions) or a business with the sole objective of managing a closed system to deliver profit are one set of things, but the State and society are not such systems. First, the State-society relies on rules being accepted as a matter of belief with the hidden sanction of terror. Second, the State-society has lost the fire-breaks of national borders and customary practices, so that individual decision-making has become chaotic. This chaos strips away faith in the State-society and draws the State-society ever closer to a terror it cannot actually impose any more.
Components within the total system remain highly ordered, possibly over-ordered and oppressive to individuals, but everything around those units is becoming more complex, with the complexity leading to fragmentation and so more complexity in a vicious cycle. After a while, little works as it should and things worsen because demoralisation sets in (as in Brezhnev's Russia, for different reasons). Finally, chaos, disorder and fragmentation intrude politically (populism being the type case) precisely because the macro-environment can no longer be ordered, even as the micro-environments of specific government services (increasingly costly to maintain because of that complexity) and of businesses retain the power to resist through technology. The real worry is when the revolt arising out of complexity, and increasingly against complexity, starts to enter into the stable micro-components that hold things together for most people.
Hi, thanks for the very interesting reply! In my opinion, the complexity issues you identify, along with the related problems faced by the systems you reference, are not inherent to modernity but are pathologies of deeply centralized political and economic power. Traditional populism, such as the Jacksonians’ efforts and the many different decentralized and discrete grassroots movements of the Populist and Progressive Eras, demonstrated an ability to handle systemic problems precisely because they operated within frameworks of decentralization. The redundancy and diffusion of economic activity, science, and governmental functions, along with the diffusion of decision making and general policy variability in these eras, largely nullified the systemic risks and unintended consequences of complexity.
For example, Jacksonians emphasized decentralized banking systems, which prevented overreach and created regional checks and balances; more importantly, they both geographically and societally diffused access to capital and the decision making related to its deployment. Also, the many grassroots organizations during the Populist and Progressive Eras developed tailored solutions to local and regional issues. These conditions generated adaptability and resilience, avoiding the cascading failures that centralized systems tend to have.
A contemporary parallel can be seen in China's political and economic decentralization from the 1980s until recent years. China's decentralized system had local trade protectionism, partially fragmented capital markets, and policy variability, and while it lacked the vote it had a not insignificant amount of democratic governance structures and traditional populism, because local party structures enabled genuine intraparty deliberation reflecting a broad socioeconomic spectrum. This created a system with redundant governance layers and diffusion of power, allowing for effective problem solving even in a complex environment. These examples show that decentralized systems, with their capacity for redundancy and adaptability, can effectively manage complexity in ways centralized systems fail to do.
Hi, sorry, one additional reply: you wrote: "the 'average' human being today (taken in global aggregate but more strongly to the degree that they are closer to the heartland of Western society) is faced with a) more sensory input and b) more choices than at any time in history."
But the overwhelming sensory input and vast number of choices you talk about are not inherent to modern society; they are largely a result of deliberate choices made by a hyper-centralized system utilizing a centralized telecommunications network that it controls. A decentralized information ecosystem and a variable regulatory environment would, at least in some cases fully and in many others partially, organically inhibit this, because it would allow local communities to shape their environments in ways that reflect their wants and circumstances.
For example, if the Old Republic were still in effect, local politics and regulatory functions could actively inhibit the saturation of information and choices by enforcing limits on monopolistic media practices, making local market interventions to enable local and regional journalism, and/or regulating digital platforms in such a way as to inhibit this....
Tim I fully understand: sad nonetheless…